Wednesday, October 30, 2013

Musical Memory in Musicians

 Even with the newest technology, the memory capacity of these powerful machines is incomparable to that of our brain. In Jourdain’s book, Music, The Brain, and Ecstasy, musical memory is discussed in both the composition and the performance sections. As a pianist, I have always had to memorize my music. Memorizing music was considered an important step in the learning process, and I was also expected to retain the memory for an extended period of time to ensure that I was ready for performance.
 
Memory is divided into three sections: sensory, short-term and long-term memory. Sensory memory holds the environmental information we receive from our visual, auditory, and kinesthetic senses. If we need to remember this sensory information, it gets transferred to our short-term memory. Our short-term memory handles our everyday functions. In terms of music, sight reading and improvisation happen here as new information is retained and repeated. When the information in short-term memory is rehearsed and elaborated, it is stored alongside our existing knowledge in long-term memory. Long-term memory can retain information with effectively unlimited capacity and for an unlimited duration of time. It is responsible for procedural knowledge, semantic knowledge and episodic memories. In terms of musical memory, procedural knowledge allows the performer to make music by coordinating movements to produce sequences of notes such as scales and arpeggios. Semantic memory is the knowledge of the particular sequence of notes which represents the pattern, while episodic memory consists of specific associations to the melody of the piece.
 
In Williamon’s book, Musical Excellence: Strategies and Techniques to Enhance Performance, many studies are reviewed to determine which memory methods are most effective for musicians when memorizing music. It goes on to say that most people share the same basic memory structure. Musicians who have superior memory attribute this to using highly effective strategies when learning and retaining information. Some useful strategies include improving memory in its most general sense, preventing age-related memory loss, enhancing study skills and using mnemonics by associating meaningless information with the material to be remembered.
 
In Hansen, Wallentin and Vuust’s (2013) article, Working memory and musical competence of musicians and non-musicians, many studies that attempt to link language and music are discussed. However, the authors come to the conclusion that there is not enough significant evidence to prove that music and language memory are linked. In one study, musicians and non-musicians were given a sequence of four syllables and asked to determine the temporal order of two of the syllables (Jakobson, Cuddy & Kilgour, 2003). Results indicated that musicians were less likely to confuse the order of the syllables; the study suggests that this may have to do with pitch and sub-vocalization being involved. A follow-up study had musicians and non-musicians memorize a word list while being prevented from using sub-vocalization. The results indicated that there were no significant differences in verbal memory scores.
 
There are many positive reasons to develop our musical memory. Firstly, as soloists, we do not have to turn pages and can better monitor the physical aspects of our performance. There is also a common belief that when music is memorized, the musicality of the piece is enhanced. Memorization also seems to guarantee a more thorough knowledge of the piece and a more intimate connection with the music. It also allows the performer to better connect with the audience.
 
The first strategy is memorizing by rote. This is the use of kinesthetic memory: repeating the music until it can be played automatically, or by “feel”. Pianists do this by practicing hands separately, using the same fingerings each time, to develop kinesthetic memory. Professional musicians use rote memory to over-learn their music for performance because kinesthetic memory is very vulnerable. Students learning by rote must be careful, as rote memorization can happen without conscious awareness; this makes it unreliable, because a slight distraction during the performance can cause kinesthetic memory to be lost.
 
Another strategy musicians use to aid their memory is visualization. Many musicians admit that they “know where they are on the page”. A study was conducted by Nuki (1984) to see which method of memory - visual, aural, kinesthetic or a mixture of them all - was most effective when pianists and composers were asked first to sight-read and then to memorize a piece of music. The results indicated that students who reported using a visual strategy were significantly quicker at memorizing the piece.
 
Memory by ear is also a strategy. Musicians need a conceptual framework before aural memory can be used, and this framework is built through repetition: playing, singing or imagining the sound. Aural memory develops when audiation occurs, which is when the sound can be heard in the inner ear before beginning to practice.
 
Analysis of music provides the musician with a roadmap for the piece. Analysis can help musicians chunk the music into sections. These sections can be further broken into smaller phrases and practiced through repetition. After these phrases have been mastered, they can be linked back into the sections. In a study by Rubin-Rabson (1937), the experimental group was asked to analyze the music before listening to it and memorizing it, while the control group was asked only to listen and memorize. The results indicated that the experimental group memorized the pieces the most quickly. Analysis of the music also helps with visual memory.
 
When I teach my piano students, I always encourage them to memorize their music at the final stage of polishing a piece. Besides the fact that it is a tradition, I believe that memorization facilitates the learning experience and produces a well-balanced final performance. It allows students to utilize a variety of memory techniques that can be easily transferred to other school subjects and areas of life. We should be promoting the development of memory skills in the brain through musical experiences.
 
References
 
Hansen, M., Wallentin, M., & Vuust, P. (2013). Working memory and musical competence of musicians and non-musicians. Psychology of Music, 41, 779-793.
 
Jourdain, R. (1997). Music, the brain, and ecstasy: How music captures our imagination. New York: W. Morrow.
 
Williamon, A. (2004). Musical excellence: Strategies and techniques to enhance performance. Oxford: Oxford University Press.

Tuesday, October 29, 2013

Musical Savants, Emotions and Creativity



            Music has fascinated humankind since the beginning of time - from the remarkable ways in which we interpret various sounds to the wonders of people creating incredible compositions. Many scientists have been drawn towards a particular syndrome which relates to brain functions and loosely ties itself to the arts. This phenomenon is known as Savant Syndrome, which is caused by brain damage to the left hemisphere (the hemisphere responsible for speech/language and reasoning skills). This deficiency can be caused by other disorders (such as autism and schizophrenia) or by a head injury (e.g. stroke, concussion). The brain rewires itself with the help of the corpus callosum (white matter in the brain) and makes new connections, allowing an individual to "tap into dormant abilities" (Dr. Darold Treffert, Acquired Savants Open Up, HuffPost Live, 2012). Paradoxically, this syndrome allows a select few people in the world to exhibit extraordinary musical abilities. Many can repeat a song on their instrument(s) after hearing the tune only once, improvise on a melody extremely fluently and/or compose songs that are universally cherished. The affected individual is identified as a savant, defined as "a person who displays an unusual (or exceptional) aptitude for one particular type of mental task or artistic activity despite having significant impairment of other areas of intellectual or social functioning" (Oxford English Dictionary, 2013). Thus, there is a "remarkable coexistence of deficiency and superiority" within the individual (Gururangan, Savant Syndrome: Growth of Empathy and Creativity, 2010, Pg. 1).  Robert Jourdain's general depiction of musical savants is stereotypical at best and erroneous at worst. He implies that musical savants "lack complex emotions and creativity" by stating that they have minds that are "incapable of the full range of human experience" and that they "lack a cognitive hierarchy that can juxtapose ideas and meld them into new ones" (Jourdain, Music, The Brain, and Ecstasy, 1998, Pgs. 200-201). However, numerous studies show that musical savants not only have the same emotions as neurotypical people, but are also more than capable of being creative with music.
            There are two examples which exemplify the ability of musical savants to understand and express complex emotions while being creative at the same time. The first example involves Derek Paravicini, a jazz pianist who is also autistic and blind (Gururangan, Pg. 2). His "inability to filter out noise limits him to communicating and comprehending (language) skills at a childish level" (Gururangan, Pg. 4). Dr. John Sloboda conducted an experiment in 2003 in which Paravicini was asked to play a piano piece in three different ways to signify different emotions - happy, sad and angry (Ibid); this experiment was done to assess the "emotional relevance" and "understanding of emotional cues" of his playing. Paravicini played the first variation of the piece in a major key with an allegretto tempo and a staccato articulation. He demonstrated the second example by playing the piece in a minor key with a slower tempo and a legato articulation. Paravicini struggled with playing the third example with an "angry" tone. However, he communicated his frustration by "grunting under his breath," which signified that he knew that his third variation of the piece did not reflect the intended angry emotion. Furthermore, he was also able to identify the intended emotions of a piece played differently thrice by another musician (Gururangan, Pg. 5). Jools Holland, a musician, concluded that Paravicini's playing ability is "an extension of his personality and his feelings," proving that he (and other musical savants) can comprehend and express emotions - if there is an outlet (Ibid). Paravicini also has superb improvisatory skills in which he "rearrange(s) elements of tonality, meter and phrase structure with subtle transformations and introduce(s) new material from pieces of similar style" (Ibid). Out of all music genres, Paravicini delved into jazz - "the playground of the innovative" which allows "artistic freedom" (with some structure). Paravicini has demonstrated creativity consistently in experiments and in his performances, showing that all humans have creative abilities.
            The second example involves Derek Amato, a former corporate sales trainer who suffered a concussion at age 40 when he landed head first in the shallow end of a pool while trying to catch a football over the water. After being admitted to the hospital and treated for the concussion, he was sent home. Almost overnight, he was able to sing and play the piano at a professional level; he had become an acquired musical savant (Chan, Michael, Severson, Dave, and Collins, Rocky, Ingenious Minds: Derek Amato, 2012). His friends mentioned that Amato plays very passionately, and that he "did not have a gift in music" prior to the accident. However, his hearing was impaired, and his memory never returned to its former state - causing worry about his career longevity. He frequently mentioned music constantly banging in his head, making life emotionally disturbing. Interestingly, Amato had suffered several concussions prior to this latest occurrence, and it was observed through an MRI (magnetic resonance imaging) that music (which was initially used to try to calm him down) was further aggravating his brain and causing great sadness. Doctors discovered through the MRI that Amato had developed small white masses on various parts of the left hemisphere of his brain as a consequence of his concussions. He was told that he could take anti-epileptic drugs to "reduce the volume" of the music cerebrally, but Amato refused this treatment. He decided that he would rather connect emotionally with people through music than possibly return to his former life. Despite these challenges and his lack of formal musical training, Amato was able to learn musical techniques intuitively and apply them to form his compositions. He believed that music came to him "in pictures," in that he saw "black and white blocks moving from left to right" when playing the piano; it is believed that he developed synesthesia (a blending of two senses) as a result of the accident. Through this tragic event, the elevated emotions and newly tapped creativity have allowed Amato to enjoy life in a new and profound way that he could not have predicted; he considers it to be both a blessing and a curse.
            Despite Jourdain's claim that musical savants lack certain aspects of humanity - a claim that attempts to reduce them to mechanical beings - both Derek Paravicini and Derek Amato have shown that musical savants (born or acquired) have the capacity to understand "serious" emotions and foster creativity through their art. In short, these gentlemen have proven the most obvious point - that they are human as well.

Works Cited
Chan, Michael, Severson, Dave, and Collins, Rocky. Brain and Intelligence - Ingenious Minds: Derek Amato, Science Channel, Web. Sept. 20, 2012, Retrieved Oct. 21, 2013, http://science.discovery.com/video-topics/brain-intelligence/ingenious-minds-derek-amato.htm.

Gururangan, Kapil. (2010) Savant Syndrome: Growth of Empathy and Creativity, Berkeley Scientific Journal, University of California.

Jourdain, Robert. (1998) Music, the Brain, and Ecstasy.  Avon Books.

OED: Oxford English Dictionary, Savant (n.), Oxford University Press, http://www.oed.com/view/Entry/171449?redirectedFrom=Savant+Syndrome#eid271973695.

Treffert, Dr. Darold.  Acquired Savants Open Up, HuffPost Live, Web. Dec. 05, 2012, Retrieved Oct. 21, 2013, http://live.huffingtonpost.com/r/segment/brain-injury-trauma-hidden-talents-neuroscience/50be0e0f2b8c2a5d9b000769.

Music, the Brain, and Ecstasy: Personality Type and Musical Preference

         Jourdain’s Music, the Brain and Ecstasy is a very informative and thought-provoking book. As a musician who knows little about the brain and its functions, I found that this book did a great job of describing how the brain receives and processes sound, and therefore music, and the ways in which it affects us in everyday life. Jourdain touches on the subject of music preference primarily in chapter 8, pages 259-264, and describes the different possibilities of musical preference as the following: “a biological predilection” (259); “our individual musical personalities” (259); “social symbolism” (262); “social solidarity” (263); musical preference “imprint” (263); and “habit” (264). The one that particularly interests me is the idea of “musical personalities”. Although Jourdain’s definition of “musical personalities” may have more to do with simply our musical preferences, it made me think of individuals’ actual personality types. Does an individual’s personality type correlate to their musical preferences? Does an extroverted personality tend to prefer certain styles of music, while introverts prefer others? I know that our preferences for anything, including music, are a product of life-long experiences and social surroundings, but could our innate preferences have anything to do with the way we are wired - our personalities? Later in the book, Jourdain talks about mood and music’s effect on it, but doesn’t go further into this idea of personality types and musical preference.
            When considering the idea of a correlation between personality types and music, I was reminded of a past experience with personality testing. While doing my master’s degree we did a few sessions on the Myers-Briggs personality test, and the instructor talked to us about how a large percentage of musicians are ENFP (extravert, intuitive, feeling, perceiving), which is the category I happen to fall into, as did a large percentage of the class. Obviously personality plays into the types of career choices we make, but it must also play heavily into our everyday choices, and ultimately into our preferences in terms of musical selections. I am not saying that the Myers-Briggs assessment is the only way to define personality types; it is simply one way to identify differences in personality, and it is what sparked my thoughts about researching this possible correlation.
            In speaking about musical preference, Jourdain states, “Before all else, people use music for mood enhancement. Psychologists have long known that different personality types are attracted to different kinds of drugs, legal and illegal”. He then goes on to briefly connect this to music by comparing each genre to a different drug, for example: “hard rock as the frenzied rush of cocaine” (261). If personality types are attracted to different types of drugs, then what about genres of music? Jourdain really doesn’t discuss this question at all. In chapter 10, Jourdain speaks about the origins of music and how music is seen as optional in today’s society, but that this was not always the case. He states that our innate need for music is seen throughout history and cross-culturally, indicating that “music is something that humans come by fairly easily” (305). If music is something that seems to be a part of our DNA as humans, as is our personality, is there any association between these two things?

            Looking specifically at the brain, we know that like music, personality has links to many different areas of the brain, but it is primarily associated with the frontal lobe. As Jourdain says, “in all aspects of music perception, several parts of the brain are at work” (84). We know that the right side of the brain dominates in tonal and melodic perception (in most cases), and that the temporal lobe, with the auditory cortex, is involved in our perception of tone as well, on both sides of the brain. As I was thinking about this, I figured that other people have already wondered the same thing, so I took a look through recent literature on the subject. The most valuable resource I found was an article published in the Journal of Music Therapy in 2005 called “Personality and Music Preferences: The Influence of Personality Traits on Preferences Regarding Musical Elements”. This article was about a study done by PhD student Małgorzata Kopacz in Poland, in order to draw some conclusions about correlations, if any, between personality types/traits and musical preferences. Kopacz defines musical preference as “the act of choosing, esteeming, or giving advantage to one thing over another through a verbal statement, rating scale response, or choice made from two or more alternatives" (Kopacz, 217). This question of individual preference and personality goes back to discoveries made by psychologist Hans Eysenck, who was instrumental in developing research in the areas of personality and intelligence, with his categories of personality dimensions: extroversion, neuroticism, and psychoticism (1958) (Kopacz, 217).

            The research in the article is based on previous work done by James McKeen Cattell, a prominent American psychologist of the early 1900s whose focus was on studying the psychology of individual differences. In this study, a sample group of 145 students was chosen; all completed the same personality trait test followed by a musical preference questionnaire made specifically for this study, and finally each participant was asked to indicate their favorite piece of music. After compiling all of the results and analyzing the musical elements of the most preferred pieces of music, the study showed that some personality traits, such as liveliness, social boldness, and extroversion (among others), had a direct influence on preferences for musical elements. Reading through the study, there were many different musical elements discussed, including timbre, rhythmic elements, melodic contour, and harmonic elements. One of the interesting findings was that individuals who were extroverts generally chose music with a higher number of melodic themes than introverts did. Those individuals with a greater sense of boldness and adventure also liked a higher number of melodic themes, as well as faster tempos and irregular rhythms, while individuals with tendencies towards being shy and socially timid liked a smaller number of melodic themes, slower tempos, and regular rhythms.

           Regarding this area of study, it would also be interesting to see how the parts of our brain that are in command of our personality engage, if they do at all, when one is playing or listening to music. Ultimately, music is an emotional and creative expression, and it does involve our very beings and personalities in that experience, but what does that experience really look like inside the brain? Although this is a short review of the question of the correlation between personality and musical preference, it is definitely an interesting subject to think about. Individuals working in the areas of music psychology and therapy are continuing to make further advances in this area of study. It makes me wonder if my preferences and 'choices' in music are really my own, or are at least in part predetermined by my personality.



Works Cited:

Jourdain, Robert. Music, the Brain, and Ecstasy. HarperCollins, United States. 1997. 

Kopacz, Małgorzata. “Personality and Music Preferences: The Influence of Personality Traits on Preferences Regarding Musical Elements”. Journal of Music Therapy. 42/3 (Fall 2005): 216-239.

Perception of Timbre in Acoustic and Electroacoustic Music of the 20th and 21st Century

            In his book “Music, the Brain, and Ecstasy,” Robert Jourdain, in an exciting and accessible way, illustrates and demystifies some important aspects of music and the ways we experience them. Jourdain is determined to inform his readers not only about the processes behind consuming music as a listener, but also about dimensions such as composing and performing. As a performer of 20th and 21st century music, what I have found interesting is Jourdain’s designation of timbre as an important organizing force in the works of many contemporary composers. However, Jourdain does not elaborate on timbre to a great extent in his book. How some composers understand timbre and utilize the timbral properties of sounding models in their music will be the central question of this paper. Special attention will be given to the writings of eminent scholars and the works of renowned contemporary composers, such as Pierre Boulez. Before we start a debate on composers’ perspectives of timbre, a fundamental question should be answered: what is timbre?
            A basic role of auditory perception is to suggest the possible source of a sound. Timbre, which is very often described as the “colour of sound,” is commonly perceived as a primary feature in this process. Even though the term “colour” in correlation with sound might be controversial for some, Campbell L. Searle advocates a strong resemblance between visual and auditory perception. According to him, colour-related information processed through the neural processing chain is likely to be apprehended by a very limited number of perceptual channels. He states that, for example, a carpet in three or four shades of the same colour is hardly distinguishable from a carpet in fifty or sixty shades of that same colour. A similar concept can be applied to music: when a note is executed, a number of overtone frequencies are also generated. The brain automatically fuses them into a single pitch, and this is how we perceive it.
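            To make this fusion idea concrete, here is a minimal sketch in Python (my own illustration, not drawn from Jourdain, Searle or Boulez) that builds a single A440 tone out of a fundamental plus integer-multiple overtones using numpy. The overtone weight lists are invented for demonstration: changing only those weights changes the “colour” of the tone while the perceived pitch stays the same.

import numpy as np

SAMPLE_RATE = 44100  # samples per second; an arbitrary but conventional choice

def complex_tone(fundamental_hz, overtone_weights, duration_s=1.0):
    """Sum a fundamental and its integer-multiple overtones into one waveform."""
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    wave = np.zeros_like(t)
    for harmonic_number, weight in enumerate(overtone_weights, start=1):
        wave += weight * np.sin(2.0 * np.pi * fundamental_hz * harmonic_number * t)
    return wave / np.max(np.abs(wave))  # normalize so different spectra are comparable

# Two tones with the same fundamental (A4 = 440 Hz) but different overtone recipes:
# the ear hears the same pitch in both, yet a different "colour" for each.
bright_tone = complex_tone(440.0, [1.0, 0.6, 0.4, 0.3, 0.2])  # rich in upper partials
mellow_tone = complex_tone(440.0, [1.0, 0.15, 0.05])          # energy mostly in the fundamental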
            To a large extent, the above conception of colour perception parallels the artistic values of timbre, which Pierre Boulez argues in “Timbre and Composition – Timbre and Language.” He gave a presentation on this topic at the Institut de Recherche et Coordination Acoustique/Musique in Paris in 1985. For him, timbre is a sound synthesis. From my personal perspective and experience as a performer, the term “sound synthesis” can be understood as a twofold concept. As previously illustrated, a single tone is actually perceived as a synthesis or fusion of a fundamental tone frequency and its overtones. The same approach can be applied to a group of the same or different sounding bodies creating a sound texture. Namely, a string section performing a figure or a passage in unison will be perceived as one source of sound because our brain will categorize and group together all of the same or similar auditory stimuli. Furthermore, clusters and blocks created from diverse timbres will also be perceived as a single structure or texture. This is what Jourdain was referring to when he said that timbre is one of the strongest organizing criteria in the writing of contemporary composers. If we look at “Polymorphia,” an orchestral work written by the Polish composer Krzysztof Penderecki for 48 string instruments, we will notice organized timbre serving the function of expression. Even though some instruments do not even produce pitched sounds, our brain will register constellations of different timbres as unified textures without even trying to classify or discriminate among them. This approach to composition is called sonorism, and it was introduced in the middle of the 20th century.
            Christiane Ten Hoopen, from the University of Amsterdam, has raised another important issue on the perception of timbre in composition. In her work “Issues in Timbre and Perception,” she discusses the connection between a perceived sound and its source in electroacoustic music. Ten Hoopen states that in electroacoustic music, listeners are often uncertain about identifying sound sources due to the prevalent use of electronic gear and installations. Electroacoustic music prompts a reassessment of one of the fundamental functions of timbre (identifying the source of a sound), and supports Boulez’s belief that sound sources or instruments should ideally be approached in a neutral manner. Furthermore, Ten Hoopen underlines the importance of prior information, such as program notes or the title of the work, in identifying the implicated sound sources. In my opinion, the influence of prior information proposed by Ten Hoopen is directly linked to the experience and memory of listeners, but it is also dependent on the context of its reception (performance/listening). Specific styles of music involve specific instruments characteristic of those styles. Instruments such as the cembalo or viola da gamba will most likely be recognized while listening to baroque music. Even though these instruments can now be reproduced with electronic instruments, our experience and memory will unfailingly direct our auditory perception to hear their acoustic predecessors.
            Manipulation of the listener’s auditory perception is an important element in the writing of contemporary composers. The modification of sound sources, as well as their fusion into larger formations, is something that composers today constantly investigate and try to incorporate into their personal language. Electroacoustic music offers vast possibilities in that sense, and it is not surprising that some composers do not even resort to tonal or tempered solutions. It is certain that timbre occupies an essential space in the writing of composers today.





References


Boulez, Pierre. “Timbre and Composition – Timbre and Language.” Contemporary Music Review, Vol. 2, No. 1 (August 2009). doi: 10.1080/07494468708567057.

Jourdain, Robert. Music, the Brain, and Ecstasy. New York: Avon Books, 1998.

Patil, Kailash, Daniel Pressnitzer, Shihab Shamma, and Mounya Elhilali. “Music in Our Ears: The Biological Bases of Musical Timbre Perception.” PLOS Computational Biology, Vol. 8, No. 11 (November 2012). doi: 10.1371/journal.pcbi.1002759.

Ten Hoopen, Christiane. “Issues in Timbre and Perception.” Contemporary Music Review, Vol.10, Part 2 (1994).

Music and the Capacity for Love

                
“If music be the food of love, play on.”- William Shakespeare
                Evolution is a highly efficient process. Every aspect of our physiology, and often our behaviour, has an advantage for survival. Even the appendix, thought to be a vestigial accessory of the intestine, has a role in probiotic maintenance [1]. With that in mind, gaining the ability not only to hear, but to cognitively process music requires a phenomenal biological investment, as outlined in multiple chapters of Robert Jourdain’s “Music, the Brain, and Ecstasy.” Jourdain elaborates (pg 307-308) on the theory that music developed for social interaction, in order to solidify social bonds or mediate conflict. This argument is logical; however, language developed for mostly the same reasons, and is equally, if not more, efficient at allowing social interaction and settling conflict. Evolution is a never-ending balancing act weighing the benefit of a new capability against the energy cost of producing it, and a secondary method of social bonding, or a happenstance effect of language processing, seems by far too weak a motive to maintain a cognitive system that is specific and elaborate enough to experience music. There has to be a better reason, and in this essay I will argue that music increases the human capacity to experience and express love. For these purposes, the term love will encompass affection for offspring, partners, and community members. I will begin by outlining why love is a biological necessity important enough to drive the evolution of music appreciation, and then briefly describe, from a neural plasticity standpoint, how activating the circuitry involved in music may enhance the experience of love.
                There is no doubt that a loving environment fosters the healthy psychological development of children, but recently a link has been made between this benefit and BDNF (brain-derived neurotrophic factor) [2]. BDNF is a protein that is broadly expressed in the central nervous system [3], and serves to enhance the survival of neurons and promote neurogenesis [4]. In children who have wanted for cuddling and love, this gene will be permanently down-regulated, and in women it will also affect the BDNF expression of their offspring [2,4]. Clinically, down-regulated BDNF causes susceptibility to major depression, bipolar disorder, and schizophrenia [2], while normal expression supports learning, memory, and stress management [4]. Clearly, love is of vital importance in early brain development, and if music can help facilitate the expression of love in a community or towards offspring, this would offer a strong advantage. Later in life, love and the ability to express and experience deep connections with others become crucial for success within a group, and for finding a mate with whom to raise children. Emotions in general influence motivation (pg 311) and decision making (pg 309), but love is unique in that its expression and experience can directly impact one’s ability to mate and the viability of one’s offspring. In support of the evolutionary intention of music appreciation, consider Jourdain’s description of the cognitive complexities of melodic anticipation (chapter 3) and sense of meter (chapter 5). The ability to hear and distinguish animal calls is of clear advantage (pg 2-3), but melodic and rhythmic appreciation have no evident impact on human survival; yet these abilities have been conserved to varying degrees in almost every member of the human race (pg 286), and elicit an emotional response unlike that of any visual stimulus (pg 328). Jourdain mentions (pg 308) explicitly that music somehow, undeniably, embodies emotion, and if love is so critical to our survival, a system designed to enhance our experience of it would be worth the biological investment.
                The old adage, “practice makes perfect,” comes to mind when Jourdain explains how music practice reinforces the neural pathways required for performance, while neglect degrades them (pg 225). This is an example of synaptic plasticity, which is the activity-based change in synapse-mediated connectivity between neurons [5]. Essentially, the more you use a synaptic pathway, the stronger it becomes. If the experiences of music and love overlap in neural circuitry, exposure to one could theoretically strengthen the response and plasticity of the other. In support of this theory, Jourdain mentions that musical pleasure is associated with endorphin release (pg 327), which is also implicated in feelings of love [6]; although this does not prove identical neuronal circuitry, it suggests similarity based on release of the same neurotransmitters. In addition, music, much like love (it is better to have loved and lost than to never have loved at all), has the ability to make sad experiences seem dignified or “worth it” (pg 322). This emotional phenomenon is highly unique to both love and music, and may imply similar neurological pathways.
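                As a purely illustrative toy model (my own, not taken from Jourdain or the cited papers), the following Python sketch captures the “use it and it strengthens, neglect it and it fades” behaviour described above with a simple Hebbian-style update rule; the learning and decay rates are arbitrary, and the single weight stands in loosely for the strength of a practiced pathway.

LEARNING_RATE = 0.1   # how strongly co-activation reinforces the connection (arbitrary)
DECAY_RATE = 0.02     # how quickly an unused connection fades (arbitrary)

def update_weight(weight, pre_active, post_active):
    """Strengthen the connection when both units fire together; otherwise let it decay."""
    if pre_active and post_active:
        return weight + LEARNING_RATE * (1.0 - weight)  # bounded growth toward 1.0
    return weight * (1.0 - DECAY_RATE)

# Repeated "practice" (co-activation) strengthens the pathway; "neglect" degrades it.
strength = 0.1
for _ in range(20):                       # twenty practice sessions
    strength = update_weight(strength, True, True)
print(f"after practice: {strength:.2f}")  # approaches 1.0

for _ in range(20):                       # twenty sessions of neglect
    strength = update_weight(strength, False, False)
print(f"after neglect:  {strength:.2f}")  # drifts back down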
                Much like the theory of muscular representation (pg 325), the conclusion that music might increase the capacity to experience and express love is highly speculative. However, there is currently no satisfying rationale as to why human beings evolved the ability to appreciate music, and given the significance of love in our development and survival, it would be a strong evolutionary incentive. In addition, there are anecdotal and hormonal indications that the experiences of love and music share similar neurological pathways. Oscar Wilde felt that music could propel an individual to emotional intensities beyond their life experiences, while Jourdain writes that the meaning evoked in music is only what one personally brings to it (pg 322). I argue that the truth is somewhere in between, and that if the experiences of love and music are not neurologically similar, it is uncanny how comparable they are in creating a unique sense of ecstasy.
(Word count: 919, excluding references)
References
1. Bollinger, R.R. et al. 2007 Journal of Theoretical Biology 249(4): 826-831
2. Roth, T.L. et al. 2009 Biological Psychiatry 65(9): 760-769
3. Conner, J.M. et al. 1997 Journal of Neuroscience 17(7): 2295-2313
4. Sullivan, R. and Lasley, E.N. 2010 Cerebrum 17: 1-13
5. Ho, V.M. et al. 2011 Science 334: 623-628
6. Hawkes, C.H. 1992 Journal of Neurology, Neurosurgery, and Psychiatry 55: 247-250

                

The Perception of Rhythm: Localized or Bilateral?

In both educational discourse and popular psychology, the notion of left-brain and right-brain dominance has been prevalent since the early 1980s. A ramification of this thinking was the idea that some people are more “creative” while others are more “logical.” In the field of music, it is important to recognize which parts of our brain are active while we perform and listen to different aspects of music. Jourdain suggests that rhythm is lateralized in the left hemisphere of the brain, while pitch perception is seated in the right side of the brain. This seems to be an oversimplified and reductionist view of the matter. In reality, both hemispheres of the brain play a role in rhythmic perception, which is further complicated by the amount of musical training one has.
In his chapter on rhythm, Jourdain states that in contrast to melody, “rhythmic skill favours the left brain” (149). Furthermore, he asserts that rhythmic perception and harmonic perception are favoured by different sides of the brain, so that some musicians are naturally better at tonality, while others are naturally better at rhythm. This thinking is analogous to the older notion that some people are more right-brain dominant, while others are more left-brain dominant. While it may often be the case that some people are more naturally talented at rhythm than others, it could be suggested that this has to do in large part with deficits in their musical training, their specific musical enculturation, and a tendency in music education to emphasize rhythmic training and melodic training separately. Furthermore, several recent studies show that rhythmic perception utilizes both hemispheres of the brain, and so this issue may not be as simple as Jourdain suggests.
In the study Passive Rhythm Perception in Musicians, it was discovered that in both the musician and non-musician test groups, a basic network for processing the quantized rhythms was activated. This “may reflect an innate musical competence that is independent of training” (Limb et al., 386). This finding also helps to clarify previous studies regarding the contributions of the right hemisphere to rhythm processing. It was found that formal musical training does not lead to a decrease in right-sided activity in terms of rhythmic processing, but that formal training activates additional areas of the brain. Thus, musicians “utilized an analytic mode of processing concentrated in the left hemisphere” (Limb et al., 388). This is corroborated by Jun, who found that “playing music professionally develops analytical processes in the left hemisphere, whereas other individuals process music in their right hemispheres” (Jun). Thus, musical training leads to a heightened use of the left hemisphere in the perception and performance of rhythm.
The brain’s perception of rhythm can also depend on what music we are listening to and whether that music has a constant groove or not. For example, “studies have pointed to regions in the brain, such as the basal ganglia and supplementary motor areas, which are activated during listening to music with a beat structure versus music without a regular beat structure” (Phillips-Silver, 299). In other words, the innate heartbeat of music triggers areas in the brain that music without a regular beat does not. Furthermore, some researchers have studied the connections between movement and music, and the effect of that movement on our rhythmic perception. The goal of one such study was to explore the effects of movement on our bilateral perception of rhythm. Based on the fMRI data that was collected, it was found that “a bilateral network of motor areas is activated when rhythms are perceived, even when no movement is made” (Grahn & Brett, 902). These findings are supported by a number of other studies which confirm that “a bilateral network of motor areas mediate perception of rhythm in addition to rhythmic production” (Grahn & Brett, 902). As a result of these findings, it is clear that rhythm is not localized to just one side of the brain. The perception and production of rhythm is complex and utilizes the whole brain in a fluid way, depending on the types of rhythm, their complexity, and a person’s level of musical training. Later in his chapter on rhythm, through a discussion of how musical perception is altered as a result of brain injuries, Jourdain clarifies his position by stating that “rhythmic ability is clearly much less localized than harmonic skill” (Jourdain, 151). This further supports the idea that our perception of rhythm activates more areas of the brain than traditional thinking may suggest.
Based on this research, it is clear that rhythm is perceived in a bilateral way, in both hemispheres of the brain. The way in which the brain perceives rhythm is further complicated by the metrical complexity of the rhythm, the existence of a regular beat structure, and how much musical training an individual has. Recognizing the fact that rhythmic and melodic perception and performance aren’t localized to one hemisphere of the brain has strong implications for music education. In terms of skill acquisition, tonal abilities and rhythmic abilities can’t be seen as separate and distinct. Melody and rhythm must be taught in a holistic way and not seen as two distinct entities that some have natural aptitude for, and others do not. Connecting music to movement in a non-Western way allows for rhythm to be grounded, internalized and made a whole-body pursuit – not just a cognitive function of the brain.


Works Cited

Grahn, Jessica A., and Matthew Brett. "Rhythm and Beat Perception in Motor Areas of
            the Brain." Journal of Cognitive Neuroscience 19.5 (2007): 893-906.

Jourdain, Robert. Music, The Brain, And Ecstasy. New York: HarperCollins, 1997. Print.

Jun, Passion. "Music, Rhythm and The Brain." Brain World. N.p., 7 Mar. 2011. Web. 26
          Oct. 2013. <http://brainworldmagazine.com/music-rhythm-and-the-brain-2/>.

Limb, Charles J., Stefan Kemeny, Eric B. Ortigoza, Sherin Rouhani, and Allen R. Braun. "Left Hemispheric Lateralization of Brain Activity during Passive Rhythm Perception in Musicians." The Anatomical Record Part A: Discoveries in Molecular, Cellular, and Evolutionary Biology 288A.4 (2006): 382-89.

Phillips-Silver, Jessica. "On the Meaning of Movement in Music, Development and the  
            Brain." Contemporary Music Review 28.3 (2009): 293-314. Web. 24 Oct. 2013.






Music, the Brain, and Pleasure: An Update of Robert Jourdain


Robert Jourdain's Music, the Brain, and Ecstasy is an excellent primer on how the human brain perceives and processes music, covering the musical experience from sound waves reaching the eardrum all the way to the emotional catharsis of a symphony or string quartet. However, since the book's publication in 1997 there has been considerable research not only into the workings of the brain, but also into the neuroscience of music specifically. In this essay I will summarize some of this research in order to update Jourdain's writing on emotion and pleasure in music.

Jourdain (1997) deals with the subject of pleasure in music in the last chapter of the book, titled "Ecstasy." In this chapter, he sets out to answer the following three questions: "First, how is it that music elicits emotions from us? Second, how is it that music gives us pleasure? And third, what is happening in our brains when music leads us to the threshold of ecstasy?" (p. 304). As Jourdain explains, these three questions are intertwined; all are related to anticipation and its fulfillment - or not. Drawing on then-current research, Jourdain explains that emotion stems from the oldest part of the brain, the limbic system, and is important in reasoning (pp. 308-309). Ultimately, emotion is a way for our nervous systems to figure out what activities are most important, a way of directing attention among incoming experiences (p. 309). We plan activities and carry them out by anticipating the results and then satisfying that anticipation (or trying to). Emotions occur when the result does not line up with our anticipation. If the result is better than we expected, we experience a positive emotion; if the result is worse, we experience a negative emotion. Similarly, pleasure is experienced when our expectations or anticipations are fulfilled (p. 312). Unfulfilled expectations cause stress as the brain tries to reassess the situation in order to make a better prediction next time; however, even greater pleasure can be caused by delaying or subverting expectations in order to heighten the sensation of resolution when it finally occurs (p. 319). Music is filled with instances of anticipation and resolution, and as such it makes sense that it should be able to generate both emotions and pleasure in the listener. A well-known example of this is Richard Wagner's opera Tristan und Isolde, whose prelude sets up one unresolved tension after another - tensions that continue for almost the length of the opera, culminating in a tremendous resolution in Isolde's final aria. It is an incredibly evocative depiction of ecstasy, the subject of Jourdain's third question. Of course, the word "ecstasy" is inherently subjective and next to impossible to measure in a scientific way. Jourdain asserts that, "By providing the brain with an artificial environment, and forcing it through that environment in controlled ways, music imparts the means of experiencing relations far deeper than we encounter in our everyday lives... In this perfect world, our brains are able to piece together larger understandings than they can in the workaday world, perceiving all-encompassing relations that go much deeper than those we find in ordinary experience... It's for this reason that music can be transcendent." (p. 331)

Recent research into the science of pleasure and emotion may shed some new light on these ideas. Jourdain notes that, "remarkably, 'pleasure' is a concept seldom encountered in neuroscience or even in psychology. There's hardly a book written on the subject." (p. 315) Happily, this is no longer quite as true as it was in 1997. An important development is that the role of dopamine in musical pleasure is now better understood. A number of studies have examined the role this neurotransmitter plays in music listening, linking musical pleasure to the brain’s reward system. One important study was conducted by Salimpoor, Benovoy, Larcher, Dagher and Zatorre in 2011. In this study, researchers used both PET and fMRI technology to scan subjects who were listening to a favourite, pleasurable piece of music. Using PET to trace the presence of dopamine and fMRI to determine the timing of dopamine release, the researchers found that the chills associated with high levels of pleasure at the climax of a piece of music coincided with the release of dopamine in the nucleus accumbens. Additionally, the researchers found that dopamine spiked about 15 seconds before the emotional climax of the piece, as the subject anticipated the most pleasurable section of the music; however, this dopamine activity took place in the caudate (Salimpoor et al., 2011). 

This research is intriguing for several reasons. First, it directly links dopamine activity to the experience of musical pleasure, which had not been previously demonstrated. Second, it illustrates that anticipation and fulfillment constitute two different types of musical pleasure that occur in different parts of the brain. The study supports Jourdain's assertion that "...the deepest pleasure in music comes with deviation from the expected... Isn't this contradictory? Not if the deviations serve to set up an even stronger resolution” (Jourdain, 1997, 319). If dopamine is released each time the listener expects a resolution, only to find that the music takes a different turn, the final resolution will be that much more intense, as in Tristan. The study also answers a question raised in the book: "...why we continue to find music expressive after we have heard a piece a few times and know where its expressive deviations will fall." (Jourdain, 1997, 313) Clearly, familiarity with a piece allows the listener to anticipate the most pleasurable moments, releasing dopamine and creating the added pleasure of anticipation. (As an aside, I wonder if this may be why so many musicians and listeners - myself included - become attached to one particular recording of a piece. If we are anticipating certain expressive gestures and don't hear them because the performers have made a different choice, might our anticipation be unfulfilled, leaving us to experience a negative emotion?)

Salimpoor et al. (2011) dealt with dopamine activity while listening to a piece of music that the subjects already knew well and loved. But what about unfamiliar music? It is clear that, while familiarity with a piece may increase the pleasure of hearing it, unfamiliar music can have the same pleasurable effect. Salimpoor et al. (2013) conducted another experiment to find out what happens in the brain when a subject listens to music that he or she has never heard before. According to Zatorre and Salimpoor (2013), "[i]t has been proposed that all individuals have a 'musical lexicon', which represents a storage system for... information about the relationships between sounds and syntactic rules of music structure specific to their prior experiences” (p. 10434). In other words, throughout the course of their lives, individuals learn the musical rules and structures of their particular culture, allowing them to form expectations and anticipations even when hearing a new piece of music, as long as it falls within their own cultural tradition. In this study, Salimpoor et al. (2013) used fMRI to monitor neural activity while subjects listened to a previously unknown piece of music. They then asked the subjects to rate the desirability of that piece of music by giving them a chance to purchase it in an auction model, so that higher bids demonstrated higher desirability or musical pleasure. They found that highly rated music coincided with increased activity in the mesolimbic system, particularly the nucleus accumbens - the same area of the brain that was associated with dopamine release upon fulfillment of musical expectations in the previous study. Zatorre and Salimpoor (2013), in a review of studies on musical pleasure, conclude that the cortical system is able to make sense of music's structure and rules and therefore make predictions, while the older striatal dopaminergic system creates the emotional, pleasurable response to music in the brain's reward system. Thus humans are able to take pleasure in an abstract aesthetic reward such as music.

Jourdain’s description of musical pleasure as a series of expectations or anticipations followed by fulfillment is essentially correct; however, the role of dopamine in the neural workings of musical pleasure was not yet understood at the time his book was published. Recent research confirms Jourdain’s assertions and helps us to understand how the abstract pleasure of music taps into our evolved reward systems.




Works Cited


Jourdain, R. (1997). Music, the brain, and ecstasy: How music captures our imagination. New York, NY: Harper Perennial.


Salimpoor, V.N., Benovoy, M., Larcher, K., Dagher, A., and Zatorre, R.J. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience, 14(2), 257-262. http://dx.doi.org.myaccess.library.utoronto.ca/10.1038/nn.2726


Salimpoor, V.N., van den Bosch, I., Kovacevic, N., McIntosh, A.R., Dagher, A., and Zatorre, R.J. (2013). Interactions between the nucleus accumbens and auditory cortices predict music reward value. Science, 340(6129), 216-219. doi: 10.1126/science.1231059.

Zatorre, R.J., and Salimpoor, V.N. (2013). From perception to pleasure: Music and its neural substrates. PNAS, 110(Suppl. 2), 10430-10437. doi: 10.1073/pnas.1301228110