Sunday, December 11, 2011
Read more: http://www.oprah.com/health/Oliver-Sacks-Finds-the-Bond-Between-Music-and-Our-Brains
Written by: Oliver Sacks, MD, FRCP
Oliver Sacks, MD, the noted neurologist and author, describes the profound bond between music and our brains, and how the simple act of singing can be good medicine.
Dr. Oliver Sacks states that music has cultural and community relevance for human beings: it brings people together. However, he notes that music not only creates a social bond, it literally shapes the brain. Musical activity may involve many parts of the brain (emotional, motor, and cognitive areas), even more than language does, Sacks suggests.
Describing how music has been applied in his neurology practice, Dr. Sacks states that he has seen patients with Parkinson’s disease who were initially non-responsive become alert when music is used in treatment. People with aphasia, a loss of the use of language most commonly caused by stroke, can retrieve in song words they could not otherwise utter. He has seen people with Tourette's syndrome, who may be distracted by physical and sometimes verbal tics, find means of managing or bypassing their tics through music, and people with extreme forms of amnesia, unable to remember what happened to them a few minutes ago, sing or play long, complicated pieces of music, or even conduct an orchestra or choir. He also notes that people with Alzheimer's disease and other types of dementia are able to respond to music when no other treatment can reach them.
In closing, Dr. Sacks says that, given the profound bond between music and our brains, the simple act of singing can be good medicine at any age.
As a music therapist interested in the various means of evoking memories and responses through music, I found the article quite intriguing. The examples Dr. Sacks provides of patients with diverse diagnoses responding to music treatment are astonishing. Furthermore, that this article was written by Dr. Sacks, a practicing neurologist, lends more credibility to the value of music as treatment. The value for me comes from the cross-disciplinary practice: practitioners other than music therapists who also see the value in music as treatment.
Moreover, I believe that at this time, cross-disciplinary research in the field of music therapy must continue in order to bring the value of music therapy to the mainstream.
Wednesday, December 7, 2011
Hodges, D. (2010). Can Neuroscience Help Us Do a Better Job of Teaching Music? General Music Today, 23(2), 2-12.
This article looks at how music education can move beyond the beginning stages of applying neuroscientific findings. Hodges illustrates a basic three-stage model of the neuroscience learning cycle (Sense, Integrate, and Act), which incorporates concepts such as active learning, activation of reward centers, the pattern-detecting brain, plasticity, neural pruning, multisensory learning, and memory.
When we are engaged in a musical activity, the 'Sense' component of the proposed learning cycle involves “raw auditory, visual, and tactile sensory information” (Hodges, 2010) that we cannot yet make sense of. The 'Integrate' component then brings meaning to the information, while the 'Act' component responds to information that has been transformed into a meaningful musical experience. Sometimes, however, 'Act' comes first, with students discovering concepts through their own actions.
A closer look at some of the elements in the learning cycle:
Active Rather Than Passive Learning: During this process, audiomotor and motor networks are engaged. Brain imaging studies show that these motor systems remain active even in the absence of overt behaviours.
Learning Activates Reward Centers: Learning activates areas in the reward system pathways, which release neurotransmitters (serotonin and dopamine) while also monitoring autonomic and cognitive processes.
Neural Pruning: This "primary mechanism of plasticity" comes into play when learning music from a different culture. On one hand, if synapses are not utilized they are “pruned away”; on the other hand, active engagement, repetition, and reinforcement can strengthen these neural connections. So, just as children may grow up learning two languages at home, the same needs to happen in the music classroom: if students develop sensitivities to different styles, the 'pruning' effect is less of an issue.
Learning is Multi-sensory: Though each sensory organ has its own main zone (e.g., vision – the occipital region), convergence zones also exist where information from the different senses is integrated.
Hodges illustrates how Bloom’s framework for learning, developed years ago, integrates with the learning cycle presented at the beginning of the article: concrete experience (sensory cortex-->parietal lobes), reflective observation (back integrative cortex-->temporal lobes), abstract hypothesis (front integrative cortex-->frontal lobes), and active testing (motor cortex). What is important to note is the multisensory learning that takes place in the brain, indicated by the arrows. In addition, all the components discussed (plasticity, neural pruning, active learning, etc.) should be integrated into this cycle. Keeping all these components in mind may seem complex, but as Hodges explains, it is necessary to "have a more thorough understanding of how the brain works" so that educators are not only looking at effective strategies and best practices, but at the neuroscientific research that explains 'how' and 'why'.
I agree there needs to be a greater connection between the neuroscience and music education worlds. If more educators increase their knowledge and awareness of effective strategies and best practices by looking at what is really happening inside the brain, the teaching and learning process will be that much richer. The findings about brain plasticity, musical training and memory, or neural pruning, for example, all directly affect the children in the classroom. It seems, particularly from the Colloquy sessions at U of T, that there is a steady movement toward integrating and collaborating on new ideas and research in music and health/neuroscience. I hope the same can happen for teachers in the classroom, including music educators.
Schlaug, G., Forgeard, M., Zhu, L., Norton, A., Norton, A. and Winner, E. (2009), Training-induced Neuroplasticity in Young Children. Annals of the New York Academy of Sciences, 1169: 205–208.
Retrieved from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3005566/
Because studies have shown that professional musicians who start music training before age 7 have a larger anterior corpus callosum (CC) than non-musicians, it has been suggested that such music training drives plasticity in the CC in early childhood. To test this theory, the researchers examined the impact that 29 months of instrumental music training would have on the subareas of the corpus callosum.
The study divided 31 children (5–7 years old) into three groups: high-practicing, low-practicing, and controls. Eighteen of the children attended half-hour private or semi-private instrumental lessons, while 13 children formed the control group and received no instrumental training. At the beginning and end of the study, the children underwent high-resolution T1-weighted MR brain scans and completed a four-finger fine motor-skill sequencing task.
After approximately 29 months of observation, the results showed that, for the high-practicing group, there was a difference in the anterior mid-body of the corpus callosum, along with an improvement on the motor-skill sequencing task. By contrast, children in the low-practicing and control groups did not show any difference in the CC. These results provide evidence that the larger anterior CC area reflects early, intensive, and prolonged music training rather than preexisting differences.
This study reinforces what we have discussed in class about brain plasticity and the importance of midline crossing. The fact that early music training affects the size of the corpus callosum, mainly because of midline crossing, has great meaning for music educators in terms of pedagogy. Would similar results occur if we compared children who participate in a multisensory approach to learning in the music classroom, with a music specialist, versus children who do not have a separate music class but still learn music through their homeroom teacher? Would there be a significant difference in the size of the CC subarea between those who had music training outside of school and those who only participated in a multisensory music classroom?
Additionally, the fact that brain plasticity occurs across the lifespan opens the door for more exciting research on music and rehabilitation. As we have seen in the colloquy, collaboration between musicians, educators, and all those in the medical/rehabilitation field is so important. I would also be interested to see more studies of adults and seniors who learn to play an instrument later in life. Given what we know about brain plasticity and the power of music, I would hope that the results would be positive.
Tuesday, December 6, 2011
Music Therapy Interventions for Improving Fluency Among People Who Stutter
by Erika Shira
This article by Erika Shira is a music therapist’s overview of why music can be effective in treating people who stutter.
Shira argues that music therapy in general is an effective means of evoking neurological change because participating in interactive music-making stimulates multiple areas of the brain simultaneously. The brain functions most optimally when multiple areas are working together, as areas that work particularly well can compensate for areas that work less efficiently, all the while "teaching" the less developed areas how to rework themselves to function better. When a person participates in live music, the brain must process sound, vibrations, movement, emotional states, and sequential patterns that are processed in the same way as language.
It is widely recognized that dysfluency is a multifaceted disorder, which can include psychological, motor, and auditory processing issues, to name a few. A music therapist must first determine whether an individual’s stutter is primarily anxiety-related or whether there is motor difficulty. This can be difficult, as most individuals who struggle with dysfluency will likely manifest anxiety during speech.
The benefit of music therapy is that musical expression can address both primary aspects of dysfluency (anxiety and motor difficulty). For anxiety, musical expression can be a confidence builder. Most individuals who stutter are able to sing without dysfluency, and if given a composition exercise where a story is conveyed in the first person, they get the experience of fluent self-expression through song. For motor-related difficulty, Shira compares stuttering therapies to gait therapies. In its simplest form, gait therapy is the rehabilitation of walking through the use of a metric beat – an even pulse that the patient aims to walk to, rehabilitating their gait to even, regulated intervals. Music therapy takes that one step further: the therapist plays live music to the gait, accompanying the patient and matching their gait – even or not. While the individual is walking to a familiar song, Shira explains that “the rhythm of the song is processed in the temporal lobe, the order of the melody is processed in the frontal lobe and language areas, the lyrics are processed in the language areas, the personal meaning of the song is processed in the emotional areas, and so forth. With these areas all working together, the individual is very aware of when he or she is walking unevenly, as this causes the song to be played with pauses and hesitation. The brain wishes to correct the song, and the other areas of the brain work together with the motor cortex to better coordinate the person's movements.”
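The idea of matching live music to a patient's steps can be pictured as a simple feedback loop: measure the intervals between steps, then set the musical pulse to match. A toy sketch of that tempo-matching step (my own illustration with invented step times, not Shira's method):

```python
# Toy sketch: estimate a musical tempo that matches a patient's step timing.
# The step timestamps and the one-beat-per-step rule are illustrative
# assumptions, not taken from Shira's article.

def tempo_from_steps(step_times):
    """Estimate a matching tempo in beats per minute from step timestamps (seconds)."""
    if len(step_times) < 2:
        raise ValueError("need at least two steps")
    intervals = [b - a for a, b in zip(step_times, step_times[1:])]
    mean_interval = sum(intervals) / len(intervals)
    return 60.0 / mean_interval  # one beat per step

# An uneven gait: step intervals of 0.9 s, 1.1 s, and 1.0 s
steps = [0.0, 0.9, 2.0, 3.0]
print(round(tempo_from_steps(steps)))  # averages out to 60 bpm
```

A live therapist does far more than average intervals, of course; the sketch only shows why an uneven gait still yields a usable musical pulse to accompany.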
Treatments for stuttering work in a very similar manner, with initial sessions devoted to the singing of familiar songs to establish that, through song, fluency is possible. Songs might be sung alone or with vocal support (family can easily be included in treatment), eventually advancing to the composition of first-person storytelling and even sung, improvised dialogue. The goal is to move to a more spoken style of singing, and then to remove the accompaniment and pitches, resulting in normal speech built on the sung approach.
In my experience as a vocalist, I have encountered several individuals with debilitating stutters who are miraculously free from dysfluency in song. Though I understood this was a fairly broad phenomenon, I have wondered if and how it might be applied through music therapy, and whether such therapies can lead to greater fluency in speech. Knowing that music and language both elicit complex neurological activity and share a lot of overlap in the active centres of the brain has left me suspecting that music therapies are full of potential to mitigate dysfluencies. This article does not address the research indicating that stuttering is strongly linked to auditory processing issues, but I have to wonder if this too could be addressed through music therapy. Currently, auditory-based interventions for stuttering such as delayed auditory feedback do not cure the stutter, but merely manipulate auditory feedback so the individual no longer hears themselves in real time. I wonder whether it might be possible to train the ear using music therapy to resolve some of these processing issues. I know Tomatis considered this possibility, and I will continue to look for research on auditory processing therapies (not just interventions) and their effect on fluency.
Your Brain on Improv by Dr. Charles Limb
In this video, Dr. Charles Limb explains that it is possible to scientifically explain how brain activity relates to music-making. Using functional magnetic resonance imaging (fMRI), it is possible to obtain blood oxygen level-dependent (BOLD) imaging of active areas of the brain. When an area of the brain is active, blood flow increases in that area; that blood flow causes change in the concentration of deoxyhemoglobin. This change in deoxyhemoglobin content can be detected by BOLD-fMRI, and thus it is possible to determine which parts of the brain are more or less active depending on the amount of blood flowing through them. Dr. Limb summarizes his findings by showing three experiments, in which the subjects were asked to either improvise or perform a memorized melody or a text.
In the first experiment, during the fMRI, subjects (all jazz performers) were required to either play a memorized melody or improvise a new one on a MIDI keyboard connected to a computer. Both memorized and improvised melodies were played over the same harmonic progressions. The resulting BOLD-fMRI contrast maps showed that some areas of the brain are activated and others de-activated when the subjects improvise versus when they play memorized melodies. Specifically, when improvising, a self-monitoring area turned off while an autobiographical, self-expressive area turned on. Dr. Limb's hypothesis is that being creative involves one area of the frontal lobe going up in activity while another goes down.
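The contrast-map logic can be sketched in miniature: subtract the memorized-condition signal from the improvised-condition signal, voxel by voxel, and flag regions whose difference crosses a threshold in either direction. A deliberately tiny illustration (the 2×3 "brain" and all its values are invented):

```python
# Minimal sketch of a condition contrast, in the spirit of BOLD-fMRI contrast
# maps: improvised minus memorized, voxel by voxel. All numbers are invented.

improvised = [[1.2, 0.4, 0.9],
              [0.3, 1.5, 0.6]]
memorized  = [[0.5, 0.6, 0.9],
              [0.9, 0.7, 0.6]]

# Difference map: positive = more active while improvising, negative = less.
contrast = [[i - m for i, m in zip(row_i, row_m)]
            for row_i, row_m in zip(improvised, memorized)]

threshold = 0.5
activated   = [(r, c) for r, row in enumerate(contrast)
               for c, v in enumerate(row) if v >= threshold]
deactivated = [(r, c) for r, row in enumerate(contrast)
               for c, v in enumerate(row) if v <= -threshold]

print(activated)    # voxels more active while improvising
print(deactivated)  # voxels less active while improvising
```

Real fMRI analysis involves statistical modeling across many scans, not a single subtraction, but the sketch captures why the same data can show activation in one region and de-activation in another.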
In the second experiment, the subjects were asked to improvise. The results showed that the activity in Broca's area (usually associated with language) increased. Dr. Limb suggested that there might be a neurologic basis for the notion of music as a language.
In the third experiment, the subjects were asked to either perform a memorized rap or improvise one over cue words. In both cases, improvisation and language areas were activated. When free-styling with closed eyes, visual areas and major cerebellar motor-coordination areas were also activated.
The video offers an overview of some important issues related to brain activity in music-making and musical creativity. I was particularly interested in the concept highlighted by Dr. Limb that musical creativity might entail the simultaneous activation of certain areas of the brain and de-activation of others, as a mechanism to prevent the interference of inhibitions during the creative process. This seems to suggest that, in solo piano jazz improvisation (we did not see any other type of instrumental interaction or ensemble situation in the experiments), the brain recognizes and classifies rhythmic and melodic patterns as either known or new, and reacts accordingly by activating or shutting down certain areas. However, improvisers do not “invent” music; they have the ability to combine, in a strikingly short time span, a multitude of melodic and rhythmic patterns and harmonic progressions that are already known to them. I wonder whether, by the word creativity, Dr. Limb means this fast combination of memorized patterns, or a broader activity involving factors not highlighted in the experiment. This point remains slightly unclear to me.
Also in the third experiment, I found it interesting that free-styling with closed eyes activates visual and coordination areas of the brain on top of language areas. As for most performers, visualizing one’s own body seems to be an important element for free-styling rappers too. I wonder whether the activation of these areas during free-styling is related to the practice of improvisation or has more to do with the lack of sight. As a performer, I find it useful to visualize my body performing at the keyboard, as this helps me acquire precision and a more secure approach to the instrument.
Huron, David. "Science & Music: Lost in music." Nature (22 May 2008), 453, pg. 456-457. Web. 6 December 2011.
"Linguists know how fast languages disappear. Musical cultures may be an order of magnitude more fragile. It will be many centuries before the whole world speaks Mandarin. Meanwhile Western music has swept the globe faster than aspirin...We have perhaps just a decade or so before everyone on the planet has been brought up with Western music or its derivatives."
Homogenization of music across the world makes the task of a cognitive neuroscientist studying music more difficult. Do we perceive certain intervals as dissonant because of how our ears have been trained by environmental (cultural) stimuli, or are the intervals inherently dissonant to our ears because of our biological make-up? Is major naturally perceived as “happy” and minor naturally perceived as “sad?” If everyone in the world is exposed to the same styles of music, we won’t have any way to compare different musical languages to find ways to answer these sorts of questions. We won’t be able to discern whether “a behaviour [is] an innate cognitive disposition, or just an artefact of westernization.”
Other rich musical cultures are alive and well throughout the world, but people in countries such as China and India are constantly exposed to Western music, which infiltrates the music of those cultures. If we can study the music of those cultures in their non-Westernized forms now, we will probably gain more insight into the question of nature vs. nurture in the cognitive neuroscience of music.
In the same way some conscientious growers choose to plant heirloom seeds instead of the homogenized varieties more easily available, we should do what we can to encourage musicians across the world to maintain their musical cultures, and not only for the benefit of cognitive neuroscientists. Just as I often learn more through collaboration with a colleague with a different perspective from my own, the world benefits from the diversity of its cultures. Variety is the spice of life, as the old adage goes, and I’d rather not live in a bland world.
Monday, December 5, 2011
Ilari, Beatriz, and Linda Polka. “Music cognition in early infancy: infants’ preferences and long-term memory for Ravel.” International Journal of Music Education 24.1 (2006).
In this study, Ilari and Polka challenge the belief that infants are passive listeners with limited perceptual and cognitive skills for music. This assumption can be seen in the types of music found on recordings, toys, and videos for babies, such as Baby Einstein. The music is typically simple, with clear distinctions between melody and accompaniment, basic I-V-I harmonies, and predictable forms. The music on infant-directed CDs is mostly limited to short, simple, highly repetitive pieces from the Baroque and Classical periods (such as Bach minuets). These resources influence parents, who continue to play simplistic music to their young children.
Previous research used the Headturn Preference Procedure (HPP) to determine that infants prefer high-pitched singing over low-pitched singing. Similarly, two studies found that babies generally prefer the piano over other timbres, as shown by heart-rate deceleration when listening to piano music.
Based on what was already known about infants’ listening preferences and abilities, Ilari and Polka studied infants under the age of 8 months, examining their responses to, and long-term memory for, two pieces from Maurice Ravel’s Le Tombeau de Couperin: the Prelude and the Forlane. Subjects were played both pieces, with half of the babies listening to the solo piano version and the other half to the orchestral version. This part of the study used HPP to determine the babies’ preference for either the Prelude or the Forlane. The researchers found that infants listening to the orchestral version preferred the Prelude over the Forlane, while the infants in the piano group did not show a reliable preference.
Part 2 of the study was designed to test infant memory for complex music. Infants were again split into two groups; one group listened to the piano version of the Prelude for ten consecutive days while the other group listened to the piano version of the Forlane. Following a 14-day period where the infants did not listen to any Ravel, the researchers brought the babies back into the lab where they played both pieces to see if the babies showed recognition for their designated piece. The results were clear and conclusive: babies listened significantly longer to the familiar piece than the unfamiliar piece.
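The recognition measure behind this result is, in principle, a comparison of mean listening times: did infants listen longer to the piece they had been familiarized with? A toy sketch of that comparison (all numbers are invented, not the study's data):

```python
# Toy sketch of the familiarity comparison: mean listening time to the
# familiarized piece vs. the novel piece. Per-infant times are invented.

familiar_secs   = [12.1, 9.8, 14.0, 11.5, 10.2]   # listening times (s), familiar piece
unfamiliar_secs = [7.3, 8.1, 9.0, 6.8, 7.9]       # listening times (s), novel piece

def mean(xs):
    return sum(xs) / len(xs)

diff = mean(familiar_secs) - mean(unfamiliar_secs)
print(f"familiar {mean(familiar_secs):.1f}s vs unfamiliar {mean(unfamiliar_secs):.1f}s "
      f"(difference {diff:.1f}s)")
```

The actual study would test such a difference for statistical significance across all infants, not just report raw means; the sketch only shows the shape of the comparison.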
This study challenges the current practice of exposing infants to overly simplistic music such as lullabies, folk melodies, and repetitive music from the Baroque and Classical periods. It also raises the further question of what aspect of music is being stored by infant brains: what part of a particular piece of music serves as a trigger for later recognition?
I found this study incredibly illuminating. I, too, am guilty of underestimating the cognitive abilities of babies in terms of music listening. I volunteer in the nursery at my church, where we have a collection of “baby CDs”. I find them difficult to listen to because they often feature Mozart sonatas or symphonies played on synthesized instruments such as xylophones and celestas. After reading this study, I will now bring music that is stimulating to babies as well as enjoyable for me.
The findings of this study also challenged the way I think about the important first lessons with a new beginner. In beginning piano, we also focus on teaching folk melodies because they are likely to be familiar to children. While children's physical abilities limit teachers to these simple songs, I now believe that we should supplement lessons with “music acculturation” CDs: collections of more complex music, perhaps in simple timbres such as the piano, that stimulate children’s listening by challenging them to hear beyond basic four-bar phrases and simple harmonies.
My hope is that the findings of this and other related research will begin to change the music listening options currently available to parents of newborns. Besides making for more enjoyable listening for the parents, I wonder whether early exposure to complex music will affect the music learning styles and abilities of children at a later age.
The Journal of Stuttering Therapy, Advocacy & Research Vol. 3, Iss. 1: “Stuttering: A Look at the Problem,” by George G. Helliesen
This article, which introduces a series on understanding stuttering, provides an overview of what science and speech therapy practitioners know about the cause and treatment of stuttering. Published in 2006, it surveys the latest research in the field of speech disorders, among other valuable information.
The overview includes recent studies conducted by Dr. Christine Weber-Fox and Dr. Anne Smith of Purdue University (2004), Dr. Anne Foundas (2005) of Tulane University, and Dr. Dennis Drayna (2004) at the National Institute on Deafness and Other Communication Disorders, just a few of the studies in the literature on stuttering. While many studies have begun to focus on organicity as the primary cause of stuttering, all of these studies agree that stuttering emerges from complex interactions among factors including genetics, language processing, emotional/social aspects, and speech motor control. These factors can vary in significance from patient to patient, or even in a single patient over time. Dr. Foundas (2005) in particular “believes that developmental stuttering is a complex motor speech disorder with a strong genetic link and that different therapies may benefit different biologically specific types of stuttering.”
From this overview of studies it is clear that dysfluencies observed in individuals who stutter may be reduced under a number of conditions, including choral reading (where a group reads aloud in unison) and altered auditory feedback (AAF). Therapy using delayed auditory feedback (DAF) is a vital part of Van Riperian therapy and enhances the client’s oral proprioceptive feedback, which is used in teaching a stutterer to monitor the movement of their speech articulators. This decreases dependency on auditory feedback, helping to maintain appropriate fluency: the stutterer shifts focus from listening to their speech production to feeling the movement of their articulators as they speak. This is one of the primary techniques used to help a stutterer maintain control over their stuttering and decrease dysfluencies. According to Van Riper (1973), “In terms of servo theory, since speech seems to be automatically controlled by feedback and there seems to be some real evidence that some failure in the auditory processing system produces the basic disruptions, we train the stutterer to monitor this speech by emphasizing proprioception thus bypassing to some degree that auditory feedback system.”
Helliesen concludes the article by pointing out that the timing of therapy is also crucial. A candidate must be “ready” for therapy and have support to stick with their program. Programs for stutter correction often elicit a rediscovery of self; they are difficult, and they teach “controlled speech,” which must be used continually to maintain any degree of control, and the process may not always be pleasant. The benefits, however, are measurable, though only evident over time.
I found this article very helpful in my search for understanding stuttering and the music therapies related to its treatment. Though I have been aware that music is often a tool in the approach to easing dysfluency, I didn’t know why until now. As a singer, much of this makes perfect sense. The idea that DAF is simply a way of decreasing dependency on auditory feedback clarifies several points for me. Though research is still ongoing, the evidence leans toward confirming that stuttering is, at least in part, a disorder of the auditory processing system, and research on treatments further corroborates that addressing the ear leads to successful reduction of dysfluency. I am also interested in the idea that DAF is just one method of reducing dependence on auditory feedback. This explains why playing music to obscure one’s voice (à la The King’s Speech) is also effective.
In the study of singing, one learns early on that there are many pitfalls to listening too intently to oneself. In fact, many voice pedagogies advocate blocking or delaying auditory feedback so that a singer is not dependent on their ears to assess their sound, but instead puts the emphasis on sensation. This seems to parallel the therapeutic strategies for stuttering, which might explain why some stutterers find freedom from their dysfluency in song. Could it be that they have learned to become less dependent on auditory feedback while singing?
In some respects I have more questions after reading this article than I did before; however, I feel confident that this overview of stuttering has set me on a clearer path to understanding the possible benefits music therapies have in its treatment.
Sunday, December 4, 2011
Source: The Mind of an Artist
Retrieved: December 2, 2011, from a Library of Congress podcast with Michael Kubovy and Judith Shatin
This video from the Library of Congress features cognitive psychologist Michael Kubovy and composer Judith Shatin speaking about the mind of the artist, and how composers incorporate extra-musical elements in their compositions. Both Michael Kubovy and Judith Shatin are from the University of Virginia.
Professor Kubovy spoke first, focusing on meaning in music. According to Kubovy, this topic has a long history, and a tarnished one at that, since music with extra-musical connotations is often considered less than pure. Kubovy proposed just the opposite: that musical works without extra-musical connotations are extremely unlikely to work.
Language priming experiments show that there is an associative network between meanings in our brain. If concept A has a close association with concept B, our brain’s processing response from A to B is faster. For example, when one hears the word ‘cat’ followed by the word ‘meow’, our brain processes ‘meow’ quickly because ‘cat’ and ‘meow’ have a close association. In a sense, by saying ‘cat’ the brain has been primed to hear the word ‘meow’. If the brain heard the word ‘cat’ followed by the word ‘refrigerator’, the processing of ‘refrigerator’ would be slower because there is not a clear association between ‘cat’ and ‘refrigerator’.
Kubovy went on to speak about event-related potentials, or ERPs. The N400 is a component of the ERP elicited by unexpected stimuli: a negative deflection roughly 400 milliseconds after the stimulus, whose amplitude indicates how much processing the brain had to do given the previous context. Kubovy explained that in a language priming experiment, a larger N400 means the brain did more processing on a word because it was not expecting that word, as with ‘cat’ followed by ‘refrigerator’ above. A smaller N400 means the brain did less processing because it was expecting the word, as in the ‘cat’–‘meow’ example.
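The priming logic can be mocked up with a tiny association table: strongly linked prime-target pairs cost less processing (mirroring a smaller N400) than unrelated pairs. This toy model is my own illustration, not anything from the talk; the association strengths and cost formula are invented:

```python
# Toy semantic-priming model: related prime-target pairs cost less
# "processing effort" than unrelated pairs, mirroring a smaller N400.
# The association strengths and the cost formula are invented.

associations = {
    ("cat", "meow"): 0.9,   # strongly linked concepts
    ("dog", "bark"): 0.9,
}

BASE_COST = 400  # arbitrary units of processing effort for an unprimed word

def processing_cost(prime, target):
    strength = associations.get((prime, target), 0.0)
    # A stronger prime-target link discounts the processing cost.
    return round(BASE_COST * (1.0 - 0.5 * strength), 1)

print(processing_cost("cat", "meow"))          # primed: 220.0
print(processing_cost("cat", "refrigerator"))  # unprimed: 400.0
```

Real priming effects emerge from spreading activation across a learned semantic network rather than a lookup table, but the lookup captures the core prediction: related primes reduce the processing signature of the target.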
Scientists in Germany did a priming experiment with music. They took a word and primed it with two types of music. The word in question was ‘wideness’; the first piece of music to precede it was by Strauss, and the second was an accordion piece. The N400 was smaller with the Strauss priming than with the accordion music, meaning that there was some association in the minds of the subjects between ‘wideness’ and the music of Strauss.
Experiments like the one above suggest that music and language are more closely related than one might think, which makes sense considering that the brain areas activated by language and music overlap quite a bit. Composer Judith Shatin followed this discussion by speaking about her own compositions and how these issues relate to her work. Her feeling is that whenever one is listening to music, shapes and ideas come to mind. Sometimes sounds can imitate things in the natural world; for example, in Prokofiev’s Peter and the Wolf, a flute is used to represent the character of the bird. Why is this, and why does this association seem natural to listeners? Is it because the register of the flute is similar to the register of many bird songs? There is much to consider here. She continued by playing selections from her own works that, in her mind, exemplify associations between language and music. The listening audience seemed to agree on the extra-musical associations of her pieces, making it clear that the music-language connection is a tangible and important one to consider from a compositional perspective.
As a performer, these ideas ring very true to me, since many extra-musical ideas come to mind every time I play. These ideas can range from associations with tangible things, such as a bird or the wind, to more abstract concepts, such as rates of acceleration or rhetorical devices. Finding the meaning in the music you are performing and communicating that meaning to audiences is, in my opinion, one of the most important tasks of a professional performer.
Yet it is inevitable that at some point musicians will disagree on the meaning of a particular passage, and whenever this occurs I find it very curious. It leads me to believe that many, perhaps most, associations are built more from life experiences than from quantitative properties of the music. I often wonder about the most basic musical associations, and whether they are natural associations or the result of repeated hearings. A perfect example would be the concept that major music is happy and minor music is sad. Is this really a natural association? If you could somehow find a person who had never heard music, would they react with happy emotions to major chords and keys? Is it even possible to study such a thing? Infants may be blank musical slates, for example, but they do not possess the language and cognitive skills necessary to communicate the idea of happiness. When I consider the major/minor question, it makes me wonder if I am finding meaning in music or projecting my own meaning onto music.
Saturday, December 3, 2011
Pereira, Carlos Silva, João Teixeira, Patrícia Figueiredo, João Xavier, São Luís Castro, and Elvira Brattico. "Music and Emotions in the Brain: Familiarity Matters." PLoS ONE 6(11): e27241, 2011.
The goal of this study was to identify which regions of the brain are involved in music appreciation. Using a listening test and a functional magnetic resonance imaging (fMRI) experiment, the researchers wanted to know how familiarity with a piece of music correlates with the brain's response to it. The subjects chosen for this study had no formal musical education but described themselves as ‘music lovers’ who listened to music on a daily basis. First, the subjects took a listening test in which they heard pop/rock song excerpts and decided whether each song was familiar or unfamiliar and whether they liked it or not. Based on this test, a unique set of stimuli was selected for each participant, containing music in four conditions: familiar liked, familiar disliked, unfamiliar liked, and unfamiliar disliked. This set was then presented during an fMRI session.
Brain activation data revealed that broad emotion-related limbic and paralimbic regions, as well as the reward circuitry, were significantly more active for familiar music than for unfamiliar music. Smaller regions in the cingulate cortex and frontal lobe, including the motor cortex and Broca's area, were more active in response to liked music than to disliked music. The study concluded that, as revealed by the fMRI data, familiarity is a crucial factor in making listeners emotionally engaged with music.
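To make the "familiar versus unfamiliar" comparison concrete, here is a minimal sketch of the kind of condition contrast such studies compute. This is not the authors' actual analysis pipeline (which involves whole-brain statistical modelling), and all numbers are invented: we simply average a single voxel's responses across the study's four conditions and subtract unfamiliar from familiar.

```python
# Toy contrast for one voxel in a hypothetical reward-related region.
# Values are invented % signal change figures, one per trial.
from statistics import mean

responses = {
    "familiar_liked":      [0.8, 0.9, 1.1, 0.7],
    "familiar_disliked":   [0.6, 0.7, 0.5, 0.8],
    "unfamiliar_liked":    [0.3, 0.2, 0.4, 0.3],
    "unfamiliar_disliked": [0.1, 0.3, 0.2, 0.2],
}

# Collapse across liking to isolate the familiarity effect.
familiar = mean(responses["familiar_liked"] + responses["familiar_disliked"])
unfamiliar = mean(responses["unfamiliar_liked"] + responses["unfamiliar_disliked"])
contrast = familiar - unfamiliar

print(f"familiar:   {familiar:.3f} % signal change")
print(f"unfamiliar: {unfamiliar:.3f} % signal change")
print(f"familiar > unfamiliar contrast: {contrast:.3f}")
```

A positive contrast in a voxel, tested for significance across subjects, is what underlies a claim like "more active for familiar music compared to unfamiliar music."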
Music is omnipresent in our society, and it represents a multibillion-dollar industry. One of the reasons behind this success is the ability of music to convey emotions. This study is very interesting because it shows how familiarity with a piece of music increases the emotional response in our brain. The more you hear a song, the more it increases blood oxygenation in emotion-related regions of the brain. This conclusion echoes the findings of a previous study by Blood and Zatorre, which reported a correlation between the intensity of felt chills and activity in reward- and emotion-related brain regions when listening to favourite pieces of music.
In my personal experience, I have found that I have the deepest emotional response to songs that I know. One might think that by knowing a song very well, it becomes predictable, and consequently there is nothing new and exciting to hear anymore. On the contrary, I think that by knowing every part of a song, the brain does not have to focus on analysing new data and can instead focus on enjoying the piece, which can sometimes lead to a more powerful emotional response in the form of chills or goose bumps. Some studies have also shown that patients with severe brain conditions such as dementia or Alzheimer’s have strong brain activation responses when hearing familiar music.