Tuesday, November 13, 2012

Music Therapy & Emotions for Depression, Stress & Mental Health


References:

Music Therapy for Rehabilitation & Education: Using Keyboard and Piano

Music Therapy & Emotions for Depression, Stress & Mental Health Issues


Hope Young is the founder of the Center for Music Therapy in Austin, Texas. The two YouTube videos I’ve referenced show her describing some of the methods she might use in two different music therapy situations.

In “Music Therapy & Emotions for Depression, Stress & Mental Health”, Young discusses the use of music therapy in treating depression. She notes the importance of assessing the patient’s musical taste and possible musical associations, and of generally figuring out where they’re coming from emotionally. She presents a hypothetical situation: we as viewers have just woken up in a psychiatric ward after a suicide attempt.

She describes the types of severe depression most commonly associated with suicidal patients: either what she calls the “empty tin man” feeling, depression so severe that the patient can’t feel anything, or intense anger that they have turned inwards upon themselves. Young says that it is extremely important to begin where the person is – in other words, you shouldn’t play a happy song to a depressed person. If the person is sad, she might start with something slower in a minor key; if they’re angry, something more aggressive, paired with activities like banging a drum or throwing things against the wall in time with the music.

Young then says that if a person doesn’t want to talk, she may simply play something, improvising in a style they like or singing something they might relate to or have associations with. She feels it can be just as important not to speak and to simply let the music connect with the person. From that point on in the session, Young says she may not play another note, or she might become an instrument for the patient, helping them change the words of a song to describe how they feel.

Young says that the music therapist’s job can also be to give people a way to organize their emotions. She describes music’s ability to open up many emotions, and the idea of using the music as a “container” to hold the emotion and prevent the patient from becoming overwhelmed. From an opened state, she says she might use the music to help close the container again, or leave it open if another therapist or doctor is coming afterwards to work with the patient on deeper issues.

“Music Therapy for Rehabilitation & Education” is a similar video short describing some of the particular ways a keyboard instrument might be used in various rehabilitation contexts. For Young, the keyboard’s value lies not only in helping people regain fine motor control through its specific movements and key weights, but also in letting people improvise. She shows how asking people to improvise on only the white or black keys will always sound pleasing, making it a motivating and rewarding experience. She believes it would be a mistake to separate the human need for motivating, emotional, and rewarding experiences from the physical need for rehabilitation.
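As a rough illustration of why the black-key trick works: the five black keys form a pentatonic scale containing no semitone intervals, so even random note choices avoid harsh clashes. Here is a minimal Python sketch (the note numbers and function are my own illustration, not anything demonstrated in the video):

```python
import random

# The five black keys form a pentatonic scale. As MIDI note numbers in
# one octave they are F#4, G#4, A#4, C#5, D#5. The set contains no
# semitone intervals, so any sequence drawn from it avoids harsh
# clashes, which is why black-key improvisation reliably sounds pleasing.
BLACK_KEYS = [66, 68, 70, 73, 75]

def improvise(length=8, seed=None):
    """Return a random 'improvisation' as a list of MIDI note numbers."""
    rng = random.Random(seed)
    return [rng.choice(BLACK_KEYS) for _ in range(length)]

print(improvise(seed=1))
```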

One interesting trick she demonstrates is how little songs can be used as memory devices to help patients remember important details about their day, or important information like their phone number. She plays an example, setting the words “My name is Mary and I’m in room 3” to the tune of “Mary Had a Little Lamb”.

Reflection:

Even though these were not the most scientific videos about music and the brain that I’ve come across, it was interesting to see a little of the practical work a music therapist might do with a patient in a session. Although I’m aware in a vague sense of what music therapists do, I had no clear picture of exactly what it would look like in a session, outside of an academic context. This was especially true of the snippet about treating a patient with severe depression. I think her most important points revolved around the fact that we are ultimately human and need to find ways to connect to things emotionally, and this is especially true for those experiencing devastating depression.

To sum it up, this was my favourite quote, from the “Music Therapy for Rehabilitation” video: “You are human, you are naturally musical, and if anyone tells you otherwise you send them to me.” I would imagine many people think music therapy wouldn’t work for them because they’re not musical; I think there is more than enough evidence showing otherwise, so perhaps it’s just a matter of changing that perception.

Congenital Amusia and Dyslexia


Congenital Amusia and Dyslexia:
Questions and Possibilities

In Robert Jourdain’s book “Music, the Brain, and Ecstasy”, the term amusia is defined as “referring to any upset in perceiving, comprehending, remembering, reproducing, reading, or performing music.”[1] He goes on to explain the neurological divide between receptive amusia, a difficulty in listening to, comprehending, or following pitch contours, and expressive amusia, an inability to reproduce musical patterns or sounds.[2] What we would more commonly refer to as tone-deafness, Jourdain places in the category of receptive amusia, since it is not so much a trouble of pitch production as a difficulty in hearing pitch contours and relations. In scholarly practice, this affliction is commonly known as congenital amusia.
            Jourdain suggests that “[congenital amusia] may even be a musical counterpart to dyslexia (disordered reading),” noting that the disorder appears to be hereditary and that both congenital amusia and dyslexia appear more often in males.[3] Later in the book, he tells the unfortunate story of the famous composer Maurice Ravel, who suffered progressive left-hemispheric damage in his last years and subsequently lost his ability to string words together correctly, to read, and later, to write music.[4] His condition is referred to as aphasia, a general term for the loss of linguistic abilities. Jourdain describes this perceived connection between language and music, noting that although amusia (typically thought to be caused by right-hemispheric deficiency) and aphasia (centered in the left hemisphere) seem to be unconnected, “the two temporal lobes communicate fiercely, and failure on one side can make the other stumble. [There] are also aspects of language that rely on the right brain.”[5]
            Although the subject had not been thoroughly explored at the time Jourdain was writing in the late 1990s, and he acknowledges other theories that point away from such conclusions about the relatedness of congenital amusia and language disabilities, much research has been done recently regarding this connection. Most prominent is a study out of Harvard’s Department of Neurology, conducted by Psyche Loui, Kenneth Kroog, Jennifer Zuk, Ellen Winner, and Gottfried Schlaug, entitled “Relating Pitch Awareness to Phonemic Awareness in Children: Implications for Tone-Deafness and Dyslexia” (2011). Phonemic awareness is defined as “the ability to process and manipulate spoken words made up of individual sounds or phonemes,” and it is one of the characteristics that identifies children with dyslexia.[6] This study examined the correlation between pitch awareness, which the researchers define as a combination of pitch perception and production, and phonemic awareness.[7] The results showed an association between the two, and the researchers suggest that this points to a connection, and possibly a common basis, between dyslexia and congenital amusia.[8]
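Since the study’s claim rests on a correlation between two scores, a toy example may help make it concrete (my own sketch with made-up scores, not the study’s data; statistics.correlation requires Python 3.10+):

```python
import statistics

# Hypothetical per-child scores on the two measures (made-up numbers).
pitch_awareness = [0.2, 0.5, 0.4, 0.8, 0.9, 0.3, 0.7, 0.6]
phonemic_awareness = [0.3, 0.4, 0.5, 0.7, 0.8, 0.2, 0.6, 0.7]

# Pearson correlation: a value near +1 would mirror the kind of
# association the study reports between the two abilities.
r = statistics.correlation(pitch_awareness, phonemic_awareness)
print(round(r, 2))
```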
Some earlier researchers in the field strongly disagree with even the premise of these studies, arguing that they are too general in their definition of phonemes. José Morais of the Free University of Brussels, Belgium, argues that musical tones and the phonemes produced in language are entirely different: musical tones are just sounds, while phonemes are “abstractions of the units into which language might be broken down.”[9] The other half of Morais’ argument, however, points to what he sees as a lack in the “quality of published empirical studies” and suggests that the field itself may be to blame for a lack of discretion in deciding which studies are deemed legitimate.[10]
            Despite Morais’ obvious qualms with the entire line of questioning, his 2010 statement cannot discredit Loui and her colleagues’ aforementioned study on the relationship between “pitch awareness” and “phonemic awareness”. Most recently, Loui and Schlaug conducted a study this past year entitled “Impaired learning of event frequencies in tone deafness”, which found that people who suffer from congenital amusia also have difficulty learning event frequencies.[11] Conditional probability, in this particular context, has to do with our ability to judge how likely it is that one thing follows another (which is important in our understanding of speech), whereas event frequency has more to do with learning words, or being able to form a sense of tonal centre in music.[12] The study concluded that this impaired sensitivity to event frequency among the tone deaf, added to the findings of other studies regarding congenital amusia and conditional probability, suggests strong links between language-learning and music-learning abilities.[13]
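To make the distinction between the two statistics concrete, here is a small Python sketch (my own illustration over a made-up note sequence; the actual studies used carefully controlled tone streams):

```python
from collections import Counter

def event_frequencies(seq):
    """Relative frequency of each event, the statistic Loui and Schlaug
    found to be poorly learned in tone deafness."""
    counts = Counter(seq)
    return {event: n / len(seq) for event, n in counts.items()}

def conditional_probabilities(seq):
    """P(next | current) over adjacent pairs, the statistic tied to
    segmenting speech and melody in statistical-learning research."""
    pair_counts = Counter(zip(seq, seq[1:]))
    first_counts = Counter(seq[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

melody = ["C", "E", "G", "C", "E", "G", "A", "C"]
print(event_frequencies(melody))          # e.g., C appears 3/8 of the time
print(conditional_probabilities(melody))  # e.g., P(C | G) = 0.5
```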
            Although the research is relatively recent and there has not yet been enough work done to draw clear conclusions about the exact nature of and reason for the connection, there is definite evidence that a connection between language and music processing abilities exists to some degree. Whether this will have major implications for helping those struggling with learning disabilities as devastating as dyslexia remains to be seen, but researchers seem to remain hopeful. Music therapy is still commonly used in helping those with dyslexia manage their disability, and it seems to thrive despite the somewhat scathing title Morais gave the public release of “Music and Dyslexia” – “Music Therapy Fails Dyslexics”.[14]
            In conclusion, it seems to me that much work remains to be done to discover the nature of the connections, if it turns out that they exist, between music and language. One overarching theme does come out of all of the indecision, however – music is a great stimulator of many parts of our brain, and we are only just beginning to understand its potential as a neurological tool.


[1] Robert Jourdain, Music, the Brain, and Ecstasy. (New York: HarperCollins, 1997) 286.
[2] Robert Jourdain, 287.
[3] Robert Jourdain, 113.
[4] Robert Jourdain, 290-291.
[5] Robert Jourdain, 291.
[6] Psyche Loui, et al., Relating Pitch Awareness to Phonemic Awareness in Children: Implications for
    Tone-Deafness and Dyslexia. (Frontiers in Psychology, 2011) 1.
[7] Psyche Loui, et al., 2.
[8] Psyche Loui, et al., 4.
[9] José Morais, Music and Dyslexia. (Int. J. Arts and Technology, 2010) 177-194.
[10] José Morais, 177-194.
[11] Psyche Loui and Gottfried Schlaug, Impaired Learning of Event Frequencies in Tone Deafness.
     (New York, Ann. NY Acad. Sci., 2012) 358.
[12] Psyche Loui and Gottfried Schlaug, 358.
[13] Psyche Loui and Gottfried Schlaug, 358.
[14] José Morais, Music Therapy Fails Dyslexics. (EurekAlert, 8 Apr. 2010).



Works Cited

Jourdain, Robert. “Music, the Brain, and Ecstasy.” New York: HarperCollins, 1997.

Loui, Psyche, and Gottfried Schlaug. "Impaired Learning of Event Frequencies in
 Tone Deafness." Annals of the New York Academy of Sciences 1252 (2012):
 354-60. The Music and Neuroimaging Laboratory. Web. 22 Oct. 2012.
 <www.musicianbrain.com>.

Loui, Psyche, Kenneth Kroog, Jennifer Zuk, Ellen Winner, and Gottfried Schlaug.
"Relating Pitch Awareness to Phonemic Awareness in Children: Implications
 for Tone-Deafness and Dyslexia." PMC: US National Library of Medicine. N.p.,
 30 May 2011. Web. 18 Oct. 2012. <http://www.ncbi.nlm.nih.gov>.

Morais, José. “Music and Dyslexia.” Int. J. of Arts and Technology Vol. 3 (2010): 177-
194.

Morais, José. "Music Therapy Fails Dyslexics." EurekAlert. N.p., 8 Apr. 2010. Web. 20
 Oct. 2012. <http://www.eurekalert.org>.

Monday, November 12, 2012

Four Applications of Embodied Cognition

Four Applications of Embodied Cognition
Article: Davis, J. I. (2012). Four Applications of Embodied Cognition. Topics in Cognitive Science, 4(4), 786–793.

This article discusses the concept of embodied cognition from four viewpoints by seven different authors. The viewpoints consider embodied cognition (how the body and environment shape the mind) and its relation to, or influence on, our understanding of the legal system, art and literature, architecture, and music cognition. For this blog post I will discuss two of these views: 1) literature and art and 2) music cognition. While the first view does not have to do with music, it provides some interesting points regarding literature and art.

1) Embodiment In Literature and Visual Art - by Ellen Esrock

Esrock discusses how reading and viewing have traditionally been seen as "fundamentally non-bodily", and how recent scholarship on embodied cognition considers how the body is involved in the acts of reading and viewing art. Esrock starts the article with a great example of how we might have a "bodily response" to what we read or view: when reading about a seamstress' hand sewing, we may "feel" physical tension in our own hands, and when viewing a piece of art depicting a woman embroidering, we may "feel" the fabric or the needle moving through it. Esrock calls this experience of feeling what we read or view "transomatization", a sort of "bodily immersion" into the text or piece of art. It's as if we mimic what we read or view.

Esrock goes on to discuss how embodied cognition views emotions as "bodily". She says that current studies are looking at empathy in literature and art - how it is that we can have an emotional response when reading about or viewing a depiction of an emotional moment. Esrock adds that literary and visual studies of embodied cognition, which examine areas of "human thought, emotion and behaviour", may be helpful to other disciplines. She concludes by saying that teaching "embodied" subjects how to read and view literature/art may help them develop "cognitive and affective skills in other areas of life".

2) Embodied Music Cognition - by Leon van Noorden and Marc Leman

Van Noorden and Leman start their article by saying that "embodied music cognition sees music experience as based on perception and action". They discuss the idea that movement and music are intertwined, and that in many cultures music and dance are not considered separate arts; it is through movement that people may find meaning in music. This contrasts with traditional approaches to music cognition, which look at musical meaning from a "disembodied" perspective - one based only on perception, with little consideration of how the mind, body and environment are interconnected. Van Noorden and Leman say that ongoing research is now interested in how the human body is implicated in the creation of meaning in music, and they go on to discuss a few interesting examples of this.

Van Noorden and Leman mention that embodied music cognition may help us understand how music impacts social interactions. They discuss recent studies showing that children move more synchronously with music when they dance in a group than when they dance individually. They also discuss the phenomenon of "resonant perception-action coupling", which occurs around a tempo frequency of 2 Hz (120 bpm). Studies have observed how resonance around 2 Hz influences walking, and have shown that children around 3 to 4 years of age can only synchronize with music when it is played at around 2 Hz. Other studies have shown that 2 Hz is the best frequency for rocking babies to sleep. Interestingly, Van Noorden and Leman explain that from about 5 years of age onward, children start to synchronize at more varied (faster and slower) tempi. The authors state that it is as if "the older children learn to put brakes on the resonator".

Van Noorden and Leman also discuss how technology has made use of the concept of embodied music cognition. The program DJogger, used on personal music players, offers digital music that matches the tempo of one's walking or running. The assumption is that synchronizing one's walking or running to music is motivating and stimulating (perhaps this synchronizing induces a sense of flow, making it easier to coordinate movements or continue the activity for a longer period of time?). They mention another example of technology using the concept of embodied cognition: the Sync-in Team game, which uses synchronization and entrainment in a "social music interaction game". They don't elaborate on this point, but they do say that programs such as these were shown to create a sense of "presence and flow".
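The article doesn't describe DJogger's algorithm, but the core idea is easy to sketch. Below is a minimal, hypothetical Python example (the track names, tempos, and functions are my own placeholders) that converts a runner's step frequency to beats per minute and picks the closest-tempo track; note that the 2 Hz resonance frequency mentioned above works out to exactly 120 bpm:

```python
def hz_to_bpm(step_hz):
    """Convert a step frequency in Hz to beats per minute."""
    return step_hz * 60

def pick_track(step_hz, library):
    """Return the track whose tempo is closest to the runner's cadence."""
    target_bpm = hz_to_bpm(step_hz)
    return min(library, key=lambda track: abs(library[track] - target_bpm))

# Hypothetical library mapping track names to tempos in BPM.
library = {"Track A": 118.0, "Track B": 140.0, "Track C": 95.0}

# A runner at the 2 Hz "resonance" cadence (120 bpm) gets Track A.
print(pick_track(2.0, library))
```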

Van Noorden and Leman conclude their article by commenting on how embodied music cognition may change our understanding of music and meaning. Instead of the traditional approach of focussing on how meaning is derived from a "perception-based" analysis of musical content, embodied music cognition looks at how musical meaning is formed through perception and action. For example, many people move when they hear music - this is one way in which we derive meaning from music. Finally, the authors say that embodied music cognition may impact how we understand social cognition, through "concepts of movement and emotion synchronicity or entrainment".

Response:

I thought these two articles were a good introduction to the concept of embodied cognition. I found the idea that we "feel" what we read or view very interesting. I can't say I have ever felt powerful emotions when reading or viewing art, but I can definitely say I have been moved to tears many times when viewing films; however, I think a large part of this may be due to the music that accompanies the visuals. It would be interesting to re-watch a film I have had an emotional response to, but without the music. I wonder if I would have the same reaction, or if the music is mostly responsible for my emotional response.

In the second article, I found the idea that we can find meaning in music through movement to be quite interesting. I have a preference for music that has a "groove" to it; sometimes it feels as though I cannot help but bob my head, as if I'm almost part of the music. I feel this even more so when I actually play music with a groove. I suppose this has to do with the concepts of entrainment and synchronicity, where that sense of "flow" or "presence" occurs. Van Noorden and Leman mention how programs like DJogger create this sense of "presence and flow". I think the idea of jogging to music that matches your speed is a great one; I certainly have a very hard time jogging to music when I'm not matching its beat. I wonder if this is my musician brain reacting, or if we all have a natural impulse to synchronize with a beat.

Overall, I thought these articles were interesting overviews of the concept of embodied cognition. However, I thought there were some ideas that both articles could have elaborated on. One concept I would have liked to know more about is the Sync-in Team game and exactly how it uses entrainment in a "social music interaction game". I also would have appreciated more background on how embodied music cognition can help us understand social cognition.

Can anyone share any knowledge or thoughts on this topic?

Music Therapy, Alzheimer's and Post-Traumatic Stress


Music Therapy, Alzheimer's and Post-Traumatic Stress
Podcast from the Library of Congress Music and the Brain Series
URL: http://www.loc.gov/podcasts/musicandthebrain/mp3/loc_musicandthebrain_clair.mp3
Host: Steve Mencher
Guest: Alicia Clair (Prof. of Music Ed. and Music Therapy at the University of Kansas)

In this podcast, host Steve Mencher talks to Alicia Clair about how she uses music therapy with her Alzheimer's patients, as well as with veterans with post-traumatic stress disorder and traumatic brain injuries.

Mencher first brings up the fact that music therapy in the US received a "boost" in the 1940s, when it was used as a successful treatment for veterans coming home from WWII. He asks Clair about this, and she explains that music was used as a daily treatment regimen for soldiers who had shell shock and various other disabilities caused by the stress of war. At that time, hospitals had full music programs, orchestras and choirs in which patients participated. Although therapists didn't know how music was affecting their patients on a neurological level, they did know that music therapy was helping to relieve stress and increase social engagement among the patients.

Mencher asks Clair what was going on in the brains of these veterans, in what we now know as post-traumatic stress disorder. He asks: what was happening to them and how was it helping? Clair answers that when people hear music, it automatically "dampens" their autonomic nervous system. In response, breathing may deepen and heart rates may slow down. There may even be a release of muscle tension. Music gives patients a "space" where they can be free of anxiety, where they can "let go".

Clair continues by discussing how, when people have traumatic brain injuries, psychologists need to go back and do a lot of "remapping". They may do physical therapy in which music is used to entrain rhythm and facilitate motor movement, and they may also use music in what she calls "tension control training".

Mencher asks Clair what "executive function training" is. Clair responds that executive function has to do with higher-order decision making and judgement, which are very important in daily life, as well as with the ability to control impulses. She mentions that although veterans may recover on a physical level, they may not regain the full cognitive function that would allow them to go back to work or interact normally with family and friends.

Mencher asks how music helps with the "remapping" of the brain, and what "remapping" means in this context. Clair says that the brain is plastic: it has the potential to change. If an injury to one part of the brain results in a loss of physical movement, the brain can re-wire itself to use a different area for that lost function. However, Clair says this takes a lot of work. She continues on to say that music probably has its greatest impact on the rehabilitation of motor movement (walking, speech, etc.), as it activates the motor centres of the brain and helps patients synchronize their movements.

Clair then discusses the music she uses with her patients. She says that music therapists quite often compose or improvise music for specific purposes and with certain tempos in mind. The music used in therapy is often live, but therapists also prescribe recorded music for patients to practice with at home. Clair goes on to say that sometimes the most "successful" music in therapy is the music patients listened to in their early adult years (about ages 15 to 23), since hearing this music elicits associations that can be visual, olfactory, auditory or emotional. While this music can be the most "successful" for therapy, Clair says it can also get in the way, depending on what the therapist is aiming to achieve (perhaps the associations and emotions that are conjured up hinder other cognitive rehabilitation).

Mencher continues on to ask Clair about her work with Alzheimer's patients in the late stages of the disease. She discusses how music can be a helpful way for her patients to engage socially, whether through dancing, movement, or singing in groups or with family members. Clair also discusses how music can be used to decrease stress in Alzheimer's patients, who often experience confusion or are easily disturbed by changes in their environments. She mentions that if stress levels are low, patients are more likely to adapt to change without difficulty. They are also more likely to engage in "procedural memory" (how to put on pants or a shirt); when stress is high, these habitual memories are suppressed. Since music helps dampen the autonomic nervous system, it reduces stress and increases quality of life for her patients.

Mencher concludes by asking Clair if she has anything else she would like to share with the audience. She responds that even if a caregiver or family member has no musical training, he/she should try to use music with the Alzheimer's patient. Clair mentions that singing instructions is actually more effective than giving them verbally, because the patient's brain processes instructions with a melodic component more easily than those without one. Clair concludes that using music is the "most endearing and close connection" a caregiver can have with the patient.

Response:

I found this podcast to be very interesting and informative. I was amazed to hear that therapists have been using music in the rehabilitation of war veterans since the 1940s; I had no idea that music has been used in rehabilitation for so many years. I was also intrigued by the fact that music "dampens" our autonomic nervous systems, slowing our breathing and heart rate, and thereby reducing anxiety and stress. It's quite fascinating that even when our memory and cognition are weakened (as in Alzheimer's patients), music can affect us on such a deep, primal level.

Another interesting point about the effects of music on Alzheimer's patients is how music helps reduce their stress and enables them to engage in "procedural" memory (putting on pants, etc.). This makes me wonder how stress impacts cognitive function in general. Does stress hinder our performance of cognitive tasks, learning or memory? This podcast suggests that it does. In this last (stressful) month of classes, perhaps we should all make sure to listen to some music before we study or write our papers!

Music and Memory

Janata, P. (2010). Music, Memories, and the Brain. Petr Janata: Music and the Brain [podcast], April 29, 2010.
Link: http://www.loc.gov/podcasts/musicandthebrain/podcast_janata.html [Accessed: November 11th, 2012].

Music and Memory

This podcast is from the Library of Congress’ Music and The Brain series. It is an interview with Dr. Petr Janata, associate professor at the University of California, Davis, and a member of the Center for Mind and Brain. Dr. Janata is interested in how basic neural systems that underlie perception, attention, memory, action, and emotion interact in the context of natural behaviors, with an emphasis on music.

The subject of this podcast is Dr. Janata’s interest in and research on what he calls “music-evoked autobiographical memories.” Imagine driving down the highway in your pickup truck when you suddenly hear something familiar on the radio. Out of the speakers comes the classic rock song that was playing in the background while you proposed to your wife 20 years ago. You are instantly transported back in time, remembering the temperature outside, the smell of her perfume, the feeling of butterflies in the pit of your stomach. That is a music-evoked autobiographical memory.

Janata felt that music would be a great way to look at the brain structures associated with experiential memories, hoping to get some insight into their organizational hierarchy. For his research he used functional magnetic resonance imaging (fMRI), a procedure that measures brain activity by detecting associated changes in blood flow, along with a questionnaire to gather data from his subjects.

He chose individuals in whom he knew he could elicit memories. While they were in the fMRI scanner he played them multiple 30-second musical excerpts, including songs familiar to the subjects and numerous other random selections. After each excerpt the subjects were asked to rate the familiarity of the song, what memories were evoked, how pleasing the song/memory was, and so on.

Janata and his team used those subjective responses to set up a statistical model for analyzing the data, figuring out which regions of the brain changed depending on the level of emotion felt and/or the strength of the memory. Janata also mentions that he can pair this data with previous research on movement through major/minor tonalities – enabling him to track the regions of the brain that follow things such as tonal keys, melodies, and chord progressions. He is essentially watching the brain listen.
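As a toy illustration of what such a model might look like (this is my own sketch with random placeholder data, not Janata’s actual analysis pipeline), one can regress a single brain region’s trial-by-trial activity on the subjective ratings and see which ratings carry weight:

```python
import numpy as np

# Toy sketch: regress one voxel's trial-by-trial activity on the
# subjective ratings gathered after each excerpt. All data are random
# placeholders, not real fMRI measurements.
rng = np.random.default_rng(0)
n_trials = 30

familiarity = rng.uniform(0, 1, n_trials)      # rating after each excerpt
memory_strength = rng.uniform(0, 1, n_trials)  # rating after each excerpt

# A fake voxel that "tracks" memory strength, plus measurement noise.
voxel_signal = 0.8 * memory_strength + 0.1 * rng.standard_normal(n_trials)

# Design matrix: an intercept column plus the two rating regressors.
X = np.column_stack([np.ones(n_trials), familiarity, memory_strength])
betas, _, _, _ = np.linalg.lstsq(X, voxel_signal, rcond=None)

# A large weight on memory_strength flags this region as memory-tracking.
print(betas)
```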

Reflection

I am very interested in the idea of music-evoked autobiographical memories, and it is certainly something that I experience myself. We have talked about the subject of music and memory a few times in class, in the October 9th lecture of the same title as well as in last week’s class on music and emotion. I enjoy critical discussion of this because it is a subject that is accessible and easily relatable. Surely, as musicians ourselves, our relationship with music is above average, and it is only natural that we have deep associations.

Early on in the podcast Dr. Janata touches on the similarities between memories triggered by the sense of hearing and the sense of smell. He says that there is a tight coupling between the two, that some of the same regions of the brain are activated. I found this very interesting; I often catch a whiff of something that sends me back to a moment many years ago, much like when I hear a familiar and meaningful song. When I thought about this I realized that for me, smells often trigger events or memories associated with my childhood and family - situational memories - while music will trigger more personal memories and feelings, like love, independence, and even past insecurities. This concept is something I wish to explore further. Do others have similar experiences?

As previously mentioned, I often get transported back in time through music. Honestly, any song from 1997 will immediately take me back to the summer when my mother stopped being a full-time housewife and went back to the workforce. That was the summer I gained independence, the summer I started my own cassette and occasional CD collection, and, coincidentally, the summer I started going through puberty. We discussed this in class this week… Is it a coincidence? Are there certain stages in your life when music becomes more meaningful? Perhaps it just naturally lines up with the influential periods in our lives; certainly, adolescence would qualify. Or is it that these periods in our lives seem influential because we are merely more impressionable – that these are simply the stages when we define our own personal taste?

I wanted to learn more about Dr. Janata’s research so I did some of my own. I hope to use this study as a basis for my own essay. Here are some websites I found:




Saturday, November 10, 2012

Harmonic Perception in Children




Katherine Napiwotzki

MUS 2122H 

30 October 2012



                Knowledge regarding children’s ability to perceive changes in Western harmony is continuously developing. It is still becoming clear exactly how and when a child begins to demonstrate harmonic knowledge. In his book Music, the Brain, and Ecstasy, published in 1997, Jourdain gives an approximate timeline for the development of harmonic perception in children. He states that, prior to age five, a child has little sense of harmonic relations. Children become sensitive to key membership (the notes belonging to a particular key) at age five, but lack any sense of consonance and dissonance relative to other keys (Jourdain, 112); they do not yet process harmonic chords and chord progressions. Recently, however, a study by Corrigall and Trainor (2010) found that children as young as four years old display some knowledge of Western harmony, and it is possible that the ability to perceive Western harmony begins to develop even earlier.
                Researchers seem to agree that harmonic perception is one of the last musical skills to fully develop (Jourdain, 257; Corrigall, 195), because it is significantly more difficult to detect harmony than pitch, rhythm, amplitude, metre, or phrasing. A growing ability to perceive harmony involves both the ability to predict how chords might appear in sequence and a sense of the “hierarchy of stability” (Corrigall, 195) of particular chords within a key. For example, the ability to differentiate between tonic and dominant chords is necessary to detect a cadence at the end of a piece. It is believed that a general ability is developed throughout childhood “through mere exposure to Western music” (Corrigall, 195). A study by Trehub, Cohen, Thorpe, and Morrongiello (1986) found that infants could detect a change in both tonal and atonal melodies, while four- and five-year-olds could only detect a change in tonal melodies (Corrigall, 195). This would seem to suggest that already at the ages of four and five, children are beginning to develop implicit knowledge of Western harmony. As we passively experience music in our daily lives, “incoming sound is [still] extensively processed in the brain stem” (Jourdain, 245); our brain constantly processes sound, often on an unconscious level. When we actively listen to a piece of music, whether classical or popular, our brain searches for familiar devices and patterns we have previously experienced in order to anticipate the next note or harmony (Jourdain, 246). This knowledge develops as we get older, so that “adults with no musical training have extensive implicit knowledge of Western harmony” (Schellenberg, 553).
                In 1994, Trainor and Trehub tested children aged five and seven, as well as adults, on their ability to detect a change in a 10-note melody presented repeatedly in transposition (Schellenberg, 553). The study was intended to test each age group’s sensitivity to implied harmony, since different notes within a melody imply specific harmonies (Corrigall, 196). The test therefore included two types of stimuli: changed notes that were part of the implied harmony and changed notes that were not. It found that adults were faster to detect a changed note when the note was not part of the implied harmony, and seven-year-olds responded similarly to the adults, which seems to suggest knowledge of implied harmony at age seven. Five-year-olds, however, did not respond in this way. These findings are in line with Jourdain’s statement that,
                [although] young children can detect key changes, they lack all sense of near and far keys – that is, of relative consonance and dissonance.  This understanding does not set in until seven or eight (Jourdain, 112).
                However, when tests are simplified, it becomes apparent that children younger than seven are influenced by harmonic changes. For example, Schellenberg, Bigand, Poulin-Charronnat, Garnier, and Stevens (2005) discovered that six-year-olds are faster at identifying the timbre of the last chord of a harmonic progression when the progression follows Western harmonic rules by ending on the tonic chord. This does not demonstrate a verbal understanding of what “tonic” means, but it does show that six-year-olds have already developed a preference for the tonic chord simply through exposure to Western music.
                Furthermore, Costa-Giomi (1994) showed that five-year-olds are able to perceive changes between tonic and first-inversion dominant seventh chords, as well as between tonic and subtonic chords, in an unfamiliar piece of music. The test presented both four-year-olds and five-year-olds with four different stimuli: En la torre (containing I and V), presented either as the accompanying chords alone or as the melody with the accompanying chords, and Drunken Sailor (containing i and VII), presented in the same two ways. The results showed that neither four- nor five-year-olds could discriminate between chords when the melody and harmonies were played at the same time, but five-year-olds could perceive harmonic changes when the chords were played apart from the melody (Costa-Giomi, 77).
                So far, an increase in harmonic perception by age five has been shown. Corrigall and Trainor (2010) take this a step further with their discovery that even four-year-olds are sensitive to harmony in a familiar piece, Twinkle, Twinkle, Little Star. Children used hand-held happy and sad face signs to show whether a puppet played the excerpt correctly or incorrectly. The children were tested on three types of endings: out-of-key, in-key but out-of-harmony, and in-key and within-harmony. For example, in the melody-only version, out-of-harmony deviants ended the piece on C# and within-harmony versions ended on F#. Similarly, when the melody was accompanied by chords, out-of-key deviants ended on a D minor chord and out-of-harmony deviants ended on a G major chord. The results showed that both four-year-olds and five-year-olds detected out-of-harmony deviants. The developmental difference was that while five-year-olds could detect out-of-harmony deviants when only the chordal accompaniment was presented, four-year-olds needed both the melody and the chords to do so.
                These results correspond with Trainor and Trehub’s 1994 finding that both adults and seven-year-olds are faster to detect out-of-harmony deviants than in-harmony deviants. Through Corrigall and Trainor’s study, we can observe that five-year-olds also have the ability to detect out-of-harmony deviants when the piece is familiar, and that four-year-olds can detect them when presented with both the melody and the chords of a familiar piece. It is interesting to compare this to Costa-Giomi’s 1994 trial with the same age group, in which the children received three 15-minute harmony training sessions and still did not perform well on the test. This would seem to suggest that harmonic perception in four- and five-year-olds depends largely on the familiarity of the music. In conclusion, Corrigall and Trainor’s study shows that children have some level of harmonic perception earlier than previously thought, and tests could be developed to detect smaller amounts of harmonic perception at even earlier ages.
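The C# and F# examples above imply a D major version of the tune. Assuming that key, the three ending categories can be made concrete in a short Python sketch (my own illustration of the stimulus design, not code from the study):

```python
# The notes of D major, and the notes of its tonic (D major) triad.
D_MAJOR_SCALE = {"D", "E", "F#", "G", "A", "B", "C#"}
TONIC_TRIAD = {"D", "F#", "A"}

def classify_ending(note):
    """Label a final melody note the way the study's conditions do."""
    if note not in D_MAJOR_SCALE:
        return "out-of-key"
    if note not in TONIC_TRIAD:
        return "in-key but out-of-harmony"
    return "in-key and within-harmony"

print(classify_ending("F"))   # out-of-key (F natural is not in D major)
print(classify_ending("C#"))  # in-key but out-of-harmony (as in the study)
print(classify_ending("F#"))  # in-key and within-harmony (as in the study)
```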



Bibliography
Corrigall, Kathleen A. and Laurel J. Trainor. "Musical Enculturation in Preschool Children: Acquisition of Key and Harmonic Knowledge." Music Perception 28, no. 2 (2010): 195-200. http://search.proquest.com/docview/858125331?accountid=14771.

Costa-Giomi, Eugenia. "Recognition of Chord Changes by 4- and 5-Year-Old American and Argentine Children." Journal of Research in Music Education 42, no. 1 (Spring 1994): 68-85. http://www.jstor.org.myaccess.library.utoronto.ca/stable/3345338.

Jourdain, Robert. Music, the Brain, and Ecstasy. New York: Harper Collins Publishers, 1997.

Schellenberg, Glenn E., Emmanuel Bigand, Bénédicte Poulin-Charronnat, Cécilia Garnier, and Catherine Stevens. "Children's implicit knowledge of harmony in Western music." Developmental Science 8, no. 6 (2005): 551-566. http://journals2.scholarsportal.info.myaccess.library.utoronto.ca/tmp/6035240536070998679.pdf.