Congenital Amusia and Dyslexia: Questions and Possibilities
In Robert Jourdain’s book “Music, the Brain, and Ecstasy”, the term amusia is defined as “referring to any upset in perceiving, comprehending, remembering, reproducing, reading, or performing music.”[1] He goes on to explain the neurological divide between receptive amusia, a difficulty in listening to, comprehending, or following pitch contours, and expressive amusia, an inability to reproduce musical patterns or sounds.[2]
What we would more commonly call tone-deafness, Jourdain places in the category of receptive amusia, since it is not so much a problem of pitch production as a difficulty in hearing pitch contours and relations. In the scholarly literature, this condition is known as congenital amusia.
Jourdain suggests that “[congenital amusia] may even be a musical counterpart to dyslexia (disordered reading),” citing evidence that it appears to be a hereditary disorder and that both congenital amusia and dyslexia occur more often in males.[3] Later in the book, he tells the unfortunate story of the composer Maurice Ravel, who suffered progressive left-hemispheric damage in his final years and subsequently lost the ability to string words together correctly, to read, and, later, to write music.[4]
His condition is referred to as aphasia, a general term for the loss of linguistic abilities. Jourdain describes this perceived connection between language and music, noting that although amusia (typically thought to stem from right-hemispheric deficits) and aphasia (centered in the left hemisphere) seem to be unconnected, “the two temporal lobes communicate fiercely, and failure on one side can make the other stumble. [There] are also aspects of language that rely on the right brain.”[5]
Although this subject had not been explored very thoroughly at the time Jourdain was writing in the late 1990s, and he acknowledges other theories that point away from such conclusions about the relatedness of congenital amusia and language disabilities, much research on this connection has been done recently. Most prominent is a study out of Harvard’s Department of Neurology, conducted by Psyche Loui, Kenneth Kroog, Jennifer Zuk, Ellen Winner, and Gottfried Schlaug, entitled “Relating Pitch Awareness to Phonemic Awareness in Children: Implications for Tone-Deafness and Dyslexia” (2011). Phonemic awareness is defined as “the ability to process and manipulate spoken words made up of individual sounds or phonemes,” and it is one of the characteristics used to identify children with dyslexia.[6]
This study examined the correlation between pitch awareness, which the researchers define as a combination of pitch perception and production, and phonemic awareness.[7] The results showed an association between pitch awareness and phonemic awareness, and the researchers suggest that this points to a connection, and possibly a common basis, between dyslexia and congenital amusia.[8]
Some earlier researchers in the field strongly disagree with even the premise of these studies, arguing that they are too general in their definition of phonemes. José Morais of the Free University of Brussels, Belgium, argues that musical tones and the phonemes produced in language are entirely different: musical tones are just sounds, whereas phonemes are “abstractions of the units into which language might be broken down.”[9] The other half of Morais’ argument, however, rests on what he sees as the poor “quality of published empirical studies” and his suggestion that the field itself may be to blame for a lack of discretion about which studies are deemed legitimate.[10]
Despite Morais’ obvious qualms with the entire line of questioning, his 2010 statement cannot discredit Loui and her colleagues’ aforementioned study on the relationship between “pitch awareness” and “phonemic awareness”. Most recently, Loui and Schlaug conducted a 2012 study entitled “Impaired learning of event frequencies in tone deafness”, which found that people who suffer from congenital amusia also have difficulty learning event frequencies.[11] Conditional probability, in this context, concerns our ability to judge how likely it is that one thing follows another (which is important in our understanding of speech), whereas event frequency has more to do with learning words, or with forming a sense of tonal centre in music.[12] The study concluded that the tone-deaf participants’ impaired sensitivity to event frequency, taken together with the findings of other studies on congenital amusia and conditional probability, suggests strong links between language-learning and music-learning abilities.[13]
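To make the distinction between these two statistics concrete, here is a minimal, hypothetical Python sketch (not drawn from Loui and Schlaug’s materials; the tone names and sequence are invented purely for illustration) that tallies both measures over a toy stream of tones:

```python
from collections import Counter, defaultdict

# Toy sequence of tones; syllables in a stream of speech could be treated the same way.
sequence = ["C", "E", "G", "C", "E", "C", "G", "E", "C", "C"]

# Event frequency: how often each tone occurs overall.
# A listener tracking this might notice that "C" dominates,
# roughly the intuition behind sensing a tonal centre.
event_freq = {tone: count / len(sequence)
              for tone, count in Counter(sequence).items()}

# Conditional (transition) probability: how likely each tone is to follow
# the one before it, the statistic thought to matter for segmenting
# continuous speech into words.
transition_counts = defaultdict(Counter)
for prev, nxt in zip(sequence, sequence[1:]):
    transition_counts[prev][nxt] += 1

cond_prob = {prev: {nxt: n / sum(followers.values())
                    for nxt, n in followers.items()}
             for prev, followers in transition_counts.items()}

print(event_freq)   # e.g. P("C") across the whole sequence
print(cond_prob)    # e.g. P("E" given the previous tone was "C")
```

The sketch is only meant to show that the two measures are computed differently from the same stream of events, which is why sensitivity to one can, in principle, be impaired independently of the other.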
Although the research is relatively recent and not enough work has yet been done to draw clear conclusions about the exact nature of, and reason for, the connection, there is certainly evidence that some degree of connection between language- and music-processing abilities exists. Whether this will have major implications for helping those struggling with learning disabilities as devastating as dyslexia remains to be seen, but researchers seem hopeful. Music therapy is still commonly used to help those with dyslexia manage their disability, and it seems to thrive despite the rather scathing title Morais gave the press release for “Music and Dyslexia”: “Music Therapy Fails Dyslexics.”[14]
In conclusion, it seems to me that much work remains to be done to uncover the nature of the connections between music and language, if they do exist. One overarching theme does emerge from all of the uncertainty, however: music is a great stimulator of many parts of our brain, and we are only just beginning to understand its potential as a neurological tool.
[1] Robert Jourdain, Music, the Brain, and Ecstasy (New York: HarperCollins, 1997), 286.
[2] Robert Jourdain, 287.
[3] Robert Jourdain, 113.
[4] Robert Jourdain, 290-291.
[5] Robert Jourdain, 291.
[6] Psyche Loui et al., Relating Pitch Awareness to Phonemic Awareness in Children: Implications for Tone-Deafness and Dyslexia (Frontiers in Psychology, 2011), 1.
[7] Psyche Loui et al., 2.
[8] Psyche Loui et al., 4.
[9] José Morais, Music and Dyslexia (Int. J. Arts and Technology, 2010), 177-194.
[10] José Morais, 177-194.
[11] Psyche Loui and Gottfried Schlaug, Impaired Learning of Event Frequencies in Tone Deafness (New York: Ann. NY Acad. Sci., 2012), 358.
[12] Psyche Loui and Gottfried Schlaug, 358.
[13] Psyche Loui and Gottfried Schlaug, 358.
[14] José Morais, "Music Therapy Fails Dyslexics," EurekAlert, 8 Apr. 2010.
Works Cited
Jourdain, Robert. Music, the Brain, and Ecstasy. New York: HarperCollins, 1997.
Loui, Psyche, and Gottfried Schlaug. "Impaired Learning of Event Frequencies in Tone Deafness." Annals of the New York Academy of Sciences 1252 (2012): 354-60. The Music and Neuroimaging Laboratory. Web. 22 Oct. 2012. <www.musicianbrain.com>.
Loui, Psyche, Kenneth Kroog, Jennifer Zuk, Ellen Winner, and Gottfried Schlaug. "Relating Pitch Awareness to Phonemic Awareness in Children: Implications for Tone-Deafness and Dyslexia." PMC: US National Library of Medicine, 30 May 2011. Web. 18 Oct. 2012. <http://www.ncbi.nlm.nih.gov>.
Morais, José. "Music and Dyslexia." Int. J. of Arts and Technology 3 (2010): 177-194.
Morais, José. "Music Therapy Fails Dyslexics." EurekAlert, 8 Apr. 2010. Web. 20 Oct. 2012. <http://www.eurekalert.org>.
1 comment:
I agree that there is significant evidence to suggest a connection between language and music processing abilities. In response to José Morais’ comment that musical tones are “just sounds” whereas phonemes are “abstractions of the units into which language might be broken down”, I think there is enough evidence proving that musical tones are also part of a hierarchical structure. For example, a tone in a melody can imply a specific harmony, especially if it is near a cadential point in a piece of music. While language and music are structured differently, they both have multiple levels of organization. For example, a phoneme is part of a subunit, which is part of a word, a noun or prepositional phrase, a sentence, and a paragraph. A musical tone is part of a chord which has a function within the musical phrase, and a relationship to the key of the piece. Therefore, music has its own units, forming a hierarchical structure. While the structures of music and language are quite different, there are also some similarities. Ravel’s situation seems to indicate that there is a relationship between language and music since his aphasia was eventually accompanied by amusia. It makes sense that the amusia would come later on since, like you said, music affects so many different areas of the brain. I find this particular case interesting since many of the cases of aphasia I have heard about deal with musicians who never do develop amusia.