Patel, A. D. (2003). Language, music, syntax and the brain. Nature Neuroscience, 6(7), 674-681. Retrieved from http://www.nature.com.myaccess.library.utoronto.ca/neuro/journal/v6/n7/full/nn1082.html
Summary
Patel's article proposes a similarity between syntactic processing in language and music: specifically, that syntax processing regions for language and music overlap significantly in the frontal area of the brain. This hypothesis offers an explanation for an apparent contradiction in music and language syntax research. While neuroimaging data show an overlap in the processing of syntactic relations in language and music, neuropsychology shows that linguistic and musical syntax can be dissociated. For example, in cases of Broca's aphasia, people with a deficit in language production may still be able to communicate through music. Conversely, individuals showing no signs of aphasia may have an impaired perception of harmonic relations in music.
In contrast to Peretz and Coltheart's claim that music and language rely on largely separate cognitive and neural systems, Patel proposes a distinction between syntactic representation and syntactic processing. The basis for this distinction is that at least some of the processes involved in syntactic comprehension occur in different areas of the brain from those where the syntactic representations reside. He proposes that the mental representations of musical and linguistic syntax are quite different from each other, but that the syntactic processing of music and language is similar.
To support his claim, Patel draws on two theories: Gibson's Dependency Locality Theory for language and Lerdahl's Tonal Pitch Space Theory for music. Gibson's Dependency Locality Theory states that sentence comprehension involves two distinct components: structural storage, for keeping track of grammatical predictions, and structural integration, for connecting each incoming word to the prior word on which it depends in the structure. The neural resources required for integration increase with the distance between the incoming word and the earlier word to which it attaches.
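To make the distance idea concrete, here is a minimal sketch in Python (my own toy illustration, not Gibson's exact formula, which counts intervening new discourse referents rather than raw word positions): integrating an incoming word is costlier the farther back its dependent sits.

```python
# Toy sketch of Dependency Locality Theory's integration cost:
# connecting two dependent words costs more the farther apart they are.
# (An illustrative simplification, not Gibson's actual cost metric.)

def integration_cost(earlier_index: int, incoming_index: int) -> int:
    """Distance, in word positions, between two syntactically dependent words."""
    return incoming_index - earlier_index

# "The reporter who the senator attacked admitted the error."
words = ["the", "reporter", "who", "the", "senator",
         "attacked", "admitted", "the", "error"]

# "admitted" must be integrated with its distant subject "reporter":
print(integration_cost(words.index("reporter"), words.index("admitted")))  # 5

# "attacked" integrates with the adjacent "senator": a cheap, local step.
print(integration_cost(words.index("senator"), words.index("attacked")))   # 1
```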
Musical syntactic processing in Lerdahl's Tonal Pitch Space Theory is somewhat more complex, since pitch space is hierarchical rather than sequential. For example, pitches whose frequencies are related by a 2:1 ratio are perceived as octave equivalents and share the same letter name. Within the scale there is a hierarchy of importance, with the root, 3rd, and 5th degrees perceived as more stable than the other degrees, and entire musical keys are perceived in terms of their distance from one another, with closely related keys, such as relative majors and minors, heard as nearby. It is therefore possible to quantify the tonal distance between any two chords in a sequence, so the idea that mentally connecting distant elements requires more resources applies to both language and music syntax processing.
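As a rough illustration of what "quantifying tonal distance" can mean (a deliberate simplification on my part; Lerdahl's actual metric combines regional, chordal, and pitch-class levels), one ingredient is the distance between chord roots on the circle of fifths:

```python
# Simplified stand-in for one component of Lerdahl's tonal distance:
# how far apart two chord roots lie on the circle of fifths.
# (Illustrative only; Lerdahl's full pitch-space metric is far richer.)

CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B",
                    "F#", "C#", "G#", "D#", "A#", "F"]

def fifths_distance(root_a: str, root_b: str) -> int:
    """Shortest number of steps between two roots around the circle."""
    i = CIRCLE_OF_FIFTHS.index(root_a)
    j = CIRCLE_OF_FIFTHS.index(root_b)
    d = abs(i - j)
    return min(d, len(CIRCLE_OF_FIFTHS) - d)

print(fifths_distance("C", "G"))   # 1: a close, stable move (I to V)
print(fifths_distance("C", "F#"))  # 6: maximally distant, harmonically jarring
```

On this toy measure, a chord sequence that jumps from C to F# demands far more "integration" than one that moves from C to G, paralleling the long-distance dependencies in language.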
Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) suggests an overlap in the syntax processing regions for music and language. He hypothesizes that these shared processing regions provide the resources for syntactic integration, while the syntactic representations themselves reside in separate representation regions. Patel therefore proposes that the dissociations between musical and linguistic syntactic abilities in people with acquired amusia arise from damage to the representation region where long-term knowledge of harmonic relations is stored, while the processing regions remain functional for speech. Similarly, congenital amusia is described as the developmental failure to form cognitive representations of musical pitch.
Response
Patel's hypothesis about music processing in people with aphasia is interesting. Very little testing seems to have been done in this specific area so far. He proposes that syntactic comprehension deficits in language are related to harmonic processing deficits in music. Under the SSIRH, then, a person with aphasia may be able to sing a melody with clear words, yet their ability to process harmonic changes in music would still be impaired to some extent. If this could be demonstrated with testing, it would be strong evidence for a relationship between music and language. I suppose testing people with aphasia on their musical and linguistic abilities is a sensitive area, but it seems that results from harmonic priming, a music cognition paradigm that Patel suggests, could help us further understand the relationship between music and language processing in the brain. In my opinion, Gabby Giffords's ability to sing with clear words despite struggling with speech after her injury does indeed suggest a relationship between music and language processing. If one skill can help bring back another, the two would appear to be somehow linked. Since music is the more complex of the two, it would make sense that it could help restore speaking abilities.
1 comment:
I thought your blog post raised some very intriguing points about the similarities of music and language. I find it especially interesting that people with a language deficit can still communicate through music, as you mention in your post in regard to people with Broca's aphasia. I took a continuing ed. class last year called "How the Brain Works". The professor mentioned that a friend of his has a very bad stutter, but that when he sings the stutter completely disappears. I wonder why this is? Could it be that singing uses different pathways in the brain than normal speech, which explains why people with damage to the language areas of the brain can still use music to communicate? It's interesting how the brain is so interconnected!