Reference
Van der Vleuten, M., Visser, A., and Meeuwesen, L. (2012) The contribution of intimate live music performances to the quality of life for persons with dementia. Patient Education and Counseling, 89:484-488.
http://www.ncbi.nlm.nih.gov/pubmed/22742983
Summary
This study investigates the effects of live, vocal music performances on the quality of life of patients with varying severities of dementia. Multiple studies have previously shown that music decreases apathy, depression, and anxiety, and improves self-esteem, general expression, independence, memory, social interaction, and participation in meaningful activities. This publication is one of the first to describe the benefits of live music in particular on total quality of life; previous investigations of live music had been limited to its effects on levels of engagement and well-being. In this publication, live music is suggested as a form of complementary care that could be used in nursing homes.
Four types of music have been shown to have different effects on patients with dementia: music therapy, singing caregivers, background music, and live music.
Music therapy is the targeted, intentional use of music as a therapeutic strategy, administered by a therapist and designed for a specific patient. It has been shown to alleviate pain; decrease stress, fear, and depression; and improve expression of emotions, memory, health, and communication.
The singing caregiver seems to improve mood and the expression of positive emotions. Singing also helps improve caregiver-patient relations and can promote vitality in patients with advanced stages of dementia.
Background music improves expression of positive emotions and promotes playfulness.
Live music appears to have a greater effect on patients with dementia than any other type of music. It has been shown to increase levels of engagement and well-being to a greater degree than recorded music, regardless of the degree of cognitive impairment. The authors attribute this to the participation and social interaction involved in live music.
In this study, live music was performed by professional vocalists wearing fairy-tale-like outfits. The performers had been trained to maximize audience participation by encouraging dancing, maintaining eye contact, and interacting directly with the patients. The performances also included visual props and spoken poetry.
To assess total quality of life, the study used four parameters: participation, mental well-being, physical well-being, and residential conditions. Results were measured via 43 different characteristics, such as body language and reaction when approached, each allotted to one of the following subcategories: human contact, care relationship, communication, positive emotions, or negative emotions. Following the performance, these characteristics were evaluated by the caretakers or family members of 45 dementia patients as either (1) decreased, (2) did not change, or (3) increased.
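To make the scoring scheme concrete, here is a minimal sketch (my own illustration, not the authors' analysis code) of how the three-point caregiver ratings might be tallied per subcategory; the subcategory names come from the paper, but everything else here is assumed.

```python
from collections import Counter

# Hypothetical tally of caregiver ratings: each observed characteristic is
# rated -1 (decreased), 0 (did not change), or +1 (increased) and belongs
# to one of the paper's five subcategories.
LABELS = {-1: "decreased", 0: "did not change", 1: "increased"}

def tally(observations):
    """observations: list of (subcategory, rating) pairs, rating in {-1, 0, 1}."""
    counts = {}
    for subcategory, rating in observations:
        counts.setdefault(subcategory, Counter())[rating] += 1
    return counts

# A few invented example ratings for one patient after a performance.
example = [
    ("positive emotions", 1), ("positive emotions", 1), ("human contact", 0),
    ("communication", 1), ("negative emotions", -1), ("care relationship", 0),
]
for subcategory, c in tally(example).items():
    print(subcategory, {LABELS[r]: n for r, n in c.items()})
```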
The study found that live music had a positive impact on mental well-being and participation, but did not affect physical well-being or residential conditions. The authors suggest incorporating live musical performances as part of complementary care for dementia patients. The findings also suggest that live music performances decrease patient anxiety, aggression, and depression, which in turn decreases the workload of caregivers.
Response
I think the conclusions drawn from this experiment are valid, and merit closer inspection. There is clearly a correlation between increased cognitive presence, expression of positive emotions, and live performances.
My initial concern with the experimental design was that the responses were evaluated by either family members or caretakers, which introduces a variable the paper does not address. It is extremely difficult for a family member to remain fully objective when judging whether a performance is making their loved one happier, because that individual, on a deeply personal level, wants the patient to improve; an emotional bias is likely impossible to remove. To work around this, the evaluations could be done either by caretakers exclusively, or by a group of graduate students or volunteers both before and after the performance, in consultation with the caregivers or family members. This would ensure that the observations were made by a more objective party.
My second critique is that the authors state explicitly that in previous studies, live music increased levels of engagement and well-being due to participation and social interaction. In the review cited for this statement, however, the authors are careful to explain that this is most likely due to participation and social interaction. This seems like a subtle difference, but what if the increase in well-being and engagement is due to the visual component that does not accompany recorded music? Could one play a music video and see similar responses? Are these benefits due to stimulation of both the auditory and visual systems, as opposed to the former in isolation? Could similar results be obtained using iPads instead? In all likelihood it is the social aspect of live music that increases overall quality of life to a greater extent than recorded music, but until these questions have been directly addressed, stating such conclusions with absolute certainty can lead to false assumptions.
It is also important to include a control group, since in this case one cannot decisively say whether the benefits were due to the live music performance alone or in conjunction with the costumes, props, poetry, and direct interaction with the audience. The authors did note that this would need to be improved upon in future studies.
In summary, I feel this study makes a compelling case as to why live music should be incorporated into care strategies for dementia patients in old age homes. Despite some weaknesses in the preliminary study design, there is undeniably a correlation between improvement in quality of life and live music performance. It has certainly inspired me to organize more live musical performances for nursing homes in my community!
Wednesday, October 2, 2013
Sound-Colour Synesthesia and the Nature of its Cross-Modal Mechanisms
Reference
Ward, J., Huckstep, B., & Tsakanikos, E. (2006). Sound-colour synaesthesia: To what extent does it use cross-modal mechanisms common to us all? Cortex, 42(2), 264-280. (Available through UTL catalogue)
Summary
In this study conducted at University College London (London, UK), the nature of cross-modal perception in synesthetes was investigated. The term “synesthesia” describes a neurological condition in which one type of sensory stimulation evokes the automatic and involuntary sensation of another. One of the most common types of audio-triggered synesthesia is chromesthesia, or “coloured hearing”: the hearing of sounds produces the automatic and involuntary visualization of colours and patterns. A group of synesthetes who reported colour sensations in response to music were examined alongside a control group to find out whether this type of synesthesia employs mechanisms similar to those used in normal cross-modal perception (common to all people), or whether there are direct, privileged pathways between unimodal auditory and unimodal visual areas that are unique to synesthetes.
Ward, Huckstep, and Tsakanikos suggest that studies of synaesthesia can be used to inform theories of normal cognition. Chromesthesia can be especially useful in this regard because there is evidence suggesting not only that cross-modal audiovisual mechanisms exist in the normal population in a general sense, but that “coloured hearing” might be present in all of us from birth and disappear over time.
Some studies assume that all humans are born with neural mechanisms capable of synesthesia but somehow lose them during normal development; people who retain this neural hardware retain synesthesia in adulthood. This view suggests that there are special neural pathways in synesthetes that are absent in other adults, i.e., synaesthesia uses cross-modal mechanisms that are not common to us all. Ward, Huckstep, and Tsakanikos question this view in their experiment. The second hypothesis, and the one this particular study is based on, is that sound-colour synaesthesia arises from pathways that are used to integrate audio and visual stimuli as part of the normal mechanisms of cross-modal perception. There exist cross-modal audiovisual areas in the brain that are more responsive to the combined stimuli of sound and visuals than to either stimulus separately (e.g., during lip reading). Because these pathways are common to us all, it is possible that synesthesia is simply an amplified form of them.
Experiment 1 first examined whether these sound-colour associations were arbitrary: were the colours that synesthetes saw in relation to a specific tone the same as, or similar to, those of other synesthetes, or were they drastically different? The authors note that synesthetes had not been compared to non-synesthetes in this regard before this study. However, there are some universals amongst non-synesthetes in terms of the sound-colour relationship, for example: (1) most people can generate visual imagery to music on demand, and (2) most people tend to associate higher-pitched sounds with lighter colours. The results showed that sound-colour synesthetes had greater internal consistency in matching colours to various pitches and timbres than the control group. However, both groups generally used the same heuristics for matching between the audio and visual stimuli (e.g., pitch to lightness, timbre to colour). These results support the hypothesis that sound-colour synesthesia is an extension of cross-modal mechanisms common to us all, rather than a privileged pathway between auditory and visual modalities not present in non-synesthetes.
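To illustrate what “internal consistency” could mean operationally, here is a small sketch (my own, using assumed RGB colour choices, not the authors' actual measure): the closer a participant's repeated colour choices for the same tone, the more consistent the mapping, and synesthetes should score lower (better) than controls.

```python
import math

def consistency(trials):
    """trials: dict tone -> list of (r, g, b) colours chosen on repeated
    presentations. Returns mean within-tone colour distance in RGB space
    (lower = more consistent)."""
    dists = []
    for colours in trials.values():
        for i in range(len(colours)):
            for j in range(i + 1, len(colours)):
                dists.append(math.dist(colours[i], colours[j]))
    return sum(dists) / len(dists)

# Invented example data: the synesthete re-picks nearly identical colours,
# while the control's repeated choices wander widely.
synesthete = {"C4": [(200, 40, 40), (205, 45, 38)], "G5": [(240, 240, 180), (238, 242, 176)]}
control    = {"C4": [(200, 40, 40), (90, 160, 220)], "G5": [(240, 240, 180), (30, 60, 90)]}
print(consistency(synesthete), "<", consistency(control))
```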
Experiment 2 aimed to establish that synesthetes have automatic experiences of colour when presented with a tone. In a procedure similar to a Stroop test, the synesthete and control groups were asked to name the colour of a patch on the screen while simultaneously listening to various tones, which they were told to ignore. Each tone would automatically generate a colour in the synesthetes that was either congruent or incongruent with the colour on the screen. The results showed that colours in synesthetes are elicited so automatically that they are produced even during a cross-modal Stroop task.
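The congruency manipulation can be sketched as follows; the tone-to-colour pairings here are invented placeholders rather than the study's calibrated, participant-specific pairings.

```python
import random

# Assumed placeholder pairings between to-be-ignored tones and the colours
# they would induce in a synesthete.
SYNESTHETIC_MAP = {"C4": "red", "E4": "green", "G4": "blue"}
COLOURS = list(SYNESTHETIC_MAP.values())

def make_trials(n, p_congruent=0.5):
    """Each trial pairs an ignored tone with a colour patch that either
    matches (congruent) or mismatches (incongruent) the tone's induced colour."""
    trials = []
    for _ in range(n):
        tone = random.choice(list(SYNESTHETIC_MAP))
        induced = SYNESTHETIC_MAP[tone]
        if random.random() < p_congruent:
            patch = induced
        else:
            patch = random.choice([c for c in COLOURS if c != induced])
        trials.append({"tone": tone, "patch": patch, "congruent": patch == induced})
    return trials

print(make_trials(4))
```

In a Stroop-style analysis, slower colour naming on incongruent trials than on congruent ones would indicate that the tone's colour was generated automatically.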
In Experiment 3, the nature of the sound and colour was not the focus. Rather, the synesthetes and non-synesthetes participated in a cross-modal variation of the Posner cueing paradigm. An auditory cue is presented (non-laterally) through headphones while, simultaneously, two coloured rectangles appear on the left and right of the screen, one of which synesthetically corresponds to the sound. The task involves detecting a target, an asterisk, presented in one of the two rectangles right after the auditory cue. The results showed that the auditory cue oriented attention to the synesthetically analogous location: detection of the lateralised target was enhanced by a synesthetically congruent sound-colour pairing in both the synesthetic and control groups.
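Here is a rough sketch of a single trial's structure as I understand it from the description above (the high/low tone-to-colour mapping is an assumed placeholder): a trial counts as “valid” when the target appears inside the rectangle whose colour matches the cue tone.

```python
import random

# Assumed mapping from the central auditory cue to its congruent colour.
TONE_TO_COLOUR = {"high": "yellow", "low": "dark blue"}

def make_trial():
    tone = random.choice(list(TONE_TO_COLOUR))
    congruent_colour = TONE_TO_COLOUR[tone]
    other = [c for c in TONE_TO_COLOUR.values() if c != congruent_colour][0]
    sides = ["left", "right"]
    random.shuffle(sides)
    rects = dict(zip(sides, [congruent_colour, other]))  # colour per side
    target_side = random.choice(["left", "right"])       # where the asterisk appears
    return {
        "cue_tone": tone,
        "rectangles": rects,
        "target_side": target_side,
        # "valid" if the target falls inside the synesthetically congruent rectangle
        "valid": rects[target_side] == congruent_colour,
    }

print(make_trial())
```

Faster detection on valid than invalid trials would show that the sound's colour association pulled spatial attention toward the matching rectangle.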
The results of these experiments suggest that sound-colour synesthesia does indeed use similar (if not the same) mechanisms as normal cross-modal perception common to us all, and not special, direct, or privileged pathways between unimodal auditory and visual areas found only in synesthetes.
Reflection
I was very excited to read this article because of its relation to the discourse on synesthetic art. In addition to naming a widely recognized condition in neuroscience, the term “synesthesia” is used in fine art to describe the simultaneous perception of multiple stimuli incorporated into one gestalt experience. This can include, for example, multi-sensory projects in the genres of visual music and sound visualization, audio-visual art, and intermedia. The idea of “synesthetic art” can refer to either (1) art created by synesthetes or (2) art that attempts to transmit or simulate the synesthetic experience. It is an attempt to grasp the cognitive results of the subjective perceptual experiences of natural synesthetes and translate them into the realities of non-synesthetes.
One prominent example of synesthetic art is visual music. The techniques used within this genre to visualize the sounds in music attempt to turn all listener-spectators exposed to them into audio-visual synesthetes, manually. This simulated or induced synesthetic experience is often part of electronic dance music (EDM) performances, probably because of their otherwise visually unexciting nature. EDM artists hire extensive teams of set designers, artistic directors, graphic designers, and computer programmers, and put hours of thought and consideration into creating shows that have less to do with purely auditory experiences than with intermedia spectacle.
The technique of sound visualization is widely used. Sound or music visualization can be described as moving non-representational imagery based on, or derived from, the organization of sound within music. Abstract qualities found in music, including rhythm, tempo, mood, counterpoint, intensity, harmony, and compositional structure, are assimilated within visual phenomena. The moving images can be generated in real time by a software program, or manually conceptualized through computer graphics programs based on sine waves, with the imagery synchronized to the audio as it is played in real time.
What makes this practice more intriguing is that, according to Ward, Huckstep, and Tsakanikos’s findings, the visual qualia that the creators of any particular visualised music attribute to the source sounds would be very similar to the visual qualia that any spectator-listener would attribute to the same sounds. The colour-to-timbre and brightness-to-pitch associations are not completely arbitrary amongst non-synesthetes (although they are far less consistent than in true synesthetes, of course) and prove to be significantly similar. This suggests a certain level of intersubjectivity in the audio-visual perceptive experience. The implication is that a simulated audio-visual synesthetic experience seen as intense or meaningful by its creators would be received as equally intense or meaningful by its perceivers, and would simultaneously cultivate a more involved and significant relationship with that artist’s music.
Tuesday, October 1, 2013
The Musical Brain
Source
The Musical Brain. Dir. Christina Pochmursky. Perf. Dr. Daniel Levitin, Sting. Canadian Television (CTV), 2009. http://www.veoh.com/watch/v2067939348aKHTtY?h1=The+Musical+Brain
Summary
This documentary explores why the love of music is universal. It touches on other musical studies as well, but this entry is based solely on Dr. Levitin’s study of a master musician’s brain.
Dr. Daniel Levitin of McGill University was a musician and producer before entering the realm of neuroscience, and he was interested in studying the brains of master musicians. When Sting was in Montreal, he agreed to have Dr. Levitin study his brain in an MRI machine. Dr. Levitin conducted a study with three steps:
1. Dr. Levitin named a song and Sting had to imagine the song as vividly as possible in his head.
2. Dr. Levitin played a clip of music through headphones and Sting had to listen and enjoy.
3. Sting was asked to compose a melody and lyrics for a new piece that he had never thought of before.
When Sting came to perform in Toronto, Dr. Levitin discussed the results with him, beginning by telling Sting that his brain structure was normal. The first step of the study showed that even when Sting was only asked to imagine a song playing, the musician’s brain was fully engaged and his visual cortex was activated. The second step showed that when Sting knew the piece, his brain could predict what was coming, but when he was not familiar with the piece, the brain was not as active. When Sting was asked to compose a melody and lyrics, the caudate was activated, indicating that the brain was planning physical movements to the rhythm being perceived. The corpus callosum, which transfers data between the two hemispheres, was also activated. This had rarely been seen in amateur musicians and non-musicians, who typically use the right side of the brain to process pitch and the left side for language; the better the musician, the more processing spreads across the two hemispheres.
Sting comments on how he is a visual person: when he listens to Bach, he sees beautiful architecture, massive chambers, and palaces, and imagines the space where the music is played. He also mentions that he has to keep an external metronome going with his head or his foot when he is performing. After receiving the analysis, Sting states that it is both fascinating and disturbing that a logical analysis can be given to such a creative process.
Reflection
The first step of Dr. Levitin’s study, imagining a song without hearing it, is very powerful because it shows that what musicians hear in their heads is based on memory. Not only do we recall the melodic line, but also the rhythm, harmony, instrumental texture, timbre, and the emotional feeling the music evokes within us. Depending on our learning strategies (visual, aural, or kinaesthetic), our brain will imagine the music from the corresponding area. In Sting’s case, he was a visual learner, which allowed him to visualize many images while imagining a piece of music. An auditory learner would instead hear vibrations, instrumentation, and tone qualities, while a kinaesthetic learner would feel movement throughout their body.
The results from the second step indicate that the brain can predict information when it knows the piece of music. This does not imply that the brain makes no predictions when the piece is unfamiliar. I believe that our Western musical culture and influences have programmed our brains to better predict future musical experiences; if a piece of music from another culture were used in this study, the results might differ.
As musicians, we are taught that rhythm is the basis and the melodic line is an additional layer on top of it. I found it quite interesting that the caudate was so active when Sting was composing a melodic line. This may mean that musicians who are actively engaged in the creative process plan body movement based on melody and rhythm.
As teachers, we encourage our students to become more creative by working through the creative process. I view the creative process as something that naturally emerges from the influences surrounding us. I wonder how the brain’s reaction to music would differ if this same experiment were conducted on students of varied ages learning musical instruments.
Your Brain on Improv
In this installment of the ever-popular TED talks, Dr. Charles Limb, a head and neck surgeon at Johns Hopkins University, offers a glimpse into the creative brain. Dr. Limb, a musician in his own right, was inspired by the astounding creativity of improvising musicians to devise a series of experiments to test which areas of the brain were active during improvisation versus the playing of memorized pieces of music. Using an fMRI machine, Dr. Limb and his colleague Allen Braun studied the brain activity of jazz musicians (published here) as well as that of freestyle rap artists.
The basic methodology of the experiment was simple: each musician played or rapped a memorized test piece, then improvised on the same piece. In the case of the jazz musicians, a plastic keyboard was used in the fMRI machine so that the musicians could play along with a pre-recorded jazz trio. In part of the experiment, the pianists “traded fours” (wherein one musician improvises for four bars, then the other, and so on) with Dr. Limb in a musical conversation. In the case of the rap artists, a pre-written rap was memorized and performed, and then the artists freestyled over the same beat pattern.
The results of the study are fascinating, and hold intriguing implications for music performance in all genres. The researchers found that during improvised performances, the lateral prefrontal cortex – an area of the brain associated with self-monitoring and self-assessment – deactivated, while the medial prefrontal cortex – associated with autobiographical information and self-expression – became more active. It appeared that the intense creativity involved in musical improvisation required a certain dis-inhibition, a willingness to make mistakes, in order for the self-expressive regions of the brain to shine through. This deactivation of the lateral prefrontal cortex occurred only during improvisation. Additionally, during the “trading fours” portion of the experiment, the researchers observed activation of Broca’s area, associated with language. Similar effects were observed with the rap artists, although these results were not explored in as much detail (likely due to time constraints, although Dr. Limb’s endearing performance of the rap test piece is worth the time it cuts out of the presentation!).
As a classical singer, and one who both performs and teaches classical music, I've found myself thinking quite a bit about questions raised by this talk. Chief among them is this: If it is the act of improvising – of creating music anew, on the spot – that truly seems to de-activate the monitoring, judging areas of the brain and activate the self-expressive, autobiographical, communicative areas, what are we, whose stock-in-trade rests on the impeccable memorization and delivery of standard repertoire, to do? Granted, Dr. Limb was not suggesting that this is the only “correct” way to make music; however, the prospect of freeing up self-expression by quieting self-monitoring is a deeply compelling one for any musician. Interestingly, Dr. Limb at one point uses the words “memorized” and “over-learned” in quick succession, almost interchangeably. I would appreciate more clarification on this point; does he feel that by the time a piece is memorized, it is by definition “over-learned”? Or did he specifically ask his subjects to practice the prepared test pieces to the point of utter monotony? Regardless, the issue remains that once a piece is memorized to the standard that is usually demanded in classical music performance, there can seem to be very little room for spontaneity. True, musicians are often asked in masterclasses and lessons to sing or play “as if you were making it up,” or “as if it had just occurred to you,” but can the same kind of communication and creativity that is present in improvisation truly be brought to bear on a memorized piece of music?
Are there lessons to be learned here for music educators? I certainly believe so. It is worth thinking about ways in which we can introduce (or re-introduce; how many times have we all heard a child spontaneously making up songs?) a kind of improvisation into music learning and performance. I’m not suggesting that all classical singers must suddenly learn to scat-sing, although it would be a good exercise, and probably an intimidating one for many of us! However, I do believe that as teachers, we can empower and allow space for our students to be spontaneous in their own musical interpretations, even in standard classical repertoire. Looking back on my own teaching, I’m certain that there have been many instances in which I have (hopefully gently) suggested to a student that he or she might want to push a tempo here or stretch a note there, coaching and repeating the phrase until I heard what I was listening for. Although I meant well, is it possible that what I was really doing was forcing my own musical ideas about the piece onto my student? If my student, of their own volition in the moment, chose to stretch a cadence for expressive reasons, might the part of their brain associated with communication and expressiveness have been active? And if I, with the best of intentions, “corrected” them, might that part have succumbed to the assessing and judging lateral prefrontal cortex?
In one of the more memorable masterclasses I’ve sung in, I was working on a Bach aria with bar after impossible bar of coloratura, and was asked by the clinician (the inimitable Benjamin Butterfield) to “sing it like it’s jazz.” Even as a part of my brain dismissed the idea as “silly,” another part seemed to engage; suddenly those dizzying patterns of notes seemed playful, the phrasing shaped itself, and the rigidity of my performance dissolved into flexible, joyful ease. It’s a feeling I can remember but find hard to recapture. I'd love to know what parts of my brain were active in that moment, but even more I'd love to reliably find ways to encourage my students toward the same sort of expressive freedom. Dr. Limb's research may offer us some ideas in that direction.
Interactions Between the Nucleus Accumbens and Auditory Cortices Predict Music Reward Value
Review
I first came across this study while reading an op-ed about it in the New York Times, titled “Why Music Makes Our Brain Sing.” The article begins with the idea that though music is intangible, it holds incredible intrinsic value and potency. In previous studies, the authors found that when music is described as highly emotional, it engages the reward system deep within our brains, “activating subcortical nuclei known to be important in reward, motivation and emotion.”[i] Furthermore, when we feel a peak or climax in the music, dopamine is released; not only does dopamine release during this peak emotional moment, it is also released in anticipation of it. Dr. Zatorre and his research team decided to dig deeper into what happens in our brain when we hear a piece of music for the first time. They specifically framed this in the context of online music purchasing, to study what happens when someone hears a piece of music and decides to buy it.
The impact of music is generally thought to result from expectancies created through delay, anticipation, and surprise. We can think of these expectancies in the way great writers create suspense, or in how composers manipulate how we expect music to unfold. These expectancies may be based on specific musical conventions or on more “implicit schematic rules of how sound patterns are organized.”[ii] They depend on our own musical knowledge, whether transmitted or acquired, and are specific to culture and people. These ideas explain how we can enjoy familiar music but, according to the authors of the study, do not explain how previously unheard music can be enjoyed. The study seeks to determine whether there is a biological response to music based on “schematic expectancies” that is independent of explicit musical knowledge. It was conducted using new excerpts of music that none of the participants would have heard before, thereby lessening any explicit predictions about the music; the excerpts were selected with the help of music-selection software. Each participant could purchase the music with their own money as a sign of whether they wanted to hear it again.
While undergoing fMRI scanning, participants listened to each clip and placed bids between $0 and $2 based on desirability. Contrast analysis showed which parts of the brain were active while making the purchase decisions; it also revealed which parts are activated when the music is undesirable ($0) and when it is more desirable (bids > $0). When the music is found to be highly rewarding, a network of regions within the brain is activated. However, only the dorsal and ventral striatum showed activity that increased proportionally with the reward value of the music.
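A toy illustration of this parametric logic (fabricated numbers and a plain Pearson correlation; the study itself used fMRI contrast and parametric analyses, not this code): a region whose response tracks reward value should correlate with the bids, while a region that responds equally to all music should not.

```python
# Pearson correlation between per-excerpt bids and a region's response.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Invented data for seven excerpts: bids between $0 and $2, one "reward-
# tracking" response and one roughly flat response.
bids     = [0.00, 0.99, 1.29, 2.00, 0.00, 1.29, 0.99]
striatum = [0.1, 0.8, 1.0, 1.6, 0.2, 1.1, 0.7]   # scales with the bid
auditory = [1.1, 1.0, 0.9, 1.0, 0.9, 1.1, 1.0]   # does not

print("striatum vs bid:", round(pearson(bids, striatum), 2))  # strong
print("auditory vs bid:", round(pearson(bids, auditory), 2))  # ~0
```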
In some ways, the reward value of music is hard to quantify, since it involves a combined “sensory and cognitive experience that can influence one’s affective state.”[iii] There are strong links between the sensory and affective systems. The subcortical regions and auditory sensory cortices work together to establish a rewarding stimulus: the auditory cortices extract various sonic relationships, drawing on previously heard sounds and memories of musical structures, and these cortical stores, in combination with the nucleus accumbens, can contribute to our perception of sounds as rewarding. Our musical expectancies are not tied only to harmonic or structural changes but can also be linked to perceptions of rhythm, timbre, and even changes in loudness. Overall, this study seeks to show how inherently neutral musical excerpts can be given reward value through interaction with higher-order brain regions. The values placed on these sounds then influence behavioural decisions, such as whether or not to purchase the music.
Reflection
This type of research has strong implications for online music purchasing, which is quickly becoming the most widespread distribution system. Understanding how and why people purchase the music they do is important both for the people designing the software algorithms that recommend and sell us our music and for the people actually creating it. In some ways, we live in a musical market that is becoming more and more saturated: there is so much music to sift through that we spend increasingly less time listening, and we make judgments about a particular band’s music, or its inherent reward value, after very short snippets. The consumer is empowered in new ways in this system of distribution since, increasingly, we as consumers decide how much we want to pay for music. An example of this is the website Noisetrade.com, which allows musicians to sell music on a free or pay-what-you-can (PWYC) basis.
Though this study is interesting in its scope, I think there are still more factors that contribute to the musical judgments we make. The study seems a bit too narrow in its assumption that the majority of our determination of the value of music is based on past musical experiences and expected musical patterns. Often, we judge the inherent value of a particular band or artist even before we are familiar with their music; there are times when I won’t even listen to excerpts, based solely on other songs by the band that I have heard and either liked or disliked. Even our perceptions of certain styles of music are coloured by our own experiences and cultural preferences. Furthermore, we are often influenced by the value judgments that society places on music. Some bands are seen as “hipster” or “trendy,” and it is often considered cool to have certain musical preferences; sometimes we become predisposed to styles of music just because of the cool factor associated with them.
I wonder if the fact that carefully crafted software algorithms selected the music heightens our engagement and curiosity with it. For example, the study acknowledges that most of the excerpts selected match the current trends and interests of the Montreal music scene, mostly dance and electronic music. So the participants were in a way predisposed to like more of the selected music than if a wider sample of styles had been used. Of course, this serves a purpose for the study, since it makes the music more likely to be a source of strong reward stimuli. However, I find that I am sometimes more predisposed to like particular music recommended to me on iTunes or YouTube, based on my typical search and listening patterns, just because it is recommended. In other words, I am more likely to check out the music because I feel like some sort of “Big Brother” is endorsing it as worthwhile and important. We have to be aware of how easily influenced we can be, and cognizant of the fact that not all of our decisions about the value of music are actually based on musical qualities and characteristics. We often hold assumptions about music that are not based on aesthetics at all; they are based on the extramusical baggage we carry with us.
[i] Zatorre, Robert J., and Valorie N. Salimpoor. “Why Music Makes Our Brain Sing.” Editorial. The New York Times, 7 June 2013. Web. 28 Sept. 2013.
[ii] Ibid.
[iii] Salimpoor, Valorie N., et al. “Interactions Between the Nucleus Accumbens and Auditory Cortices Predict Music Reward Value.” Science 340.6129 (2013): 216-19. 12 Apr. 2013. Web. 28 Sept. 2013.
How your brain tells you where you are
Reference:
Neil Burgess - “How your brain tells you where you are”
Filmed at TEDSalon London, November 2011
Review:
Neil Burgess, director of the Institute of Cognitive Neuroscience at University College London, delivers an interesting talk about patterns of brain-cell activity and how these inform our perception of the physical environment surrounding us. Burgess opens his lecture with an engaging question: when we park a car in a parking lot, how do we remember where we parked it? He illustrates the situation with an image from the animated sitcom The Simpsons, portraying Homer Simpson facing the problem presented in the opening question.
Burgess then draws the audience’s attention to the hippocampus, a part of the brain that he refers to as the “organ of memory.” This is where information from our short-term and long-term memory is processed and stored, and it also plays an important role in spatial navigation. The hippocampus, like any other part of the brain, is made of neurons, and it is the first region affected in those suffering from Alzheimer’s disease. Burgess illustrates the inner workings of the brain with a graphic representation of a rat’s neuronal activity while it moves in an enclosed space: the experiment shows that the same neurons fire electrical impulses whenever the animal passes through a particular location. Burgess also refers to a study conducted on patients suffering from epilepsy, in which the patients were asked to drive a driving simulator around a virtual town. As in the previous experiment, the same neurons became active every time patients reached particular locations in the simulated town. In other words, our brain is constantly mapping the world around us and storing the retrieved spatial data.
Brain cells do not fire only due to movements within explored and familiar spaces; they also react to changes in our environment by locating and mapping them into our memory. If we expand or modify the box the rat was placed in, firing areas in the hippocampus will expand and change accordingly. This arguably means that detecting the boundaries and distances of our surrounding environment is the main role of the hippocampus. To further explore this phenomenon in humans, Burgess and his team created a virtual environment that simplifies the processes informing the placement of objects within the shifting boundaries of our environment. People were given some time to explore a given environment, and moments later, when reintroduced to it, many of them were increasingly efficient at finding objects (a flag, a car) placed within it. Moreover, when they were placed back in environments that had been purposely enlarged, the neurons’ firing patterns stretched out in exactly the same way the place did.
Finally, place cells get an important input for path matching from a cell type called the grid cell, a neuron found in many mammals. As an animal explores a place, these cells fire at an array of locations arranged in a regular triangular grid, which Burgess likens to the latitude and longitude lines marked on a map. As the animal moves around, the centre of activity of the cells moves accordingly, informing the animal’s brain of where it is in the space. Burgess further states that MRI scans show increased activity in the entorhinal cortex of people playing a game in a virtual space.
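A common textbook idealization of a grid cell, which I add here purely for illustration (it is not from the talk itself), models its firing rate as the sum of three cosine gratings oriented 60 degrees apart; rectifying the sum yields firing fields arranged on the regular triangular lattice Burgess describes.

```python
import math

def grid_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y): the rectified sum
    of three cosine gratings whose directions are 60 degrees apart."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # wave number for the lattice
    total = 0.0
    for angle in (0, 60, 120):
        th = math.radians(angle)
        # project the (phase-shifted) position onto this grating's direction
        total += math.cos(k * ((x - phase[0]) * math.cos(th) + (y - phase[1]) * math.sin(th)))
    return max(0.0, total / 3)  # rectify: only positive "firing"

# Crude ASCII rate map over a 1 m x 1 m box: '#' marks high-firing fields,
# which appear as blobs on a triangular lattice.
for j in range(20):
    print("".join("#" if grid_rate(i * 0.025, j * 0.05) > 0.5 else "." for i in range(40)))
```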
At the end of his talk, Burgess returns to the opening question. Homer, Burgess argues, remembers the location of his car by detecting the distances between objects and the boundaries around them, a process regulated by the firing of boundary-detecting cells. Homer’s path back to his car, however, will be directed by the firing of grid cells: by moving around, he will match the firing patterns stored in his brain when he left the car with those firing at the time of his return. Additionally, Burgess hypothesizes that the same neural mechanisms that carry out different tasks in spatial memory are used for generating visual imagery, which would allow us to recreate events that happened to us. Here Burgess introduces head-direction cells, which fire electrical impulses according to the direction from which we are facing objects or events. These neurons help us define viewing directions in space, and they also help us imagine past or entirely imaginary moments from different viewing perspectives.
Burgess closes his lecture by stating that we are entering a new era of cognitive neuroscience in which the processes informing our capacity to remember, imagine, and perceive the things around us may finally be fully understood.
Reflections:
In his talk, Neil Burgess introduced several types of neurons in a highly engaging manner. Place cells, grid cells, and head-direction cells are all interacting subsystems of spatial memory, responsible for the production of cognitive maps. As a performing artist I find this insight to be of great importance, not only for understanding environments and the space around us, but for understanding how we perceive and communicate with other subjects within it. This is especially important for group performances, where performers are required to be aware of the presence of other performers, audience members, and the spatial particularities of the venue. During rehearsals our firing neurons create cognitive maps, maps that are then used during performances. Live performances, however, are usually unpredictable, and different firing-cell patterns will occur in the actual moment.
Another important point introduced by Burgess is that the neural mechanisms responsible for spatial memory also support the emergence of visual imagery. Understanding the phenomenon of visual imagery can be important for artists such as painters, dancers, and directors, and for all of those whose work involves any kind of visualization.
It is therefore not surprising that contemporary classical composers consider our perception of space as important as our perception of sound. It is also evident that many creators today instruct audiences or performers on how to position themselves in space in order to improve the performance or experience of their works. During the second half of the 20th century we saw the emergence of a new form of artistic expression called site-specific art. In this tradition, the artwork is created to exist in a specific space, carefully chosen by the artist. Even though such locations very often include urban sites, many artists choose rural landscapes as environments for presenting their works. Consequently, our perception of an artwork is bound to change according to a surrounding environment that forces us to rethink our relationship to it. In this way, our cognitive maps are redrawn by something more than spatial orientation.
Music in the brain: the musical multi-feature studies
1. Reference
Music in the brain: the musical multi-feature studies by Peter Vuust
http://www.en.auh.dk/files/Hospital/AUH/English/Departments/Center%20of%20Functionally%20Integrative%20Neuroscience%20(CFIN)/The%20musical%20multi-feature%20studies.pdf
2. Summary
In Peter Vuust’s ‘The musical multi-feature studies’, he notes that the study of how musicians’ brains evolve through daily training has recently emerged as an effective way of gaining insight into changes in the human brain during development and training. Mismatch negativity (MMN) studies have consistently revealed neural differences in early sound processing between people with different musical backgrounds. He then poses a question: “Can the MMN paradigms be adapted to resemble a musical context while keeping the experimental duration contained, and will they reveal differences in sound-related brain activity among different types of musicians?”
For his experiment, he made two changes to the classic MMN paradigm: (1) emulating the harmonic progressions found in real music by using an Alberti bass with an underlying harmonic scheme of major and minor chords, and (2) embedding more than one type of sound deviation within the alternating pitches. Using this musical multi-feature paradigm, he could test for differences between musicians playing different styles of Western music, specifically between classical, jazz, and pop/rock musicians.
Regarding listening experience, there are differences in how musicians are taught and learn. Jazz musicians, for example, typically learn and perform music by ear and are trained through ear-training programmes at jazz schools; classical musicians, in contrast, are less focused on learning by ear (although the Suzuki method teaches music by ear in the early years of childhood).
He applied the new fast musical multi-feature MMN paradigm to classical musicians, jazz musicians, band musicians, and non-musicians, with six types of acoustic changes embedded in the same 15-minute sound sequence: pitch, mistuning, intensity, timbre, sound-source location, and rhythm. The researchers obtained larger overall MMN amplitudes in jazz musicians than in all other types of musicians across the six sound features, indicating a greater overall sensitivity to sound changes in jazz musicians. Notably, sliding into tones is a typical feature of improvisational music such as jazz, as opposed to classical music. When interpreting these results, it should be kept in mind that jazz musicians score higher on musical aptitude tests than rock musicians and non-musicians, especially with regard to tonal abilities.
He points out a few interesting implications and applications of this study. First, the MMNs obtained in relation to the auditory deviants in the musical multi-feature paradigm show that it is possible to develop highly controlled brain-measuring paradigms that still resemble “real” music: “We may be able to track brain measures (MMN) involved in survival-related attentional processing during ‘real’ music listening, and thereby study other important aspects of music.” Secondly, this paradigm provides an ecological method of comparing MMNs in musicians from different musical genres, which is important because musical complexity is, in many instances, crucial for detecting fine-grained differences in auditory processing between participants from various musical backgrounds. Lastly, it may find use in clinical studies, where it could help identify cognitive limitations related to musical processing.
3. Reflections
I was thrilled to learn about the concept and purpose of mismatch negativity (MMN), and it was interesting to see the differences in sound-related brain activity among different types of musicians. I was surprised that jazz musicians scored higher in musical aptitude and obtained larger overall MMN amplitudes than the others, since I had expected classical musicians to score highest. Vuust mentions that “Jazz music in its modern form is characterized by complex chord changes, rich harmonies and challenging rhythmic structures such as polyrhythms that place great demands on listeners’ and performers’ theoretical and ear training skills,” as if classical musicians are not trained as much. I do not fully agree with this point, because there are many classical musicians with very well-trained ears and strong improvisation skills. Furthermore, complex chord changes, rich harmonies, and challenging rhythmic structures appear in many classical pieces, from the Baroque to contemporary music. Likewise, there are many rock musicians who can improvise and compose as well as jazz musicians. Of course, the results of this study would vary depending on who was chosen, but I wonder whether all the musicians tested had the same level of musical skill.
Moreover, since this study is the first to show differences in pre-attentive brain responses between musicians, it would be very interesting to see “multi-attribute ‘profiles’ of sound-discrimination abilities in single individuals” in further studies, if the ERP method can be refined at the individual level as the authors mention. I also strongly agree with and support the idea that this work may be helpful to those who have cognitive limitations related to musical processing.