Saturday, November 15, 2014

(Musical) Silence


Source
Galloway, L. (2012, October 23). The quietest place on Earth. BBC Travel. Retrieved from http://www.bbc.com/travel/blog/20121022-the-quietest-place-on-earth

Summary & Commentary
          According to the Guinness Book of World Records, the “quietest place on earth” is an anechoic (echoless) chamber at Orfield Laboratories in Minneapolis, USA.  Its lack of echoes comes down to its construction.  Behind two vault doors, one-metre-long fibreglass wedges line the room’s walls, ceiling, and floor; visitors walk on a suspended, trampoline-like mesh.  Virtually no sound is reflected off the walls, which absorb 99.99% of all noise.  High-frequency sounds are absorbed directly by the wedges, and low-frequency sounds bounce between the wedges until they fizzle out.  As a result, the room’s background sound level is -9.4 dB (decibels), while humans can only detect sounds above 0 dB.  The decibel level of a room that an average person would consider “quiet”, for example, is about 30 dB.
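To put those figures in perspective, here is a rough back-of-the-envelope calculation; the decibel formula is simply the standard definition of sound pressure level, and only the -9.4 dB, 0 dB, and 30 dB values come from the article itself.  Sound level in decibels is L = 20·log10(p/p0), where p0 = 20 micropascals is the nominal threshold of human hearing (0 dB).  A reading of -9.4 dB therefore corresponds to a sound pressure of roughly 20 µPa × 10^(-9.4/20) ≈ 6.8 µPa, about one-third of the faintest pressure fluctuation an average ear can detect, whereas a 30 dB “quiet” room sits at about 10^(30/20) ≈ 32 times the threshold pressure.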
In the chamber, not only can you hear a pin drop, but you also start noticing the frenzy of activity going on inside your own body.  People have reported hearing their lungs breathing, their own heartbeat, their stomach digesting food, and even the blood rushing through their head and body.  Even the ears themselves make sounds in the echoless room.  Steven Orfield, founder and owner of Orfield Laboratories, explains: “The ear is like a microphone and a loudspeaker. And when it is deprived of sound, it produces its own sounds.” (Weber 2012).  Orfield is talking about otoacoustic emissions (OAEs), sounds generated by various cellular and mechanical processes in the inner ear that usually go completely unnoticed.
After as little as five minutes in the room, people become disoriented, dizzy, and nauseous, and feel as if there is tremendous “pressure” on their head.  “What seems to be happening is you are feeling some pressure in your ears,” explains Orfield, “but what you’re really feeling is pressure being taken off of your ears. Sound is technically called sound pressure level. And in this room we’re actually taking huge amounts of pressure off of your ear, so it’s highly sensitized by not being loaded with normal amounts of noise.” (Weber 2012).  People who visit the anechoic chamber usually sit in a chair, because standing up and moving around becomes increasingly difficult: we rely heavily on sound to keep our balance.  Walking feels strange because we have no aural reference for mobility (and, if the room is dark, no visual cues either).  Without any feedback from the outside world, like the sound of your own feet moving, uninterrupted action is almost impossible.  The chamber also makes many people feel claustrophobic.  Reverb or echo normally tells our auditory system that there is ample space around us, so the complete absence of echo can instinctively send us into a panic.
          Furthermore, once you lose certain sensations, other perceptions become heightened.  A complete absence of noise can lead to a heightened sense of smell or touch, but can also cause your hearing to become much more sensitive.  As we already know, the auditory cortex sorts, organizes, and simplifies sounds, and many of the bodily noises people report hearing in the echoless chamber would normally fall victim to habituation.  We don’t need to hear these things when we’re out in the world and have to be aware of cars, animals, weather, other people, etc.  But when all of those stimuli disappear, sounds that were previously too quiet to detect seem louder, as if our brains have recalibrated to the new noise floor.  People have even reported auditory hallucinations, although none of my sources describe their nature in detail.
          How quiet is too quiet?  For people with sensitive hearing, the chamber might indeed be too quiet, and they may have to get out of the room almost immediately.  The longest anyone has managed to stay in Orfield Laboratories’ anechoic chamber without panicking and needing to leave is 45 minutes.

Reflection
          This article made me think about the relationship between silence and music in two different ways: (1) musical auditory hallucinations as a result of silence, and (2) the idea of “musical” silence, that is, periods of silence between notes, phrases, or pieces, inserted for artistic purposes.
As previously mentioned, it is a shame that the alleged “auditory hallucinations” people have had in the Minneapolis anechoic chamber aren’t better documented.  What do people come to “hear” when left in complete silence, without any outside stimulation?  Do people ever perceive music that isn’t really there?  Is it easier to remember or vividly visualize music in silence?  Perhaps we are so used to hearing everyday sounds that, when they are all taken away, we make up for what we’re accustomed to by creating the sounds ourselves.  After all, healthy people are fully capable of recreating sounds—whether voices, street noises, or music—in the mind’s ear with minimal effort, so it’s not a far leap to say this can sometimes occur involuntarily under sensory deprivation.
          An echoless room is essentially a variation on John C. Lilly’s sensory deprivation tank, which he invented in 1954.  A physician and neuroscientist, Lilly aimed to isolate the brain from any external stimulation.  The small tank, which closes shut to keep out all light, is filled with warm salt water, allowing a subject to float for extended periods of time.  The sensory deprivation tank operates on the principle that, in the total absence of external stimuli, the human brain creates its own perceptions.  Perceptual isolation is known today as the Restricted Environmental Stimulation Technique (REST), which can take place in a chamber or in water.  Yet even with the psychological and neurological research done on REST, there are very few findings about auditory hallucination—or hallucination as a result of sensory deprivation—that don’t cross over into issues of psychosis (see Nayani & David 1996; Na & Yang 2009, for example).
Now, knowing everything we’ve just learned about our brain’s reaction to silence, what are the potential applications of silence in music?  And how can silence be used in music for artistic or dramatic purposes?  In “Moved by Nothing,” Margulis explored five functions of silence in active, participatory music listening: (1) silence as boundary, (2) silence as interruption, (3) silence as a revealer of the inner ear, (4) silence as a promoter of meta-listening, and (5) silence as a communicator.  She says that “since literally nothing happens for the extent of the duration of the silence, all of our various percepts, reactions, surmises, and senses reveal things we have brought to the silence.” (Margulis 2007, p.246).
In a setting where one has consciously sat down to listen to music, silence can be extremely powerful.  Take the commanding silence before the beginning of a piece, for example, where every attentive ear is hypersensitive to the slightest sound, getting mentally ready for what is about to unfold.  In line with the anechoic chamber discussion, sounds emerging from silence are actually better processed by our brains, as the search for an auditory stimulus activates the auditory cortex (Voisin et al. 2006).  Moreover, EEG (electroencephalography) and MEG (magnetoencephalography) studies show that the brain detects musical phrase boundaries shortly after a phrase’s offset, suggesting that listeners spend these silences synthesizing the preceding musical phrase and refocusing their attention on hearing the subsequent one (Margulis 2007, p.253).
Just like in a sensory deprivation tank, when auditory stimuli are removed in the context of music-listening our own personal auditory imagery, visualizations, imaginings, assumptions, and expectations come to the fore.  Musical silence can definitely serve to encourage this process, additionally acting as a sonic boundary that guides our auditory attention.

References
Margulis, E. H. (2007). Moved by nothing: Listening to musical silence. Journal of Music Theory, 51(2), 245-276.
Na, H.J., & Yang S. (2009). Effects of listening to music on auditory hallucination and psychiatric symptoms in people with schizophrenia. Journal of Korean Academy of Nursing, 39(1), 62-71. http://dx.doi.org/10.4040/jkan.2009.39.1.62
Nayani, T. H., & David, A. S. (1996). The auditory hallucination: A phenomenological survey. Psychological Medicine, 26(1), 177-189. doi:10.1017/S003329170003381X
Veritasium. (2014). Can silence actually drive you crazy? [Video]. YouTube. Retrieved from https://www.youtube.com/watch?v=mXVGIb3bzHI
Voisin, J., Bidet-Caulet, A., Bertrand, O., & Fonlupt, P. (2006). Listening in silence activates auditory areas: A functional magnetic resonance imaging study. The Journal of Neuroscience, 26(1), 273-278.
Weber, T. (2012, June 21). In Minneapolis, the world's quietest room. MPR News. Retrieved from http://www.mprnews.org/story/2012/04/03/daily-circuit-quiet-room

Tuesday, November 11, 2014

Autism and Pitch Processing: A Precursor for Savant Musical Ability?

Source: 
Heaton, P., Hermelin, B., & Pring, L. (1998). Autism and pitch processing: A precursor for savant musical ability? Music Perception, 291-305.

Autism is a developmental disorder characterized by impairments in socialization, communication and cognition (p. 292). It has been used as an umbrella label to categorize children in the classroom who do not fit the defined "norm". Early identification begins with the student who cannot sit still, who speaks out of turn, who cannot focus or contribute in group settings, and whose progress is consistently slower than that of other students in the classroom. But sometimes students who are labeled "autistic" do something quite marvelous that the "norm" cannot master. Defined as idiot savants at the turn of the 19th century, these were individuals with low cognitive ability who were able to master a skill in an isolated area (p. 291). The fascination with musical savants is evident elsewhere in this blog. While many of us struggle with hours of practice and performance anxiety, savants appear to be musical geniuses with an innate ability to perform music. Earlier research concludes that savants are present in about 1 in 2,000 of the learning-disabled population.

Heaton, Hermelin and Pring theorise that savantism may actually be present in higher numbers, and set out to research the "precursor" to savant-like ability in autistic children. According to the study, there is a high frequency of savant ability in the "general mentally handicapped population". They cite studies showing that autistic adolescents tend to isolate information, what they describe as "local processing", rather than make sense of information as a whole, described as "global processing".

Their methodology tested ten identified autistic boys between the ages of seven and thirteen with no prior musical training. The control group consisted of ten boys of average academic ability but younger in age; the controls were matched by chronological age to the cognitive age of the autistic group.

The children were given four pitches and four speech sounds, each linked to a picture of an animal. The note C was represented by a fish, the word "da" by a pig, and so on. After the pitch or sound was first tested, the children took part in conversation for two and a half minutes and were then re-tested with the notes or sounds in random order. Pitch memory was tested again after a period of one week.

The results were extraordinary. The study showed that the children in the autistic group were far superior at retaining pitch memory and identifying pitch. However, in the speech sound test, the control group tested higher. The results suggest that there may be something unique about musical ability in autistic children, and this has implications for music education in the special needs classroom. What is the untapped musical potential of autistic students?

In my own experience working with identified autistic children, pitch memory has not been the primary challenge. In fact, students work towards their first five notes quite rapidly. Music literacy has been the biggest challenge: the ability to put the note on the page and to identify rhythm and pitch on the staff. In terms of music education, this study suggests that there may be value in teaching pitch first, through a listening mode, away from the staff and away from a method book. Have a student hear the pitch and have them recreate it. I am developing a curriculum for differentiated learners that renames the pitches as 1, 2, 3, 4, 5. This has enabled me to work on a unified line that everyone can read at the same time. But my work is focused on eventual score reading: although the staff is eliminated, it is still music that must be read. How can pitch be taught organically, further simplified? Just pitch and instrument? Notes are discovered, then repeated, then labeled; internalized, identified and played. It also intrigues me that pitches were represented as animals. In what way can notes on a staff become a familiar image to a child, autistic or not? Is that even possible?

This study suggests that autistic children may process information selectively, with laser-like focus in a vast world of information.  Music, which can be highly selective and specialized, may spark or activate savant characteristics in the autistic brain.  The opportunity to discover the precursor to musical or savant-like ability in autistic children holds real potential, and the findings are extraordinary.  It also has the potential to change perspectives: autistic students challenged with a learning disability can instead be perceived as children who have unique, untapped musical potential.


Musical Nostalgia: The Psychology and Neuroscience Behind It



Source
Stern, M. J. (2014, August 12). Neural nostalgia: Why do we love the music we heard as teenagers? Slate. Retrieved from

Summary
          This is an article written recently for the “Science” section of Slate.com, an online magazine also featuring stories on current affairs, business, and the arts.  The author (who does not have a background in neuroscience) laments a time when all of his favourite music was abundantly heard on radio and television, and—like almost every other person over 20—expresses his dissatisfaction with the insipid popular songs of today.  He asks: “Why do the songs I heard when I was a teenager sound sweeter than anything I listen to as an adult?” 
          Recent studies have shown that music that catered to our tastes and preferences as adolescents has greater power over our emotions than music we listen to at any other point in our lives.  This is because our auditory system “binds” us to the music we hear as teenagers, a connection that stays with us throughout the remainder of our lives.  This means that the cultural phenomenon of nostalgia has clear neurological roots—other music just doesn’t please our ears as much as the sounds heard during the developmental stage of adolescence.
          It is obvious that listening to music can elicit powerful emotions, mixed feelings, and memories by engaging our auditory, premotor, parietal, and prefrontal cortices.  PET and fMRI brain imaging studies show that the release of feel-good chemicals during music-listening depends largely on our personal preferences.  Listening to our favourite music (versus music we are indifferent to) releases a greater amount of dopamine, serotonin, and oxytocin.  But how do we come to prefer certain kinds of music over others in the first place?  The most rapid neurological development of our brains happens between the ages of 12 and 22.  When we listen to songs we like at that age, our brains make strong neural connections to them, consequently creating strong memories of the events associated with those songs.  Due to the surge of pubertal growth hormones, these memories are also charged with heightened emotion, and those songs and events are perceived to be especially important.
          The author also notes that musical preference developed in our teenage years is closely tied to our social lives.  Adolescence is often a time for establishing one’s identity, and music is one way of discovering and expressing it.  This, in combination with the “reminiscence bump”, the phenomenon whereby autobiographical memories of events from adolescence and early adulthood are disproportionately well remembered (Rathbone et al. 2008; Krumhansl & Zupnick 2013), causes the music we are drawn to as teenagers to become a part of our self-image for life.


Reflection
It seems that the music that makes us nostalgic, as well as our lifelong enjoyment of it, is literally wired into our brains.  Music can not only provoke feelings of nostalgia, but become nostalgia itself.  Songs can become memories or feelings in themselves, or lead us down a path of other memories to a notable event in our lives.  Songs, much like smells, can become associated with one particular memory.  In the same way that suddenly smelling something akin to an ex-lover’s perfume, your mother’s apple pie, or your cedar cabin at the summer camp you went to when you were 14 can immediately transport you back to that time in your life, music can also guide (or force) our escape into the past.
What’s more fascinating is music’s potential to facilitate autobiographical memory.  This applies not only to people with normal memory recollection (Schulkind et al. 1999), but also to those who have severe acquired brain injuries, or ABIs (Baird & Samson 2014).  These are called “music-evoked autobiographical memories”, or MEAMs, and have already been consistently identified in the healthy population.  In Baird & Samson’s recent study (the first study of MEAMs after ABI), MEAMs were compared with memories evoked by verbal prompts, and in the majority of cases music was more efficient at evoking autobiographical memories than the verbal prompts were (2014).  The results suggest that music is a powerful stimulus for eliciting autobiographical memories, and may be valuable in the rehabilitation of autobiographical amnesia (ibid.).
Furthermore, according to the reminiscence bump research (see Rathbone, Moulin, & Conway 2008, for example), these music-evoked autobiographical memories might very well be from our teenage and early adulthood years.  Adolescence is also the first time when we discover music for ourselves, and find out what really suits us.  Jourdain says that music can “suit” us in two different ways: socially and anatomically (1997).  People can often be attracted to certain genres of music because they serve a function in their lives, whether it be for dancing, relaxation, or meeting new people.  Many also fall into certain genres in their youth to conform or belong to a certain group; they listen to what their friends listen to.  Identity and social acceptance, however, have little to do with the actual anatomy of an individual’s inner ear or the neurology of their auditory cortex.  I would go so far as to say that individual variability in these regions is why people gravitate to a particular style of music in the first place.  But from there, the preferred musical style is “imprinted” onto our brains, causing our auditory systems to develop toward that genre during the final years of normal musical development (Jourdain 1997, p.263).
To Jourdain it seems that all further branching of musical tastes and preferences is forever in the shadow of the music of our youth.  The neural connections we made with personally relevant music in adolescence might well dominate all of our further perception of other kinds of music.  This doesn’t mean, however, that people who still enjoy music from their teenage years are musically stunted individuals.  Yes, that music might still evoke a strong emotional reaction decade after decade, but that reaction is generally automatic and involuntary.  And it doesn’t at all thwart the evolution or strength of our musical tastes, because the more we listen—and the more we learn to listen—the wider the variety of music we mature to understand and enjoy.

References
Baird, A., & Samson, S. (2014). Music evoked autobiographical memory after severe acquired brain injury: Preliminary findings from a case series. Neuropsychological Rehabilitation, 24(1), 125-143.

Jourdain, R. (1997). Music, the brain, and ecstasy: How music captures our imagination. New York: W. Morrow.

Krumhansl, C., & Zupnick, J. (2013). Cascading reminiscence bumps in popular music. Psychological Science, 24(10), 2057-2068.

Rathbone, C., Moulin, C., & Conway, M. (2008). Self-centered memories: The reminiscence bump and the self. Memory & Cognition, 36(8), 1403-1414.

Schulkind, M., Hennis, L., & Rubin, D. (1999). Music, emotion, and autobiographical memory: They’re playing your song. Memory & Cognition, 27(6), 948-955.

The Therapeutic Effect of Neurologic Music Therapy and Speech Language Therapy in Post-Stroke Aphasic Patients


Source: Lim, K.B., Kim, Y.K., Lee, H.J., Yoo, J., Hwang, J.Y., Kim, J.A., & Kim, S.K. (2013). The therapeutic effect of neurologic music therapy and speech language therapy in post-stroke aphasic patients. Annals of Rehabilitation Medicine, 37(4), 556-562. http://dx.doi.org/10.5535/arm.2013.37.4.556

Review:
Aphasia is common among those who have suffered a stroke; it typically results from damage in the territory of the left middle cerebral artery. There are several types of aphasia; however, only non-fluent aphasia was examined in this study. Non-fluent aphasia, also known as Broca’s aphasia, is a result of injury to the left frontal lobe. It reduces the expressiveness of speech output and can limit someone to as few as four words.
There are several methods that can be used to treat aphasia, such as intensive language-action therapy, language-oriented treatment, and melodic intonation therapy (MIT). Intensive language-action therapy is a form of speech therapy that has been shown to improve the language performance of chronic aphasia patients. Language-oriented treatment, another form of speech therapy, trains auditory comprehension and expression through spoken language, pictures, and texts. MIT is a method that induces speech by using rhythm and musical tones processed in the uninjured part of the brain, and can be used to treat severely aphasic patients.
A study was conducted to examine the therapeutic effects of neurologic music therapy (NMT) and speech language therapy (SLT) on post-stroke aphasic patients. Twenty-one patients with non-fluent aphasia, as classified by the Korean version of the Western Aphasia Battery (K-WAB), were recruited from two university hospitals.
The participants at one hospital received one-on-one speech therapy and NMT, which consisted of MIT performed while rhythmically tapping with the uninjured hand, and singing. The singing component involved voice training, respiratory training, and automatic speech and singing using familiar songs.
The patients from the other hospital received one-on-one speech therapy and language-oriented treatment through expression training via spoken language, articulation training of various syllables, consonants and vowels, and pictures and texts. 
Each group received two hour-long sessions each week, for a month. 
The researchers further divided the participants into a Subacute group or a Chronic group. Those in the Subacute group had suffered from a stroke within the last three months whereas those in the Chronic group had suffered a stroke more than three months prior to the experiment, making for a total of four groups (Chronic NMT, Chronic SLT, Subacute NMT, Subacute SLT). 
The study only used the oral language domain of the K-WAB. Four sub-tests are included in this section: spontaneous speaking (20 points), understanding (200 points), repetition (100 points) and naming (100 points). The K-WAB was used before and after every session.
The results revealed that there were significant improvements in repetition and naming in the Chronic NMT group. The Chronic SLT group showed a significant increase in repetition only. The Subacute NMT group showed significant improvements in spontaneous speaking, understanding and naming. The Subacute SLT group showed no improvements. It was therefore concluded that both NMT and SLT were effective in treating those with chronic non-fluent aphasia, and that NMT was also effective in improving language function in subacute patients.


Reflection:
This study was interesting to me because I had not yet researched post-stroke aphasic patients, so the topic was new to me. Although the study showed positive results, with both NMT and SLT effective in treating chronic non-fluent aphasia patients, there were some limitations that must be considered. For example, the study was short and limited in its number of subjects, the participants were of different ages, and there was a difference in cognitive function between groups. The NMT group consisted of participants with right and left cerebral lesions; the SLT group, however, consisted only of patients with left cerebral lesions. Also, the therapies were delivered at two different hospitals and conducted by different therapists. In the future, more controlled studies are needed to confirm the findings of this study.
I also thought it was interesting that NMT could be so powerful, as it benefited more of the participants in this study. According to the researchers, NMT can “stimulate the speaking pathway in the left cerebral hemisphere or the singing pathway in the right side of both cerebral hemispheres.” Both cerebral hemispheres are important for vocal production and the sensorimotor functions involved in speaking and singing. Another reason why NMT may have been successful is that words can be pronounced more slowly when singing, allowing more opportunities to distinguish between words and phrases. The rhythmic aspect of singing also aids in speaking words.
Since taking this course, I have come across many readings that use speech-vocal therapy and/or singing to alleviate some of the symptoms of an illness or disease. Singing is useful because it involves “an auditory-motor feedback loop in the brain more intensely than other music making activities such as instrumental playing” (Wan, Ruber, Hohmann, & Schlaug, 2010, p. 287). It also directly stimulates the musculature associated with respiration, articulation, phonation, and resonance and is therapeutic. This is especially important for those who suffer from neurological diseases such as Parkinson’s Disease or aphasia as it offers another option that is nonpharmacologic and non-invasive. As Oliver Sacks (2007) once said, “music is a remedy, a tonic, orange juice for the ear. But for many neurological patients, music is even more - it can provide access, even when no medication can, to movement, to speech, to life. For them, music is not a luxury, but a necessity.”

References:
Lim, K.B., Kim, Y.K., Lee, H.J., Yoo, J., Hwang, J.Y., Kim, J.A., & Kim, S.K. (2013). The therapeutic effect of neurologic music therapy and speech language therapy in post-stroke aphasic patients. Annals of Rehabilitation Medicine, 37(4), 556-562. http://dx.doi.org/10.5535/arm.2013.37.4.556


Wan, C. Y., Rüber, T., Hohmann, A., & Schlaug, G. (2010). The therapeutic effects of singing in neurological disorders. Music Perception, 27(4), 287-295. Retrieved from http://search.proquest.com/docview/89184449?accountid=14771