Wednesday, October 19, 2011

Treadmill Training with Music Cueing: a New Approach for Parkinson's gait facilitation




Reference: Treadmill Training with Music Cueing: a New Approach for Parkinson's gait facilitation
Author: Dootchai Chaiwanichsiri, Wuttinganok Wangno, Wasuwat Kitisomprayoonkul, Roongroj Bhidayasiri
Source: Asian Biomedicine Volume 5, No. 5 October 2011; 649-654
DOI: 10.5372/1905-0505.086
Summary:
This article describes the effects of musical cueing on treadmill training within a randomized single-blind controlled trial of thirty male Parkinson's disease (PD) patients. Eligible participants were men aged 60-80 years, diagnosed with idiopathic PD at Hoehn and Yahr stage 2-3, with good cognitive function on the Thai Mental State Examination, stable symptoms on unmodified anti-parkinsonian medication throughout the study, and the ability to walk independently without aids. Participants could not have any other significant medical conditions or have taken part in any gait training during the previous two months, and good hearing and vision were required.
Participants were randomized into three groups of ten: A, B, and C. Group A completed treadmill training with music cueing three days/week plus a home walking program three days/week; group B received treadmill training three days/week plus a home walking program three days/week; and group C followed a home walking program six days/week. Each treadmill session consisted of ten minutes of stretching exercise followed by twenty minutes of treadmill walking with long steps at each participant's preferred speed. Once an appropriate gait was established, the pace was increased by 5-10%, to the degree that the patient could still maintain the gait without difficulty or missteps. The music cue consisted of five relaxing green music pieces, chosen and then modified by stretching or compressing the tempo with a computer music program. In group A, the treadmill cadence was measured with an electronic metronome and matched to prepared music of a corresponding tempo. Participants were trained to walk in step with the rhythm of the music on the treadmill and were given MP3 recordings to use during their home practice. Home practice consisted of ten minutes of stretching followed by twenty minutes of walking, monitored with a stretching handbook and a walking diary.
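The cadence-to-tempo matching described above is, at its core, simple arithmetic. The sketch below is not the authors' procedure, only an illustration of the idea, assuming one music beat per step and that the 5-10% increase is applied to the cadence-matched tempo; the function and parameter names are hypothetical.

```python
def training_tempo(measured_cadence_spm, speedup=0.05):
    """Match the music tempo (beats/min) to a measured step cadence
    (steps/min) and raise it by 5-10%, as described for the treadmill
    sessions. Assumes one music beat per step (an illustration only)."""
    if not 0.05 <= speedup <= 0.10:
        raise ValueError("speed-up outside the 5-10% range described")
    return measured_cadence_spm * (1.0 + speedup)

# Example: a preferred cadence of 100 steps/min would call for the
# prepared pieces to be time-stretched to roughly 105-110 BPM.
print(training_tempo(100, speedup=0.05))  # 105.0
```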
Two physicians and one research assistant performed regular evaluations of participants, including interviews about medical history, gait and balance assessments, measurements of step length and stride length, walking speed over a 6-meter walkway, and calculation of step cadence.
An assessment of the data collected throughout the study showed significant improvement in step and stride length only in group A, and this improvement was maintained to the end of the eight-week study. Group A also showed the greatest gains in speed and balance of the three groups. One participant in this group fell during an at-home training session, but he sustained no injuries and was able to continue with the study.
The improvement in gait displayed by group A confirms the effectiveness of auditory cues, such as music, in modulating an individual's gait pattern. External motor cues provided through six weeks of intensive treadmill training also increased the stride length and walking speed of those with mild to moderate PD. During these sessions, treadmill speed was increased in increments tied to the patient's ability to maintain gait. Evidence shows, however, that dual tasks, such as simultaneous auditory and attentional cues, can be detrimental to gait pattern in PD sufferers. The patients who were given music cues were nevertheless able to follow the music-treadmill training without difficulty, and music had a more therapeutic effect than a metronome: participants who walked with music cues maintained a faster cadence than those who walked to metronomic cueing. The relaxation and positive emotions that music elicits, areas in which PD patients are typically deficient, may also contribute to the overall improvements.
In conclusion, auditory rhythmic musical cues combined with treadmill training can improve gait and balance in mild to moderate PD. Musical cueing combined with treadmill walking may also benefit mood and adherence to gait training.
Reflection: 


This trial study, which measured the effectiveness of musical cueing combined with treadmill training among Parkinson's disease sufferers, affected me in an overwhelming way. I have always been a staunch advocate of practicing "music for music's sake" and not as an enabler or a tool to facilitate other learning or to accomplish other extrinsic goals. I do not promote or support the claims that music makes you smarter or that music should be used to build mathematical skills, and I have always lumped music therapy into this same category--one that relies upon music to do something beyond simply existing. However, through my encounters with research that shows the transformative effects music can have on one's overall health, I am re-evaluating my own value system and beginning to question why music should not be used to its full potential. Accordingly, I am looking at the ways in which musical practices can improve the quality of one's health, and thus one's life, and realizing that music is a more indispensable component of our lives than music for its own sake alone.


Several thoughts arise as I reflect on these results. For starters, the benefits that the participating PD patients experienced after an 8-week trial of treadmill training using musical cues offer hope to individuals suffering from PD, as well as from other diseases that attack cognitive function, such as dementia and Alzheimer's disease, and possibly even to stroke survivors. If music proves an effective tool for remoulding one's brain, prompting intact cerebral areas to assume the functions that the primary control centres no longer support, there is future potential for rehabilitation in a myriad of situations. Patients experiencing depression, anxiety, or even post-traumatic stress disorder (PTSD) stand to benefit from the therapeutic power of music. I would be fascinated to conduct my own trials with individuals in the early stages of dementia and Parkinson's disease, and also to test the effectiveness of music in more pronounced cases of Alzheimer's. It is ironic that, in an age of medical and technological advancements occurring at an astonishing rate, Western societal acceptance of music as a means of healing is only in its developing stages. Music education, therefore, needs to embrace the all-encompassing goal of not only developing musicians for aesthetic purposes, but also developing music for the health of body, mind and spirit.

Tuesday, October 18, 2011

The brain and classical music

Source: www.youtube.com.
Title: Classical Music and the Brain. http://www.youtube.com/watch?v=srv4uvTB0sI.
The video features Oliver Sacks as a subject in two experiments held at Columbia University.

In the first experiment Sacks is asked to listen to a familiar piece of music while an fMRI is performed. Then he is asked to imagine the same piece in his head; again, an fMRI is performed. The comparison of the two fMRIs shows changes in blood flow in the brain during the two sessions. In both sessions, many brain regions are active in the same way; however, from the scans it is clear that 1) the frontal lobe, which performs the higher functions, is more active in the second session, when Sacks is imagining the piece, and 2) we cannot say anything about what piece Sacks was listening to, or imagining. The final question posited in the video is: are all brains musical, or only those that are trained to be musical?

In the second experiment, Sacks undergoes another pair of fMRI scans, to see whether his brain loves Bach as much as he does. During the experiment, he listens to two pieces: one by J. S. Bach and one by L. van Beethoven. Verbally, Sacks confirms that the piece by Bach blew him away, while Beethoven’s left him flat. The scans show that his brain activity matches this emotional description: Bach’s piece activated Sacks’ amygdala (which is crucial for processing emotions), while Beethoven’s music did not.

In his verbal description of his reaction to the two performances, Sacks states that, at a certain point during the experiment, he was not able to distinguish between the music of Bach and that of Beethoven; the fMRI scan, however, confirms that his brain was.

The second experiment highlights that, in certain situations, parts of the brain operate independently from our will and consciousness (in this case, when an emotional state is involved). This makes me wonder what results the same experiment could produce with non-musically trained, or less knowledgeable, subjects, and/or with subjects who do not know the repertoire played during the experiment. For example, how would the brain process the emotional side of the “unknown”? Would this emotional side still be processed in the amygdala or in other areas? Also, would the relationship between the emotional and organizational sides of this “unknown” experience interact to generate different neuronal paths and activate different areas? My guess is that the role of memory would be crucial in this sense. But how?

Also, according to the first experiment, we can understand what areas of the brain are activated by certain information, but we cannot say anything about the content of that information as processed by the brain. Where does the integration among the different elements of the experience happen? And what is its nature?

Listening to tailor-made notched music reduces tinnitus loudness and tinnitus-related auditory cortex activity

Reference:
Okamoto, Hidehiko, Henning Stracke, Wolfgang Stoll, and Christo Pantev. “Listening to tailor-made notched music reduces tinnitus loudness and tinnitus-related auditory cortex activity.” Proceedings of the National Academy of Sciences of the United States of America vol. 107 no. 3: 1207-1210. 19 Jan. 2010. Web. 17 Oct. 2011.

Review:
Tinnitus, a ringing in the ears that is thought to be caused by “maladaptive auditory cortex reorganization,” is loud enough to affect daily life for 1-3% of the population. Scientists are just beginning to find methods of treating the causes of tinnitus rather than addressing only the symptoms, and this study demonstrates one effective way to retrain the brain in order to limit the perception of tinnitus.

The tinnitus that accompanies hearing loss reflects a “rewiring” that takes place within the central auditory system. The specific neurons affected do not stop functioning altogether, but start responding to neighbouring frequencies instead of the frequencies they used to receive. Tinnitus will not resolve on its own, but the use of “notched” music prepared specifically for each person treated was shown in this study to reduce tinnitus loudness and reorganize neural activity in the auditory cortex.

To prepare these “notched” recordings for the target patient group, music chosen by each patient had the frequency band corresponding to that patient’s tinnitus pitch removed with a digital filter. The scientists had each patient pick his or her own music on the assumption that self-chosen music would hold the patient’s attention and cause the release of dopamine, since “joyful listening to music activates the reward system of the brain and leads to release of dopamine.” Dopamine has been shown to help in cortical reorganization.
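The “notching” itself is a standard band-stop filtering operation. The sketch below is not the authors’ implementation; it is a minimal illustration using SciPy, assuming a WAV input and a one-octave-wide stop band centred on the patient’s tinnitus frequency (the bandwidth, file format, and function name are assumptions for illustration).

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

def notch_music(in_path, out_path, tinnitus_hz, octaves=1.0, order=4):
    """Remove a frequency band centred on the tinnitus pitch from a
    music file -- a rough sketch of "tailor-made notched music"."""
    fs, audio = wavfile.read(in_path)
    audio = audio.astype(np.float64)

    # Stop band spanning `octaves`, centred geometrically on tinnitus_hz.
    low = tinnitus_hz / (2 ** (octaves / 2))
    high = tinnitus_hz * (2 ** (octaves / 2))
    b, a = butter(order, [low, high], btype="bandstop", fs=fs)

    # Zero-phase filtering along the time axis (works for mono or stereo).
    filtered = filtfilt(b, a, audio, axis=0)
    wavfile.write(out_path, fs, filtered.astype(np.int16))

# Example: a patient whose tinnitus pitch is matched to 4 kHz.
# notch_music("favourite_song.wav", "notched_song.wav", tinnitus_hz=4000)
```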

After a year of treatment, the target patient group showed great improvements in terms of tinnitus loudness and auditory cortex activity related to tinnitus.

Reflections:
Tinnitus has always been a factor in my life. My father has experienced a loud ringing in the ears for as long as I can remember, and has never been able to find an effective way to reverse his hearing loss. I would be curious to see what this notched music treatment could do for him.

One thing I find particularly interesting about this study is the decision to have the patients choose which music to use for their treatment. As a music educator, I have been taught to give students choices in order to allow them to feel ownership of the work we do together. In my teaching as well as in everyday life, I have often observed that when a person is given an opportunity to have some say in what happens to her or him, the effects are tremendously positive. It is not surprising that allowing the patients in this study a say in what they would be listening to for a year probably made the treatment more effective.

How will the scientific world make notched music treatments available to the general public? Will there someday be digital notch filters in every family doctor’s office, or even just one in every major city? If the technology were to become accessible to everyone, imagine how much we could improve the quality of life for that 1-3% of the population with severe tinnitus.

Music Improves Dopaminergic Neurotransmission: The Effect of Music on Blood Pressure Regulation

Source:
Sutoo, D., & Akiyama, K. (2004). Music improves dopaminergic neurotransmission: Demonstration based on the effect of music on blood pressure regulation. Brain Research, 1016(2), 255-262.


Summary:
The focus of the study was how music reduces blood pressure in different patients. Although the specific mechanisms by which music modifies the brain are not known, music plays an important role in regulating various symptoms of epilepsy, Parkinson's disease, senile dementia, and attention deficit hyperactivity disorder.

Dopamine (DA) is a neurotransmitter involved in the increase and decrease of heart rate and blood pressure. Calmodulin (CaM) is a calcium-modulated protein that binds calcium and mediates activities such as metabolism, the immune system, and intracellular movement. Previous studies have shown that calcium increases dopamine (DA) synthesis through a process called calmodulin (CaM)-dependent phosphorylation. Calcium ions are transported to the brain through the blood; they enhance CaM activity and in turn increase DA synthesis. The increase in dopaminergic activity then inhibits sympathetic nerve activity (the "fight-or-flight" response), thus lowering blood pressure. (255-256)

The test subjects were spontaneously hypertensive (high blood pressure) rats. Mozart's Adagio from Divertimento No. 7 in D major (K. 205) was played repeatedly for 2 hours to the test group. Blood pressure was measured before, during, and after the experiment. The results were also compared with: 1. groups of rats injected with various drugs targeting dopamine and CaM pathways (W-7, SCH23390, EDTA, etc.), and 2. a non-music group of rats.
In the first group (music without injections), blood pressure decreased significantly within 30 minutes of exposure to Mozart's music (from approximately 190 to 170 mmHg). The effect persisted and reached its lowest point (165 mmHg) 30 minutes after the music finished. Blood pressure gradually returned to baseline after that.

Compared with the non-music group, which showed almost no change, the effect of music on blood pressure was evident. To confirm whether music acted through calcium and DA synthesis, the rats given the different injections went through the same music-listening experiment. Rats given drugs that inhibited CaM, calcium, or dopamine activity (W-7, EDTA, eticlopride, aMPT) showed no blood pressure change when exposed to the music. However, the dopamine D1-receptor antagonist SCH23390 did not block the effect, and that group of rats did show a decrease in blood pressure. This indicated that the calcium-dependent dopamine effect is mediated by specific dopamine receptors (D2 rather than D1).

Neuroimaging also provided information on the region in which DA levels were increased by exposure to music. Only the lateral neostriatum showed a heightened level of calcium after the music exposure. The study concluded with these findings and their implications for further research in music therapy.


Reflection:
As the introduction states, "Music has a long history of healing physical and mental illnesses" (255). It is definitely interesting to see how the brain and body react to the mere sound of music. The study was done on three groups of rats and the results were compared. It is astonishing that even after the two-hour period, the blood pressure continued to decrease. The continuing effect of music must have engaged parts of the brain that regulate short-term memory, so that even when the music is finished, the brain can still organize and process it to increase calcium levels.

The researchers state that they do not know why and how music increases calcium levels, so that would be an interesting question to look into. It is curious that music listening evokes higher calcium levels in the lateral neostriatum and nowhere else. It would be helpful to include some footnotes in the study explaining the more biological keywords.

It is also said that the increase of calcium/CaM in the DA synthesis process acts only through D2 receptors. Although there is no answer yet to why this happens, it is fascinating to see that everything in the brain is organized and categorized according to its various functions. I would also like to understand why this specific Adagio from Mozart's Divertimento was chosen, and whether certain elements in the music played a role in raising calcium levels.

In the last few paragraphs, the authors discuss the similarity between music and exercise, and how exercise stimulates the same pathway as the one illustrated in this experiment. This could be important to use in combination with music therapy for better recovery as well. One topic to explore further would be the effects of music on ADHD, epilepsy, dementia, and Parkinson's disease in terms of regulating blood pressure.

Monday, October 17, 2011

Chord Discrimination in Pigeons - Searching for the Evolutionary Origins of Human Perception of Music

Source: Brooks, Daniel I. and Cook, Robert G. Chord Discrimination in Pigeons. Music Perception: An Interdisciplinary Journal, Vol. 27, No. 3 (February 2010), pp. 183-196

Retrieved: October 17, 2011, from JSTOR
http://www.jstor.org/stable/10.1525/mp.2010.27.3.183


Summary:

Though we may consider ourselves evolved and advanced today, there is substantial evidence that humans as a species originated from the simplest and most primitive life forms. From this it follows that the components that make us what we are have their roots in more primitive life forms, so to discover how we arrived at our current state we must look to the past and examine the biological and cognitive processes of even our most distant animal relatives.

It is for this reason that the study by Daniel I. Brooks and Robert G. Cook entitled Chord Discrimination in Pigeons is useful to those who wish to understand the origins and evolution of human music perception. According to Brooks and Cook, the cognitive processes that help us identify the melodic, harmonic, and rhythmic components of music must have had precursors in non-human species.

The focus of this study is interval perception in pigeons. Brooks and Cook note that interval perception is an interesting perceptual skill to study because it is so important to the way humans perceive differences between individual pitches and melodies. Finding out how it functions and develops in other species is the first step to understanding how it developed in humans.


Studies on musical perception and discrimination in animals have been done on a number of species, including songbirds and primates, but they have never been done on non-songbirds such as pigeons. Two new experiments were reported in this article. Experiment 1 involved training the pigeons with chords developed from the C major scale. Pigeons were trained in a go/no-go task to distinguish a C major triad from 4 other triads, each of which differed from the C major triad by one semitone (the no-go triads were C minor, C suspended 4, C flat 5, and C augmented). Pigeons were given food reinforcement when they pecked after hearing a C major triad, and no reinforcement when they pecked after hearing one of the other four triads. The training took place over fifty sessions. As the training progressed, 3 out of 5 pigeons successfully learned to discriminate among the five triads. The augmented triad was shown to be the easiest for the pigeons to identify as no-go (no pecking), while the other triads proved to be of variable levels of difficulty. This experiment revealed, for the first time, that non-song birds are able to discriminate between triadic chords differing by only one semitone.
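To make the stimuli concrete: every no-go chord differs from the rewarded C major triad by moving a single chord tone one semitone. The sketch below is only an illustration (the study's register, timbre, and tuning are not given here); it lists the five triads as semitone offsets above an assumed C4 root and synthesizes them as sine tones.

```python
import numpy as np

C4 = 261.63          # assumed root frequency; the study's register may differ
SEMITONE = 2 ** (1 / 12)

# Each triad as semitone offsets from the root (go = C major, the rest = no-go).
TRIADS = {
    "C major":     (0, 4, 7),
    "C minor":     (0, 3, 7),   # third lowered one semitone
    "C sus4":      (0, 5, 7),   # third raised one semitone
    "C flat-5":    (0, 4, 6),   # fifth lowered one semitone
    "C augmented": (0, 4, 8),   # fifth raised one semitone
}

def synth_triad(offsets, dur=1.0, fs=44100):
    """Return a mono waveform with the three sine tones summed together."""
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)
    freqs = [C4 * SEMITONE ** o for o in offsets]
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

go_stimulus = synth_triad(TRIADS["C major"])   # the rewarded "go" chord
for name, offsets in TRIADS.items():
    print(name, [round(C4 * SEMITONE ** o, 1) for o in offsets])
```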


In Experiment 2, a second set of triads was added to the discrimination test. These triads were of the same types, but were based on a D root. The assumption was that if the pigeons had learned the general harmonic configuration of the chords (the relations between the notes), then they should be able to transfer this recognition to chords with a new root. This test proved more difficult for the birds; while they were still inclined to recognize the augmented triad as no-go, they were not as successful in general in identifying go versus no-go triads.


According to Brooks and Cook, these results suggest that pigeons are able to identify different frequencies but are not as adept at identifying the relationships between different frequencies played simultaneously. Previous research suggests that in the auditory domain, birds and mammals differ in their ability to use absolute versus relational stimuli. This is especially so with regard to their capacity to process the absolute value of pitch. It is speculated that in general, birds tend to recognize more the absolute value of pitches, while mammals tend to recognize relationships between pitches.


When comparing the results of this study with other studies on similar topics, such as chord discrimination in songbirds or the discrimination of consonance and dissonance in humans, two interesting points present themselves, both mentioned in the general discussion section of the article. Firstly, since studies have shown that songbirds can distinguish between chords, looking only at research on songbirds might encourage one to assume that this ability results from the biological mechanisms responsible for learning songs, and is therefore necessary for mating and survival as a species. However, since pigeons are non-songbirds and this study suggests that they possess a similar ability to distinguish between chords, this is not necessarily the case. It seems that the ability is shared widely across birds as a class rather than belonging only to particular species for which it is a necessary survival skill. Secondly, further observations were made when Brooks and Cook ran studies on humans, asking them to rate the relative consonance and dissonance of the same set of chords. Interestingly, humans and pigeons seemed to agree on many things, for example that the augmented triad was the most different, or easiest to distinguish, from the major triad. This may suggest that while cultural conditioning plays a role in our perception of harmonies, a unified account of harmonic perception may hold across species.



Reflection:


Neurological descriptions and explanations of absolute pitch in humans are still unclear and unproven. In many ways this phenomenon is a mystery for neuroscientists and musicians alike. It is interesting, and also somewhat refreshing, to learn from this article that some animals may find using absolute pitch easier and more natural than using relative pitch, while in my experience the reverse is most often the case for humans. When I was in my first year in university, one of my professors said to the class ‘ok, if you have perfect pitch put up your hand.’ When those with the skill identified themselves, a sort of smile crept over his face that seemed to say to me ‘be envious class, these are the special people.’ Indeed my experience with opinions about absolute pitch from a cultural standpoint is that it is highly valued and admired, more so than relative pitch.


Do other species have a reversed hierarchy of importance for the skills of relative and absolute pitch? Perhaps this is the case for pigeons. So it causes me to wonder, why does this happen? Why do species prefer one skill over the other? What does this say about the nature of the two skills? Is absolute pitch a learned skill, or is there a gene that bestows some species, or some people, with the power to instantly recognize frequencies? If it is learned, then why do some people seem to display the skill so quickly and accurately that it appears to be automatic, while others can only use the skill in certain situations and with much less accuracy? If it is genetic, then do humans with true absolute pitch actually have a gene that was passed down from their avian ancestors, while others discarded this gene in favour of some sort of ‘relative pitch gene?’ Is the absolute pitch gene recessive? Could it possibly go extinct like the gene for red hair?


Sunday, October 16, 2011

Dance in the Piano Studio

Source:
Seitz, Jay A. "Mind, Dance, and Pedagogy." Journal of Aesthetic Education 36.4 (2002): 37-42.

Summary:
Aesthetic movement - or "dance" - has traditionally been used mostly in early education but recently, educators have begun to examine the role of kinesthetic learning in all levels of childhood learning. A basic definition of aesthetic movement is that it consists of reflective gesture - the imitation of reality. We see this occurring naturally in young children who use parts of their body to mimic objects in the world, such as arm flapping to describe a bird or plane. Dance is also used to express emotion, which we see in the way modern dancers use their bodies to convey pathos. Jay Seitz refers to the work of Rudolf Laban, who claims that the use of movement specifically in arts education increases artistic expression in children, even more so when children are encouraged to engage in movement from a young age.

Reflection:
At some point between kindergarten and grade 1, children are expected to sit quietly for extended periods of time. While this is probably to keep classroom chaos to a minimum, it does not make as much sense in the private piano lesson setting. I began formal piano lessons at age four and I recall being reminded before each lesson that I should "sit still and listen carefully." Sitting still for thirty minutes! Although I was a quiet child, I likely had trouble holding my perfect piano posture for what would have seemed like hours. Thankfully, today's piano lessons are less rigid and I have the freedom to engage my students in movement activities right from the beginning. I have found they are able to relate motions to music better than if I simply explained. Marching is an excellent way of feeling the pulse while making flowing arm gestures illustrates legato and phrasing. I've also noticed that beginning the lesson with dancing uses up energy, meaning the student will be able to "sit still and listen carefully" later on.

Most significantly, dance helps my students feel the spaces between the notes. Piano students often fall into the habit of simply pressing the keys without thinking of the relationships between them. Engaging students in "continuous flow" activities develops their auditory imagery to hear the links between notes, which in turn leads to a more musical and intentional sense of phrasing.

This article helped me to place my students' physical abilities on a general timeline. Between the ages of three and four, children are able to mimic jumping and marching but they may have trouble balancing and performing more precise actions. By ages five and six, they have mastered skipping, and they are able to mimic geometric shapes and animals. Understanding these stages will help me to tailor movement activities to the specific abilities of each student and I am already enjoying the process of creating fun and increasingly complex exercises as my little students grow in physical and musical awareness.

From singing to speaking: facilitating recovery from nonfluent aphasia.


Reference:

Schlaug, Gottfried, Andrea Norton, Sarah Marchina, Lauryn Zipse, and Catherine Y Wan. "From singing to speaking: facilitating recovery from nonfluent aphasia." Future Neurology Sep. 2010; 5(5): 657-665.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2982746/?tool=pubmed


Summary:

Aphasia is an impairment of language ability that ranges from having difficulty remembering words to being completely unable to speak, read, or write. This disorder usually develops quickly as a result of head injury or stroke, but can develop slowly from a brain tumor, infection, or dementia. Of the estimated 750,000–800,000 new stroke cases occurring in the USA each year, approximately 25–50% present with some form of aphasia. Nonfluent aphasia is caused by damage to or developmental issues in anterior regions of the brain, including the left posterior inferior frontal gyrus known as Broca’s area.


Recovery from aphasia can happen in two ways using a recruitment process, which is an increase in the response to a stimulus owing to the activation of additional receptors, resulting from the continuous application of the stimulus with the same intensity. The first type of recovery consists of the recruitment of perilesional brain regions in the affected hemisphere, with variable recruitment of right-hemispheric regions if the lesion is small. The second type of recovery consists of the recruitment of homologous language and speech-motor regions in the unaffected hemisphere if the lesion of the affected hemisphere is extensive. Patients with large left-hemispheric lesions that result in severe nonfluent aphasia typically do not show a good natural recovery nor do they appear to be as responsive to traditional speech therapy methods as patients with smaller lesions or other types of aphasia.


Melodic intonation therapy (MIT) is an intonation-based treatment method for nonfluent or dysfluent aphasic patients that was developed in response to the observation that severely aphasic patients can often produce well-articulated, linguistically accurate words while singing, but not during speech. The intonation works by translating prosodic speech patterns (spoken phrases) into melodically intoned patterns using just two pitches. The higher pitch represents the syllables that would naturally be stressed (accented) during speech. Compared with nonintonation-based speech therapies, MIT contains two unique components: the melodic intonation (singing), with its inherent continuous voicing, and the rhythmic tapping of each syllable (using the patient’s left hand) while phrases are intoned and repeated.
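The translation step at the heart of MIT (spoken phrase → two-pitch melody, higher pitch on stressed syllables, one left-hand tap per syllable) can be made concrete with a small sketch. This is only an illustration of the mapping described above, not a clinical tool; the phrase, stress marking, and pitch pair are hypothetical.

```python
# Sketch of MIT's prosody-to-melody mapping: each syllable is assigned one
# of two pitches (higher when stressed) and one left-hand tap.
LOW, HIGH = "G3", "C4"   # hypothetical pitch pair chosen by the therapist

def intone(syllables):
    """syllables: list of (text, stressed) pairs for one spoken phrase.
    Returns (syllable, pitch, tap) triples for melodic intonation."""
    return [(text, HIGH if stressed else LOW, "left-hand tap")
            for text, stressed in syllables]

# "I am thirsty", with natural stress on "thirst-"
phrase = [("I", False), ("am", False), ("thirst", True), ("y", False)]
for syllable, pitch, tap in intone(phrase):
    print(f"{syllable:>6}  {pitch}  {tap}")
```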


In one of their previous studies, the authors compared two patients with similar speech output impairments and similar lesion sizes. One was treated with MIT and the other with a control intervention termed ‘speech repetition therapy’. Both interventions yielded significant improvements in propositional speech that generalized to nonpracticed words and phrases, but the MIT-treated patient’s gains surpassed those of the control-treated patient. Since MIT incorporates both the melodic and rhythmic aspects of music, it may be unique in its potential for engaging not only auditory–motor regions on the right but also nonlesional regions in the affected left hemisphere. The following image shows diffusion tensor imaging scans of a patient before and after an intense course of melodic intonation therapy.


There is a visible increase in the size (number of fibers and volume of tract) of the right arcuate fasciculus after therapy (B).


Research has shown that both components of MIT are capable of engaging fronto–temporal regions in the right hemisphere, thereby making it particularly well suited for patients with large left hemisphere lesions who also suffer from nonfluent aphasia. Treatment-associated neural changes in patients undergoing MIT indicate that the unique engagement of right-hemispheric structures (e.g., the superior temporal lobe, primary sensorimotor, premotor and inferior frontal gyrus regions) and changes in the connections across these brain regions may be responsible for its therapeutic effect. However, despite several small case series, the efficacy of MIT has not been substantiated and its neural correlates remain largely unexplored.


Reflections:

The research conducted by Gottfried Schlaug et al. explores new approaches to traditional therapy for patients with nonfluent aphasia. It is encouraging to discover that melodic intonation therapy engages the right fronto–temporal network through two unique components: melodic intonation and left-hand tapping. This leads to improvement in spontaneous language skills, thereby increasing the recovery rate of patients. Although approximately 1,000,000 people in the USA suffer from aphasia, reliable, standard treatment methods have not been established for this disorder. More case studies on the efficacy of MIT, as well as on the specific differences within the brain between singing and speaking, are needed before this therapy can be implemented as a standard treatment.


As a voice performer, I always find that it is much easier and faster to learn the poetry of a song by singing it and tapping the rhythm at the same time. I often tap the rhythm by clapping my hands, using conducting gestures, or even dancing if it is a dance rhythm. It seems that the more body parts you have working in synchrony, the faster the brain memorizes the musical patterns. When reading this article, I was not surprised to learn that MIT proved to be a more effective therapy for patients with nonfluent aphasia than simple speech therapy. If a patient has a lesion in the speech area of the brain, it will be difficult to stimulate that area with speech, since that specific area is precisely what has been damaged. By contrast, singing stimulates more areas of the brain, thereby engaging regions that do not have lesions. This seems to be why the recovery process is more effective when patients with nonfluent aphasia sing.

Brain-Compatible Music Teaching Part 2: Teaching “Nongame” Songs – Susan Kenney, General Music Today, 2010, 23: 31

Summary

The article begins with a summary of the previous article, “Brain-Compatible Music Teaching.” Kenney revisits the idea of whole-song learning instead of breaking a song down into phrases that students echo. This methodology allows the brain to make meaningful connections through patterning when singing.

In this article, the author explores brain-compatible assumptions that are consistent with the way we learn music. Firstly, in order to learn a song, the brain must hear it many times. This is borne out by popular music on the radio, where the listener starts to sing along after multiple listenings. Secondly, the repetition must be meaningful to the learner. This means that students learn songs best through games or activities rather than through singing alone. Yes, students may take longer to learn the song itself, but their learning will be more meaningful in the end, with a focus on process rather than product. Finally, the best way to learn a song is through whole-song learning, which encourages the brain to find meaningful patterns within parts of the whole.

Children may learn music in these three ways, but what about songs that do not easily lend themselves to actions or games? For these songs, educators can encourage movement to the beat. As the teacher models the whole song, encourage tapping games on different parts of the body. Once students are comfortable with the beat, begin to develop a sense of metre by modeling tapping with an accented beat and conducting a pattern to the song, all while singing the whole song. Remember to take time for repetition, as the brain needs to process all of the new movements along with the melody, and do not be discouraged if some students have not yet sung along with the tune.

Another method of instruction is antiphoning, where the teacher begins the phrase and drops out as the students finish it. This is more effective than echoing because it encourages students to complete the pattern rather than mirror it. The entire lesson must be rather brief to keep the students' attention, but it can be continued in the next class with the following additions.

One is the use of instruments, where students are invited to play the drum on the accented first beat and move to the weaker beats. Or, if drums are not available, students can play along with the rhythm on rhythm sticks. Auditory figure-ground is the next technique: the teacher gives clues about an important word in the song and encourages students to discover it. Once it is discovered, the students start to recognize similar patterns within the music. This exercise can also be done with rhythm patterns, where the teacher shows a pattern in the music and students must hunt for where it occurs again. Finally, give students an opportunity for solo singing whenever possible so you know where they need help.

An important aspect of brain-compatible teaching is how many different skills students can build through learning a song. Instead of reaching just one expectation, the student is achieving multiple expectations at the same time. As long as we remember the cycle of learning a new song (sensing information, integrating it into meaningful wholes, and transforming those wholes into action), we can use this brain-compatible teaching technique in each of our classes.

Reflection

Since reading the first article in this series I have started to incorporate whole-song learning in my primary music classroom. Students were frustrated at first because it was not simple echoing, but they were also engaged in “figuring out” the patterns within the music. There were points, however, where I reverted to echoing to correct mistakes and secure pitches. Now I am going to incorporate some of these techniques, such as antiphoning and auditory figure-ground, in place of simple echoing to check for understanding.

I’ve already begun incorporating movement through beat and rhythm in my classes and encourage students to move along with the music. I also emphasize the accented beat one by passing a bean bag around the circle, using shakers, and bouncing tennis balls on the downbeat. The students have really enjoyed these activities, and my next step is to incorporate a cappella singing during them. We’ve started to locate patterns in the music already, but it’s mostly teacher-led at this point. I think my next step will be asking the students to find and identify the rhythmic and/or melodic patterns as suggested in the article.

I enjoyed reading this article because it aligns with my way of teaching. I don’t have to question the author's motives or whether the methods work, because I’ve seen them in action. I like that I can pull new, practical ideas from this article that encourage music literacy with scientific support. In my music classroom I try to align our topics with our school-wide math and language program to reinforce those concepts while teaching musical ones. The response from my colleagues has been positive as the students demonstrate their understanding in their homerooms. I look forward to adding these new techniques to my repertoire and discovering more in the classroom.