Saturday, November 29, 2008
ANIRUDDH D. PATEL
The Neurosciences Institute
Report: Shauna Garelick
This article addresses the conflict between two positions: that musical rhythm is an innate human capacity, or that it is a byproduct of other adaptive cognitive abilities. To frame this, Patel asks "whether there are fundamental aspects of music cognition which are innate and which cannot be explained as byproducts or secondary uses of more clearly adaptive cognitive abilities such as auditory scene analysis or language" (Patel, p. 1). Patel begins by examining whether musical rhythm exists in relation to linguistic rhythm, and he identifies areas of overlap between the two. He concludes that "grouping in music may well be an offshoot of prosodic grouping abilities" (Patel, p. 1). Parallels are drawn between musical and linguistic rhythm, their relationship to meter, and the significance of meter to both. This supports the idea that musical rhythm is perhaps an offshoot of linguistic rhythm. However, Patel also discusses a discrepancy between music and language regarding beat: humans anticipate a beat in music, whereas no such anticipation is present in language. He identifies this uniquely musical ability as beat perception and synchronization (BPS). In addressing whether BPS is an innate human quality, Patel looks at infants and then proposes that the best test would be to determine whether animals can learn BPS. If they can, then it would be clear that natural selection is not required for rhythmic music perception. Patel concludes by hypothesizing that it will not be possible to teach animals BPS.
This article discusses several aspects of music perception in a very short space, and Patel succinctly introduces the concepts under discussion. However, the article does not draw any new conclusions about the questions it asks. Instead, it suggests ways to answer them and offers hypotheses about potential outcomes should this research be completed. The comparisons drawn between music and language provided excellent insight into how different parts of the brain function to contribute to musical cognition and ability. The parallels to language were interesting in that both music and language are common to all cultures in some form or another, and thus some part of each must exist innately in the human disposition.
As musicians and music educators, we are constantly advocating for students, claiming that music is for everyone. An incredible amount of research has been done in an attempt to draw connections between music and other brain functions, in terms of mathematics, reading and literacy, and overall educational achievement. However, if studies concluded that music is innate and that we humans are pre-wired to perform various musical tasks, the argument for music education would be strengthened. While it always pains me to be required to fight for music education based on everything that has nothing to do with music, it seems that for the foreseeable future this is the inevitable truth. If there is an aspect of the brain that would not get used or exercised should music not be part of the curriculum, then it would be critical that music continue to be part of education. An interesting piece of information discussed in this article is that nobody has ever attempted to determine whether animals can achieve BPS. In wracking my brain to figure out why, I believe that people do not associate music with animals because animals lack vocal control. However, pitch perception is not synonymous with vocal control, and pitch is only half of what exists in the fundamentals of music. Patel makes an excellent point in stating that several questions could be answered by determining whether or not animals are capable of anticipating a beat. In better understanding where musical ability originates, and whether it is innate, perhaps the entire approach to music education could be revolutionized.
Aniruddh D. Patel (2006). Musical rhythm, linguistic rhythm, and human evolution. Music Perception, 24(1), 99-103. Retrieved November 29, 2008, from Research Library database. (Document ID: 1150648221).
Report By: Shauna Garelick
The following YouTube videos documented the first time a cochlear implant was activated for two different children. The first child, Maya, is approximately 18 months to 2 years old. The second child, Naomi, is approximately 7 years old. Both videos showed the moment the implant was turned on and the reaction from the child. The rest of each video showed the initial steps of the intensive therapy in the first session in which these children were able to hear. The purpose of using two different videos was to compare the children's reactions the first time they were able to hear, and the different steps the therapists took to achieve the desired responses. Maya, the two-year-old, did not at first appear to know what to make of the new sounds.
Both of the candidates that I watched definitely elicited a response when the implant was turned on. However, a much more emotional and reactive response was exhibited by the two-year-old. Clearly, it is necessary to apply different strategies to elicit responses from participants of different ages. However, it seemed that Naomi had difficulty comprehending what was asked of her. While it is difficult for me to judge without knowing the background of her signing and lip-reading abilities, the process seemed almost more difficult for an older child. She seemed concerned with getting the right answer. I had expected a more emotional response from Naomi: one of fear, excitement, or anxiety. Instead, she was rather calm and did not seem quite focused on the task at hand. The therapist was emitting a series of beeps from her computer, and Naomi was supposed to put a game piece from a Connect Four game into the apparatus each time she heard one. Sometimes it was unclear whether she had heard the noise. When asked if she had, she would confirm it and be told to put a piece in the game. However, it did not always seem like her own choice; it was more as if she was simply trying to get the answer right in order to please the therapist. The next test asked Naomi to respond the same way when she heard an "ah" sound from the therapist. It was also clear that Naomi was far more responsive to her mother, and while she still did not complete the task, it was much clearer that she was hearing her mother's voice. I found that quite interesting considering she had never heard either voice before. In the therapy done with Maya, the two-year-old, there was nothing systematic: the therapist only noted responses to a variety of sounds that were not necessarily planned or progressive. These included toys and voices. Maya became visibly overwhelmed a couple of minutes in.
This is perhaps a result of the lack of progression involved; with multiple sounds coming from multiple directions, it was a sensory overload. I imagine it would be similar to the experience of a child with autism: because the child does not know which sound is meaningful, she cannot extract a single line of meaning, does not attend to any one sound in particular, and so becomes overwhelmed. Maya did seem to turn in the direction the sound was coming from, which was quite impressive.
It seems that it is quite an incredible experience to be there at the precise moment when a person hears for the first time. In addition to these two videos, I watched a variety of other clips that showed other children activating their implants for the first time, and the reactions are different every time. Some children are scared, some anxious, some excited, some confused. It is unpredictable but quite extraordinary.
SOURCE: Exceptional Parent, 36(4), April 2006
Report: Shauna Garelick
This article tracks the progress of a girl named Ashley, who was diagnosed with autism spectrum disorder at the age of 21 months. Non-verbal and socially withdrawn, Ashley was unable to interact with her family and peers. After occupational and speech therapy failed, Ashley's mother found new research being done at the Spectrum Centre in Maryland, based on the work of an ear, nose and throat doctor, Dr. Tomatis. He discovered that "when our ears don't perceive frequencies of sound, our voice won't contain them either." His goal was to change the way people hear so that they would have better control over their voices, and he decided that to achieve this a person would need to exercise the muscles inside the ear. The underdeveloped hearing was traced back to hearing in utero, and Dr. Tomatis' therapy replicates how an individual hears in utero. Because a fetus in utero is only able to hear higher-frequency sounds from its mother, Dr. Tomatis used music by Mozart, since his music contains the most pitches above this threshold. The low frequencies are removed in the initial stages of the therapy and then reintroduced. Gregorian chant was also used, due to its relaxed and consistent rhythm that resembles the heartbeat and respiration rate. Microphone work was also used to help Ashley gain a more realistic perception of her voice and more control over it. The therapy lasts for three loops: the first is fifteen days, while loops two and three last for eight days each. On each day of a loop, the participant listens to music via headphones for two hours. Ashley's mother kept her enrolled for nine loops. Ashley eventually began expressing herself verbally and socially, and her diagnosis of autism spectrum disorder was eventually removed by the same doctor who had made it.
This article stems from Dr. Tomatis' belief that the eardrums of individuals with autism are underdeveloped. However, there is nothing in the article that substantiates this belief; it seems that the success of the therapy is what confirmed his theory. Would it not be possible to test this through biological research as well? Given the successes Dr. Tomatis has had with this therapy, it is difficult to dispute the results. When a child becomes verbal and social over such a short period after so many years of being withdrawn, it seems amazing. Questions still remain around the fact that many people with autism are verbal yet still miss social cues. Is this because they cannot properly hear inflection? Does the level of the ear's development determine this? Since there are results indicating that the diagnosis can be withdrawn after this therapy, does this imply that autism is actually a result of an underdeveloped ear? It would be interesting to know the success rates, and to what degree people succeed within this therapy, to further determine whether this is the case. This article is an amazing snapshot, and perhaps that was its intention, but it leaves many unanswered questions.
Because autism is so far from being understood, nothing is too far-fetched; clearly, there is something to this therapy. Ideally, there would be a study that used a control group to establish its efficacy and to determine which part of the therapy is the most significant. However, with autism it is difficult to develop a control group because of the wide spectrum that exists between patients. Assuming that the ears of individuals with autism are significantly underdeveloped, this therapy makes sense. However, I would be curious to know what provided Dr. Tomatis with the initial evidence for this belief. I have read that the auditory filter in people with autism is broader than in other individuals, but that was in an attempt to explain why individuals with autism have significantly better pitch perception than others. That this is true while their ears are underdeveloped leaves me with many more questions than before reading the article. Is there a relationship between these two things? Or is the underdeveloped ear strictly related to finding meaning, while pitch perception has more to do with brain function? A lack of socialization, or difficulty identifying social cues, is key in the diagnosis of autism. However, how does this relate to having underdeveloped ears? People who are deaf do not have the same difficulties in this area that people with autism do.
Ashley’s mother Sharon Ruben wrote a book about Ashley’s struggle. For more information, go to www.awakeningashley.com.
A talk conducted by Stephanie Chase with special guest Eric Barnhill
A YouTube video from the Philoctetes Center: The Multidisciplinary Study of Imagination
By Richard Burrows
This 1-hour-and-44-minute YouTube video is a recording of a lecture given by Eric Barnhill, a specialist in alternative therapy for special-needs children who utilizes techniques from Dalcroze and the Alexander Technique. His rhythm and movement therapies help with speech, literacy, coordination, and mobility.
Eric’s theory argues that certain features of music and rhythm are a gateway to movement. This movement impacts psychological processes and neurological function. His theory is organized hierarchically from the brain, to the mind, to movement, and then to music. His lecture takes us through a transformative discussion of his work and begins with the brain.
Barnhill sees the brain as both a structure and a processing organ. He argues that the brain uses "grandmother cells," which organize multiple ideas and senses into one coherent thought, while the mind is the perception tool. He states that humans are meant to move and to see motion: we constantly perceive reality through motion.
He briefly covers the 40 Hz phenomenon, arguing that the thalamus is the 'gateway' to neurological function, which is why it plays a crucial role in the phenomenon. He then discusses the idea of 'slaving', in which smaller rhythmic vibrations are taken over by larger vibrations as a form of orientation. This idealized pattern of organization allows the brain to become a teacher of itself, finding more efficient ways to handle neurological processes.
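To make the 'slaving' idea concrete for myself, I sketched a toy phase-coupling model. This is purely my own illustration, not anything from Barnhill's lecture; the coupling equation, the frequencies, and every parameter are invented just to show the qualitative behavior of a weak oscillator being captured by a stronger driver.

```python
import math

def entrain(own_freq, driver_freq, coupling, steps=20000, dt=0.001):
    """Toy model: a weak oscillator's phase is pulled toward a stronger
    driving oscillator (a simple sine-coupling rule). Returns the weak
    oscillator's effective frequency after the simulation."""
    phase = 0.0
    driver_phase = 0.0
    for _ in range(steps):
        # The weak oscillator advances at its own rate, plus a pull
        # toward the driver's phase; strong coupling "enslaves" it.
        phase += dt * (2 * math.pi * own_freq
                       + coupling * math.sin(driver_phase - phase))
        driver_phase += dt * 2 * math.pi * driver_freq
    # Effective frequency = total phase advanced / elapsed time
    return phase / (2 * math.pi * steps * dt)

# With strong coupling the 38 Hz oscillator locks onto the 40 Hz driver;
# with no coupling it keeps its own frequency.
locked = entrain(38.0, 40.0, coupling=50.0)
free = entrain(38.0, 40.0, coupling=0.0)
```

Running this, the coupled oscillator ends up vibrating at essentially the driver's 40 Hz, which is the kind of takeover the lecture describes.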
Barnhill then discusses how his theory is utilized in practice. He believes that the brain's first function is to entrain to the elements around us, and he argues that the need for rhythm is intrinsic to language recognition. When working with children, he begins by showing the importance of synchronizing counting and movement: he stomps his feet and has the children count in time. He explains the importance of having these new abilities internalized.
The remainder of the lecture is a question-and-answer period. One interesting question involves the notion of music acting as a stabilizer within speech: Barnhill is asked how the brain works in a specific subject who has a dramatic stutter when speaking, yet when he sings, the stutter disappears. Barnhill states that the brain perceives incoming information and has action patterns stored in a stereotypical way, which are then released and modulated to match the outside environment. He argues that music picks up the slack of neurological processing, and he feels this is the answer to the healing powers of music.
I found this lecture extremely interesting. Barnhill is a very engaging speaker who is very knowledgeable in his field. His answers were precise and understandable, and he wasn't afraid to say that he didn't know the answer to certain questions. He managed to cover a wide range of topics and certainly evoked an interest in pursuing further information. His examples were clear and helped further explain his theory.
As I get deeper into research on music and the brain, I find myself overwhelmed by the amount of material available to me. Just as I scratch the surface of one topic, a whole new subject emerges. The idea of music enhancing neurological processing is such a promising notion for music advocacy. Our goal as musicians and educators is to make sure that the right people see the results of this research. The "right" people are our friends, our principals, and our governments (federal, provincial, and municipal).
By Mitzi Baker, Stanford University, California, August 1, 2007
Report By: Shauna Garelick
This study examines what occurs in the brain during transitions between movements in music of the late 18th century. Its purpose is to determine how the brain sorts through the aural matter around it, deciding what is meaningful and how to organize events. Studies showed that during concerts a person's attention wanders until a transitional moment between movements, where attention stops wandering and becomes focused. "The research team showed that music engages the areas of the brain involved with paying attention, making predictions and updating the event in memory." (Vinod Menon). The study also showed that music from 200 years ago helps the brain to organize information. The study used music to analyze brain activity through a process called event segmentation, defined as the "brain's attempt to make sense of the continual flow of information that the real world generates." (Baker, Mitzi). The brain chunks information into beginnings, middles, ends, and transitional data. Dr. Jonathan Berger suggests that music could be a way of helping the brain to anticipate events and sustain information. Ten men and eight women entered the MRI scanner with noise-reducing headphones, with instructions to simply listen passively to the music. "Having a mismatch between what listeners expect to hear vs. what they actually hear—for example, if an unrelated chord follows an ongoing harmony—triggers similar ventral regions of the brain. Once activated, that region partitions the deviant chord as a different segment with distinct boundaries." (Baker, Mitzi)
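As a way of picturing event segmentation, here is a deliberately simple sketch of my own: a boundary detector that chunks a continuous stream wherever it changes sharply. The feature values and the threshold are made up; this is not the study's actual analysis, only an illustration of the chunking idea.

```python
def segment(stream, threshold):
    """Mark a segment boundary wherever the change between successive
    values exceeds a threshold -- a crude stand-in for the brain
    partitioning a continuous flow of sound into discrete events."""
    boundaries = []
    for i in range(1, len(stream)):
        if abs(stream[i] - stream[i - 1]) > threshold:
            boundaries.append(i)
    return boundaries

# A steady "movement", a silence, then a new "movement":
loudness = [5, 5, 6, 5, 0, 0, 0, 7, 8, 7]
print(segment(loudness, threshold=3))  # boundaries at the two transitions
```

The detector fires exactly at the drop into silence and the return of sound, which is where the study found the brain's attention networks lighting up.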
This study is interesting in its attempt to comprehend not just how the brain reacts to music, but how it listens to music. The discussion of how the brain reacts to silence is particularly interesting; however, there is little discussion of hypotheses as to why this might be the case. Another possible hole in the experiment is that the researchers only examined late Baroque music. They did not state why that era of music was chosen, only why the particular composer and piece within it were chosen (their unrecognizable but formulaic nature). If there is something specific to the music of that era that was being studied, it should be addressed. The language used to give instructions to the participants also seems vague and difficult to enforce. The article addressed the problem of the loud MRI machine by providing the participants with noise-reducing headphones, but the instructions asked them to "simply listen passively to the music." (Baker, Mitzi). How is passive listening defined? Is it possible to control and judge whether a person is engaged in active or passive listening? It is difficult to imagine that passive listening is going on if the only thing there is to do is listen to the music.
I became interested in this study through the research I am doing on music and autism; I am particularly interested in sensory perception. Individuals with autism struggle to identify meaningful sounds among everyday noises; however, their ability to predict sounds and their pitch perception are far superior to those of individuals without autism. One future goal articulated in the study is to better understand how people are able to pick out meaningful conversation at a party or in a noisy environment. Perhaps if the researchers succeed in determining how the brain achieves this, there will be better insight into what part of the brain affects sensory perception in individuals with autism, and why they are unable to achieve this function. I believe that this study is still very narrow in its use of music. Will silence that occurs within a piece of music have the same effect? It will be interesting to track this study and learn what the researchers do next to follow up on what they have learned. It seems there are a variety of paths this may take, given the unexpected information they have gathered.
An algorithm can turn brainwave patterns into musical scores. Dr. Galina Mindlin of the Brain Music Therapy Center explains how this can heal.
Report: Shauna Garelick
This news report introduces a new concept called 'Brain Music Therapy', a form of neurofeedback developed in Moscow and brought to North America by Dr. Galina Mindlin. Essentially, it uses the electrical activity measured in a person's brain to create music that can treat a variety of conditions, including anxiety, insomnia, depression, stress-related disorders and migraine headaches. It is based on the premise that music can heal. In order to hear a person's brain music, the patient must undergo an EEG, which measures the active and relaxed brainwaves in the individual's brain. The results of the EEG are sent to the Moscow Medical Academy, where the brainwaves are put through a mathematical formula and translated into musical notes. Both music to relax the patient and music to make the patient more alert are produced through this process. The treatment costs five hundred dollars and is not covered by insurance, and a person might need it done more than once: once the patient has become more relaxed as a result of the music, they can go back, have it done again, and get a new result that will make them even more relaxed. A study in which one group of people listened to their own brainwave music and another group listened to other people's brainwave music showed that the music was significantly more effective for those who listened to their own.
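The report never reveals the actual formula, so purely as a hypothetical illustration, here is one way a frequency-to-note mapping could work: fold each dominant EEG frequency into a musical register as a MIDI note number. Every detail below (the 10 Hz alpha reference, the base note, the logarithmic mapping) is my own assumption, not the method used in Moscow.

```python
import math

def eeg_to_midi(eeg_freqs, base_midi=57):
    """Hypothetical mapping: treat each dominant EEG frequency (Hz) as a
    ratio against a 10 Hz alpha-wave reference, convert that ratio to
    semitones, and offset from a base MIDI note (57 = the A below middle C)."""
    notes = []
    for f in eeg_freqs:
        semitones = 12 * math.log2(f / 10.0)  # 10 Hz alpha -> base note
        notes.append(base_midi + round(semitones))
    return notes

# Relaxed (alpha-range) readings cluster near the base note, while a
# faster beta-range reading lands in a higher register:
print(eeg_to_midi([10.0, 8.0, 12.0, 20.0]))
```

Even this toy version shows why the question of style matters: the character of the resulting melody depends entirely on arbitrary choices like the reference frequency and scale, not on anything inherent in the brainwaves.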
While Brain Music Therapy is apparently based on scientific research, the news report does not provide substantiated evidence to reflect this. The idea of using a person's brain electricity to determine musical notes, and ultimately a piece of music, seems plausible. However, it is interesting that the one example demonstrated was reflective of a western European style. What is relaxing for a person in the West might not be the same for a person of another background, and the report neglected to mention whether the music would change with the ethnic background of the person. It seems difficult to imagine that a person of Chinese descent would have brainwaves that translate into pentatonic melodies; one would expect the songs that comfort or relax a person to reflect the music of their own culture. Furthermore, how is a mathematical formula applied to brain electricity to create an original piece of music? Does it translate into individual notes, phrases, harmonies? Is there one part that reflects the melody and another that determines the harmonies? Some more information, or perhaps hearing more brainwave interpretations, would have been helpful in understanding this.
While I definitely believe in the powers that music possesses to reflect or change a mood, it is always a challenge to comprehend how this translates into science. Qualitative research, through questionnaires and surveys as well as behavioural observation, is useful in showing what music is capable of, but the attempt to quantify this is a challenge. While I don't wish to deny that people become more relaxed or alert through the music created from their EEGs, I am reluctant to accept that the same results could not be achieved for a group of individuals with similar characteristics in their EEG results. Until there is further explanation of exactly what it is in the EEG that translates into musical notes, I find it difficult to accept the results at face value. Another aspect of the report that I had trouble with is the research being done on pilot alertness. I have a problem with people flying planes who need music to be more alert, and I am certain that all of the passengers on their flights would agree. While I am not able to completely reject this new theory of Brain Music Therapy, I would need more information about what goes into the process of creating the music. I did try to find more information by following the link on the site, but it is not functional as of yet.
Friday, November 28, 2008
Absolute Pitch, Speech, and Tone Language: Some Experiments and a Proposed Framework
By Diana Deutsch, Trevor Henthorn & Mark Dolson
Music Perception, Spring 2004, Vol. 21, No. 3, 339-356
Review by John Picone
(Note: the response to this study is also informed by discussion of the monthly MIMM meeting at McMaster University, Friday, November 21, 2008)
Although I thoroughly enjoyed preparing for my Grade 10 RCM practical piano examination many years ago, there was one aspect of my lessons I still recall with a certain resentment. A good friend of mine at the time had his lesson immediately after mine at the old conservatory on James Street South in Hamilton. Often our teacher, Mrs. Eileen McManamy, would have our lessons overlap for 15 or 20 minutes and conduct our ear-training exercises together. I was always baffled when Paul would identify a chord as follows: "Oh, that's a diminished chord in the third position." He would then pause, turn to me with a wry smile, and continue: "In the key of F#!" Although such ability didn't count for anything on the exam, I always felt Paul was miles ahead of me in musical talent.
The rare attribute of absolute or perfect pitch is generally defined as the ability to name or produce a note of a particular pitch in the absence of a reference note. In this study, the researchers base their comparative experiments on the fact that absolute pitch involves, of necessity, verbal labeling. That is, one sings the pitch "A" in response to the verbal label "A" or the request, "please sing A." Likewise, upon hearing that pitch, the person with absolute pitch verbally labels it: "That's A." "The verbal labeling of pitches necessarily involves speech and language… it is tied to linguistic processing" (p. 342). If this is the case, then clearly a person only has to learn 12 labels – the notes within the octave – that accompany the 12 pitches. For this study, the question is not so much why some people possess absolute pitch, but why it is not universal.
The researchers posit that there is a critical period in a child’s development for the acquisition of both speech and language, and absolute pitch. They hypothesize that it is exposure to pitch in language during this critical period that has a significant influence on the development of absolute pitch. Like learning a second language, young people acquire absolute pitch almost automatically, effortlessly, without specific training.
Their experiments compare speakers of tone languages, such as Mandarin or Vietnamese, with speakers of intonation languages, such as English. While both kinds of languages employ pitch contours, tone languages also appear to employ pitch heights (registers). While it is not clearly pointed out in the article, it would seem that the pitch "height" is relative to the speaker's normal range: what counts as a "high" pitch for one tone language speaker may be a different frequency than for another speaker, but the same relative to their normal speaking range. Another difference (although this, too, was not clearly explained in the study) seems to be one of function. In English, an intonation language, consider two responses to the question, "How are you?" Two people may respond with "Fine, thanks!" in different intonations, and one intonation may make it clear that the speaker is not fine at all. Intonation in English is more closely aligned with semantic meaning than with lexical meaning, that is, the meaning of an actual word. To illustrate, the researchers examine the word "ma" in Mandarin. They note that, depending on the register and the pitch contour, the word can mean "mother," "horse," "hemp," or a reproach (p. 343). As it happens, one of the musicians in my music class speaks Mandarin and noted that there is indeed a fifth use of this word: to designate a question. I had the opportunity to record her saying five sentences with each of these lexical meanings; the pitch contours and heights were clearly different in each case.
"The question then arises as to which features of pitch are critical to conveying lexical meaning in tone language. If these features were purely relational, then the present discussion would be irrelevant to the genesis of absolute pitch. If, however, absolute pitch were employed to signal lexical meaning, then we would have the beginnings of an explanation as to why speakers of intonation languages, such as English, find absolute pitch so difficult to acquire in adulthood. The study reported here was carried out as a test of the hypothesis that absolute pitch is indeed treated by tone language speakers as a critical feature of speech. The hypothesis entails that tone language speakers would evidence absolute pitch in speech processing, and that the memory representations of the pitches of speech sounds would be qualitatively different for speakers of tone and intonation languages" (pp. 344-345).
All three experiments involved subjects reading aloud from word lists. Their voices were recorded, and pitches were measured and compared for consistency.
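The consistency measure itself is easy to sketch. Assuming, and this is my assumption rather than the paper's published procedure, that each word's reading is reduced to an average fundamental frequency in hertz, the pitch shift between two readings can be expressed in semitones:

```python
import math

def semitone_diff(f1_hz, f2_hz):
    """Difference between two pitches in semitones (12 per octave)."""
    return 12 * math.log2(f2_hz / f1_hz)

def mean_abs_shift(day1, day2):
    """Average absolute pitch shift, word by word, between two readings."""
    shifts = [abs(semitone_diff(a, b)) for a, b in zip(day1, day2)]
    return sum(shifts) / len(shifts)

# Two hypothetical readings of a five-word list (average F0 per word, Hz):
day1 = [210.0, 195.0, 220.0, 180.0, 200.0]
day2 = [212.0, 196.0, 218.0, 182.0, 199.0]
print(round(mean_abs_shift(day1, day2), 2))  # a small fraction of a semitone
```

A speaker whose readings match this closely across days would, on this measure, be drawing on a stable pitch template; a larger average shift would indicate the looser consistency the study found in English speakers.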
In the first experiment, seven Vietnamese speakers read out a list of ten words and repeated this reading on a different day. The study does not indicate how much time had elapsed. The results, according to the researchers, showed little difference in pitch and suggest that “the subjects must therefore have been referring to stable and precise absolute pitch templates in enunciating the list of words” (p. 346).
The second experiment was much the same as the first. In this case, fifteen native Mandarin speakers were the participants. Again, they read lists of words, which were recorded and analyzed for pitch variation. The second experiment, however, had the participants read the word list twice in succession on one day and twice again on a second day. The goal was to compare pitch variances between successive readings with readings on different days. "Remarkable consistencies were again obtained" (p. 347).
The third experiment was identical to the second except that native English speakers were the participants. The intonation (English) language speakers’ pitch consistency was then compared with that of the tone (Mandarin) language speakers.
"…the Mandarin and English speakers showed roughly the same degree of pitch consistency in enunciating their word lists twice in immediate succession, but the Mandarin speakers were significantly more consistent than the English speakers in enunciating their word lists on different days. Thus the Mandarin and English speakers performed differently on this reading task, both qualitatively and quantitatively, with the English speakers showing less pitch consistency across days" (p. 350).
An interesting comparison made by the researchers in their discussion refers to the neuropsychological literature showing that "whereas pitch patterns are processed for intonation purposes primarily by the nondominant hemisphere, the processing of lexical tone primarily involves the dominant hemisphere" (p. 350). They also hypothesize that "different individuals of the same sex who speak in the same dialect should match up in terms of the absolute pitch levels with which they enunciate words" (p. 350).
What about the relationship between absolute pitch in language and absolute pitch in music? While the researchers acknowledge that the present study does not address this, they “surmise that absolute pitch for music is acquired by speakers of tone language as though it were a feature of a second language” (p. 351). They do note, referring to a 1999 survey of students in U.S. conservatory, university and college music programs by Gregersen et al., that “a higher prevalence of absolute pitch was reported among those students who described their ethnic background as Asian” (p. 351).
While this study is fascinating in its comparison of tone and intonation languages, with likewise interesting results from the experiments, it is not anchored in a clearly defined conceptual framework. The term “absolute pitch” is defined in the first sentence of the study as “the ability to name or produce a note of particular pitch in the absence of a reference note” (p. 339). This is the musical definition with which most people are familiar. However, the term is never clearly defined as it refers to language. It is not possible, for example, that the pitch height and contour of the Mandarin word “ma” meaning mother refer to the same actual musical frequencies all the time. The frequency of the lexical tone used in producing this word would naturally change with age. This reader surmises that the pitches used in producing a word in a tone language are “absolute” in the sense that they are consistent relative to the speaker’s natural range. For example, a young tone language speaker may be compared to an alto saxophone, while an older tone language speaker may be compared to a baritone saxophone. The tone of the Mandarin word “ma” meaning “horse” is described by the researchers as “low, initially falling and then rising” (p. 343). Let us assume that, musically, this pitch contour starts on C, drops to A and then rises to E. Although in different registers on the two saxophones, would they, indeed, be these actual notes all the time? Is this what the researchers mean when they refer to a “pitch template”?
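To make the saxophone analogy concrete, here is a rough sketch in Python. The note choices (C dropping to A, then rising to E) are the hypothetical ones from the example above, and equal temperament is assumed. The same falling-then-rising contour realized an octave apart yields different absolute frequencies but identical interval ratios, which is one way of reading "consistent relative to the speaker's natural range":

```python
# Illustrative only: the hypothetical contour C -> A -> E from the review,
# realized in two registers. Absolute frequencies differ; the interval
# ratios (and hence the contour) are identical.

A4 = 440.0  # standard concert-A tuning reference

def freq(semitones_from_a4):
    """Equal-tempered frequency a given number of semitones from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Offsets relative to the starting note: down a minor third, then up to E.
contour = [0, -3, 4]

young_speaker = [freq(3 + s) for s in contour]   # contour starting on C5
older_speaker = [freq(-9 + s) for s in contour]  # same contour an octave lower, on C4

# The ratio between successive pitches is the same in both registers.
ratios_young = [b / a for a, b in zip(young_speaker, young_speaker[1:])]
ratios_older = [b / a for a, b in zip(older_speaker, older_speaker[1:])]
print(ratios_young)
print(ratios_older)
```

If the researchers' "pitch template" instead means fixed absolute frequencies, then the two frequency lists themselves, not merely their ratios, would have to match.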
There also seem to be two obvious extensions to this study that were not carried out. The first is a comparison between two people of the same sex and age producing words in a tone language; the apparatus was already in place for this. The second, given the reference to neuropsychological literature and hemispheric dominance, is a comparison of brain activity between tone and intonation language speakers when saying the words.
Many people around the MIMM discussion table found the experiments in this study to be unconvincing, noting poor controls and methodology.
Some interesting observations were made by the MIMM participants. Perhaps, given the paucity of expressed emotion in Asian culture, there is a limited pitch range in their speech, heightening overall attentiveness and sensitivity to pitch. Is there a genetic component involved? There was the observation that we, as intonation language speakers, associate meaning with pitch in many facets of our lives: the dial tone is always F below middle C, and most two-tone door chimes descend a major third. And everyone recognizes the Toronto Transit subway tones before the doors close.
As to the greater prevalence of absolute pitch among Asian music students, the discussion participants noted that music lessons are often begun at a much earlier age than in Western cultures. Further, music students are “weeded out” so that only the best continue to the conservatory or university level.
Perhaps the most interesting question brought forth had to do with the possibility of a pitch code in the motor cortex. Is there pitch processing involved in controlling our vocal cords? What about the embouchure of the trumpet player? The most important question was whether or not anyone had done a pitch map of the motor cortex. Has anyone studied this?
“Dibs!” said an eager student.
And the meeting adjourned.
Kreutz, G., & Lotze, M. (2008). Neuroscience of Music and Emotion.
In W. Gruhn & F. H. Rauscher (Eds.), Neurosciences in Music Pedagogy (pp. 145–169).
This chapter in Neurosciences in Music Pedagogy connects emotion in music with neural correlates in the brain. It is a comprehensive review of the music and emotion research to date, including structural mapping of emotional valence and arousal “centres” as well as functional hypotheses concerning musical emotion and performance. Here are some fascinating points:
- In contrast to the perceptual processing of music, the recognition of emotions happens so quickly that these responses are considered reflexes (146).
- Emotional responses to music should be similar to emotional responses in vision and speech (147).
- Emotions play a major role in modulating human learning processes (147).
Research has shown that there is a substantial overlap in the neural structures involved in cognition and emotion (150). Consequently, in the current literature emotion is now viewed as intrinsic to music processing. Emotions are normally grouped into two dimensions: valence (positive or negative) and arousal (low to high). The psychological literature distinguishes between a small number of basic emotions: fear, anger, disgust, happiness, surprise, sadness (149).
Music performance is one of the most complex areas of human engagement (160). Making music must be associated with high levels of motivation and reward to sustain a lifetime commitment for professionals and amateurs. Music can also elicit the emotional response known in social psychology as “flow”. Finally, emotional brain responses to music are mediated by individual differences and situational contexts.
Why do some respond so naturally to musical communication while others do not? What are they responding to? I found it very interesting to learn that emotional responses are considered reflexes. Similarly, I believe that someone who is “musical” has very quick musical reflexes: immediate sensitivity to nuance, an innate understanding of the character of rhythm and its changes. As musicians, our musical reflexes must be strong and highly developed. I feel that music is a moving, emotional experience, but the mechanism of communication is very complex and non-linear. Emotions have a sonic language that can be deciphered through cultural norms and learned expectations, or in the tension and resolution of the music itself.
Thursday, November 27, 2008
Daniel Levitin interview on Music and the Brain
Oliver Sacks probes music’s mysterious influence on the brain
This is an awesome little interactive site by the CBC. It's part of the main page for the CBC2 show 'The Nerve', a series about Music and the Human Experience, the first episode of which is about Music and the Brain, featuring our 'good friend' Daniel Levitin (http://www.cbc.ca/radio2/features/theNerve/episode1.html). This part of the site consists of a brief introduction and an image of the brain, a diagram meant to show the areas where music is processed. When you pass the mouse over this image, different sections of the brain are highlighted, and when you click on them, you get a brief description of the processes that occur there. This kind of mapping was made possible by the development of functional brain imaging in the 1990s. The brain map includes the Prefrontal Cortex, Nucleus Accumbens, Amygdala, Hippocampus, Auditory Cortex, Sensory Cortex, Motor Cortex, Corpus Callosum, and Visual Cortex. The accompanying info bits mention activities such as spatial navigation, memory, pleasure, tone perception, and so on.
This site is especially useful for its interactive quality. This allows for overlaps in areas of the brain to be understood in an enhanced way. This is important in that we are shown that different structures in the brain are not mutually exclusive. It's still somewhat deceptive in light of new findings on brain plasticity, which suggest that functions can be relocated to different parts of the brain in the case of damage or other influences. In actuality, we're not given any reason why we should be interested in the areas of the brain that process music. What good does it do to know the computational centres activated when the brain is 'on music'? (I would also be interested to know the impetus behind this reference in particular, 'on music'). The explanations of the different brain areas are also extremely brief, often unsatisfyingly so, but one point was particularly interesting. During musical improvisation, functions that suppress inhibitions must be activated so that free creativity can take place. I'm sure this is something closely linked to social factors. For a very basic understanding of which parts of the brain are stimulated by music listening, it's a fun tool.
Tuesday, November 25, 2008
The above links to an entry that is part of a broader brain blog. This article by an MA student summarizes research about music's influence on stroke recovery. Sabrina Behrens briefly details some ways that music affects cognitive and emotional functioning. MCA refers to the middle cerebral artery and distinguishes the type of stroke that affects this area. People affected by this severe type of stroke were exposed to various forms of audio therapy, while all other variables were controlled and other regular treatments were administered. The stimuli included either music, an audio book, or no additional stimulus. Patients who were exposed to music daily were less depressed and confused and performed better on assessments of verbal skills, memory and attention. Consequently, they experienced a better post-stroke quality of life.
My main purpose in posting on this article is to link you all to the Brain Blogger site. It's interesting to see what other brain blogging is happening and that we are part of a bigger discourse. I enjoy the term 'BioPsychoSocial' as used in the subtitle for this blog, as it expresses something dear to the hearts of those involved in the music-brain project: that these things are not mutually exclusive. Although I admit that this article is a simplistic summary and much nuance cannot be expected, I find it surprising that there isn't the slightest mention of what kind of music was used. I suspect that this could make a difference in the outcome of the study, and I wonder further what difference musical participation might have made. I appreciate that the author places value on the use of music to improve the quality of life of the individuals involved, which is sometimes neglected when considering physical brain benefits.
This is a blog entry by a self-proclaimed 'perennial student' in psychology. His blogs frequently concern topics of philosophical, sociological, psychological and scientific interest. This one is about evolutionary links between the brain and culture. The author, Lapierre, refers to a paper written by Queen's scholar Merlin Donald, who also wrote a book called Origins of the Modern Mind. Lapierre summarizes Donald's description of the co-evolution of brain and culture, which relied heavily on multidisciplinary sources. Donald proposes that our cognitive-cultural development went through three stages, each encompassing those that came before. Each stage deals with a new way of representing reality. These stages are important to understanding current culture, ontology, and cognition, cultural difference, and future evolutionary developments in these areas.
1) Episodic culture consists of the ability to mentally represent, but not express, complex events (including social ones) in a situation-specific way.
2) Mimetic culture consists of the ability to model actions. In this stage, early humans were able to convey a nonverbal message through conscious action.
3) Mythic culture consists of the development of speech and language, which was accompanied by many 'cultural achievements', not the least of which were music and dance. The world could then be conceived of in integrated, narrative terms.
4) Theoretic culture consists of the externalization and concretization of our representations through technology. This is a non-biological, although still highly sensual/sensory, transition to which music is highly relevant. This development allows for theoretical scrutiny of and more accurate representation of reality.
Again, whenever culture is the topic, it seems like evolution comes up. Why is this? My instinct says that it's not the only way to understand why we are the way we are. Again, this idea leads to suggestions of progress, which inevitably implies higher value for the more advanced stages of development (Lapierre says 'cultural achievements'). I think the link between the development of music and that of speech, narrative, and religion is interesting. It seems like an obvious connection, but at the same time impossible to specify or articulate. The multidisciplinarity of Donald's work reminds me that discussions of brain and music sit somewhat precariously between disciplines. Anthropology, comparative biology, neuropsychology, etc., all come into play. Some of these fields have drastically different languages and approaches. I think we need to take care not to forget this fact and to treat our own authority in foreign fields with healthy (if not severe) doses of skepticism. This site makes me want to blog for myself on ethnomusicological matters. It seems like a productive way to explore thoughts and perhaps get feedback.
Monday, November 24, 2008
Clinical Psychiatry News: Music and Mental Health
Written Sept 2008
By H. Steven Moffic
Posted by Justine
Dr. Moffic is a professor of psychiatry and behavioral medicine, as well as family and community medicine, at the Medical College of Wisconsin, Milwaukee. Moffic states that he hadn't thought much about music's relationship to mental health and psychiatric ethics until he recently came across two news items: the first was a "60 Minutes" segment, originally broadcast on April 13, about the outstanding success of the Simon Bolivar National Youth Orchestra in Venezuela; the second was the Convocation of Fellows address that renowned neurologist and author Dr. Oliver Sacks gave at the American Psychiatric Association's 2008 annual meeting on May 5, which focused on his 2007 book, "Musicophilia: Tales of Music and the Brain." He states that people in the U.S. currently spend more money on music than they do on medication. Despite its importance, he says that music is peripheral to modern psychology and is not generally part of the routine diagnostic and treatment processes. Starting from the beginning, there is evidence that music preceded language as a tool of communication, and we can see how important it has been throughout history. One of the first important social roles for music was as a means of healing. Healing could come from the rhythmic, harmonic, and/or melodic aspects of the music, as well as from the placebo effect we see in psychiatry. There has also been the opposite idea of using music to do harm. We now have a better understanding of the social and psychological experiences of music. “It seems that music, more than language, taps into primitive brain structures involved with motivation, reward, and emotion. Of course, when words are added to instrumental music, the meaning of the words also is processed in the cerebral cortex. The emotional centers turn out to be not only in the suspected limbic system but also in the heretofore unsuspected cerebellum.
The cerebellum, therefore, is hugely involved in the emotional and movement response to music.” Listening to music is said to release dopamine in the nucleus accumbens, and Dr. Claudius Conrad suggests that music might stimulate a growth hormone. At the 2007 meeting of the Society for Neuroscience, researchers from Alzahra University in Tehran, Iran, reported on a study examining whether music might help alleviate depression. They were able to report that depressed patients who listened to Beethoven's 3rd and 5th piano sonatas twice a week saw their Beck Depression scales go down significantly. Findings from the previously mentioned National Youth Orchestra in Venezuela are startling. This system uses classical music as the "vehicle for social change" among the country's 500,000 or so poor youth. According to 60 Minutes, gang activity is way down and self-esteem is up. When dealing with patients who have a hard time communicating verbally, music can be used for self-healing through listening or performing. Being exposed to joyous music can elevate mood and mental health.
I really enjoyed this article, as it gives me a good view of a psychiatrist’s thoughts on the subject of music and depression, which pertains to my essay. Yes, music changes lives; this is not some new revelation, as it has been known for quite some time. So why not use music to help heal the ill and pained? His statement that people in the U.S. currently spend more money on music than they do on medication is a little bit surprising and at the same time not. Nowadays, with the pirating of music, I seriously doubt this statement, but it probably was true 5 to 10 years ago. One of the great aspects of music, especially for those who can’t afford expensive drugs and health care, is that music is virtually free! If you already know how to play an instrument and are creative, composing or learning new repertoire to play is free if you don’t require a teacher. And if you don’t know how to play an instrument, singing is something that everyone can do. If you own a CD player and some CDs, then singing along to them is free as well. The internet is full of free music, music lessons and sheet music to play from. Believe what you want, but when you really think about it, music was given to all of us as a gift. Some choose to use it and others choose not to. Some people get enough satisfaction from listening to music, and others get more of a “fix” by actually learning how to play music and/or performing it for others. For some, music is just as addictive as a drug or alcohol. So Moffic’s statement now doesn’t seem so surprising: music is a type of medication! I think therapists such as Moffic should seriously think about using music as a form of therapy, even if they are not musically inclined. They really don’t have to be; they just need to know what music to use in their therapy so that it stimulates the correct part of their patients’ brains.
Depression is associated with low serotonin in the brain; serotonin’s purpose is to help regulate other neurotransmitter systems, and decreased serotonin activity may allow these systems to act in unusual and erratic ways. When serotonin levels are low, this promotes low levels of norepinephrine, another monoamine neurotransmitter. Some antidepressants also enhance the levels of norepinephrine directly, whereas others raise the levels of dopamine, a third monoamine neurotransmitter. As Moffic states, “listening to music is said to release dopamine in the nucleus accumbens,” and Dr. Claudius Conrad suggests that music might stimulate a growth hormone. As I do more research for my essay, I am learning that music does more than just release dopamine, and that it affects not just one or two areas of the brain but many areas all over it. Music could be the miracle drug we have all been waiting for!
Reference: Fritz, S. (2008). Why dogs don’t enjoy music: Human neurons are extraordinarily sensitive to changes in pitch. Scientific American: Mind, 19(5), 15.
Review: This particular article is succinct and full of information. It discusses human neurons and how extraordinarily sensitive they are to changes in pitch, and then compares humans to other mammals. Researchers have found it strange, and continue to find it strange, that humans can distinguish between the musical tones in a scale. Izhak Fried of U.C.L.A. and his colleagues were able to study the auditory cortex in great detail when working with epileptic patients who had electrodes implanted in their brains to pinpoint the source of their seizures. “The study revealed that groups of exquisitely sensitive neurons exist along the auditory nerve on its way from the ear to the auditory cortex. In these neurons natural sounds, such as the human voice, elicit a completely different and far more complex set of responses than do artificial noises such as pure tones.” This means that humans can detect frequency differences more easily than other mammals can; in fact, humans can detect differences as fine as one twelfth of an octave. What can we do with this information? The researchers’ main question is why this is the case. As far as we know, bats are the only mammal with a better ability to hear changes in pitch than humans. Dogs and other mammals are not nearly as sensitive, suggesting that fine discrimination of sounds is not necessary for survival. Researchers speculate that humans use their “fine hearing to facilitate working memory and learning capabilities, but more research is needed to explore this puzzle.”
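As a side note on the "one twelfth of an octave" figure: an octave is a doubling of frequency, so a twelfth of an octave (an equal-tempered semitone) corresponds to multiplying the frequency by the twelfth root of two. A quick back-of-the-envelope check in Python (the 440 Hz reference is simply standard concert A, not a value from the article):

```python
# One twelfth of an octave (a semitone) as a frequency ratio.
# An octave doubles frequency, so twelve equal steps each multiply by 2^(1/12).
semitone_ratio = 2 ** (1 / 12)
print(round(semitone_ratio, 4))  # ~1.0595, i.e. roughly a 6% change in frequency

# Around A4 (440 Hz), a semitone step is therefore only about 26 Hz:
a4 = 440.0
print(round(a4 * semitone_ratio - a4, 1))
```

So "as fine as one twelfth of an octave" means discriminating frequency changes on the order of a few percent, which is what makes the comparison with other mammals striking.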
Reflection: I found this article really neat, purely for its content. I find it really strange that humans can distinguish between sounds so easily. Thinking of a previous blog I wrote on the article Monkeys Hear Voices, I found it strange to learn that macaques can resolve only half an octave. I would have thought it would be finer, since macaques can distinguish human voices. It will be fascinating to read more on this research as it continues to develop. I wonder if there are other reasons why humans can distinguish between different pitches. Perhaps it is because we are so advanced in our use of vocalizations?
(see link below)
This page is an offshoot of the Ohio State University's main page on the field of music cognition. It is meant to summarize the questions of the field for the sake of potential students or other interested parties seeking an introduction to music cognition. This page is literally a list of questions. At the top of the page there is a very short disclaimer that helps to nuance the forthcoming list, stating that some have been incompletely answered, some are unanswerable, and some may be outright inappropriate for other reasons. The page also includes a link to recommended reading. The questions are organized into categories, as follows: Musical Origins and Musical Character, Musical Skill and Musical Intelligence, Musical Pleasure and Preference, Musical Development, Musical Organization, Music and Memory, Music and Emotion, Music Performance and Improvisation, Music's Influences, Music, Brain and Body, Music, Environment and Culture, and Modeling Music Cognition.
I appreciate the disclaimer at the top of the page; it shows sensitivity to the implications of this research. These questions would have been very useful at the beginning of the term, to help us get started considering possible research questions for this course. It may still be a good resource for us in formulating our final papers. Sometimes what seem like basic questions are the most clarifying place to start. I, myself, question some of the categorizations of these questions. For example, the origins and character of music don't intuitively fit together in my mind. Also, the question, "What makes us hate some songs?" seems to fit equally into both the 'Musical Pleasure and Preference' and the 'Music and Emotion' headings. To what extent can pleasure and emotion be conflated? In any case, it is indeed these questions, these points of interest, that drive us in our scholarship of these areas. I am interested in the way that questions of culture pertaining to music are often allocated to considerations of the 'origins' of music. I did find the questions for 'Music, Environment and Culture' somewhat weak in terms of directly linking music to the brain, but this may be because I can't anticipate these connections the way a cognitive scientist would.
Questions that Motivate Music Cognition Researchers
Reference: Minkel, J. (2008). The roots of creativity. Scientific American: Mind, 19(3), 8.
Review: This article looking at jazz musicians and improvisation is quite short but interesting. It discusses a recent study conducted by researchers at the National Institutes of Health. The study asked six professional jazz musicians to memorize, in a few days, a new piece of music they had never seen before. The musicians then played the score plus an improvisation in the same key while an MRI machine scanned their brains. Results showed that the improvised passages elicited stronger activity in the “medial prefrontal cortex, a part of the brain active in autobiographical storytelling, among other varieties of self-expression.” These results support the altered-state notion, as “activity dipped in the dorsolateral prefrontal cortex (an area linked to planning and self-censorship), which, the researchers point out, is similar to what happens during dreams.” The researchers say that the same patterns may show up in all kinds of improvisation, whether solving a problem or “riffing on a topic of interest.”
Reflection: I found this article quite interesting. As musicians, we are faced with the challenge of improvising almost every day in our practice at home. Singers create and add ornaments to pieces, and pianists develop new warm-ups, to give a couple of examples. It makes sense that improvising uses the same area of the brain as dreaming. If you think too much about improvisation while playing, your performance can get stuck and lose flow. It’s only when the mind is free, and able to play, that one can truly improvise to one’s full potential. Teachers often ask students to improvise in the secondary music classroom. Most students find this task daunting, thinking so much about improvising that they cannot achieve this “altered-state notion.” As a teacher, one could take from this article that students must work step by step towards improvisation, mastering every step with help until the crutch is no longer needed.
Part #1) http://www.youtube.com/watch?v=hlHzDjwdOL0
By Richard Burrows
This is a one-hour PowerPoint and voice-over presentation found on YouTube. A chiropractor/hypnotist named Dr. Peter DeShane has created this presentation to promote a new therapy he offers. It is called Brain Music Therapy, and it works by creating a CD of music based on your neurological brainwaves. Dr. DeShane uses an EEG machine to record brainwaves. He then sends the data to New York, where music is created based on your EEG results. He states that, by listening to two different tracks, this music of your “neurological footprint” will entrain your brain to either slow down or speed up, taking your brain into one of four stages of activity: delta, theta, alpha, and beta.
The first 45 minutes of this presentation is an overview of the brain. Dr. DeShane begins by explaining what brainwaves are, and compares the brain to a computer. He says the central nervous system can be compared to the central processing system of the computer. The brain has an input and output and can get bogged down with too much information.
The next slide breaks down the brain into 3 areas, the reptilian, mammalian, and the neocortex. The reptilian deals with flight, fight, feeding and reproduction. The mammalian is involved in emotion, and the neocortex is the thinking part of the brain.
Dr. DeShane states that we need to take care of our brains. In order to do so, we must feed each section. For the reptilian section, we must create a safe environment by looking at how our environment is arranged and what kind of environment we are creating. One should get 7 to 8 hours of sleep and participate in physical movement to burn off adrenaline.
To feed the mammal, one needs supportive and nurturing relationships: spending time with people and making an effort to connect with them. A person should also spend time in nature; the rhythms of nature are grounding and cause your brainwaves to entrain to them. To feed the neocortex, you must actively look for new things to do, which will create new neurological pathways. You can also do old things in a new way, such as brushing your teeth with the other hand, and take time to learn new skills and engage creativity.
The next slide addressed how brain activity is measured through EEG and what different types of brainwaves exist. Dr. DeShane talked about how the brain produces minute amounts of electrical activity, and EEG records the changes in this activity. Quicker rates of change indicate beta rhythms, whereas slow activity indicates delta rhythms. Dr. DeShane stated that the more active the brain, the more active the electrical impulses. He then compares the four different types of brainwaves: delta, theta, alpha, and beta. Delta (0–4 Hz) is associated with sleep or brain damage, Theta (4–8 Hz) with daydreaming and meditation, Alpha (8–12 Hz) with a relaxed and focused state, and Beta (12+ Hz) with focused concentration.
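For reference, the four bands as Dr. DeShane presents them can be summarized as a simple frequency lookup. This is only a sketch of his stated boundaries, not a claim about how clinical EEG software actually labels bands:

```python
# A minimal sketch of the EEG band boundaries as described in the
# presentation: delta 0-4 Hz, theta 4-8 Hz, alpha 8-12 Hz, beta 12+ Hz.
BANDS = [
    (4.0, "delta"),   # sleep or brain damage
    (8.0, "theta"),   # daydreaming, meditation
    (12.0, "alpha"),  # relaxed, focused state
]

def classify(freq_hz):
    """Map a dominant EEG frequency (Hz) to its named band."""
    for upper, name in BANDS:
        if freq_hz < upper:
            return name
    return "beta"  # 12 Hz and above: focused concentration

print([classify(f) for f in (2, 6, 10, 20)])  # delta, theta, alpha, beta
```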
Dr. DeShane discusses a condition he calls Minimum Brain Dysfunction (MBD). This is an epidemic that costs Canadian businesses and the healthcare system $140 billion per year. Symptoms include decreased focus, decreased memory, poor sleep, fatigue and burnout. You can avoid this problem by reducing poor nutrition, sleep deprivation, toxicity, stress, physical damage and oxygen starvation.
Dr. DeShane postulates that Brain Music Therapy (BMT) can dramatically improve MBD. By exposing the brain to music which carries its own specific footprint, entrainment will occur and dramatically improve your overall brain function. The treatment rewires, reteaches and retunes your brain to react differently.
The final screen discusses the procedure and cost of BMT. Three appointments are necessary. The first session is to record personal history and do the initial EEG. The data is then sent to New York, where music is created based on the electrical impulses of your brainwaves. The second appointment gives an overview of the CD and how to use it to benefit your situation. The final appointment is an optional hypnosis treatment. The entire process costs $550.00.
This presentation was well organized and the material very accessible. The description of brain function was clear and thorough. The material focused much more on describing the brain than on the actual music treatment, and the treatment section seemed more like an infomercial, focused on selling the product.
I am quite interested in this idea of brain entrainment. I have to admit to a bit of skepticism, but I would be interested in seeing empirical results. I felt Dr. DeShane’s brain description was a solid ‘layman’s’ overview, and it certainly helped my understanding of brain maintenance. His analogies were useful and clearly presented.
I was disappointed in the end, when I realized that this was a long infomercial for a product he was trying to sell. I really thought that this was an online lecture until he started talking about the “investment and incentives” for participating in BMT. I would suggest watching all segments up until #7.
Sunday, November 23, 2008
Reference: Hoppe, C., & Stojanovic, J. (2008). High-Aptitude Minds. Scientific American Mind: Brain, 19(4), 60-67.
This particular article is quite fascinating. If you have the time, I suggest you take a look at it! The article begins with a description of a high-aptitude mind, reviewing IQ test scores and brain size as factors relating to giftedness. The article discusses the fact that when Albert Einstein died, his brain was sliced into 240 pieces and stored in jars for safekeeping and research. I was surprised to learn that Einstein’s parietal lobe (an area thought to be critical for visual and mathematical thinking) was 15% wider than that of 35 men of normal cognitive ability. “Despite the quest to unravel the roots of high IQ, researchers say that people often overestimate the significance of intellectual ability. Studies show that practice and perseverance contribute more to accomplishment than being smart does.”
The article continues by discussing the different research available on giftedness and its relation to the brain. One particularly fascinating point in this discussion is that academic prodigies younger than eight had a thin cerebral cortex. What makes this statement interesting is that the cerebral cortex then thickened rapidly, so that by late childhood it was thicker than that of the less clever children.
Within this article are sub-articles or small boxes of information that relate to the overall article. One particular box, “Right over Left,” suggests that genius areas such as math, music and art are accompanied by extensive use of the right hemisphere of the brain. Another interesting aspect was that these mathematically, musically, and artistically gifted people tended to be left-handed, and have left-hemisphere deficits such as stuttering or dyslexia.
Another such article, “Musical Minds” discusses the biological underpinnings of musical talent. “Christian Gaser of the University of Jena in Germany and neurologist Gottfried Schlaug of Harvard Medical School also reported gray matter volume differences in motor, auditory, and visuospatial brain regions in professional keyboard players as compared with amateur musicians and nonmusicians.” Many researchers suggest that a bulk of these structural and functional brain differences result from lots of practice.
I found this article interesting because it discussed different explanations for giftedness. Although it is difficult to apply directly to the classroom, I still think this information is useful for an educator to know. Understanding how gifted children develop and what their strengths are could help teachers plan more appropriate activities for them. In addition, understanding the strengths of musically gifted students is important for curriculum planning as well. By knowing these students’ strengths and setbacks, one can help them improve in the areas where they struggle, thus furthering their abilities. Finally, I think it is good to emphasize to students that hard work and dedication do make a difference to the brain’s structure and function. It is more encouraging to know that a change can be made.
By: Richard Burrows
This YouTube video is a promotion for a new age treatment program and the websites www.musclebrain.net and www.rhythmtherapy.com. The program, entitled Meditation, Exercise, Therapy, Transforms, Awareness (METTA) Physical Therapy and Movement Medicine, is designed to retune and rewire your neuromuscular pathways. By studying this revolutionary technique under the direction of Anthony “Tone” Cardenas, you learn to:
• rewire your brain - fine tune your muscles
• balance & integrate r/l brain hemispheres
• activate & re-pattern neuronal pathways
• activate millions of latent brain cells
• enhance cerebral function, mental clarity
• enhance coordination, balance, fine motor
• improve attention, concentration & focus
• awaken creativity & natural geniusness
• achieve high level sensori-motor mastery
• still the inner chatter, attain quiet mind
• relax & attain alpha wave levels quickly
• learn movement meditation
• stress management
• free up & circulate blocked energy
The technique presents the “fun”-damentals of movement medicine, conscious exercises of attention, and revolutionary movement technology through corporeal multitasking and synchronized repatterning of the brain. Corporeal multitasking is the performance of two or more geometrical movement patterns with different limbs of the body. Synchronized repatterning is the rewiring of sensory-motor and neurological pathways. Cardenas utilizes ancient movement techniques that combine tai chi, yoga, Sufi dervish dance, and corporeal geometry with a unique rhythm therapy process. The technique is partially learned in a dream state, with hybrid movement technology.
The video has average production values, presenting a lot of peaceful imagery and relaxing music. The voiceover is full of empty rhetoric, relying on multisyllabic words to convey some sense of professionalism.
What a load of crap! I can't tell if this guy is serious or not. The footage immediately makes me think of a burned-out hippy looking to make an extra dollar off some unfortunate person searching for a quick solution. I particularly enjoy all of the scientific referencing, after which he concludes with “these techniques will blow your mind.” He claims that if you move your limbs in opposite directions while standing on one foot, you will enhance your cerebral function, balance your hemispheres, and activate brain cells. The only things this activity will actually do are give you lower back pain from standing on one foot for too long and possibly help with coordination. I think chewing gum while walking does the same. He also states that this activity will awaken the genius in you. Apparently this didn’t work for him.
Overall, I think this video was more enjoyable to watch from an entertainment perspective. He shares no proof or evidence that these techniques work, other than simply stating that they do.
I just found an example of his techniques in practice. This is a must see... http://www.youtube.com/watch?v=RMHI761On5g
Reference: Belin, Pascal. (2008). Monkeys Hear Voices. Scientific American Mind: Brain, 19(4), 14-15.
This particular article discusses new research suggesting that a brain area devoted to processing voices is not as uniquely human as previously thought. Most species use vocalizations to communicate with one another. However, humans are the only species whose vocalizations have developed into that most effective method of communication: speech. The researchers ask questions such as, “How did our ancestors become the only speaking animals, some tens of thousands of years ago? Did this change happen abruptly, involving the sudden appearance of a new cerebral region or pattern of cerebral connections?” These are interesting questions that the researchers begin to explore in this article.
Researchers think they may have found the missing link between the brain of vocalizing nonhuman species and the human brain. This suggests that there is evidence of a cerebral region that is specialized for processing voice in humans that also exists in rhesus macaques. “Neuroscientist Christopher I. Petkov of the Max Planck Institute for Biological Cybernetics in Tubingen, Germany, and his colleagues used functional magnetic resonance imaging to explore the macaque brain.” The monkeys listened to different natural sounds, including macaque vocalizations. The researchers found a “discrete region of the anterior temporal lobe in which activity was greater for macaque vocalizations than for other sound categories.” Another phenomenon observed was that the macaques showed “neuronal adaptation,” recognizing different calls coming from the same individual.
This is quite fascinating research, as we are beginning to realize that the voice area in the human brain is not unique to our species. As the article states, this could also mean that the voice area has a long evolutionary history in both humans and macaques. It is also neat that Petkov and his colleagues have actually located a cerebral region for these abilities. Another fascinating aspect of this research is that the “identity-specific neuronal adaptation was observed only in the right hemisphere of the macaque brain, exactly as in the human studies.” This suggests that the right hemisphere played the main role in how speech emerged in our ancestry. This is also exciting research because humans and macaques can be studied and compared with one another using similar methodologies.