Tuesday, December 1, 2009

A Brief Summary of an Experiment Dealing with Autism, Emotion, and SCRs.

Khalfa, Stephanie, and Isabelle Peretz. “Atypical Emotional Judgments and Skin Conductance Responses to Music and Language in Autism.” In Autism Research Advances, edited by L.B. Zhao, 101–119. New York: Nova Science Publishers, 2007.

General Summary:
This was a really interesting experiment, designed to further characterize emotional processing in autism by monitoring responses to music samples (small clips) and verbal samples (short recorded spoken sentences). The work was divided into two somewhat separate experiments. In each one, both emotion recognition and feeling (the actual, automatic, physiological responses generated by each participant) were tested and recorded. It is worth noting that to test the feeling aspect, skin conductance responses (SCRs) were recorded: the rapid fluctuations in the electrodermal activity of the sweat glands were measured. During some types of emotional responses, the sympathetic nervous system releases acetylcholine, causing electrical fluctuations in the skin that are easily detected. In this way, a person’s arousal level (stimulated vs. relaxed) and valence perception (pleasant vs. unpleasant) can be physically shown and recorded. [Note: The researchers made specific mention of the brain’s amygdala and its role in SCRs; amygdala activity results in a positive SCR.] Emotion recognition was measured by having each participant label samples as “scary,” “happy,” “sad,” or “peaceful,” and then rate each judgment on a 10-point scale.
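To make the SCR measurement concrete, here is a minimal sketch (my own illustration, not the authors’ analysis pipeline) of how skin conductance responses might be counted and sized in a window following a stimulus; the sampling rate, window length, and amplitude threshold are assumed values.

    import numpy as np
    from scipy.signal import find_peaks

    def count_scrs(conductance, fs, onset_s, window_s=5.0, min_amp_us=0.05):
        """Count skin conductance responses (SCRs) after a stimulus.

        conductance : 1-D array of skin conductance in microsiemens (uS)
        fs          : sampling rate in Hz
        onset_s     : stimulus onset time in seconds
        window_s    : post-stimulus window length in seconds (assumed value)
        min_amp_us  : minimum rise, in uS, counted as an SCR (assumed value)
        """
        start = int(onset_s * fs)
        stop = int((onset_s + window_s) * fs)
        segment = conductance[start:stop]
        # An SCR shows up as a transient rise; peak prominence approximates its amplitude.
        peaks, props = find_peaks(segment, prominence=min_amp_us)
        return len(peaks), props["prominences"]

    # Synthetic demo: a flat 2.0 uS baseline plus one small bump about 8 s in.
    fs = 32  # samples per second
    t = np.arange(0, 20, 1.0 / fs)
    signal = 2.0 + 0.2 * np.exp(-((t - 8.0) ** 2) / 0.5)
    n_responses, amplitudes = count_scrs(signal, fs, onset_s=6.0)
    print(n_responses, amplitudes)  # expect 1 response with amplitude near 0.2 uS

Larger responses in such a window are read as higher physiological arousal; that is the quantity being compared between groups below.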
Experiment 1: SCRs and Emotional Evaluation of Music, Summary:
For this study, the “clinical” group was made up of 9 participants classed as “high-functioning autistic,” each with an IQ greater than 89 and each older than 16. The “control” group was made up of 13 non-autistic individuals with similar average IQs and ages.
The musical clips, or stimuli, were 7-second excerpts of Western classical music. There were 5 separate samples, drawn from the Baroque, Classical, Romantic, and early 20th-century periods. Each sample in its original form was labelled “consonant,” and each was also altered by shifting key pitches by semitones to create a “dissonant” version. Each consonant and dissonant clip was then either sped up and set in major mode or slowed down and set in minor mode, creating four categories: happy consonant (original form, fast, major), happy dissonant (dissonant form, fast, major), sad consonant (original form, slow, minor), and sad dissonant (dissonant form, slow, minor).
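Spelled out, the design simply crosses two manipulations: harmony (consonant vs. dissonant) and tempo/mode (fast-major “happy” vs. slow-minor “sad”). Here is a tiny sketch of that crossing (my own labels, not the authors’ actual stimulus list):

    from itertools import product

    harmonies = ["consonant", "dissonant"]
    emotions = {"happy": ("fast", "major"), "sad": ("slow", "minor")}

    # Crossing the two manipulations yields the four stimulus categories above.
    categories = []
    for emotion, harmony in product(emotions, harmonies):
        tempo, mode = emotions[emotion]
        categories.append({"emotion": emotion, "harmony": harmony,
                           "tempo": tempo, "mode": mode})

    for c in categories:
        print(c)
    # -> happy consonant, happy dissonant, sad consonant, sad dissonant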
Each participant evaluated arousal (stimulation) and valence (pleasantness) on 10-point scales (1 = calm or unpleasant; 10 = stimulating or pleasant), where arousal was reflected in the happy/sad aspect of each clip and valence in the consonant/dissonant aspect. While listening to the samples, each participant had his or her skin conductance recorded.
The results: The control group judged the consonant excerpts as more pleasant than the clinical group did, but both groups judged the dissonant excerpts as equally unpleasant. Both groups judged the happy (fast) excerpts as more stimulating than the sad ones, and the dissonant excerpts as more stimulating than the consonant ones. Autistic individuals exhibited larger SCR amplitudes on average, especially for dissonant and “sad” music.
ONLY the control group exhibited variations in SCR in relation to the music: happy consonant clips caused larger SCRs, while happy dissonant and sad consonant clips caused smaller ones. In the clinical group, all four “emotion” categories elicited similar SCR levels. Within the autistic group, the researchers distinguished “high responders” from “normal responders”; the normal responders exhibited SCRs similar to those of the control group.

My Response, Experiment 1:
I would really like to have had a listing of exactly which musical samples the experimenters used. The “consonant” ones could very well have included chords that might be perceived as dissonant, muddying the conscious-recognition aspect of the valence measurements. This would have been especially true for highly chromatic pieces of the Romantic era and for many 20th-century pieces.
This study doesn’t mention the cultural background of each participant. The “musical categories” (happy, sad, consonant, dissonant, etc.) were all assumed to be understood by the participants, i.e. each was assumed to be sensitive only to a Western musical interpretation. If participants were more familiar with Eastern musical styles, their perception of the samples could have differed from what was expected, skewing the results.
If the autistic participants had musical training or attended many concerts (especially classical ones), they may have been able to recognize the intended feeling of each sample without actually feeling or understanding the emotions of the music.
Would SCR recording pick up “sweat signals” from participants who felt uneasy or nervous in the setting and situation? I know that when I am nervous, I sometimes get sweaty palms. Would something like this have skewed the results? Could the recorded emotion have come from sources other than the music, for example nerves? With these variables in mind, how accurate was the SCR recording?
The control and clinical groups were very small. I would be more convinced of the data comparisons if this had been a more extensive study, involving more people.
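To put rough numbers on that concern: a quick power calculation of my own (not from the chapter), assuming a conventional comparison of two independent group means, suggests that groups of 9 and 13 would detect even a large effect less than half the time.

    from statsmodels.stats.power import TTestIndPower

    # Power of an independent-samples t-test with n = 9 (clinical) vs. n = 13
    # (control) at alpha = 0.05, for a "large" effect by Cohen's convention.
    power = TTestIndPower().solve_power(effect_size=0.8,
                                        nobs1=9,
                                        ratio=13 / 9,
                                        alpha=0.05,
                                        alternative="two-sided")
    print(f"power to detect d = 0.8: {power:.2f}")  # roughly 0.4

With power that low, it is hard to tell whether a non-significant group difference reflects genuine similarity or simply too few participants.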
On the judging of “happy” vs. “sad” as “stimulating” vs. “unstimulating”: I think the stimulation factor could have come simply from the speed of each clip, and not necessarily from its “happy” or “sad” emotional content; in that case, the labelling of clips as stimulating or not may have had nothing to do with the emotional quality of the music.
In my opinion, 7-second clips are not a good way to elicit emotion through music; the clips used were really only sound samples. I believe complex emotions are elicited more by actual musical pieces heard as a whole, where the music develops and changes over time, especially in form and harmony.
I think such a small clinical group would not necessarily represent the whole population of autistic people. As a result, dividing them into “normal” and “high” responders may have been erroneous. Perhaps autistic individuals exhibit a range of autonomic responses, just like the general population, and should not be labelled as two separate and distinct groups.

Experiment 2: Emotional Judgments for Music and Language, Summary:
In this experiment, there were 18 individuals in the clinical (autistic) group and 19 in the control group. Twenty-eight short (7-second) musical excerpts, written expressly for this experiment, were played; each was meant to represent sadness, happiness, fear, or peacefulness. There were also 24 verbal stimuli: short recorded spoken sentences designed to express one of the same 4 emotions. For each sample (musical and verbal), the participants were asked to choose one of the 4 emotions and to assess arousal and valence on the 10-point scales.
The results from this experiment showed some interesting things. Participants judged fear and sadness as unpleasant in the verbal stimuli, but not in the musical stimuli. Also, participants judged “musical sadness” as more relaxing than “verbal sadness.” “Happiness” in music was shown to be more stimulating than “fear,” but this difference was not seen in the “happy” vs. “fearful” language samples. Overall, the results failed to show a difference between how autistic and “normal” participants made emotional judgments, in music or language.
In terms of emotion recognition, the researchers noted that the “normal” participants could better recognize verbal sadness. Both groups had difficulty recognizing fear, sadness, and peacefulness in music, but could recognize happiness. There was no significant difference in how the two groups identified musical emotions.
In the control group, emotions in the verbal stimuli (language) were better recognized than emotions in the music. Some emotions in the verbal stimuli were deemed unpleasant, while no emotions conveyed by the music were deemed as such. There was no evidence of a lack of emotion recognition in individuals with high-functioning autism, for either the musical or the verbal samples. However, autistic individuals gave less extreme valence ratings to the musical samples (i.e. labelled them less pleasant or unpleasant, on average) than the control group did. This may indicate that autistic individuals lack the ability to recognize subtle emotional aspects.

My Response, Experiment 2:
The musical excerpts used in this experiment were, essentially, “canned” music: they involved only the piano’s timbre and sound, and were very short. How much emotion can possibly be conveyed in 7 seconds? How would the results have differed if the samples had been more extensive and drawn from the symphonic repertoire? Would orchestration have caused a difference in the perception of emotion? Would familiarity or unfamiliarity with the music cause any differences in the perception of its underlying emotion? Also, would it have been useful to use some musical samples that involved words, combining the musical and verbal stimuli? What would this have shown?
What would have been shown, especially with the musical samples, if participants had not been given the 4 emotions to choose from, but had to pick their own labels? How would the autistic individuals have compared to the control group? Would this have gotten rid of any pre-conceived cultural notions, such as equating fast, major-mode music with “happiness”?
How would cultural pre-conceptions have affected the emotion recognition in autism? Are we taught to hear music a certain way? Can emotion recognition in music be learned in the same way that an autistic individual learns what a smile means?
