Understanding emotion in different languages: Emotional prosody recognition in bilinguals and the impact of background noise

Open Access
- Author:
- Stokes, Gabrielle
- Area of Honors:
- Psychology
- Degree:
- Bachelor of Science
- Document Type:
- Thesis
- Thesis Supervisors:
- Michele Diaz, Thesis Supervisor
- Frank Gerard Hillary, Thesis Honors Advisor
- Janet Van Hell, Thesis Supervisor
- Keywords:
- Bilingual
- L2-processing
- Background Noise
- Emotional Prosody
- Abstract:
- There are two main components to how we speak: what we say and how we say it. Currently, improvements in cross-cultural communication focus on translational efforts and on getting messages across semantically (what we say). However, among the most crucial components of connecting and communicating are the emotions we convey when we speak (the how). While there is an extensive literature on emotional prosody, very little is known about how specific acoustic features are linked to emotions and how environmental or cultural factors can influence recognition (Larrouy-Maestri et al., 2024). Previous studies have examined how listeners recognize emotion in a foreign language that they do not speak, with the overarching goal of examining whether aspects of emotional prosody may have universal qualities (e.g., Paulmann & Uskul, 2014; Pell et al., 2009). These studies found an “in-group advantage” in emotional prosody recognition: listeners recognize emotions more accurately in their native language than in a foreign language (Pell et al., 2009). However, it remains unclear how these effects apply to bilingual individuals. Our study investigated emotional prosody recognition across different languages in bilinguals and examined whether, and if so how, this “in-group advantage” applies to individuals’ second language (L2) as well as their first language (L1). Dutch-English bilinguals listened to pseudo-sentences in Dutch (L1), English (L2), Arabic (foreign), and Hindi (foreign). These pseudo-sentences were created to match the phonetics and syntax of each language but contained meaningless words, allowing participants to focus only on the prosodic elements of the speech (e.g., the English pseudo-sentence “the fector egzullin the boshent”). For each language, participants listened to pseudo-utterances spoken with happy, sad, fearful, angry, and neutral intonation. After listening to each pseudo-utterance, participants indicated by button press which emotion they thought the speaker had expressed. To investigate the effect of background noise on emotion recognition (mimicking a real-world listening scenario), pseudo-utterances were presented either in quiet or in two-talker Dutch babble. We found that Dutch-English bilinguals showed an advantage in correctly identifying emotional prosody in L1 and in L2 over the foreign languages Arabic and Hindi, both in the quiet condition and in the noise (two-talker babble) condition. These findings indicate that the in-group advantage emerges not only in bilinguals’ first language but also in their second (and less proficient) language. We also found that participants identified emotions less accurately in the babble condition than in the quiet condition and, surprisingly, that bilinguals were better at identifying emotions in L2 English than in L1 Dutch pseudo-sentences. Finally, earlier age of English acquisition and higher English proficiency (as measured by LexTALE; Lemhöfer & Broersma, 2012) were positively correlated with emotion identification accuracy. The results are discussed in relation to current theories on emotional prosody and emotion recognition in the context of cross-cultural communication.
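
To give a concrete sense of the proficiency–accuracy relationship reported above, the minimal Python sketch below shows how a Pearson correlation between LexTALE scores and emotion-identification accuracy could be computed. The numbers are made-up illustrative values, not the thesis data, and the analysis shown is a generic sketch rather than the study's actual analysis pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant values (illustrative only, not the thesis data):
# proportion of pseudo-utterances whose emotion was identified correctly,
# and the participant's LexTALE English proficiency score.
accuracy = np.array([0.72, 0.65, 0.80, 0.58, 0.77, 0.69, 0.83, 0.61])
lextale = np.array([78.0, 70.5, 88.0, 62.5, 84.0, 73.0, 91.0, 66.0])

# Pearson correlation between proficiency and emotion-identification accuracy;
# a positive r would mirror the direction of the relationship reported above.
r, p = pearsonr(lextale, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")
```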