Learning to imitate facial expressions through sound

Narain K. Viswanathan, Carina C.J.M. de Klerk, Samuel V. Wass, Louise Goupil

Developmental Review (Q1, Psychology, Developmental), published 2024-06-18. DOI: 10.1016/j.dr.2024.101137
Article: https://www.sciencedirect.com/science/article/pii/S0273229724000212
PDF: https://www.sciencedirect.com/science/article/pii/S0273229724000212/pdfft?md5=b600b32363bc164d99608e88e1cb2665&pid=1-s2.0-S0273229724000212-main.pdf
Citations: 0
Abstract
The question of how young infants learn to imitate others’ facial expressions has been central in developmental psychology for decades. Facial imitation has been argued to constitute a particularly challenging learning task for infants because facial expressions are perceptually opaque: infants cannot see changes in their own facial configuration when they execute a motor program, so how do they learn to match these gestures with those of their interacting partners? Here we argue that this apparent paradox arises mainly when one focuses only on the visual modality, as most existing work in this field has done so far. When other modalities are considered, in particular the auditory modality, many facial expressions are not actually perceptually opaque. In fact, every orolabial expression that is accompanied by vocalisations has specific acoustic consequences, which makes it relatively transparent in the auditory modality. Here, we describe how this relative perceptual transparency can allow infants to accrue experience relevant for orolabial facial imitation every time they vocalise. We then detail two specific mechanisms that could support facial imitation learning through the auditory modality. First, we review evidence showing that experiencing correlated proprioceptive and auditory feedback when they vocalise – even when they are alone – enables infants to build audio-motor maps that could later support facial imitation of orolabial actions. Second, we show how these maps could also be used by infants, at a later stage, to support imitation of even silent orolabial facial expressions. By considering non-visual perceptual domains, this paper expands our understanding of the ontogeny of facial imitation and offers new directions for future investigations.
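The audio-motor-map idea in the abstract lends itself to a toy simulation. The sketch below is purely illustrative and is not the authors' model: it assumes a made-up forward function (vocal_tract) standing in for real articulatory-to-acoustic physics, lets an agent "babble" random motor configurations while recording their acoustic consequences, and then inverts that stored map by nearest-neighbour lookup to recover a motor command for a sound heard from a partner. All names, dimensions, and parameters are hypothetical.

```python
# Toy sketch (illustrative only, not the authors' model): an agent babbles
# random orolabial motor configurations, records the resulting sounds, and
# later inverts this audio-motor map to imitate a partner's vocalisation.

import numpy as np

rng = np.random.default_rng(0)

def vocal_tract(motor):
    """Hypothetical forward model: motor configuration -> acoustic features.
    A fixed nonlinear mixing of two motor dimensions, plus sensory noise,
    stands in for real articulation."""
    W = np.array([[0.9, 0.2],
                  [-0.3, 1.1]])
    return np.tanh(W @ motor) + 0.01 * rng.standard_normal(2)

# Stage 1: self-generated experience ("babbling") pairs motor commands with
# their auditory consequences -- this stored pairing is the audio-motor map.
motor_samples = rng.uniform(-1, 1, size=(500, 2))
audio_samples = np.array([vocal_tract(m) for m in motor_samples])

def imitate(heard_audio):
    """Invert the map: return the stored motor command whose remembered
    auditory consequence is closest to the heard sound."""
    distances = np.linalg.norm(audio_samples - heard_audio, axis=1)
    return motor_samples[np.argmin(distances)]

# Stage 2 (toy): a partner produces a sound; the agent retrieves a motor
# configuration that would approximately reproduce it.
partner_motor = np.array([0.4, -0.6])
partner_audio = vocal_tract(partner_motor)
recovered = imitate(partner_audio)
print("partner motor:", partner_motor)
print("recovered motor:", recovered)
```

Nearest-neighbour lookup is the simplest possible inverse model; developmental-robotics work on motor babbling typically uses richer regressors, and nothing in this sketch speaks to the paper's second mechanism, the later extension of such maps to silent orolabial expressions.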
About the Journal
Presenting research that bears on important conceptual issues in developmental psychology, Developmental Review: Perspectives in Behavior and Cognition provides child, developmental, child clinical, and educational psychologists with authoritative articles that reflect current thinking and cover significant scientific developments. The journal emphasizes human developmental processes and gives particular attention to issues relevant to child developmental psychology. The research addresses issues with important implications for pediatrics, psychiatry, and education, and advances understanding of socialization processes.