Virgile Daunay, David Reby, Gregory A Bryant, Katarzyna Pisanski
{"title":"社会背景下自发性笑的产生和感知。","authors":"Virgile Daunay, David Reby, Gregory A Bryant, Katarzyna Pisanski","doi":"10.1121/10.0036388","DOIUrl":null,"url":null,"abstract":"<p><p>Human nonverbal vocalizations such as laughter communicate emotion, motivation, and intent during social interactions. While differences between spontaneous and volitional laughs have been described, little is known about the communicative functions of volitional (voluntary) laughter-a complex signal used across diverse social contexts. Here, we examined whether the acoustic structure of volitional laughter encodes social contextual information recognizable by humans and computers. We asked men and women to produce volitional laughs in eight distinct social contexts ranging from positive (e.g., watching a comedy) to negative valence (e.g., embarrassment). Human listeners and machine classification algorithms accurately identified most laughter contexts above chance. However, confusion often arose within valence categories, and could be largely explained by shared acoustics. Although some acoustic features varied across social contexts, including fundamental frequency (perceived as voice pitch) and energy parameters (entropy variance, loudness, spectral centroid, and cepstral peak prominence), which also predicted listeners' recognition of laughter contexts, laughs evoked across different social contexts still often overlapped in acoustic and perceptual space. Thus, we show that volitional laughter can convey some reliable information about social context, but much of this is tied to valence, suggesting that volitional laughter is a graded rather than discrete vocal signal.</p>","PeriodicalId":17168,"journal":{"name":"Journal of the Acoustical Society of America","volume":"157 4","pages":"2774-2789"},"PeriodicalIF":2.1000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Production and perception of volitional laughter across social contexts.\",\"authors\":\"Virgile Daunay, David Reby, Gregory A Bryant, Katarzyna Pisanski\",\"doi\":\"10.1121/10.0036388\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Human nonverbal vocalizations such as laughter communicate emotion, motivation, and intent during social interactions. While differences between spontaneous and volitional laughs have been described, little is known about the communicative functions of volitional (voluntary) laughter-a complex signal used across diverse social contexts. Here, we examined whether the acoustic structure of volitional laughter encodes social contextual information recognizable by humans and computers. We asked men and women to produce volitional laughs in eight distinct social contexts ranging from positive (e.g., watching a comedy) to negative valence (e.g., embarrassment). Human listeners and machine classification algorithms accurately identified most laughter contexts above chance. However, confusion often arose within valence categories, and could be largely explained by shared acoustics. Although some acoustic features varied across social contexts, including fundamental frequency (perceived as voice pitch) and energy parameters (entropy variance, loudness, spectral centroid, and cepstral peak prominence), which also predicted listeners' recognition of laughter contexts, laughs evoked across different social contexts still often overlapped in acoustic and perceptual space. 
Thus, we show that volitional laughter can convey some reliable information about social context, but much of this is tied to valence, suggesting that volitional laughter is a graded rather than discrete vocal signal.</p>\",\"PeriodicalId\":17168,\"journal\":{\"name\":\"Journal of the Acoustical Society of America\",\"volume\":\"157 4\",\"pages\":\"2774-2789\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2025-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of the Acoustical Society of America\",\"FirstCategoryId\":\"101\",\"ListUrlMain\":\"https://doi.org/10.1121/10.0036388\",\"RegionNum\":2,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ACOUSTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the Acoustical Society of America","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1121/10.0036388","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ACOUSTICS","Score":null,"Total":0}
Production and perception of volitional laughter across social contexts.
Human nonverbal vocalizations such as laughter communicate emotion, motivation, and intent during social interactions. While differences between spontaneous and volitional laughs have been described, little is known about the communicative functions of volitional (voluntary) laughter, a complex signal used across diverse social contexts. Here, we examined whether the acoustic structure of volitional laughter encodes social contextual information recognizable by humans and computers. We asked men and women to produce volitional laughs in eight distinct social contexts ranging from positive valence (e.g., watching a comedy) to negative valence (e.g., embarrassment). Human listeners and machine classification algorithms identified most laughter contexts at above-chance accuracy. However, confusion often arose within valence categories and could be largely explained by shared acoustics. Although some acoustic features varied across social contexts, including fundamental frequency (perceived as voice pitch) and energy parameters (entropy variance, loudness, spectral centroid, and cepstral peak prominence), and these features also predicted listeners' recognition of laughter contexts, laughs evoked in different social contexts still often overlapped in acoustic and perceptual space. Thus, we show that volitional laughter can convey some reliable information about social context, but much of this information is tied to valence, suggesting that volitional laughter is a graded rather than discrete vocal signal.
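The abstract describes a two-part pipeline, extracting acoustic features from laugh recordings and then classifying the eliciting social context, without publishing its implementation. The Python sketch below is only an illustration of that general approach, assuming librosa for feature extraction and a scikit-learn random forest as a stand-in classifier; the file names and context labels are hypothetical, and cepstral peak prominence is omitted because librosa provides no built-in estimator for it.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def laugh_features(path):
    """Summarize one laugh recording with features named in the abstract."""
    y, sr = librosa.load(path, sr=None)
    # Fundamental frequency (perceived as voice pitch), tracked with pYIN;
    # unvoiced frames come back as NaN, hence nanmean below.
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    # Frame-wise spectral entropy from the normalized magnitude spectrum;
    # its variance over time ("entropy variance") indexes changes in noisiness.
    S = np.abs(librosa.stft(y))
    p = S / (S.sum(axis=0, keepdims=True) + 1e-10)
    entropy = -(p * np.log2(p + 1e-10)).sum(axis=0)
    return np.array([
        np.nanmean(f0),                                        # mean f0 (Hz)
        np.var(entropy),                                       # entropy variance
        librosa.feature.rms(y=y).mean(),                       # loudness proxy (RMS)
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),  # spectral centroid (Hz)
    ])

# Hypothetical corpus: one WAV file per laugh, labeled with the social
# context that elicited it (the study used eight such contexts).
paths = ["laugh_comedy.wav", "laugh_embarrassment.wav"]
labels = ["comedy", "embarrassment"]
X = np.vstack([laugh_features(p) for p in paths])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(X))  # sanity check on the training clips only
```

In a real replication, each context would need many laugh clips from many speakers, with evaluation on held-out speakers or clips; the abstract alone does not specify these details.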
Journal introduction:
Since 1929, The Journal of the Acoustical Society of America has been the leading source of theoretical and experimental research results in the broad interdisciplinary study of sound. Subject coverage includes: linear and nonlinear acoustics; aeroacoustics, underwater sound, and acoustical oceanography; ultrasonics and quantum acoustics; architectural and structural acoustics and vibration; speech, music, and noise; psychology and physiology of hearing; engineering acoustics and transduction; and bioacoustics, including animal bioacoustics.