Exploring the potential of large language models to understand interpersonal emotion regulation strategies from narratives.

IF 3.4 · CAS Zone 2 (Psychology) · Q1 PSYCHOLOGY, EXPERIMENTAL
Emotion · Pub Date: 2025-04-17 · DOI: 10.1037/emo0001528
Belén López-Pérez, Yuhui Chen, Xiuhui Li, Shixing Cheng, Pooya Razavi
{"title":"Exploring the potential of large language models to understand interpersonal emotion regulation strategies from narratives.","authors":"Belén López-Pérez, Yuhui Chen, Xiuhui Li, Shixing Cheng, Pooya Razavi","doi":"10.1037/emo0001528","DOIUrl":null,"url":null,"abstract":"<p><p>Interpersonal emotion regulation involves using diverse strategies to influence others' emotions, commonly assessed with questionnaires. However, this method may be less effective for individuals with limited literacy or introspection skills. To address this, recent studies have adopted narrative-based approaches, though these require time-intensive qualitative analysis. Given the potential of artificial intelligence (AI) and large language models (LLM) for information classification, we evaluated the feasibility of using AI to categorize interpersonal emotion regulation strategies. We conducted two studies in which we compared AI performance against human coding in identifying regulation strategies from narrative data. In Study 1, with 2,824 responses, ChatGPT initially achieved Kappa values over .47. Refinements in prompts (i.e., coding instructions) led to improved consistency between ChatGPT and human coders (κ > .79). In Study 2, the refined prompts demonstrated comparable accuracy (κ > .76) when analyzing a new set of responses (<i>N</i> = 2090), using both ChatGPT and Claude. Additional evaluations of LLMs' performance using different accuracy metrics pointed to notable variability in LLM's capability when interpreting narratives across different emotions and regulatory strategies. These results point to the strengths and limitations of LLMs in classifying regulation strategies, and the importance of prompt engineering and validation. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":48417,"journal":{"name":"Emotion","volume":" ","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Emotion","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/emo0001528","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Interpersonal emotion regulation involves using diverse strategies to influence others' emotions, commonly assessed with questionnaires. However, this method may be less effective for individuals with limited literacy or introspection skills. To address this, recent studies have adopted narrative-based approaches, though these require time-intensive qualitative analysis. Given the potential of artificial intelligence (AI) and large language models (LLMs) for information classification, we evaluated the feasibility of using AI to categorize interpersonal emotion regulation strategies. We conducted two studies in which we compared AI performance against human coding in identifying regulation strategies from narrative data. In Study 1, with 2,824 responses, ChatGPT initially achieved kappa values over .47. Refinements in prompts (i.e., coding instructions) led to improved consistency between ChatGPT and human coders (κ > .79). In Study 2, the refined prompts demonstrated comparable accuracy (κ > .76) when analyzing a new set of responses (N = 2,090), using both ChatGPT and Claude. Additional evaluations of LLMs' performance using different accuracy metrics pointed to notable variability in LLMs' capabilities when interpreting narratives across different emotions and regulation strategies. These results point to the strengths and limitations of LLMs in classifying regulation strategies, and the importance of prompt engineering and validation. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
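
The article's analysis code is not included on this page; as an illustration of the agreement metric the abstract reports, below is a minimal Python sketch that computes Cohen's kappa between human-assigned and LLM-assigned strategy codes using scikit-learn. The strategy labels and example codings are invented for illustration and are not the paper's actual coding scheme.

```python
# Minimal sketch (not the authors' code): Cohen's kappa between human and
# LLM codings of interpersonal emotion regulation strategies.
# The strategy labels and example codings below are hypothetical.
from sklearn.metrics import cohen_kappa_score

# Hypothetical codings for ten narratives, one strategy label each.
human_codes = [
    "cognitive_change", "attention_deployment", "cognitive_change",
    "situation_modification", "response_modulation", "cognitive_change",
    "attention_deployment", "situation_modification", "cognitive_change",
    "response_modulation",
]
llm_codes = [
    "cognitive_change", "attention_deployment", "cognitive_change",
    "situation_modification", "cognitive_change", "cognitive_change",
    "attention_deployment", "situation_modification", "cognitive_change",
    "response_modulation",
]

# Cohen's kappa adjusts raw agreement for chance agreement; the paper
# reports kappa improving from >.47 to >.79 after prompt refinement.
kappa = cohen_kappa_score(human_codes, llm_codes)
print(f"Cohen's kappa = {kappa:.2f}")
```

Kappa is preferred here over raw percent agreement because it discounts matches expected by chance, which matters when one strategy dominates the coded narratives.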

Source journal: Emotion (PSYCHOLOGY, EXPERIMENTAL)
CiteScore: 8.40
Self-citation rate: 7.10%
Articles published: 325
Review time: 8 weeks
About the journal: Emotion publishes significant contributions to the study of emotion from a wide range of theoretical traditions and research domains. The journal includes articles that advance knowledge and theory about all aspects of emotional processes, including reports of substantial empirical studies, scholarly reviews, and major theoretical articles. Submissions from all domains of emotion research are encouraged, including studies focusing on cultural, social, temperament and personality, cognitive, developmental, health, or biological variables that affect or are affected by emotional functioning. Both laboratory and field studies are appropriate for the journal, as are neuroimaging studies of emotional processes.