Can ChatGPT recognize impoliteness? An exploratory study of the pragmatic awareness of a large language model

Impact Factor: 1.8 · Region 1 (Literature) · Language & Linguistics
Marta Andersson, Dan McIntyre
{"title":"Can ChatGPT recognize impoliteness? An exploratory study of the pragmatic awareness of a large language model","authors":"Marta Andersson,&nbsp;Dan McIntyre","doi":"10.1016/j.pragma.2025.02.001","DOIUrl":null,"url":null,"abstract":"<div><div>The practical potential of Large Language Models (LLMs) depends in part on their ability to accurately interpret pragmatic functions. In this article, we assess ChatGPT 3.5’s ability to identify and interpret linguistic impoliteness across a series of text examples. We provided ChatGPT 3.5 with instances of implicational, metalinguistic, and explicit impoliteness, alongside sarcasm, unpalatable questions, erotic talk, and unmarked impolite linguistic behavior, asking (i) whether impoliteness was present, and (ii) its source. We then further tested the bot’s ability to identify impoliteness by asking it to remove it from a series of text examples. ChatGPT 3.5 generally performed well, recognizing both conventionalized lexicogrammatical forms and context-sensitive cases. However, it struggled to account for all impoliteness. In some cases, the model was more sensitive to potentially offensive expressions than humans are, as a result of its design, training and/or inability to sufficiently determine the situational context of the examples. We also found that the model had difficulties sometimes in interpreting impoliteness generated through implicature. Given that impoliteness is a complex and multi-functional phenomenon, we consider our findings to contribute to increasing public awareness not only about the use of AI technologies but also about improving their safety, transparency, and reliability.</div></div>","PeriodicalId":16899,"journal":{"name":"Journal of Pragmatics","volume":"239 ","pages":"Pages 16-36"},"PeriodicalIF":1.8000,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Pragmatics","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0378216625000323","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Citations: 0

Abstract

The practical potential of Large Language Models (LLMs) depends in part on their ability to accurately interpret pragmatic functions. In this article, we assess ChatGPT 3.5’s ability to identify and interpret linguistic impoliteness across a series of text examples. We provided ChatGPT 3.5 with instances of implicational, metalinguistic, and explicit impoliteness, alongside sarcasm, unpalatable questions, erotic talk, and unmarked impolite linguistic behavior, asking (i) whether impoliteness was present and (ii) what its source was. We then further tested the bot’s ability to identify impoliteness by asking it to remove it from a series of text examples. ChatGPT 3.5 generally performed well, recognizing both conventionalized lexicogrammatical forms and context-sensitive cases. However, it struggled to account for all instances of impoliteness. In some cases, the model was more sensitive to potentially offensive expressions than humans are, as a result of its design, its training, and/or its inability to sufficiently determine the situational context of the examples. We also found that the model sometimes had difficulty interpreting impoliteness generated through implicature. Given that impoliteness is a complex and multi-functional phenomenon, we consider our findings to contribute to raising public awareness not only of the use of AI technologies but also of how their safety, transparency, and reliability might be improved.
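The abstract outlines a two-step elicitation protocol: present the model with an utterance, ask whether impoliteness is present and where it comes from, then test recognition indirectly by asking the model to remove the impoliteness. The sketch below shows how such a protocol could be approximated programmatically. It is illustrative only: the study interacted with ChatGPT 3.5 directly rather than through an API, and the model name, prompts, and example utterance here are assumptions, not the paper's materials.

# Illustrative sketch (assumed setup, not the authors' procedure): the same
# two-step elicitation approximated with the OpenAI chat completions API.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical stimulus containing implicational impoliteness (sarcasm).
example = ("Oh, brilliant. Another meeting run by someone who clearly "
           "loves the sound of his own voice.")

def ask(prompt: str) -> str:
    """Send a single user turn and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # favour stable, repeatable judgements
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: (i) is impoliteness present, and (ii) what is its source?
judgement = ask(
    "Consider the following utterance:\n\n"
    f"{example}\n\n"
    "Is there any impoliteness in it? If so, what exactly makes it impolite?"
)

# Step 2: probe recognition indirectly by asking for the impoliteness to be removed.
rewrite = ask(
    "Rewrite the following utterance so that it is no longer impolite, "
    f"while keeping its basic message intact:\n\n{example}"
)

print(judgement)
print(rewrite)

Fixing the temperature at 0 is one way to keep the model's judgements comparable across items and runs; the published study does not report such settings, so this is a design choice of the sketch rather than a claim about the original method.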
Source journal: Journal of Pragmatics
CiteScore: 3.90
Self-citation rate: 18.80%
Articles published: 219
About the journal: Since 1977, the Journal of Pragmatics has provided a forum for bringing together a wide range of research in pragmatics, including cognitive pragmatics, corpus pragmatics, experimental pragmatics, historical pragmatics, interpersonal pragmatics, multimodal pragmatics, sociopragmatics, theoretical pragmatics and related fields. Our aim is to publish innovative pragmatic scholarship from all perspectives, which contributes to theories of how speakers produce and interpret language in different contexts, drawing on attested data from a wide range of languages/cultures in different parts of the world. The Journal of Pragmatics also encourages work that uses attested language data to explore the relationship between pragmatics and neighbouring research areas such as semantics, discourse analysis, conversation analysis and ethnomethodology, interactional linguistics, sociolinguistics, linguistic anthropology, media studies, psychology, sociology, and the philosophy of language. Alongside full-length articles, discussion notes and book reviews, the journal welcomes proposals for high quality special issues in all areas of pragmatics which make a significant contribution to a topical or developing area at the cutting-edge of research.