{"title":"Communicating the cultural other: trust and bias in generative AI and large language models","authors":"Christopher J. Jenks","doi":"10.1515/applirev-2024-0196","DOIUrl":null,"url":null,"abstract":"This paper is concerned with issues of trust and bias in generative AI in general, and chatbots based on large language models in particular (e.g. ChatGPT). The discussion argues that intercultural communication scholars must do more to better understand generative AI and more specifically large language models, as such technologies produce and circulate discourse in an ostensibly impartial way, reinforcing the widespread assumption that machines are objective resources for societies to learn about important intercultural issues, such as racism and discrimination. Consequently, there is an urgent need to understand how trust and bias factor into the ways in which such technologies deal with topics and themes central to intercultural communication. It is also important to scrutinize the ways in which societies make use of AI and large language models to carry out important social actions and practices, such as teaching and learning about historical or political issues.","PeriodicalId":46472,"journal":{"name":"Applied Linguistics Review","volume":"197 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Linguistics Review","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1515/applirev-2024-0196","RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Abstract
This paper is concerned with issues of trust and bias in generative AI in general, and in chatbots based on large language models in particular (e.g. ChatGPT). The discussion argues that intercultural communication scholars must do more to understand generative AI and, more specifically, large language models, as such technologies produce and circulate discourse in an ostensibly impartial way, reinforcing the widespread assumption that machines are objective resources for societies to learn about important intercultural issues, such as racism and discrimination. Consequently, there is an urgent need to understand how trust and bias factor into the ways in which such technologies deal with topics and themes central to intercultural communication. It is also important to scrutinize the ways in which societies make use of AI and large language models to carry out important social actions and practices, such as teaching and learning about historical or political issues.