{"title":"Can chatbots teach us how to behave? Examining assumptions about user interactions with AI assistants and their social implications.","authors":"Eleonora Lima, Tiffany Morisseau","doi":"10.3389/frai.2025.1545607","DOIUrl":null,"url":null,"abstract":"<p><p>In this article we examine the issue of AI assistants, and the way they respond to insults and sexually explicit requests. Public concern over these responses, particularly because AI assistants are usually female-voiced, prompted tech companies to make them more assertive. Researchers have explored whether these female-voiced AI assistants could encourage abusive behavior and reinforce societal sexism. However, the extent and nature of the problem are unclear due to a lack of data on user interactions. By combining psychological and socio-cultural perspectives, we problematize these assumptions and outline a number of research questions for leveraging AI assistants to promote gender inclusivity more effectively.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1545607"},"PeriodicalIF":3.0000,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12116430/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frai.2025.1545607","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
In this article we examine how AI assistants respond to insults and sexually explicit requests. Public concern over these responses, heightened by the fact that AI assistants are usually female-voiced, prompted tech companies to make them more assertive. Researchers have explored whether female-voiced AI assistants could encourage abusive behavior and reinforce societal sexism, but the extent and nature of the problem remain unclear due to a lack of data on user interactions. By combining psychological and socio-cultural perspectives, we problematize these assumptions and outline a set of research questions for leveraging AI assistants to promote gender inclusivity more effectively.