What you say vs what you do: Utilizing positive emotional expressions to relay AI teammate intent within human–AI teams
Rohit Mallick, Christopher Flathmann, Wen Duan, Beau G. Schelble, Nathan J. McNeese
International Journal of Human-Computer Studies, Volume 192 (2024), Article 103355. DOI: 10.1016/j.ijhcs.2024.103355. Published online 13 August 2024. https://www.sciencedirect.com/science/article/pii/S1071581924001381
With the expansive growth of AI’s capabilities in recent years, researchers have been tasked with developing and improving human-centered AI collaborations, necessitating the creation of human–AI teams (HATs). However, differences in communication styles between humans and AI often prevent human teammates from fully understanding the intent and needs of their AI teammates. One core difference is that humans naturally leverage a positive emotional tone during communication to convey their confidence, or lack thereof, in their ability to complete a task. Yet this communication strategy must be explicitly designed into an AI teammate for it to be human-centered. In this mixed-methods study, 45 participants took part in an experiment examining how human teammates interpret the behaviors of their AI teammates when those teammates express different positive emotions via specific words and phrases. Quantitative results show that, when paired with corresponding behaviors, an AI teammate’s displays of emotion increased the human teammate’s trust in the AI and their positive mood. Additionally, our qualitative findings indicate that participants preferred their AI teammates to increase the intensity of their displayed emotions to help reduce the perceived risk of the AI teammate’s behavior. Taken together, these findings underscore the value of AI teammates expressing emotion at varying intensities while carrying out behavioral decisions, as an ongoing means of providing social support to the wider team and improving task performance.
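The design idea described in the abstract, an AI teammate choosing positive emotional phrasing whose intensity tracks its confidence in completing a task, can be illustrated with a small sketch. This is purely a hypothetical illustration, not the study's implementation; the thresholds, phrases, and function name below are assumptions introduced for clarity.

```python
# Hypothetical sketch: map an AI teammate's task confidence to a positive
# emotional expression of matching intensity. Thresholds and phrases are
# illustrative assumptions, not taken from the study.

def emotional_expression(confidence: float) -> str:
    """Return a positive emotional phrase whose intensity reflects
    the AI teammate's confidence (0.0-1.0) in completing a task."""
    if confidence >= 0.8:
        return "I'm excited about this one -- I'm confident I can handle it!"
    elif confidence >= 0.5:
        return "I'm fairly hopeful I can get this done."
    else:
        # Lower-intensity phrasing signals doubt, cueing the human teammate
        # that the upcoming behavior carries more perceived risk.
        return "I'll give it a try, but I'm not certain it will work out."

if __name__ == "__main__":
    for c in (0.9, 0.6, 0.3):
        print(f"confidence={c:.1f}: {emotional_expression(c)}")
```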
About the journal:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...