Evaluating the alignment of AI with human emotions

J. Derek Lomas, Willem van der Maden, Sohhom Bandyopadhyay, Giovanni Lion, Nirmal Patel, Gyanesh Jain, Yanna Litowsky, Haian Xue, Pieter Desmet

Advanced Design Research, Volume 2, Issue 2 (December 2024), Pages 88-97. DOI: 10.1016/j.ijadr.2024.10.002. Available at: https://www.sciencedirect.com/science/article/pii/S2949782524000185
Abstract
Generative AI systems are increasingly capable of expressing emotions through text, imagery, voice, and video. Effective emotional expression is particularly relevant for AI systems designed to provide care, support mental health, or promote wellbeing through emotional interactions. This research aims to enhance understanding of the alignment between AI-expressed emotions and human perception. How can we assess whether an AI system successfully conveys a specific emotion? To address this question, we designed a method to measure the alignment between emotions expressed by generative AI and human perceptions.
Three generative image models (DALL-E 2, DALL-E 3, and Stable Diffusion v1) were used to generate 240 images expressing five positive and five negative emotions in both humans and robots. Twenty-four participants recruited via Prolific rated how well each AI-generated emotional expression aligned with its text prompt (e.g., "A robot expressing the emotion of amusement").
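As an illustration of the stimulus design (not the authors' actual pipeline), the sketch below enumerates the prompt grid implied by the abstract: 3 models x 2 subjects (human, robot) x 10 emotions, presumably with four images per combination to reach 240. The emotion names other than "amusement" and the prompt template are placeholder assumptions.

```python
from itertools import product

# Placeholder emotion lists: the abstract specifies five positive and five
# negative emotions but names only "amusement"; the rest are assumptions.
POSITIVE = ["amusement", "joy", "contentment", "pride", "relief"]
NEGATIVE = ["anger", "fear", "sadness", "disgust", "boredom"]

MODELS = ["dall-e-2", "dall-e-3", "stable-diffusion-v1"]
SUBJECTS = ["human", "robot"]
IMAGES_PER_PROMPT = 4  # 3 models x 2 subjects x 10 emotions x 4 = 240 images

def build_prompts():
    """Yield (model, prompt, index) jobs mirroring the study's text prompts."""
    for model, subject, emotion in product(MODELS, SUBJECTS, POSITIVE + NEGATIVE):
        prompt = f"A {subject} expressing the emotion of {emotion}"
        for i in range(IMAGES_PER_PROMPT):
            yield model, prompt, i

if __name__ == "__main__":
    jobs = list(build_prompts())
    print(len(jobs))  # 240
```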
Our results suggest that generative AI models can produce emotional expressions that align well with human emotions; however, the degree of alignment varies significantly depending on the AI model and the specific emotion expressed. We analyze these variations to identify areas for future improvement. The paper concludes with a discussion of the implications of our findings for the design of emotionally expressive AI systems.
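To make the per-model, per-emotion comparison concrete, here is a minimal sketch (not the paper's analysis code) of how alignment ratings could be summarized, assuming a long-format table with one row per participant rating and an assumed 1-5 perceived-alignment scale:

```python
import pandas as pd

# Hypothetical long-format ratings: one row per (participant, image) pair.
ratings = pd.DataFrame({
    "model":   ["dall-e-2", "dall-e-3", "dall-e-3", "stable-diffusion-v1"],
    "emotion": ["amusement", "amusement", "fear", "fear"],
    "rating":  [3, 5, 4, 2],  # assumed 1-5 perceived-alignment scale
})

# Mean perceived alignment for each model x emotion cell; variation across
# these cells is what the abstract reports as model- and emotion-dependent.
summary = ratings.groupby(["model", "emotion"])["rating"].agg(["mean", "count"])
print(summary)
```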