Emulated Empathy: Can Risks Be Countered by a Soft-Law Standard?

Andrew McStay

IEEE Transactions on Technology and Society, vol. 6, no. 3, pp. 250–256. Published 2025-06-10. DOI: 10.1109/TTS.2025.3573875
Abstract
As artificial intelligence systems increasingly emulate empathy – recognizing, interpreting, and responding to human emotional states and psychological contexts, and potentially appearing to genuinely care about a person – questions emerge about the personal and societal implications of these developments. Emulated empathy may enhance usability, engagement, and accessibility, but it also raises concerns about manipulation, commodification of interior life, and detachment from reality. Such issues regarding emulated empathy have been raised in relation to AI companions. While AI systems may arguably possess functional aspects of empathy, mimicry of human empathy also reveals fundamental differences between human and computational forms. Drawing on insights from neuroscience, philosophy of mind, and AI ethics, the paper discusses whether such systems pose a threat to human connection or could instead augment it. Special attention is given to the IEEE P7014.1™ standard, which outlines ethical considerations and recommended practices for human-AI partnerships involving emulated empathy. In addition to advancing conceptual understanding of emulated empathy, the paper argues for a proactive governance approach that combines soft law with regulatory safeguards to mitigate harm, uphold trust, and guide responsible design in this emerging domain.