AAAI 23 Spring Symposium Report on “Socially Responsible AI for Well-Being”

AI Magazine, Pub Date: 2023-06-20, DOI: 10.1002/aaai.12092
Takashi Kido, Keiki Takadama

Abstract

The AAAI 2023 spring symposium on “Socially Responsible AI for Well-Being” was held at the Hyatt Regency San Francisco Airport in California from March 27 to 29.

AI has great potential for human well-being but also carries the risk of unintended harm. For our well-being, AI needs to fulfill social responsibilities such as fairness, accountability, transparency, trust, privacy, safety, and security, not just productivity goals such as exponential growth and economic and financial supremacy. For example, an AI diagnostic system must not only provide reliable results (for example, highly accurate diagnoses with easy-to-understand explanations); its results must also be socially acceptable. In particular, the data used to train the machine-learning model must not be biased by race or location: the amount of training data should be comparable across races and locations. As this example shows, AI decisions affect our well-being, which underscores the importance of discussing what is socially responsible in the many well-being situations of the coming AI era.

The first perspective, “(Individual) Responsible AI,” aims to identify the mechanisms and issues that should be considered when designing responsible AI for well-being. One goal of responsible AI for well-being is to provide accountable outcomes for our ever-changing health conditions. Since our environment often drives these changes in health, responsible AI for well-being is expected to offer responsible outcomes by understanding how our digital experiences affect our emotions and quality of life.

The second perspective, “Socially Responsible AI,” aims to identify the mechanisms and issues that should be considered to realize the social aspects of responsible AI for well-being. One aspect of social responsibility is fairness, that is, the results of AI should be equally helpful to all. The problem of “bias” in AI (and in humans) needs to be addressed to achieve fairness. Another aspect of social responsibility is the applicability of knowledge across people. For example, health-related knowledge that an AI discovers for one person (for example, tips for a good night's sleep) may not help another person, meaning that such knowledge is not socially responsible. To address these problems, we must understand what counts as fair and find ways to ensure that machines provide socially responsible results without absorbing human bias.

Our symposium included 18 technical presentations over two and a half days. Presentation topics included (1) socially responsible AI, (2) communication and evidence for well-being, (3) facial expression and impression for well-being, (4) odor for well-being, (5) ethical AI, (6) robot interaction for social well-being, (7) communication and sleep for social well-being, (8) well-being studies, and (9) information and sleep for social well-being.

For example, Takashi Kido, Advanced Comprehensive Research Organization of Teikyo University in Japan, presented on the challenges of socially responsible AI for well-being. Oliver Bendel, FHNW School of Business in Switzerland, presented on increasing well-being through robotic hugs. Martin D. Aleksandrov, Freie Universität Berlin in Germany, presented on limiting inequalities in fair division with additive value preferences for indivisible social items. Melanie Swan, University College London in the United Kingdom, presented on quantum intelligence and responsible human-machine entities. Dragutin Petkovic, San Francisco State University in the United States, presented on the San Francisco State University Graduate Certificate in Ethical AI.

Our symposium provided participants with unique opportunities for researchers from diverse backgrounds to develop new ideas through innovative and constructive discussions. It raised significant interdisciplinary challenges that can guide future advances in the AI community.

Takashi Kido and Keiki Takadama served as co-chairs of this symposium. The symposium papers will be published online at CEUR-WS.org.

The authors declare no conflicts of interest.
