Perceptions of AI Ethics on Social Media

Ayse Ocal
DOI: 10.1109/ETHICS57328.2023.10155069
Published in: 2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), May 18, 2023

Abstract

Since the emergence of artificial intelligence (AI), despite a common expectation that AI should be 'ethical' [1], there have been many different interpretations, assumptions, and expectations about what constitutes "ethical AI" and which ethical problems and requirements the public points out. Although many private companies and research institutions have highlighted present and potential future problems, needs, and guidelines associated with AI ethics, public visions of how "ethical AI" can be constituted [1] have not been explored sufficiently. Questionnaires and interviews are commonly used to gather public opinion, but their questions reflect only the researchers' preferences, which can be a limitation. Social media data, by contrast, are produced freely by users [2], and many people share their ideas in social media discussions [3]; the use of social media data in research has therefore been growing. Researchers who use social media as a data source predominantly draw on Twitter, but in recent years Reddit has also attracted scholars with the same research purpose, as in [2], [3]. Reddit is a large social media platform with over 50 million daily active users whose mentalities are shaped by different backgrounds, prior beliefs, personal experiences, and personalities, drawn from various geographical locations and more than 100 thousand active communities; it thus brings different segments of the public together. Moreover, users enjoy a level of anonymity on Reddit not typically found on other social media platforms [4], so they may feel more secure and share more honest thoughts on a topic; Reddit data have accordingly been used to gather public opinion in prior research, as in [3].
Through the lens of technological frames [5], Reddit conversations were analyzed to explore social media users' interpretations, assumptions, and expectations about how ethical AI is built and which problems hinder building it. More specifically, a corpus of 998 unique Reddit post titles and their 16,611 corresponding comments, extracted from 15 AI-related subreddits, was analyzed using topic modelling based on BERTopic [7], supported by human judgment for frame identification as in [6]. The findings show that perceptions of AI ethics cluster around several themes (AI's gender bias; humans' gender bias about the perceived gender of bots; regulation and patent law related to AI use; AI spreading disinformation; AI fabricating faces, videos, and music; misuse of personal data; and AI's impact on crime), with variation in how these themes are interpreted, which problems or actors they pertain to, and what measures should be taken to address the problems the public points out. While some of these ethical issues are also highlighted in prominent AI ethics literature, as in [8], this study yields new insights such as humans' gender bias about the perceived gender of bots. The findings have important implications. First, as a practical implication, they can enrich current public-voice-centric explorations of AI ethics; they could also help design interfaces that support proper human-AI task coordination and collaboration, and help deploy innovative solutions for existing or anticipated ethical problems. Second, the results can reveal areas where misconceptions and unrealistic visions about AI ethics are widespread and may trigger speculative fears or concerns. Researchers may be encouraged to focus more on areas where public misconceptions are common; educational programs may be arranged to reduce speculative fears and concerns, or necessary measures may be taken against real ethical risks.
Academia, industry, and government may collaborate on research and policy in those areas. Third, by employing computer-aided textual analysis, this study reveals frames in social media conversations and showcases the latest perceptions from different viewpoints; this method may serve as an example for relevant future research.
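The pipeline described above identifies themes with BERTopic topic modelling followed by human review. As a rough, stdlib-only illustration of the general idea of assigning posts to themes by term overlap (this is not the actual BERTopic method, and the seed themes and posts below are entirely hypothetical), one could sketch:

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words term frequencies, lowercased
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical seed themes, standing in for topics a model like
# BERTopic would discover and a human would label
seed_topics = {
    "gender bias": "gender bias bots perceived",
    "disinformation": "fake news disinformation spread",
    "personal data": "personal data privacy misuse",
}

def assign_topic(post):
    # Assign a post to the most lexically similar seed theme
    v = vectorize(post)
    return max(seed_topics, key=lambda t: cosine(v, vectorize(seed_topics[t])))

# Hypothetical Reddit-style posts
posts = [
    "AI bots show gender bias in perceived voices",
    "deepfakes spread disinformation and fake videos",
    "companies misuse personal data collected by AI",
]
labels = [assign_topic(p) for p in posts]
print(labels)  # each post mapped to its closest theme
```

In the actual study, dense transformer embeddings and clustering (via BERTopic) replace this naive term overlap, and human judgment then interprets the clusters as frames.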