Artificial Intelligence in the Eyes of Society: Assessing Social Risk and Social Value Perception in a Novel Classification

IF 4.3 Q1 PSYCHOLOGY, MULTIDISCIPLINARY
Gabbiadini Alessandro, Durante Federica, Baldissarri Cristina, Andrighetto Luca
DOI: 10.1155/2024/7008056
Journal: Human Behavior and Emerging Technologies
Publication date: 2024-03-11 (Journal Article)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/2024/7008056
Citations: 0

Abstract


Artificial intelligence (AI) is a rapidly developing technology that has the potential to create previously unimaginable chances for our societies. Still, the public’s opinion of AI remains mixed. Since AI has been integrated into many facets of daily life, it is critical to understand how people perceive these systems. The present work investigated the perceived social risk and social value of AI. In a preliminary study, AI’s social risk and social value were first operationalized and explored by adopting a correlational approach. Results highlighted that perceived social value and social risk represent two significant and antagonistic dimensions driving the perception of AI: the higher the perceived risk, the lower the social value attributed to AI. The main study considered pretested AI applications in different domains to develop a classification of AI applications based on perceived social risk and social value. A cluster analysis revealed that in the two-dimensional social risk × social value space, the considered AI technologies grouped into six clusters, with the AI applications related to medical care (e.g., assisted surgery) unexpectedly perceived as the riskiest ones. Understanding people’s perceptions of AI can guide researchers, developers, and policymakers in adopting an anthropocentric approach when designing future AI technologies to prioritize human well-being and ensure AI’s responsible and ethical development in the years to come.

Source journal
Human Behavior and Emerging Technologies (Social Sciences, all)
CiteScore: 17.20
Self-citation rate: 8.70%
Articles per year: 73
Journal description: Human Behavior and Emerging Technologies is an interdisciplinary journal dedicated to publishing high-impact research that enhances understanding of the complex interactions between diverse human behavior and emerging digital technologies.