Treating differently or equally: A study exploring attitudes towards AI moral advisors
Authors: Yiming Liu, Tianhong Wang
DOI: 10.1016/j.techsoc.2025.102862
Journal: Technology in Society, Volume 82, Article 102862 (Q1, Social Issues)
Publication date: 2025-03-03
URL: https://www.sciencedirect.com/science/article/pii/S0160791X25000521
Citations: 0
Abstract
Artificial intelligence (AI) technology has evolved from serving primarily as a decision-maker to increasingly taking on the role of an advisor. However, contemporary attitudes toward AI applications in moral decision-making remain unclear. In Study 1, we explored whether attitudes toward human and AI moral advisors differ when both are presented in a one-time decision-making scenario. Studies 2a and 2b, with the goal of achieving higher ecological validity, refined the decision-making methods and scenarios, respectively. We obtained consistent results across studies, indicating that people trust the advice of AI moral advisors and human moral advisors equally. When it comes to assigning responsibility after a decision, individuals likewise assign responsibility equally to AI and human advisors.
Journal Introduction
Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth, and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.