Measuring Social Trust in AI: How Institutions Shape the Usage Intention of AI-Based Technologies

Impact Factor 3.0 · Q1, Psychology, Multidisciplinary
Sulfikar Amir, Sabrina Ching Yuen Luk, Shrestha Saha, Iuna Tsyrulneva, Marcus T. L. Teo
Journal: Human Behavior and Emerging Technologies, vol. 2025, no. 1
DOI: 10.1155/hbe2/4084384
Published: 2025-10-01 (Journal Article)
Full text: https://onlinelibrary.wiley.com/doi/10.1155/hbe2/4084384
Citations: 0

Abstract

What drives people to trust artificial intelligence (AI)? How does the institutional environment shape social trust in AI? This study addresses these questions to explain the role institutions play in making AI-based technologies socially accepted. Social trust in AI is situated in three institutional entities: the government, tech companies, and the scientific community. We posit that the level of social trust in AI is correlated with the level of trust in these institutions: the stronger the trust in the institutions, the deeper the social trust in the use of AI. To test this hypothesis, we conducted a cross-country survey of 4037 respondents in Singapore, Taiwan, Japan, and the Republic of Korea (ROK). The results provide convincing evidence of how institutions shape social trust in AI and its acceptance. Our empirical findings reveal that trust in institutions, grounded in perceived competence, benevolence, and integrity, is positively associated with trust in AI technologies and can directly affect it. The findings also confirm that trust in AI technologies is positively associated with the intention to use them: a higher level of trust in AI technologies leads to a stronger intention to use these technologies. In conclusion, institutions matter greatly in the construction and production of social trust in AI-based technologies. Trust in AI is not a direct affair between the user and the product; it is mediated by the whole institutional setting. This has profound implications for the governance of AI in society. By taking institutional factors into account in the planning and implementation of AI regulations, we can ensure that social trust in AI is sufficiently well founded.


Source journal: Human Behavior and Emerging Technologies (Social Sciences, all)
CiteScore: 17.20 · Self-citation rate: 8.70% · Articles published: 73
Journal scope: Human Behavior and Emerging Technologies is an interdisciplinary journal dedicated to publishing high-impact research that enhances understanding of the complex interactions between diverse human behavior and emerging digital technologies.