Sulfikar Amir, Sabrina Ching Yuen Luk, Shrestha Saha, Iuna Tsyrulneva, Marcus T. L. Teo
DOI: 10.1155/hbe2/4084384
Journal: Human Behavior and Emerging Technologies, 2025(1)
Publication date: 2025-10-01 (Journal Article; JCR Q1, Psychology, Multidisciplinary; Impact Factor 3.0)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/hbe2/4084384
Measuring Social Trust in AI: How Institutions Shape the Usage Intention of AI-Based Technologies
What drives people to trust artificial intelligence (AI)? How does the institutional environment shape social trust in AI? This study addresses these questions to explain the role of institutions in enabling AI-based technologies to be socially accepted. In this study, social trust in AI is situated in three institutional entities, namely, the government, tech companies, and the scientific community. It is posited that the level of social trust in AI is correlated with the level of trust in these institutions: the stronger the trust in the institutions, the deeper the social trust in the use of AI. To test this hypothesis, we conducted a cross-country survey involving a total of 4037 respondents in Singapore, Taiwan, Japan, and the Republic of Korea (ROK). The results show convincing evidence of how institutions shape social trust in AI and its acceptance. Our empirical findings reveal that trust in institutions is positively associated with trust in AI technologies. Trust in institutions is based on perceived competence, benevolence, and integrity, and it can directly affect people's trust in AI technologies. Our empirical findings also confirm that trust in AI technologies is positively associated with the intention to use these technologies: a higher level of trust in AI technologies leads to a higher level of intention to use them. In conclusion, institutions greatly matter in the construction and production of social trust in AI-based technologies. Trust in AI is not a direct affair between the user and the product; it is mediated by the whole institutional setting. This has profound implications for the governance of AI in society. By taking institutional factors into account in the planning and implementation of AI regulations, we can ensure that social trust in AI rests on a sufficient foundation.
Journal introduction:
Human Behavior and Emerging Technologies is an interdisciplinary journal dedicated to publishing high-impact research that enhances understanding of the complex interactions between diverse human behavior and emerging digital technologies.