A legitimacy-based explanation for user acceptance of controversial technologies: The case of Generative AI

Impact Factor: 12.9 · JCR Q1 (Business) · CAS Tier 1 (Management)
Raluca Bunduchi, Dan-Andrei Sitar-Tăut, Daniel Mican
Journal: Technological Forecasting and Social Change, Volume 215, Article 124095
DOI: 10.1016/j.techfore.2025.124095
Published: 2025-03-09
URL: https://www.sciencedirect.com/science/article/pii/S004016252500126X
Citations: 0

Abstract

Controversial technologies are technologies where social concerns play a disproportionate role in shaping public attitudes toward their adoption. An example of such a controversial technology is Generative Artificial Intelligence (GenAI), whose rapid diffusion is fuelled by expectations of significant performance improvements, while also facing concerns at the individual (trust in technology), technology (accuracy and quality), and institutional (cultural, ethical, and regulatory) levels. Individual and technology factors are well accounted for by the rational choice-based models which underpin most technology acceptance research. Such models are less suited to exploring the role of institutional factors in shaping technology acceptance. Drawing from legitimacy and technology lifecycle research, we develop a legitimacy-based model of GenAI adoption which accounts for the institutional context in which technology use happens, and for technology characteristics, namely maturity, in shaping users' acceptance. Surveying 483 information systems students who are GenAI users, we find that users' perceptions of technology uncertainty and variation positively affect their technology legitimacy evaluations, and that their pragmatic and cognitive legitimacy evaluations, but not their moral evaluations, affect their intention to use. We answer recent calls to examine alternative theoretical predictors of technology acceptance, and to consider the role of context in examining the acceptance of controversial technologies.
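To make the hypothesized relationships easier to follow, the sketch below writes the paths described in the abstract as linear structural equations. The symbols (PU for perceived technology uncertainty, PV for perceived variation, PL, ML, and CL for pragmatic, moral, and cognitive legitimacy evaluations, IU for intention to use) and the linear specification are our own illustrative assumptions, not the authors' published model.

$$
\begin{aligned}
PL &= \gamma_{11}\,PU + \gamma_{12}\,PV + \zeta_1 \\
ML &= \gamma_{21}\,PU + \gamma_{22}\,PV + \zeta_2 \\
CL &= \gamma_{31}\,PU + \gamma_{32}\,PV + \zeta_3 \\
IU &= \beta_1\,PL + \beta_2\,ML + \beta_3\,CL + \zeta_4
\end{aligned}
$$

Read this way, the reported findings correspond to positive uncertainty and variation paths into the legitimacy evaluations, with significant effects of pragmatic and cognitive legitimacy on intention to use ($\beta_1$, $\beta_3$) but no significant effect of moral legitimacy ($\beta_2$).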
Source journal: Technological Forecasting and Social Change
CiteScore: 21.30
Self-citation rate: 10.80%
Articles published: 813
Journal description: Technological Forecasting and Social Change is a prominent platform for individuals engaged in the methodology and application of technological forecasting and future studies as planning tools, exploring the interconnectedness of social, environmental, and technological factors. In addition to serving as a key forum for these discussions, we offer numerous benefits for authors, including complimentary PDFs, a generous copyright policy, exclusive discounts on Elsevier publications, and more.