Understanding acceptance and resistance toward generative AI technologies: a multi-theoretical framework integrating functional, risk, and sociolegal factors.

IF 3.0 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Frontiers in Artificial Intelligence | Pub Date: 2025-04-28 | eCollection Date: 2025-01-01 | DOI: 10.3389/frai.2025.1565927
Priyanka Shrivastava
{"title":"理解对生成式人工智能技术的接受和抵制:一个整合功能、风险和社会法律因素的多理论框架。","authors":"Priyanka Shrivastava","doi":"10.3389/frai.2025.1565927","DOIUrl":null,"url":null,"abstract":"<p><p>This study explores the factors influencing college students' acceptance and resistance toward generative AI technologies by integrating three theoretical frameworks: the Technology Acceptance Model (TAM), Protection Motivation Theory (PMT), and Social Exchange Theory (SET). Using data from 407 respondents collected through a structured survey, the study employed Structural Equation Modeling (SEM) to examine how functional factors (perceived usefulness, ease of use, and reliability), risk factors (privacy concerns, data security, and ethical issues), and sociolegal factors (trust in governance and regulatory frameworks) impact user attitudes. Results revealed that functional factors significantly enhanced acceptance while reducing resistance, whereas risk factors amplified resistance and negatively influenced acceptance. Sociolegal factors emerged as critical mediators, mitigating the negative impact of perceived risks and reinforcing the positive effects of functional perceptions. The study responds to prior feedback by offering a more integrated theoretical framework, clearly articulating how TAM, PMT, and SET interact to shape user behavior. It also acknowledges the limitations of using a student sample and discusses the broader applicability of the findings to other demographics, such as professionals and non-academic users. Additionally, the manuscript now highlights demographic diversity, including variations in age, gender, and academic discipline, as relevant to AI adoption patterns. Ethical concerns, including algorithmic bias, data ownership, and the labor market impact of AI, are addressed to offer a more holistic understanding of resistance behavior. Policy implications have been expanded with actionable recommendations such as AI bias mitigation strategies, clearer data ownership protections, and workforce reskilling programs. The study also compares global regulatory frameworks like the GDPR and the U.S. AI Bill of Rights, reinforcing its practical relevance. Furthermore, it emphasizes that user attitudes toward AI are dynamic and likely to evolve, suggesting the need for longitudinal studies to capture behavioral adaptation over time. 
By bridging theory and practice, this research contributes to the growing discourse on responsible and equitable AI adoption in higher education, offering valuable insights for developers, policymakers, and academic institutions aiming to foster ethical and inclusive technology integration.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1565927"},"PeriodicalIF":3.0000,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12066764/pdf/","citationCount":"0","resultStr":"{\"title\":\"Understanding acceptance and resistance toward generative AI technologies: a multi-theoretical framework integrating functional, risk, and sociolegal factors.\",\"authors\":\"Priyanka Shrivastava\",\"doi\":\"10.3389/frai.2025.1565927\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This study explores the factors influencing college students' acceptance and resistance toward generative AI technologies by integrating three theoretical frameworks: the Technology Acceptance Model (TAM), Protection Motivation Theory (PMT), and Social Exchange Theory (SET). Using data from 407 respondents collected through a structured survey, the study employed Structural Equation Modeling (SEM) to examine how functional factors (perceived usefulness, ease of use, and reliability), risk factors (privacy concerns, data security, and ethical issues), and sociolegal factors (trust in governance and regulatory frameworks) impact user attitudes. Results revealed that functional factors significantly enhanced acceptance while reducing resistance, whereas risk factors amplified resistance and negatively influenced acceptance. Sociolegal factors emerged as critical mediators, mitigating the negative impact of perceived risks and reinforcing the positive effects of functional perceptions. The study responds to prior feedback by offering a more integrated theoretical framework, clearly articulating how TAM, PMT, and SET interact to shape user behavior. It also acknowledges the limitations of using a student sample and discusses the broader applicability of the findings to other demographics, such as professionals and non-academic users. Additionally, the manuscript now highlights demographic diversity, including variations in age, gender, and academic discipline, as relevant to AI adoption patterns. Ethical concerns, including algorithmic bias, data ownership, and the labor market impact of AI, are addressed to offer a more holistic understanding of resistance behavior. Policy implications have been expanded with actionable recommendations such as AI bias mitigation strategies, clearer data ownership protections, and workforce reskilling programs. The study also compares global regulatory frameworks like the GDPR and the U.S. AI Bill of Rights, reinforcing its practical relevance. Furthermore, it emphasizes that user attitudes toward AI are dynamic and likely to evolve, suggesting the need for longitudinal studies to capture behavioral adaptation over time. 
By bridging theory and practice, this research contributes to the growing discourse on responsible and equitable AI adoption in higher education, offering valuable insights for developers, policymakers, and academic institutions aiming to foster ethical and inclusive technology integration.</p>\",\"PeriodicalId\":33315,\"journal\":{\"name\":\"Frontiers in Artificial Intelligence\",\"volume\":\"8 \",\"pages\":\"1565927\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-04-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12066764/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/frai.2025.1565927\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frai.2025.1565927","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This study explores the factors influencing college students' acceptance and resistance toward generative AI technologies by integrating three theoretical frameworks: the Technology Acceptance Model (TAM), Protection Motivation Theory (PMT), and Social Exchange Theory (SET). Using data from 407 respondents collected through a structured survey, the study employed Structural Equation Modeling (SEM) to examine how functional factors (perceived usefulness, ease of use, and reliability), risk factors (privacy concerns, data security, and ethical issues), and sociolegal factors (trust in governance and regulatory frameworks) impact user attitudes. Results revealed that functional factors significantly enhanced acceptance while reducing resistance, whereas risk factors amplified resistance and negatively influenced acceptance. Sociolegal factors emerged as critical mediators, mitigating the negative impact of perceived risks and reinforcing the positive effects of functional perceptions. The study responds to prior feedback by offering a more integrated theoretical framework, clearly articulating how TAM, PMT, and SET interact to shape user behavior. It also acknowledges the limitations of using a student sample and discusses the broader applicability of the findings to other demographics, such as professionals and non-academic users. Additionally, the manuscript now highlights demographic diversity, including variations in age, gender, and academic discipline, as relevant to AI adoption patterns. Ethical concerns, including algorithmic bias, data ownership, and the labor market impact of AI, are addressed to offer a more holistic understanding of resistance behavior. Policy implications have been expanded with actionable recommendations such as AI bias mitigation strategies, clearer data ownership protections, and workforce reskilling programs. The study also compares global regulatory frameworks like the GDPR and the U.S. AI Bill of Rights, reinforcing its practical relevance. Furthermore, it emphasizes that user attitudes toward AI are dynamic and likely to evolve, suggesting the need for longitudinal studies to capture behavioral adaptation over time. By bridging theory and practice, this research contributes to the growing discourse on responsible and equitable AI adoption in higher education, offering valuable insights for developers, policymakers, and academic institutions aiming to foster ethical and inclusive technology integration.
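
The methodology summarized above is a structural equation model in which three groups of latent factors (functional, risk, and sociolegal) predict acceptance of and resistance to generative AI. The paper's exact model specification is not reproduced on this page; the following is a minimal sketch, under assumed indicator names, of how such a model could be specified with the Python semopy library. Every item name (pu1, priv1, acc1, ...), the mediation path, and the CSV filename are hypothetical placeholders, not the study's actual measures.

import pandas as pd
import semopy

# Hedged sketch of a possible SEM specification; all indicator names are invented.
MODEL_DESC = """
# Measurement model: latent constructs measured by survey items
functional =~ pu1 + pu2 + eou1 + eou2 + rel1
risk       =~ priv1 + priv2 + sec1 + eth1
sociolegal =~ trust1 + trust2 + reg1
acceptance =~ acc1 + acc2 + acc3
resistance =~ res1 + res2 + res3

# Structural model: functional and risk factors drive acceptance and resistance;
# sociolegal factors enter as an additional predictor along a mediating path from risk
acceptance ~ functional + risk + sociolegal
resistance ~ functional + risk + sociolegal
sociolegal ~ risk
"""

# Hypothetical file with the 407 survey responses, one column per item
data = pd.read_csv("survey_responses.csv")

model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())            # path coefficients, standard errors, p-values
print(semopy.calc_stats(model))   # fit indices such as CFI, TLI, RMSEA

The same lavaan-style syntax would also run in R's lavaan package; the sketch is only meant to make concrete how "functional, risk, and sociolegal factors impact user attitudes" translates into measurement and structural equations.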

Source journal: Frontiers in Artificial Intelligence
CiteScore: 6.10
Self-citation rate: 2.50%
Articles published: 272
Review time: 13 weeks