Frontiers in Artificial Intelligence, vol. 8, article 1565927 (published 2025-04-28)
Author: Priyanka Shrivastava
DOI: 10.3389/frai.2025.1565927
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12066764/pdf/
Understanding acceptance and resistance toward generative AI technologies: a multi-theoretical framework integrating functional, risk, and sociolegal factors.
This study explores the factors influencing college students' acceptance of and resistance toward generative AI technologies by integrating three theoretical frameworks: the Technology Acceptance Model (TAM), Protection Motivation Theory (PMT), and Social Exchange Theory (SET). Using data from 407 respondents collected through a structured survey, the study employed Structural Equation Modeling (SEM) to examine how functional factors (perceived usefulness, ease of use, and reliability), risk factors (privacy concerns, data security, and ethical issues), and sociolegal factors (trust in governance and regulatory frameworks) shape user attitudes. Results revealed that functional factors significantly enhanced acceptance while reducing resistance, whereas risk factors amplified resistance and negatively influenced acceptance. Sociolegal factors emerged as critical mediators, mitigating the negative impact of perceived risks and reinforcing the positive effects of functional perceptions. The study responds to prior feedback by offering a more integrated theoretical framework, clearly articulating how TAM, PMT, and SET interact to shape user behavior. It also acknowledges the limitations of a student sample and discusses the broader applicability of the findings to other populations, such as professionals and non-academic users. Additionally, the manuscript highlights demographic diversity, including variations in age, gender, and academic discipline, as relevant to AI adoption patterns. Ethical concerns, including algorithmic bias, data ownership, and the labor-market impact of AI, are addressed to offer a more holistic understanding of resistance behavior. Policy implications are expanded with actionable recommendations such as AI bias-mitigation strategies, clearer data-ownership protections, and workforce reskilling programs. The study also compares global regulatory frameworks such as the GDPR and the U.S. AI Bill of Rights, reinforcing its practical relevance. Furthermore, it emphasizes that user attitudes toward AI are dynamic and likely to evolve, suggesting the need for longitudinal studies to capture behavioral adaptation over time. By bridging theory and practice, this research contributes to the growing discourse on responsible and equitable AI adoption in higher education, offering valuable insights for developers, policymakers, and academic institutions aiming to foster ethical and inclusive technology integration.
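The structural relationships the abstract reports — functional factors raising acceptance, risk factors lowering it, and sociolegal trust buffering the effect of risk — can be illustrated with a minimal path-analysis sketch. This is not the paper's actual SEM specification or data; it uses simulated composite scores and hypothetical coefficient values, with ordinary least squares standing in for a full latent-variable SEM, purely to show the hypothesized directions of effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 407  # sample size reported in the study

# Hypothetical composite scores (the study models these as latent constructs)
functional = rng.normal(size=n)   # perceived usefulness, ease of use, reliability
risk       = rng.normal(size=n)   # privacy, data security, ethical concerns
sociolegal = rng.normal(size=n)   # trust in governance and regulatory frameworks

# Simulate acceptance consistent with the reported direction of effects:
# functional (+), risk (-), sociolegal (+), and a risk x sociolegal
# interaction (+) standing in for the buffering role of sociolegal trust.
# All coefficients below are illustrative assumptions, not estimates from the paper.
acceptance = (0.6 * functional
              - 0.4 * risk
              + 0.3 * sociolegal
              + 0.2 * risk * sociolegal
              + rng.normal(scale=0.5, size=n))

# OLS path estimates (a simplified stand-in for SEM estimation)
X = np.column_stack([np.ones(n), functional, risk, sociolegal, risk * sociolegal])
beta, *_ = np.linalg.lstsq(X, acceptance, rcond=None)

for name, b in zip(["intercept", "functional", "risk",
                    "sociolegal", "risk x sociolegal"], beta):
    print(f"{name:>18}: {b:+.2f}")
```

With enough respondents the recovered coefficients match the simulated signs: positive for functional and sociolegal paths, negative for risk, and a positive interaction term capturing how sociolegal trust attenuates risk's negative effect. A real replication would instead fit the measurement and structural models jointly with an SEM package.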