Artificial intelligence misuse and concern for information privacy: New construct validation and future directions

IF 6.5 · JCR Q1 · CAS Tier 2 (Management) · INFORMATION SCIENCE & LIBRARY SCIENCE
Philip Menard, Gregory J. Bott
DOI: 10.1111/isj.12544
Information Systems Journal · Journal Article · Published 2024-07-01
Citations: 0

Abstract

To address various business challenges, organisations are increasingly employing artificial intelligence (AI) to analyse vast amounts of data. One application involves consolidating diverse user data into unified profiles, aggregating consumer behaviours to accurately tailor marketing efforts. Although AI provides more convenience to consumers and more efficient and profitable marketing for organisations, the act of aggregating data into behavioural profiles for use in machine learning algorithms introduces significant privacy implications for users, including unforeseeable personal disclosure, outcomes biased against marginalised population groups and organisations' inability to fully remove data from AI systems on consumer request. Although these implementations of AI are rapidly altering the way consumers perceive information privacy, researchers have thus far lacked an accurate method for measuring consumers' privacy concerns related to AI. In this study, we aim to (1) validate a scale for measuring privacy concerns related to AI misuse (PC‐AIM) and (2) examine the effects that PC‐AIM has on nomologically related constructs under the APCO framework. We provide evidence demonstrating the validity of our newly developed scale. We also find that PC‐AIM significantly increases risk beliefs and personal privacy advocacy behaviour, while decreasing trusting beliefs. Trusting beliefs and risk beliefs do not significantly affect behaviour, which differs from prior privacy findings. We further discuss the implications of our work on both research and practice.
Source journal
Information Systems Journal
CiteScore: 14.60
Self-citation rate: 7.80%
Annual article count: 44
Journal description: The Information Systems Journal (ISJ) is an international journal promoting the study of, and interest in, information systems. Articles are welcome on research, practice, experience, current issues and debates. The ISJ encourages submissions that reflect the wide and interdisciplinary nature of the subject, and articles that integrate technological disciplines with social, contextual and management issues, based on research using appropriate research methods. The ISJ has particularly built its reputation by publishing qualitative research, and it continues to welcome such papers. Quantitative research papers are also welcome, but they need to emphasise the context of the research and the theoretical and practical implications of their findings. The ISJ does not publish purely technical papers.