Anna Leschanowsky, Silas Rech, Birgit Popp, Tom Bäckström
DOI: 10.1016/j.chb.2024.108344
Journal: Computers in Human Behavior (Q1, Psychology, Experimental; Impact Factor 9.0)
Publication date: 2024-06-18 (Journal Article)
PDF: https://www.sciencedirect.com/science/article/pii/S0747563224002127/pdfft?md5=953b58193b9c99efe5934d839c65e128&pid=1-s2.0-S0747563224002127-main.pdf
Citations: 0
Evaluating privacy, security, and trust perceptions in conversational AI: A systematic review

Abstract
Conversational AI (CAI) systems, which encompass voice- and text-based assistants, are on the rise and have become widely integrated into people’s everyday lives. Despite their widespread adoption, users voice concerns regarding privacy, security, and trust in these systems. However, the composition of these perceptions, their impact on technology adoption and usage, and the relationships between privacy, security, and trust perceptions in the CAI context remain open research challenges. This study contributes to the field by conducting a systematic literature review and offers insights into the current state of research on privacy, security, and trust perceptions in the context of CAI systems. The review covers application fields and user groups and sheds light on the empirical methods and tools used for assessment. Moreover, it examines the reliability and validity of privacy, security, and trust scales, extensively investigating the subconstructs of each scale as well as additional concepts that are concurrently collected. We point out that perceptions of trust, privacy, and security overlap based on the subconstructs we identified. While the majority of studies investigate only one of these concepts, we found few studies exploring privacy, security, and trust perceptions jointly. Our research aims to inform directions for developing and using reliable scales for users’ privacy, security, and trust perceptions, and to contribute to the development of trustworthy CAI systems.
About the journal:
Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even if they have limited knowledge of computers.