Safer interaction with IVAs: The impact of privacy literacy training on competent use of intelligent voice assistants

André Markus, Maximilian Baumann, Jan Pfister, Astrid Carolus, Andreas Hotho, Carolin Wienrich

Computers and Education: Artificial Intelligence, Volume 8, Article 100372 (published 2025-01-21). DOI: 10.1016/j.caeai.2025.100372
Abstract
Intelligent voice assistants (IVAs) are widely used in households but can compromise privacy by inadvertently recording conversations or by encouraging personal disclosures through social cues. Against this backdrop, interventions that promote privacy literacy, sensitize users to privacy risks, and empower them to self-determine their interactions with IVAs are becoming increasingly important. This work develops and evaluates two online training modules that promote privacy literacy in the context of IVAs by providing knowledge about the institutional practices of IVA providers and by clarifying users' privacy rights when using IVAs. Results show that the training modules have distinct strengths: Training Module 1 increases subjective privacy literacy, raises specific concerns about IVA companies, and fosters the intention to engage more reflectively with IVAs, whereas Training Module 2 increases users' perceived control over their privacy and raises concerns about the devices themselves. Both modules also produce shared outcomes, including increased privacy awareness, decreased trust, and reduced social-anthropomorphic perceptions of IVAs. Overall, these modules represent a significant advance in promoting the competent use of speech-based technology and provide valuable insights for future research and education on privacy in AI applications.