Virtual Personal Assistant Design Effects on Memory Encoding

A.F. Chesser, K. Bramlett, A. Atchley, C. Gray, N. Tenhundfeld
2022 Systems and Information Engineering Design Symposium (SIEDS), published 2022-04-28. DOI: 10.1109/sieds55548.2022.9799387

Abstract

Virtual personal assistants (VPAs) like Siri and Alexa have become common objects in households. Users frequently rely on these systems to search the internet or retrieve information, so it is important to know how using these products affects cognitive processes like memory. Previous research suggests that visual speech perception influences auditory perception in human-human interactions. However, many VPAs are designed as a box or sphere that does not interact with the user visually. This lack of visual speech perception when interacting with a VPA could affect a user's interaction with the system and their retention of information, such as how many ounces are in a cup or how to greet someone in another language. This poses the question of whether the design of these VPAs impairs users' ability to retain the information they get from these systems. To test this, we designed an experiment exploring interactions between user memory and either a traditional audio-only presentation (as found with Siri or Alexa, for example) or one that allows for visual speech perception. Participants were asked to listen to audio clips of nonsensical stories. In one condition, participants listened while looking at a blank screen (analogous to the lack of visual feedback inherent in current VPA designs). After a block of 25 audio clips, the participants took a test on the information they had heard. This process was then repeated with an animated face with synchronized mouth movements in place of the blank screen. Other participants will experience the same two presentations in reverse order, so as to counterbalance condition presentation. Data collection is currently underway. We predicted that a VPA paired with synchronized lip movement would promote visual speech perception and thus help participants retain information.

While we are still collecting data, the current trend does not show a significant difference between the audio-only and lip-movement conditions. This could indicate differing lipreading abilities among participants.
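The counterbalancing described above (half the participants see the audio-only block first, the other half see the animated-face block first) can be sketched as a simple assignment rule. This is an illustrative sketch only; the participant IDs and condition labels are assumptions, not taken from the paper.

```python
# Illustrative counterbalancing sketch (condition names are assumed, not
# from the paper): alternate the order of the two presentation conditions
# across participants so each order occurs equally often.

CONDITIONS = ("audio_only", "animated_face")

def condition_order(participant_id: int) -> tuple:
    """Return the order of the two blocks for a given participant."""
    if participant_id % 2 == 0:
        # Even-numbered participants: audio-only block first.
        return CONDITIONS
    # Odd-numbered participants: animated-face block first.
    return tuple(reversed(CONDITIONS))

# With four participants, each order is used exactly twice.
orders = [condition_order(p) for p in range(4)]
```

With an even number of participants this yields a fully balanced design, so any practice or fatigue effect from the first block is distributed equally across the two conditions.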