Proceedings of the 4th Conference on Conversational User Interfaces: Latest Publications

Should Alexa be a Police Officer, a Doctor, or a Priest?: Towards CUI Relationships Worth Having
Pub Date: 2022-07-26 | DOI: 10.1145/3543829.3544522
James Simpson, Cassandra L. Crone
{"title":"Should Alexa be a Police Officer, a Doctor, or a Priest?: Towards CUI Relationships Worth Having","authors":"James Simpson, Cassandra L. Crone","doi":"10.1145/3543829.3544522","DOIUrl":"https://doi.org/10.1145/3543829.3544522","url":null,"abstract":"Would you trust your gossiping nosey neighbour to look after your house while you are away? Despite being sold the idea of a conversational user interface (CUI) as a trustworthy, helpful, and non-judgmental mate, the reality that many CUI users face is a relationship characterised by uncertainty and volatility, owing to the CUI's ambiguous social role. To escape the Weisserian nightmare of the ubiquitous, ambiguous, and manipulative CUI, we must first consider the social roles and relationships we encounter every day and use those as models for how future CUIs can be built.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117294027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
COMEX: A Multi-task Benchmark for Knowledge-grounded COnversational Media EXploration
Pub Date: 2022-07-26 | DOI: 10.1145/3543829.3543830
Zay Yar Tun, Alessandro Speggiorin, Jeffrey Dalton, Megan Stamper
{"title":"COMEX: A Multi-task Benchmark for Knowledge-grounded COnversational Media EXploration","authors":"Zay Yar Tun, Alessandro Speggiorin, Jeffrey Dalton, Megan Stamper","doi":"10.1145/3543829.3543830","DOIUrl":"https://doi.org/10.1145/3543829.3543830","url":null,"abstract":"Open-domain conversational interaction with news, podcasts, and other types of heterogeneous content remains an open challenge. Interactive agents must support information access in a way that is fair, impartial, and true to the content and knowledge discussed. To facilitate this, systems building on interactive retrieval from knowledge-grounded media are a controllable and known base for experimentation. A conversational media agent should retrieve relevant content, understand key concepts in the content through grounding to a knowledge base, and enable exploration by offering to discuss a topic further or progress to describe related topics. In this work, we release a new multi-task benchmark on COnversational Media EXploration (COMEX) to measure knowledge-grounded conversational content exploration. It consists of a heterogeneous semantically annotated media corpus and topic-specific data for 1) entity Wikification and salience, 2) conversational content ranking on heterogeneous media content, 3) background link ranking, and 4) background linking explanation. We develop COMEX with judgments and conversational interactions developed in partnership with professional editorial staff from the BBC. We study the behavior of state-of-the-art systems, with the results demonstrating significant headroom on all tasks.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115199954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
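Task 2 above, conversational content ranking, is the kind of task typically scored with graded ranking metrics such as NDCG. The sketch below is a minimal, generic illustration of that metric; the document IDs, relevance grades, and cutoff are invented for the example and do not reflect COMEX's actual data format or official evaluation script.

```python
# Minimal illustration of NDCG@k, a standard metric for ranking tasks like
# conversational content ranking. The judgments and IDs below are toy values,
# not COMEX data.
import math

def dcg(gains):
    """Discounted cumulative gain over graded relevance values in rank order."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(system_ranking, judgments, k=10):
    """Compare a system's top-k ranking against the ideal ordering."""
    gains = [judgments.get(doc_id, 0) for doc_id in system_ranking[:k]]
    ideal = sorted(judgments.values(), reverse=True)[:k]
    ideal_dcg = dcg(ideal)
    return dcg(gains) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy graded judgments for one conversational turn (3 = highly relevant).
judgments = {"podcast_042": 3, "news_017": 2, "news_101": 0}
print(f"NDCG@10 = {ndcg(['news_017', 'podcast_042', 'news_101'], judgments):.3f}")
```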
A Question of Fidelity: Comparing Different User Testing Methods for Evaluating In-Car Prompts
Pub Date: 2022-07-26 | DOI: 10.1145/3543829.3544519
Anna-Maria Meck, C. Draxler, Thurid Vogt
{"title":"A Question of Fidelity: Comparing Different User Testing Methods for Evaluating In-Car Prompts","authors":"Anna-Maria Meck, C. Draxler, Thurid Vogt","doi":"10.1145/3543829.3544519","DOIUrl":"https://doi.org/10.1145/3543829.3544519","url":null,"abstract":"User studies are a major component in any user-centered design process. Testing methods thereby vary tremendously regarding the organizational, financial, and timely effort needed to conduct them. Driving simulator studies generally are the method of choice when dialogs need to be validated for in-car settings. These studies are highly time- and cost-consuming though. Online crowdsourcing studies can be an alternative as they allow for quick results and large sample sizes while at the same time being time- and cost-efficient. Still, voice user interface designers argue for a lack of applicability to concrete use cases. This is especially true for speech dialog systems in an in-car context where users experience voice as a secondary task with the primary task being driving. To compare the validity of different user testing methods, study participants in a between-subjects study design evaluated proactive in-car prompts presented a) in an online crowdsourcing study in text form, b) in an online crowdsourcing study via audio, and c) in a driving simulator. Prompt evaluations did not differ significantly between conditions a) and c) but diverged for condition b). Findings are explained by drawing from the Elaboration Likelihood Model and used to answer the question of how to efficiently validate in-car prompts.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123821015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Articulate+: An Always-Listening Natural Language Interface for Creating Data Visualizations
Pub Date: 2022-07-26 | DOI: 10.1145/3543829.3544534
Roderick S. Tabalba, Nurit Kirshenbaum, J. Leigh, Abari Bhattacharya, Andrew E. Johnson, Veronica Grosso, Barbara Maria Di Eugenio, Moira Zellner
{"title":"Articulate+ : An Always-Listening Natural Language Interface for Creating Data Visualizations","authors":"Roderick S. Tabalba, Nurit Kirshenbaum, J. Leigh, Abari Bhattacharya, Andrew E. Johnson, Veronica Grosso, Barbara Maria Di Eugenio, Moira Zellner","doi":"10.1145/3543829.3544534","DOIUrl":"https://doi.org/10.1145/3543829.3544534","url":null,"abstract":"Natural Language Interfaces and Voice User Interfaces for expressing data visualizations face ambiguities, such as, speech disfluency, under-specification, and abbreviations. In this paper, we describe Articulate+, an Artificial Intelligence Agent that is always listening, built to disambiguate requests while also spontaneously presenting informative visualizations. We conducted a preliminary user study to gain insight into the issues involved in providing an always-listening interface for data visualization. Our early results suggest that by leveraging Articulate+’s always-listening interface, users are able to obtain their desired visualizations with fewer queries while also being able to benefit from spontaneous visualizations generated by the system.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130716999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
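As a rough sketch of the general problem space only (not the Articulate+ pipeline, which the abstract does not detail), the snippet below maps a transcribed utterance to a Vega-Lite-style chart specification using simple keyword heuristics; the field names, rules, and output format are all assumptions made for illustration.

```python
# Illustrative only: a toy mapping from a transcribed utterance to a
# Vega-Lite-style chart spec. Real systems such as Articulate+ handle
# disfluency, under-specification, and dialogue context, which this
# keyword heuristic does not attempt.
def utterance_to_spec(utterance: str, fields: dict) -> dict:
    """fields maps column name -> type ('temporal', 'quantitative', ...)."""
    text = utterance.lower()
    mentioned = [f for f in fields if f.lower() in text]
    if "over time" in text or any(fields[f] == "temporal" for f in mentioned):
        mark = "line"
    elif "distribution" in text or "histogram" in text:
        mark = "bar"
    else:
        mark = "point"
    spec = {"mark": mark, "encoding": {}}
    for axis, field in zip(("x", "y"), mentioned):
        spec["encoding"][axis] = {"field": field, "type": fields[field]}
    return spec

print(utterance_to_spec("show me sales by date",
                        {"date": "temporal", "sales": "quantitative"}))
```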
Effects of Emotional Expressiveness on Voice Chatbot Interactions
Pub Date: 2022-07-26 | DOI: 10.1145/3543829.3543840
Qingxiaoyang Zhu, Author Chau, Michelle Cohn, Kai-Hui Liang, Hao-Chuan Wang, Georgia Zellou, Zhou Yu
{"title":"Effects of Emotional Expressiveness on Voice Chatbot Interactions","authors":"Qingxiaoyang Zhu, Author Chau, Michelle Cohn, Kai-Hui Liang, Hao-Chuan Wang, Georgia Zellou, Zhou Yu","doi":"10.1145/3543829.3543840","DOIUrl":"https://doi.org/10.1145/3543829.3543840","url":null,"abstract":"Speech-based dialog systems primarily interact with users through their spoken responses. Understanding users’ perception of, and subconscious behaviors toward, the system’s speech are crucial for improving their design. In the current study, a voice chatbot designed for having a conversation with users in the domain of music is used to test the impact of emotional expressiveness in its text-to-speech (TTS) output. We parametrically manipulated the degree of emotional expressiveness via prosody and lexical choice across conditions. We used a two-pronged approach to test these effects on users: a user interaction study (Experiment 1 – between-subjects design) and an independent perception study (Experiment 2 – within-subjects design). Both studies provide converging evidence that increasing emotional prosody yields more positive perceptions of chatbot interactions, in increasing perception of emotional expressiveness (Experiments 1 and 2) as well as overall engagement, human-likeness, and likability of the bot (Experiment 2). We discuss these findings in terms of theories of human-computer interaction, as well as their implications for conversational design.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129986587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
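For context, prosody in TTS output is commonly controlled with SSML markup, as in the sketch below; the specific rate and pitch values, the two prompt wordings, and the assumption that such markup was used at all are illustrative guesses, not details taken from the paper.

```python
# Illustrative sketch: varying emotional expressiveness of a chatbot prompt
# via SSML prosody markup plus a small lexical change. Values are arbitrary
# examples, not the parameters used in the study.
def render_prompt(text: str, expressive: bool) -> str:
    """Wrap chatbot text in SSML, optionally raising pitch and speaking rate."""
    if expressive:
        return f'<speak><prosody rate="110%" pitch="+15%">{text}</prosody></speak>'
    return f"<speak>{text}</speak>"

print(render_prompt("I found a song you might like.", expressive=False))
print(render_prompt("Oh, I found a song you might really like!", expressive=True))
```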
Rules Of Engagement: Levelling Up To Combat Unethical CUI Design
Pub Date: 2022-07-19 | DOI: 10.1145/3543829.3544528
Thomas Mildner, Philip R. Doyle, Gian-Luca Savino, R. Malaka
{"title":"Rules Of Engagement: Levelling Up To Combat Unethical CUI Design","authors":"Thomas Mildner, Philip R. Doyle, Gian-Luca Savino, R. Malaka","doi":"10.1145/3543829.3544528","DOIUrl":"https://doi.org/10.1145/3543829.3544528","url":null,"abstract":"While a central goal of HCI has always been to create and develop interfaces that are easy to use, a deeper focus has been set more recently on designing interfaces more ethically. However, the exact meaning and measurement of ethical design has yet to be established both within the CUI community and among HCI researchers more broadly. In this provocation paper we propose a simplified methodology to assess interfaces based on five dimensions taken from prior research on so-called dark patterns. As a result, our approach offers a numeric score to its users representing the manipulative nature of evaluated interfaces. It is hoped that the approach - which draws a distinction between persuasion and manipulative design, and focuses on how the latter functions rather than how it manifests - will provide a viable way for quantifying instances of unethical interface design that will prove useful to researchers, regulators and potentially even users.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123617437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
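The abstract does not name the five dimensions or the scoring procedure, so the sketch below only shows the general shape of such a score: rate an interface on five dark-pattern dimensions and aggregate into one number. The dimension names (borrowed from the broader dark-patterns literature), the 0-4 scale, and the simple averaging are all placeholders rather than the authors' method.

```python
# Placeholder sketch of a five-dimension manipulativeness score. Dimension
# names, scale, and aggregation are assumptions for illustration; they are
# not the methodology proposed in the paper.
PLACEHOLDER_DIMENSIONS = (
    "nagging", "obstruction", "sneaking",
    "interface_interference", "forced_action",
)

def manipulativeness_score(ratings: dict) -> float:
    """Average 0-4 ratings across the five placeholder dimensions."""
    missing = [d for d in PLACEHOLDER_DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    values = [ratings[d] for d in PLACEHOLDER_DIMENSIONS]
    return sum(values) / len(values)

example = {"nagging": 4, "obstruction": 3, "sneaking": 1,
           "interface_interference": 2, "forced_action": 0}
print(f"Manipulativeness score: {manipulativeness_score(example):.1f} / 4")
```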
When It’s Not Worth the Paper It’s Written On: A Provocation on the Certification of Skills in the Alexa and Google Assistant Ecosystems
Pub Date: 2022-06-22 | DOI: 10.1145/3543829.3544513
W. Seymour, M. Coté, J. Such
{"title":"When It’s Not Worth the Paper It’s Written On: A Provocation on the Certification of Skills in the Alexa and Google Assistant Ecosystems","authors":"W. Seymour, M. Coté, J. Such","doi":"10.1145/3543829.3544513","DOIUrl":"https://doi.org/10.1145/3543829.3544513","url":null,"abstract":"The increasing reach and functionality of voice assistants has allowed them to become a general-purpose platform for tasks like playing music, accessing information, and controlling smart home devices. In order to maintain the quality of third-party skills and to protect children and other members of the public from inappropriate or malicious skills, platform providers have developed content policies and certification procedures that skills must undergo prior to public release. Unfortunately, research suggests that these measures have been ineffective at curating voice assistant platforms, with documented instances of skills with significant security and privacy problems. This provocation paper outlines how the underlying architectures of these platforms had turned skill certification into a seemingly intractable problem, as well as how current certification methods fall short of their full potential. We present a roadmap for improving the state of skill certification on contemporary voice assistant platforms, including research directions and actions that need to be taken by platform vendors. Promoting this change in domestic voice assistants is especially important, as developers of commercial and industrial assistants or other similar contexts increasingly look to these devices for norms and conventions.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124808644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Can you meaningfully consent in eight seconds? Identifying Ethical Issues with Verbal Consent for Voice Assistants
Pub Date: 2022-06-22 | DOI: 10.1145/3543829.3544521
W. Seymour, M. Coté, J. Such
{"title":"Can you meaningfully consent in eight seconds? Identifying Ethical Issues with Verbal Consent for Voice Assistants","authors":"W. Seymour, M. Coté, J. Such","doi":"10.1145/3543829.3544521","DOIUrl":"https://doi.org/10.1145/3543829.3544521","url":null,"abstract":"Determining how voice assistants should broker consent to share data with third party software has proven to be a complex problem. Devices often require users to switch to companion smartphone apps in order to navigate permissions menus for their otherwise hands-free voice assistant. More in line with smartphone app stores, Alexa now offers “voice-forward consent”, allowing users to grant skills access to personal data mid-conversation using speech. While more usable and convenient than opening a companion app, asking for consent ‘on the fly’ can undermine several concepts core to the informed consent process. The intangible nature of voice interfaces further blurs the boundary between parts of an interaction controlled by third-party developers from the underlying platforms. This provocation paper highlights key issues with current verbal consent implementations, outlines directions for potential solutions, and presents five open questions to the research community. In so doing, we hope to help shape the development of usable and effective verbal consent for voice assistants and similar conversational user interfaces.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125463003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Embrace your incompetence! Designing appropriate CUI communication through an ecological approach
Pub Date: 2022-06-21 | DOI: 10.1145/3543829.3544531
S. Becker, Philip R. Doyle, Justin Edwards
{"title":"Embrace your incompetence! Designing appropriate CUI communication through an ecological approach","authors":"S. Becker, Philip R. Doyle, Justin Edwards","doi":"10.1145/3543829.3544531","DOIUrl":"https://doi.org/10.1145/3543829.3544531","url":null,"abstract":"People form impressions of their dialogue partners, be they other people or machines, based on cues drawn from their communicative style. Recent work has suggested that the gulf between people’s expectations and the reality of CUI interaction widens when these impressions are misaligned with the actual capabilities of conversational user interfaces (CUIs). This has led some to rally against a perceived overriding concern for naturalness, calling instead for more representative, or appropriate communicative cues. Indeed, some have argued for a move away from naturalness as a goal for CUI design and communication. We contend that naturalness need not be abandoned, if we instead aim for ecologically grounded design. We also suggest a way this might be achieved and call on CUI designers to embrace incompetence! By letting CUIs express uncertainty and embarrassment through ecologically valid and appropriate cues that are ubiquitous in human communication - CUI designers can achieve more appropriate communication without turning away from naturalness entirely.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126455537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bilingual by default: Voice Assistants and the role of code-switching in creating a bilingual user experience
Pub Date: 2022-06-20 | DOI: 10.1145/3543829.3544511
Helin Cihan, Yunhan Wu, Paola R. Peña, Justin Edwards, Benjamin R. Cowan
{"title":"Bilingual by default: Voice Assistants and the role of code-switching in creating a bilingual user experience","authors":"Helin Cihan, Yunhan Wu, Paola R. Peña, Justin Edwards, Benjamin R. Cowan","doi":"10.1145/3543829.3544511","DOIUrl":"https://doi.org/10.1145/3543829.3544511","url":null,"abstract":"Conversational User Interfaces such as Voice Assistants are hugely popular. Yet they are designed to be monolingual by default, lacking support for, or sensitivity to, the bilingual dialogue experience. In this provocation paper, we highlight the language production challenges faced in VA interaction for bilingual users. We argue that, by facilitating phenomena seen in bilingual interaction, such as code-switching, we can foster a more inclusive and improved user experience for bilingual users. We also explore ways that this might be achieved, through the support of multiple language recognition as well as being sensitive to the preferences of code-switching in speech output.","PeriodicalId":138046,"journal":{"name":"Proceedings of the 4th Conference on Conversational User Interfaces","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132085734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0