{"title":"Secure, Comfortable or Functional: Exploring Domain-Sensitive Prompt Design for In-Car Voice Assistants","authors":"Anna-Maria Meck","doi":"10.1145/3571884.3604314","DOIUrl":"https://doi.org/10.1145/3571884.3604314","url":null,"abstract":"User Experience in Human-Computer Interaction is composed of a multitude of building blocks, one of which is how Voice Assistants (VAs) talk to their users. Linguistic considerations around syntax, grammar, and lexis have proven to influence users’ perception of VAs. Users have nuanced preferences regarding how they want their VAs to talk to them. Previous studies have found these preferences to differ between domains, but an exhaustive and methodical overview is still outstanding. By means of an A/B study spanning over domains as well as dialog types, this paper methodically closes this gap and explores the degree of domain-sensitivity across different types of dialogs in German. The results paint a mixed picture regarding the importance of domain-sensitivity. While some degree of domain-sensitivity was found for in-car prompts, it generally seems to play a rather minor role in users’ experience of VAs in the vehicle.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134157302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Anthropomorphism: Unraveling the True Priorities of Chatbot Usage in SMEs","authors":"Tamas Makany, Sungjong Roh, Kotaro Hara, Jie Min Hua, Felicia Goh Si Ying, Wilson Teh Yang Jie","doi":"10.1145/3571884.3604315","DOIUrl":"https://doi.org/10.1145/3571884.3604315","url":null,"abstract":"This study examined business communication practices with chatbots among various Small and Medium Enterprise (SME) stakeholders in Singapore, including business owners/employees, customers, and developers. Through qualitative interviews and chatbot transcript analysis, we investigated two research questions: (1) How do the expectations of SME stakeholders compare to the conversational design of SME chatbots? and (2) What are the business reasons for SMEs to add human-like features to their chatbots? Our findings revealed that functionality is more crucial than anthropomorphic characteristics, such as personality and name. Stakeholders preferred chatbots that explicitly identified themselves as machines to set appropriate expectations. Customers prioritized efficiency, favoring fixed responses over free text input. Future research should consider the evolving expectations of consumers, business owners, and developers as chatbot technology advances and becomes more widely adopted.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132780429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harnessing Large Language Models for Cognitive Assistants in Factories","authors":"Samuel Kernan Freire, Mina Foosherian, Chaofan Wang, E. Niforatos","doi":"10.1145/3571884.3604313","DOIUrl":"https://doi.org/10.1145/3571884.3604313","url":null,"abstract":"As agile manufacturing expands and workforce mobility increases, the importance of efficient knowledge transfer among factory workers grows. Cognitive Assistants (CAs) with Large Language Models (LLMs), like GPT-3.5, can bridge knowledge gaps and improve worker performance in manufacturing settings. This study investigates the opportunities, risks, and user acceptance of LLM-powered CAs in two factory contexts: textile and detergent production. Several opportunities and risks are identified through a literature review, proof-of-concept implementation, and focus group sessions. Factory representatives raise concerns regarding data security, privacy, and the reliability of LLMs in high-stake environments. By following design guidelines regarding persistent memory, real-time data integration, security, privacy, and ethical concerns, LLM-powered CAs can become valuable assets in manufacturing settings and other industries.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114487382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Misinformation in Third-party Voice Applications","authors":"M. Bispham, S. Sattar, Clara Zard, Xavier Ferrer-Aran, Jide S. Edu, Guillermo Suarez-Tangil, J. Such","doi":"10.1145/3571884.3604307","DOIUrl":"https://doi.org/10.1145/3571884.3604307","url":null,"abstract":"This paper investigates the potential for spreading misinformation via third-party voice applications in voice assistant ecosystems such as Amazon Alexa and Google Assistant. Our work fills a gap in prior work on privacy issues associated with third-party voice applications, looking at security issues related to outputs from such applications rather than compromises to privacy from user inputs. We define misinformation in the context of third-party voice applications and implement an infrastructure for testing third-party voice applications using automated natural language interaction. Using our infrastructure, we identify — for the first time — several instances of misinformation in third-party voice applications currently available on the Google Assistant and Amazon Alexa platforms. We then discuss the implications of our work for developing measures to pre-empt the threat of misinformation and other types of harmful content in third-party voice assistants becoming more significant in the future.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115304944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding and Answering Incomplete Questions","authors":"Angus Addlesee, Marco Damonte","doi":"10.1145/3571884.3597133","DOIUrl":"https://doi.org/10.1145/3571884.3597133","url":null,"abstract":"Voice assistants interrupt people when they pause mid-question, a frustrating interaction that requires the full repetition of the entire question again. This impacts all users, but particularly people with cognitive impairments. In human-human conversation, these situations are recovered naturally as people understand the words that were uttered. In this paper we build answer pipelines which parse incomplete questions and repair them following human recovery strategies. We evaluated these pipelines on our new corpus, SLUICE. It contains 21,000 interrupted questions, from LC-QuAD 2.0 and QALD-9-plus, paired with their underspecified SPARQL queries. Compared to a system that is given the full question, our best partial understanding pipeline answered only 0.77% fewer questions. Results show that our pipeline correctly identifies what information is required to provide an answer but is not yet provided by the incomplete question. It also accurately identifies where that missing information belongs in the semantic structure of the question.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121667061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Cross-Content Conversational Agents for Behaviour Change: Investigating Domain Independence and the Role of Lexical Features in Written Language Around Change","authors":"Selina Meyer, David Elsweiler","doi":"10.1145/3571884.3597136","DOIUrl":"https://doi.org/10.1145/3571884.3597136","url":null,"abstract":"Valuable insights into an individual’s current thoughts and stance regarding behaviour change can be obtained by analysing the language they use, which can be conceptualized using Motivational Interviewing concepts. Training conversational agents (CAs) to detect and employ these concepts could help them provide more personalized and effective assistance. This study investigates the similarity of written language around behaviour change spanning diverse conversational and social contexts and change objectives. Drawing on previous research that applied MI concepts to texts about health behaviour change, we evaluate the performance of existing classifiers on six newly constructed datasets from diverse contexts. To gain insights in determining factors when identifying change language, we explore the impact of lexical features on classification. 
The results suggest that patterns of change language remain stable across contexts and domains, leading us to conclude that peer-to-peer online data may be sufficient to train CAs to understand user utterances related to behaviour change.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121601046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators","authors":"Andreas Liesenfeld, Alianda Lopez, Mark Dingemanse","doi":"10.1145/3571884.3604316","DOIUrl":"https://doi.org/10.1145/3571884.3604316","url":null,"abstract":"Large language models that exhibit instruction-following behaviour represent one of the biggest recent upheavals in conversational interfaces, a trend in large part fuelled by the release of OpenAI’s ChatGPT, a proprietary large language model for text generation fine-tuned through reinforcement learning from human feedback (LLM+RLHF). We review the risks of relying on proprietary software and survey the first crop of open-source projects of comparable architecture and functionality. The main contribution of this paper is to show that openness is differentiated, and to offer scientific documentation of degrees of openness in this fast-moving field. We evaluate projects in terms of openness of code, training data, model weights, RLHF data, licensing, scientific documentation, and access methods. We find that while there is a fast-growing list of projects billing themselves as ‘open source’, many inherit undocumented data of dubious legality, few share the all-important instruction-tuning (a key site where human annotation labour is involved), and careful scientific documentation is exceedingly rare. 
Degrees of openness are relevant to fairness and accountability at all points, from data collection and curation to model architecture, and from training and fine-tuning to release and deployment.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126185438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unified Conversational Models with System-Initiated Transitions between Chit-Chat and Task-Oriented Dialogues","authors":"Ye Liu, Stefan Ultes, W. Minker, Wolfgang Maier","doi":"10.1145/3571884.3597125","DOIUrl":"https://doi.org/10.1145/3571884.3597125","url":null,"abstract":"Spoken dialogue systems (SDSs) have been separately developed under two different categories, task-oriented and chit-chat. The former focuses on achieving functional goals and the latter aims at creating engaging social conversations without special goals. Creating a unified conversational model that can engage in both chit-chat and task-oriented dialogue is a promising research topic in recent years. However, the potential “initiative” that occurs when there is a change between dialogue modes in one dialogue has rarely been explored. In this work, we investigate two kinds of dialogue scenarios, one starts from chit-chat implicitly involving task-related topics and finally switching to task-oriented requests; the other starts from task-oriented interaction and eventually changes to casual chat after all requested information is provided. We contribute two efficient prompt models which can proactively generate a transition sentence to trigger system-initiated transitions in a unified dialogue model. One is a discrete prompt model trained with two discrete tokens, the other one is a continuous prompt model using continuous prompt embeddings automatically generated by a classifier. 
We furthermore show that the continuous prompt model can also be used to guide the proactive transitions between particular domains in a multi-domain task-oriented setting.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128379952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deceptive AI Ecosystems: The Case of ChatGPT","authors":"Xiao Zhan, Yifan Xu, Ş. Sarkadi","doi":"10.1145/3571884.3603754","DOIUrl":"https://doi.org/10.1145/3571884.3603754","url":null,"abstract":"ChatGPT, an AI chatbot, has gained popularity for its capability in generating human-like responses. However, this feature carries several risks, most notably due to its deceptive behaviour such as offering users misleading or fabricated information that could further cause ethical issues. To better understand the impact of ChatGPT on our social, cultural, economic, and political interactions, it is crucial to investigate how ChatGPT operates in the real world where various societal pressures influence its development and deployment. This paper emphasizes the need to study ChatGPT \"in the wild\", as part of the ecosystem it is embedded in, with a strong focus on user involvement. We examine the ethical challenges stemming from ChatGPT’s deceptive human-like interactions and propose a roadmap for developing more transparent and trustworthy chatbots. Central to our approach is the importance of proactive risk assessment and user participation in shaping the future of chatbot technology.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131765524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Democratizing Chatbot Debugging: A Computational Framework for Evaluating and Explaining Inappropriate Chatbot Responses","authors":"Xu Han, Michelle X. Zhou, Yichen Wang, Wenxi Chen, Tom Yeh","doi":"10.1145/3571884.3604308","DOIUrl":"https://doi.org/10.1145/3571884.3604308","url":null,"abstract":"Evaluating and understanding the inappropriateness of chatbot behaviors can be challenging, particularly for chatbot designers without technical backgrounds. To democratize the debugging process of chatbot misbehaviors for non-technical designers, we propose a framework that leverages dialogue act (DA) modeling to automate the evaluation and explanation of chatbot response inappropriateness. The framework first produces characterizations of context-aware DAs based on discourse analysis theory and real-world human-chatbot transcripts. It then automatically extracts features to identify the appropriateness level of a response and can explain the causes of the inappropriate response by examining the DA mismatch between the response and its conversational context. 
Using interview chatbots as a testbed, our framework achieves comparable classification accuracy with higher explainability and fewer computational resources than the deep learning baseline, making it the first step in utilizing DAs for chatbot response appropriateness evaluation and explanation.","PeriodicalId":127379,"journal":{"name":"Proceedings of the 5th International Conference on Conversational User Interfaces","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131279609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}