{"title":"Voice assistants and older people: some open issues","authors":"Sergio Sayago, B. Neves, Benjamin R. Cowan","doi":"10.1145/3342775.3342803","DOIUrl":"https://doi.org/10.1145/3342775.3342803","url":null,"abstract":"Voice Assistants (VAs) like Amazon Echo and Apple Siri are an increasingly popular way of interacting with a range of applications. VAs are also currently gaining traction in the HCI community. Yet, and despite a growing ageing population, work on VAs with older people is scant. In this CUI 2019 provocative paper we aim to encourage research on VAs with and for older people (aged 65+). We outline several important open issues to address when researching this population, such as perceptions and barriers to VAs use, aspects of Conversational User Experience tied to VAs response design, and anthropomorphic design. We also raise some 'provocative' and yet-to-be-addressed research questions, hoping to operationalize the issues discussed and spark debate and discussion about them during and after CUI 2019.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124814065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"\"I don't know what you're talking about, HALexa\": the case for voice user interface guidelines","authors":"Christine Murad, Cosmin Munteanu","doi":"10.1145/3342775.3342795","DOIUrl":"https://doi.org/10.1145/3342775.3342795","url":null,"abstract":"As Voice User Interfaces (VUI) grow in popularity in both the research and academic world, designers are met with new challenges in delivering on the promises of voice interaction. These promises depict a world where one can just speak to their devices, akin to HAL-9000; yet, existing usability challenges still leave many disappointed. These challenges often make or break the experience users have with VUIs. We argue that what we are missing is a foundation on which to build (and deliver) our promises: it is essential to build a foundation of VUI principles that can guide future designers in the development of voice interaction. We must address the lack of research in developing foundational VUI-specific guidelines that can aid designers in meeting the expectations and promises of true voice interaction.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129865939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conversation considered harmful?","authors":"S. Reeves","doi":"10.1145/3342775.3342796","DOIUrl":"https://doi.org/10.1145/3342775.3342796","url":null,"abstract":"As a concept, 'conversation' is rife with troublemaking potential. It is not that we should necessarily abandon use of 'conversation' in conversational user interface (CUI) research, but rather treat it with a significant measure of care due to the varied conceptual problems it introduces---problems sketched in this paper. I suggest an alternative, possibly safer articulation and conceptual shift: conversation-sensitive design.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121623911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can direct address affect user engagement with chatbots embodied in physical spaces?","authors":"Heloisa Candello, Claudio S. Pinhanez, M. Pichiliani, Marisa Vasconcelos, Haylla Conde","doi":"10.1145/3342775.3342787","DOIUrl":"https://doi.org/10.1145/3342775.3342787","url":null,"abstract":"This paper investigates how direct addressing the user, such as using a vocative, affects the user experience with chatbots embodied in an interactive space context. Direct addressing increases user engagement in presentations and in performing arts, and we investigated its use in an artwork where visitors ask questions to three chatbots using interactive text projected on a table surface. The study comprised two versions of the system; the first was neutral while the second employed direct address in the answers from the chatbots. We logged 1188 interaction sessions with the exhibit and conducted observational studies and semi-structured interviews with 92 visitors in the wild. The analysis of the visitor's interactions showed that direct address had almost no direct effect on user engagement regarding what kind of questions were asked. The field study brought richer perspectives on how visitors interact with chatbots and their reported experiences with the two versions. Based on our findings we provide general recommendations for the design of chatbots in public spaces.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116514383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face-to-face conversation: why embodiment matters for conversational user interfaces","authors":"M. Foster","doi":"10.1145/3342775.3342810","DOIUrl":"https://doi.org/10.1145/3342775.3342810","url":null,"abstract":"Face-to-face conversation is the basic-and richest-form of human communication. While modern conversational user interfaces are increasingly able to incorporate more and more features of face-to-face conversation, including unrestricted verbal communication and continuous social coordination among the participants, most systems do not take full advantage of the interaction possibilities provided by multimodal, embodied, non-verbal communication. In this position paper, we discuss how this limitation affects the possible applications of conversational user interfaces, and describe how current research in embodied communication and social robotics has the potential to address this limitation, with possible benefits to both research communities.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115560956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Issues relating to trust in care agents for the elderly","authors":"Brendan Spillane, E. Gilmartin, Christian Saam, V. Wade","doi":"10.1145/3342775.3342808","DOIUrl":"https://doi.org/10.1145/3342775.3342808","url":null,"abstract":"There is increasing academic interest in and commercial development of care agents to assist with the care of the elderly in the home. This paper defines some of the under-explored questions and issues relating to trust. It raises specific questions to instigate debate and recommends directions for future research in the domain.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122648307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Patterns of gaze in speech agent interaction","authors":"Razan N. Jaber, Donald Mcmillan, Jordi Solsona Belenguer, Barry A. T. Brown","doi":"10.1145/3342775.3342791","DOIUrl":"https://doi.org/10.1145/3342775.3342791","url":null,"abstract":"While gaze is an important part of human to human interaction, it has been neglected in the design of conversational agents. In this paper, we report on our experiments with adding gaze to a conventional speech agent system. Tama is a speech agent that makes use of users' gaze to initiate a query, rather than a wake word or phrase. In this paper, we analyse the patterns of detected gaze when interacting with the device. We use k-means clustering of the log data from ten users tested in a dual-participant discussion tasks. These patterns are verified and explained through close analysis of the video data of the trials. We present similarities of patterns between conditions both when querying the agent and listening to the answers. We also present the analysis of patterns detected when only in the gaze condition. Users can take advantage of their understanding of gaze in conversation to interact with a gaze-enabled agent but are also able to fluently adjust their use of gaze to interact with the technology successfully. Our results point to some patterns of interaction which can be used as a starting point to build gaze-awareness into voice-user interfaces.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128312976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From sex and therapy bots to virtual assistants and tutors: how emotional should artificially intelligent agents be?","authors":"Stella George","doi":"10.1145/3342775.3342807","DOIUrl":"https://doi.org/10.1145/3342775.3342807","url":null,"abstract":"The question of whether intelligent agents should have an emotional capacity has been rehearsed for over 20 years. In that time moving in an affirming direction from 'should we?' to 'how will we?'. Less clear however, is process for developing emotional systems: how do we characterise levels of emotion; how do we relate emotion to an agent's intended function; and who should make these decisions about emotional sufficiency? Categorising the discussion in establishing emotional detection, emotional intelligence, the ability to emote and generate feelings provides a basic structure against which to consider how central developers and engineers are in the decision making about emotional sufficiency via conversational interfaces, and further it is essential in empowering this discussion with a wider community in understanding use (and potential misuse) of emotional capacity in AI.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133049588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In case of emergency, order pizza: an urgent case of action formation and recognition","authors":"Saul Albert, W. Housley, E. Stokoe","doi":"10.1145/3342775.3342800","DOIUrl":"https://doi.org/10.1145/3342775.3342800","url":null,"abstract":"The biggest challenge for voice technologies is action recognition. This is partly because current approaches prioritize abstract context over practical action, and tend to ignore the detailed, sequential structure of talk by emulating scripted, often stereotypical dialogue. This provocation paper analyzes an urgent case of how a caller and a 911 dispatcher work together to achieve action recognition. We outline their 'seen but unnoticed' interactional methods and suggest how computational systems can learn from conversation analysis and use micro-analytic detail to recognize social actions.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121081273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Who owns your voice?: ethically sourced voices for non-commercial tts applications","authors":"Kristen M. Scott, S. Ashby, David A. Braude, M. Aylett","doi":"10.1145/3342775.3342793","DOIUrl":"https://doi.org/10.1145/3342775.3342793","url":null,"abstract":"We examine the ethical questions surrounding voice donation for speech synthesis technology, including questions of voice ownership, identity and unintended consequences. This is examined specifically in the context of non-professional volunteer voice donors in small communities. We propose a multi-step informed consent process that more fully engages with TTS voice donors.","PeriodicalId":408689,"journal":{"name":"Proceedings of the 1st International Conference on Conversational User Interfaces","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125441772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}