{"title":"The Heuristic of Sufficient Explanation: Implications for Human-Agent Interaction","authors":"Andrew J. Vonasch","doi":"10.1145/3527188.3561943","DOIUrl":"https://doi.org/10.1145/3527188.3561943","url":null,"abstract":"The heuristic of sufficient explanation (HOSE) is a process by which people learn hidden information about other agents. If the publicly available reasons for behaviour seem insufficient to explain the agent's behaviour, people look for hidden reasons that would explain it. I will present evidence for HOSE across several contexts, including economic decision-making, belief in conspiracy theories, judgments of other agents’ bad intentions, and romantic attraction. I will discuss implications for Human-Agent Interaction more broadly, including for perceptions of non-human agents’ motives.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128432184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neurodiverse Human-Machine Interaction and Collaborative Problem-Solving in Social VR","authors":"Patrick Nalepka, N. Caruana, D. M. Kaplan, Rachel W. Kallen, E. Pellicano, Michael J. Richardson","doi":"10.1145/3527188.3563933","DOIUrl":"https://doi.org/10.1145/3527188.3563933","url":null,"abstract":"Social motor coordination is an important mechanism responsible for creating shared understanding but can be a challenge for Autistic individuals. Social virtual reality (VR) provides an opportunity to create a safe and inclusive environment for which interactions can be augmented to promote social interactivity. Due to the bi-directional nature of social interaction and adaptation, we created a framework to explore social motor coordination with a virtual artificial agent which can exhibit human-like behaviors. In this experiment, we assessed the interactive behaviors of participants completing a collaborative problem-solving task with the agent using multidimensional cross-recurrence quantification analysis (mdCRQA). Our results show that participants who discovered novel solutions to the task exhibited greater coupling to the artificial agent regardless of participant characteristics. Future work will explore how social VR environments can be augmented to promote social coordination.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134069492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does Media Format Matter? Investigating the Toxicity, Sentiment and Topic of Audio Versus Text Social Media Messages","authors":"Jamy J. Li, Karen Penaranda Valdivia","doi":"10.1145/3527188.3561927","DOIUrl":"https://doi.org/10.1145/3527188.3561927","url":null,"abstract":"Audio messaging and voice-based interactions are growing in popularity. Lexical features of a manually-curated dataset of real-world audio tweets, as well as text and video/image tweets from the same user accounts, are analyzed to explore how user-generated audio differs from text. The toxicity, sentiment, topic and length of audio tweet transcripts are compared with their accompanying text, date-matched text tweets from the same users and date-matched video/image tweets and their accompanying text. Audio tweets were significantly less toxic than both text tweets and text that accompanied the audio tweet, as well as significantly lower sentiment than their accompanying text. The topics and word counts of audio, text and video/image tweets also differed. These findings are then used to derive design implications for audio and conversational agent interaction. This research contributes preliminary insights about audio social media messages that may help researchers and designers of audio- and agent-based interaction better understand and design for different media formats.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131380837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Angel and Devil Robots: Personifying a Dilemma to Influence Willpower","authors":"Kento Goto, Kazuki Mizumaru, Daisuke Sakamoto, T. Ono","doi":"10.1145/3527188.3563934","DOIUrl":"https://doi.org/10.1145/3527188.3563934","url":null,"abstract":"“Angels and Devils” express a human’s dilemma. In this paper, we produced this expression with multiple robots and investigated how humans change their decision and behavior in a moral dilemma environment. We divided the participants into Neutral-Neutral and Angel-Devil robot groups and investigated the willpower of the two groups by the alphabet writing task. We also examined the impression of the robots by Godspeed questionnaires and the results of the questionnaire.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122413057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic Arm Generative Painting Through Real-time Analysis of Music Performance","authors":"Richard J. Savery, A. Savery, J. Baird","doi":"10.1145/3527188.3563913","DOIUrl":"https://doi.org/10.1145/3527188.3563913","url":null,"abstract":"This paper describes a prototype audio-visual performance of a Ufactory Uarm Swift and a live musician. In this setting, the robotic arm was used as an AI agent to create a visual representation of a musical work in real-time. An A4 white canvas was gradually filled with a mixture of black, blue, red and yellow paints across the span of approximately eight minutes. The musician, performing on an acoustic violin, fitted with a custom built audio interface, performed multiple versions of an improvisatory work developed specifically for the prototype performance. The following sections discuss our technical approach to programming and implementing the Ufactory Uarm Swift as a painting arm, reflections of the musical process and propose future directions for this project.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125392954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception of Emotional Relationships by Observing Body Expressions between Multiple Robots","authors":"Kazuki Mizumaru, Daisuke Sakamoto, T. Ono","doi":"10.1145/3527188.3561940","DOIUrl":"https://doi.org/10.1145/3527188.3561940","url":null,"abstract":"Emotional expressions are essential in augmenting a robot’s expression. Many robots with limited facial expression freedom, such as the Nao robot, can effectively express emotions using body movement. Previous studies have used a single robot and evaluated its expression. Multiple robots expressing emotion in their interactions may have a greater impact than robots expressing emotion only when interacting with humans. The relationships between these robots should allow them to develop more diverse modes of expression. However, it is unclear how people perceive relationships by observing robots’ emotional expressions. In this study, we applied every combination of four characteristic body emotion expressions (Sadness, Fear, Pride, and Happiness) based on Russell’s circumplex model to robots. Furthermore, we investigated how the relationships were evaluated in an online video-based experiment. The results show that the relationships between the two robots are influenced by each robot’s body emotional movement and can be interpreted using the valence-arousal model.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126993611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-Social Robots Interaction: The Blurred Line between Necessary Anthropomorphization and Manipulation","authors":"Rachel Carli, A. Najjar, D. Calvaresi","doi":"10.1145/3527188.3563941","DOIUrl":"https://doi.org/10.1145/3527188.3563941","url":null,"abstract":"In the context of human-social robot interaction, it has been proven that an affable design and the ability to exhibit emotional and social skills are central to fostering acceptance and more efficient system performance. Nevertheless, these features may result in manipulative dynamics, able to impact the psychological sphere of the users, affecting their ability to make decisions and to exercise free, conscious will. This highlights the need to identify a legal framework that balances the interests at stake. To this end, the principle of human dignity is proposed here as a criterion to ensure (i) the protection of users’ fundamental rights, and (ii) an effective and truly human-friendly technological development.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125718692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed-Cultural Speech for Intelligent Virtual Agents - the Impact of Different Non-Native Accents Using Natural or Synthetic Speech in the English Language","authors":"David Obremski, Helena Babette Hering, Paula Friedrich, Birgit Lugrin","doi":"10.1145/3527188.3561921","DOIUrl":"https://doi.org/10.1145/3527188.3561921","url":null,"abstract":"This paper presents an exploratory study investigating the impact of non-native accented speech on the perception of Intelligent Virtual Agents (IVAs). In an online study, native English speakers watched a video of an IVA holding a monologue whilst speaking English with either a Spanish, Hindi or Mandarin accent that was either recorded by native speakers of that respective language (natural speech) or synthetically generated (synthetic speech). The results showed a significant impact of naturalness of speech on the IVAs perceived warmth and a significant interaction of accent and naturalness of speech on its perceived competence. The naturalness of speech impacted the participants’ perception of the IVA as a non-native speaker of English, and the correctness of the attributed mother tongue in the Spanish and the Mandarin accent condition. These results are a valuable contribution to research on mixed-cultural IVAs in general and non-native speech as a cultural cue more specifically.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"os-56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127720263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effect of Exaggerated Nonverbal Cues on the Perception of the Robot Pepper","authors":"Sarah Fischer, Darja Stoeva, M. Gelautz","doi":"10.1145/3527188.3563929","DOIUrl":"https://doi.org/10.1145/3527188.3563929","url":null,"abstract":"This paper explores the effects of selected exaggerated nonverbal cues on the perception of the Pepper robot in a story-telling scenario. We conduct a video-based user study based on an online survey containing the Godspeed questionnaire and additional interviews. The results of the study indicate that using exaggeration as a method to design nonverbal cues can improve the perception of the robot in both robot-talking and robot-listening cases. We find that Pepper is perceived as animate and safe in both cases, and additionally as anthropomorphic for the case when the robot is talking.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134369298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Scent to Impress: The Smell of Lavender enhances Trust of Robots","authors":"Marlena R. Fraune, Aralee Derflinger, Alexis Grosofsky","doi":"10.1145/3527188.3563936","DOIUrl":"https://doi.org/10.1145/3527188.3563936","url":null,"abstract":"Robots are being employed to assist people such as in doctors’ offices or hospitals. The robots will be most effective if people trust them. We examine how one understudied environmental factor, scent, can affect trust of a robot. In a room that was unscented or positively scented (lavender), participants (N = 22) answered the robot Nao's questions about their sleep quality, then indicated their trust of the robot. Results show that participants in the lavender scented room trusted the robot more than those in the unscented room. The researchers intend to collect more data, but these preliminary results indicate that researchers should be cautious of the scents in their experiments. Researchers and practitioners should also examine how to harness scents to enhance trust and acceptance of assistive robots.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129972038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}