{"title":"A very simple design for a very intelligent machine (but with a catch)","authors":"Douglas Campbell","doi":"10.1145/3527188.3561942","DOIUrl":"https://doi.org/10.1145/3527188.3561942","url":null,"abstract":"","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127627512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Human-Artificial Agent Decision Congruence in a Coordinated Action Task","authors":"Gaurav Patil, Phillip Bagala, Patrick Nalepka, Rachel W. Kallen, Michael J. Richardson","doi":"10.1145/3527188.3563923","DOIUrl":"https://doi.org/10.1145/3527188.3563923","url":null,"abstract":"Recommender systems designed to augment human decision-making in multi-agent tasks need to not only recommend actions that align with the task goal, but which also maintain coordinative behaviors between agents. Further, if these systems are to be used for skill training, they need to impart implicit learning to its users. This work compared a recommender system trained using deep reinforcement learning to a heuristic-based system in recommending actions to human participants teaming with an artificial agent during a collaborative problem-solving task. In addition to evaluating task performance and learning, we also evaluate the extent to which the human action are congruent with the recommended actions.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126256204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Backchannel Generation Model for a Third Party Listener Agent","authors":"Divesh Lala, K. Inoue, T. Kawahara, Kei Sawada","doi":"10.1145/3527188.3561926","DOIUrl":"https://doi.org/10.1145/3527188.3561926","url":null,"abstract":"In this work we propose a listening agent which can be used in a conversation between two humans. We firstly conduct a corpus analysis to identify three different categories of backchannel which the agent can use - responsive interjections, expressive interjections and shared laughs. From this data we train and evaluate a continuous backchannel generation model consisting of separate timing and form prediction models. We then conduct a subjective experiment to compare our model to random, dyadic, and ground truth models. We find that our model outperforms a random baseline and is comparable to the dyadic model despite the low evaluation of expressive interjections. We suggest that the perception of expressive interjections contribute significantly to the perception of the agent’s empathy and understanding of the conversation. The results also show the need for a more robust model to generate expressive interjections, perhaps aided by the use of linguistic features.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127591934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interactive Perceptual Companion: Prototyping a Digital Perception for HAI","authors":"H. Akmal, Jonathan Shaw, E. B. Sandoval","doi":"10.1145/3527188.3563938","DOIUrl":"https://doi.org/10.1145/3527188.3563938","url":null,"abstract":"This research presents an exploration of a post-anthropocentric perspective towards the design of human-agent interaction (HAI), observed through the development of a wearable device prototype. The device argues for an alternative stance towards the orthodox human-centered perspectives in HAI executed through human-computer interaction (HCI). Taking a research through design approach the device presents a speculative scenario of agential interaction in the form of a perceptual companion interface. It overlaps with post-anthropocentric philosophical discourse around the phenomenon of agential perception present in contemporary imaginings of HCI. The prototype intends to act as a catalyst for imagining post-anthropocentric agential interactions within HAI.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133247024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Don’t Take it Personally: Resistance to Individually Targeted Recommendations from Conversational Recommender Agents","authors":"Guy Laban, Theo Araujo","doi":"10.1145/3527188.3561929","DOIUrl":"https://doi.org/10.1145/3527188.3561929","url":null,"abstract":"Conversational recommender agents are artificially intelligent recommender systems that provide users with individually-tailored recommendations by targeting individual needs and communicating in a flowing dialogue. These are widely available online, communicating with users while demonstrating human-like (anthropomorphic) social cues. Nevertheless, little is known about the effect of their anthropomorphic cues on users’ resistance to the system and recommendations. Accordingly, this study examined the extent to which conversational recommender agents’ anthropomorphic cues and the type of recommendations provided (user-initiated and system-initiated) influenced users’ perceptions of control, trustworthiness, and the risk of using the platform. The study assessed how these perceptions, in turn, influence users’ adherence to the recommendations. An online experiment was conducted among users with conversational recommender agents and web recommender platforms that provided user-initiated or system-initiated restaurant recommendations. The results entail that user-initiated recommendations, compared to system-initiated, are less likely to affect users’ resistance to the system and are more likely to affect their adherence to the recommendations provided. Furthermore, the study’s findings suggest that these effects are amplified for conversational recommender agents, demonstrating anthropomorphic cues, in contrast to traditional systems as web recommender platforms.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133448556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Partners Who Grow Together: Collaborative Machine Learning in Video Game AI Design","authors":"Jibing Shi, Richard J. Savery","doi":"10.1145/3527188.3563937","DOIUrl":"https://doi.org/10.1145/3527188.3563937","url":null,"abstract":"The majority of research at the intersection of AI and video games focuses on developing agents capable of playing games without human input, or developing AI game enemies. The research in this paper explores a counter approach, whereby a player trains an a AI partner during game play and learns to play cooperatively with the agent. We created a 2D video game that allows the player to cooperate with an AI agent manipulated by two underlying algorithms, either with reinforcement learning or a random process. For our reinforcement learning approach we used a Q-learning table, that is updated based on the player. We found that players engaged strongly with the idea of training their own custom AI agent and believe this shows significant potential for future exploration.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116758586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Self-Experience and Situational Awareness on Empathic Help to Virtual Agents","authors":"J. Morita, Yuna Kano","doi":"10.1145/3527188.3561938","DOIUrl":"https://doi.org/10.1145/3527188.3561938","url":null,"abstract":"What factors induce human empathy for virtual agents? To explore this question, we examined the relationship between humans and agents through the inverted cyberball game, wherein a participant is able to help an ostracized agent. In particular, this study attempted to distinguish factors of self-experience from situational awareness as the basis for empathy. In one scenario, participants completed a pre-task in which they had the same experience as the ostracized agent, while in another scenario, participants just observed ostracizing relations between three agents. As a result, self-experience with the pre-task did not influence the occurrence of helping behavior for the ostracized agent in the main task. Rather, the factor inducing the helping behavior was whether the participants noticed the ostracizing relations. This attentiveness was also associated with an empathetic trait, measured as the empathy quotient (EQ). From these results, we concluded that empathy is an ability to adopt others’ perspectives, even without direct or first-hand experience with others’ pain.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124293150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can Moral Rightness (Utilitarian Approach) Outweigh the Ingroup Favoritism Bias in Human-Agent Interaction","authors":"Aldo Chavez Gonzalez, Marlena R. Fraune, Ricarda Wullenkord","doi":"10.1145/3527188.3561930","DOIUrl":"https://doi.org/10.1145/3527188.3561930","url":null,"abstract":"As robots increasingly assist more people, tendencies of becoming attached to these robots and treat them well have risen; even to the point of treating robot teammates better than human opponents in laboratory settings. We examined how far this ingroup favoritism extends and how to mitigate it. We did this by making participants play an online game in teams of two humans and two robots against two humans and two robots. After the game, they selected someone to perform an additional unpleasant task (according to the results of our pilot test); we manipulated that task to be equally unpleasant for ingroup and outgroup members in one condition, and more unpleasant for outgroup than for ingroup members in the other condition. We did this to examine if the moral principle of utilitarianism (i.e., social justice and fairness) would outweigh ingroup favoritism. In the results, participants showed typical group dynamics like ingroup favoritism. The opportunity to behave in a utilitarian way failed to reverse the ingroup favoritism effect. Interestingly, participants sacrificed their ingroup robot more than they sacrificed even outgroup players. We speculate about why the study showed these unexpected findings and what it may mean for HRI.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125927896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot Persuasiveness Depending on User Gender","authors":"Isabella Ågren, Sofia Thunberg","doi":"10.1145/3527188.3563939","DOIUrl":"https://doi.org/10.1145/3527188.3563939","url":null,"abstract":"Robot’s persuasive abilities have previously shown contradictory results with some depending on robot gender and some on user gender. Therefore, we conducted a replication study with the Furhat robot. The study measured differences in how persuasive (ethos, pathos, and logos) a more feminine and a more masculine looking robot was perceived by female or male participants. We hypothesised that a platform with both feminine/masculine faces and voices enables larger differences between the robot’s persuasiveness compared to a female/male NAO robot (original study). Results showed statistically significant differences regarding persuasiveness between participant gender but none for the robot gender. One difference, compared to the original study, was that men rated ethos higher than women did.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127797757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hype Dlive: XR Live Performance System for Improving Passenger Comfort in Autonomous Driving","authors":"Takuto Akiyoshi, Masashi Abe, Yuki Shimizu, Yusaku Takahama, Koki Nagata, Yosuke Okami, Taishi Sawabe","doi":"10.1145/3527188.3563916","DOIUrl":"https://doi.org/10.1145/3527188.3563916","url":null,"abstract":"One of the stress factors for passengers in an autonomous vehicle is the stress from vehicle behavior caused by unpredictable acceleration, deceleration, and route changes. Although there are researches on stress reduction methods using behavior control and information presentation to realize a comfortable autonomous vehicle, there are still few methods for stress reduction and dissipation through application to XR entertainment. In this work, we propose \"Hype Dlive\", a system that utilizes vehicle behavior as XR live music performance, and aims to improve passenger comfort by utilizing vehicle behavior as tactile and vestibular stimuli to generate a sense of presence and excitement.","PeriodicalId":179256,"journal":{"name":"Proceedings of the 10th International Conference on Human-Agent Interaction","volume":"367 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114059207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}