Hoang-Long Cao, Thang Thien Tran, Thong Van Nguyen, Phuong Minh Nguyen, Tuan Van Nguyen, Vu Duc Truong, Hoang-Dung Nguyen, Chi-Ngon Nguyen
"Ethical Acceptability of Robot-Assisted Therapy for Children with Autism: A Survey From a Developing Country." International Journal of Social Robotics, published 2023-10-01. DOI: 10.1007/s12369-023-01060-7
Rachel Hoi Yan Au, Marlena R. Fraune, Ricarda Wullenkord
"Unethical Robot Teammates: The Effects of Wrongdoer Identity and Entity Type on Whistleblowing and Intergroup Dynamics." International Journal of Social Robotics, published 2023-10-01. DOI: 10.1007/s12369-023-01057-2
Xinyi Zhang, Sun Kyong Lee, Hoyoung Maeng, Sowon Hahn
"Effects of Failure Types on Trust Repairs in Human–Robot Interactions." International Journal of Social Robotics, published 2023-09-29. DOI: 10.1007/s12369-023-01059-0
Sanna Kuoppamäki, Razan Jaberibraheem, Mikaela Hellstrand, Donald McMillan
"Designing Multi-Modal Conversational Agents for the Kitchen with Older Adults: A Participatory Design Study." International Journal of Social Robotics, published 2023-09-26. DOI: 10.1007/s12369-023-01055-4

Abstract: Conversational agents (CAs) are increasingly used to manage and coordinate household chores and everyday activities at home. However, these technologies should adapt to age-specific characteristics to be considered beneficial for the ageing population. This study presents a participatory design of a conversational agent providing cognitive support in recipe following and nutrition advice for adults aged 65 and over. Through a qualitative thematic analysis, the study explores older adults' expectations of, interactions with, and experiences of the agent in order to identify age-specific challenges of interacting with CAs. The data consist of a participatory design workshop with eight older adults (aged 65 and over), followed by a Wizard of Oz study in which ten older adults interacted with the agent in a kitchen environment in a laboratory setting. Results demonstrate that older adults consider conversational agents beneficial for providing personalised recipe recommendations, advising the user on choosing appropriate ingredients, and reminding them of their dietary intake. When interacting with the agent, older adults struggled with confirmation and repetition, questioning and correcting, a lack of conversational responses, and difficulties in hearing and understanding the multi-modal interaction. Older adults experience agents as collaborators, but not as conversational partners. The study concludes that the accessibility and inclusiveness of conversational agents with regard to voice interaction could be improved by further developing participatory methods with older adults.
Julia G. Stapels, Angelika Penner, Niels Diekmann, Friederike Eyssel
"Never Trust Anything That Can Think for Itself, if You Can't Control Its Privacy Settings: The Influence of a Robot's Privacy Settings on Users' Attitudes and Willingness to Self-disclose." International Journal of Social Robotics, published 2023-09-22. DOI: 10.1007/s12369-023-01043-8

Abstract: When encountering social robots, potential users often face a dilemma between privacy and utility: high utility often comes at the cost of lenient privacy settings that allow the robot to store personal data and remain permanently connected to the internet, with the associated data-security risks. To date, however, it remains unclear how this dilemma affects attitudes and behavioral intentions towards the robot. To shed light on the influence of a social robot's privacy settings on robot-related attitudes and behavioral intentions, we conducted two online experiments with a total sample of N = 320 German university students. In Experiment 1, we hypothesized that strict privacy settings, compared to lenient ones, would result in more favorable attitudes and behavioral intentions towards the robot. In Experiment 2, we expected more favorable attitudes and behavioral intentions when participants independently chose the robot's privacy settings rather than evaluating preset ones. The two manipulations, however, appeared to influence attitudes in diverging domains: while strict privacy settings increased trust, decreased subjective ambivalence, and increased willingness to self-disclose compared to lenient settings, the choice of privacy settings primarily affected robot likeability, contact intentions, and the depth of potential self-disclosure. Strict privacy settings might reduce the risk associated with robot contact and thereby reduce risk-related attitudes and increase trust-dependent behavioral intentions. If allowed to choose, however, people make the robot "their own" by making a privacy–utility tradeoff. This tradeoff is likely a compromise between full privacy and full utility and therefore does not reduce the risks of robot contact as much as strict privacy settings do. Future experiments should replicate these results in real-life human–robot interaction and across different scenarios to further investigate the psychological mechanisms behind these divergences.
Oshrat Ayalon, Hannah Hok, Alex Shaw, Goren Gordon
"When it is ok to give the Robot Less: Children's Fairness Intuitions Towards Robots." International Journal of Social Robotics, published 2023-09-20. DOI: 10.1007/s12369-023-01047-4

Abstract: Children develop intuitions about fairness relatively early in development. While we know that children believe other humans care about distributional fairness, considerably less is known about whether they believe other agents, such as robots, care as well. In two experiments (N = 273), we investigated 4- to 9-year-old children's intuitions about whether robots would be as upset about unfair treatment as human children. Children were told about a scenario in which resources were split between a human child and a target recipient, who across two conditions was either another child or a robot; the target recipient received less than the other child. Children were then asked to evaluate how fair the distribution was and whether the target recipient would be upset. Experiments 1 and 2 used the same design, but Experiment 2 also included a video demonstrating the robot's mechanistic "robotic" movements. Our results show that children thought it was more fair to share unequally when the disadvantaged recipient was a robot rather than a child (Experiments 1 and 2). Furthermore, children thought that the child would be more upset than the robot (Experiment 2). Finally, this tendency to treat the two conditions differently became stronger with age (Experiment 2). These results suggest that young children treat robots and children similarly in resource-allocation tasks but increasingly differentiate between them with age: children evaluate inequality as less unfair when the target recipient is a robot, and think that robots will be less upset about inequality.
Jacqueline Urakami
"Do Emotional Robots Get More Help? How a Robot's Emotions Affect Collaborators' Willingness to Help." International Journal of Social Robotics, published 2023-09-19. DOI: 10.1007/s12369-023-01058-1
Kate K. Mays, James J. Cummings
"The Power of Personal Ontologies: Individual Traits Prevail Over Robot Traits in Shaping Robot Humanization Perceptions." International Journal of Social Robotics, published 2023-09-18. DOI: 10.1007/s12369-023-01045-6
Izidor Mlakar, Urška Smrke, Vojko Flis, Nina Kobilica, Samo Horvat, Bojan Ilijevec, Bojan Musil, Nejc Plohl
"Using Structural Equation Modeling to Explore Patients' and Healthcare Professionals' Expectations and Attitudes Towards Socially Assistive Humanoid Robots in Nursing and Care Routine." International Journal of Social Robotics, published 2023-09-15. DOI: 10.1007/s12369-023-01039-4

Abstract: Healthcare systems around the world currently face various challenges, including population ageing and workforce shortages. As a result, existing, overworked staff struggle to meet ever-increasing demands and provide the desired quality of care. One promising technological solution that could complement the human workforce and alleviate some of this workload is socially assistive humanoid robots. Despite their potential, however, their implementation is often hampered by low acceptance among key stakeholders, namely patients and healthcare professionals. The present study therefore first investigated the extent to which these stakeholders accept the use of socially assistive humanoid robots in nursing and care routines, and second explored the characteristics that contribute to higher or lower acceptance within these groups, with particular emphasis on demographic variables, technology expectations, ethical acceptability, and negative attitudes. In Study 1, conducted with a sample of 490 healthcare professionals, structural equation modeling showed that acceptance is driven primarily by aspects of ethical acceptability, although education and technology expectations also exert an indirect effect. In Study 2, conducted with a sample of 371 patients, expectations regarding robot capabilities and attitudes towards the social influence of robots emerged as important predictors of acceptance. Moreover, although acceptance rates differed between tasks, both studies show relatively high acceptance of socially assistive humanoid robots. Despite certain limitations, the findings enhance our understanding of stakeholders' perceptions and acceptance of socially assistive humanoid robots in hospital environments and may guide their deployment.
David Cameron, Emily C. Collins, Stevienna de Saille, Iveta Eimontaite, Alice Greenwood, James Law
"The Social Triad Model: Considering the Deployer in a Novel Approach to Trust in Human–Robot Interaction." International Journal of Social Robotics, published 2023-09-13. DOI: 10.1007/s12369-023-01048-3

Abstract: There is increasing interest in considering, measuring, and implementing trust in human–robot interaction (HRI). New avenues in this field include identifying social means by which robots can influence trust, and identifying social aspects of trust such as perceptions of robots' integrity, sincerity, or even benevolence. However, questions remain regarding robots' authenticity in obtaining trust through social means and their capacity to increase such experiences through social interaction with users. We propose that the dyadic model of HRI misses a key complexity: a robot's trustworthiness may be contingent on the user's relationship with, and opinion of, the individual or organisation deploying the robot (termed here the Deployer). We present a three-part case study on researching HRI, together with a LEGO® Serious Play® focus group on care robotics, to show how Users' trust towards the Deployer can affect trust towards robots and robotic research. Our Social Triad model (User, Robot, Deployer) offers novel avenues for exploring trust in a social context.