Calibrating Trust in Human-Drone Cooperative Navigation
Kazuo Okamura, S. Yamada
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2020
DOI: 10.1109/RO-MAN47096.2020.9223509
Trust calibration is essential to successful cooperation between humans and autonomous systems such as self-driving cars and autonomous drones. If users over-estimate the capability of an autonomous system, over-trust occurs: users rely on the system even in situations where they could outperform it. Conversely, if users under-estimate the capability of a system, under-trust occurs, and they tend not to use the system at all. Since both situations hamper cooperation in terms of safety and efficiency, a mechanism that helps users maintain an appropriate level of trust in autonomous systems would be highly desirable. In this paper, we first propose an adaptive trust calibration framework that can detect over- and under-trust from users' behaviors and encourage them to keep an appropriate trust level in a "continuous" cooperative task. We then conduct experiments to evaluate our method with semi-automatic drone navigation. In the experiments, we introduce an A-B-A sequence of weather conditions to investigate our method under bidirectional trust changes. The results show that our method adaptively detected trust changes and encouraged users to calibrate their trust during a continuous cooperative task. We believe that the findings of this study will contribute to better user-interface designs for collaborative systems.
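As a toy illustration only (not the authors' behavior-based detection method, whose details are not given in this abstract), the over/under-trust distinction can be thought of as a mismatch between how often a user relies on the automation and how reliable the automation actually is. All names and thresholds below are hypothetical:

```python
def classify_trust(reliance_rate: float, system_reliability: float,
                   tolerance: float = 0.1) -> str:
    """Classify a user's trust state from two observed rates in [0, 1].

    reliance_rate: fraction of decisions the user delegated to the system.
    system_reliability: fraction of those decisions the system handles well.
    tolerance: hypothetical dead band within which trust counts as calibrated.
    """
    gap = reliance_rate - system_reliability
    if gap > tolerance:
        return "over-trust"      # relying more than the system's capability warrants
    if gap < -tolerance:
        return "under-trust"     # relying less than the system's capability warrants
    return "calibrated"

# A user delegating 90% of decisions to a system that is only 60% reliable
# would be flagged as over-trusting under this toy model.
print(classify_trust(0.9, 0.6))
```

An adaptive framework like the one described would go further, detecting such mismatches from behavior over time and prompting the user to re-calibrate.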