Philipp Wintersberger, Hannah Nicklas, Thomas Martlbauer, Stephan Hammer, A. Riener
{"title":"可解释的自动化:个性化和自适应的用户界面,以促进对驾驶自动化系统的信任和理解","authors":"Philipp Wintersberger, Hannah Nicklas, Thomas Martlbauer, Stephan Hammer, A. Riener","doi":"10.1145/3409120.3410659","DOIUrl":null,"url":null,"abstract":"Recent research indicates that transparent information on the behavior of automated vehicles positively affects trust, but how such feedback should be composed and if user trust influences the amount of desired feedback is relatively unexplored. Consequently, we conducted an interview study with (N=56) participants, who were presented different videos of an automated vehicle from the ego-perspective. Subjects rated their trust in the vehicle in these situations and could arbitrarily select objects in the driving environment that should be included in augmented reality feedback systems, so that they are able to trust the vehicle and understand its actions. The results show an inverse correlation between situational trust and participants’ desire for feedback and further reveal reasons why certain objects should be included in feedback systems. The study also highlights the need for more adaptive in-vehicle interfaces for trust calibration and outlines necessary steps for automatically generating feedback in the future.","PeriodicalId":373501,"journal":{"name":"12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"32","resultStr":"{\"title\":\"Explainable Automation: Personalized and Adaptive UIs to Foster Trust and Understanding of Driving Automation Systems\",\"authors\":\"Philipp Wintersberger, Hannah Nicklas, Thomas Martlbauer, Stephan Hammer, A. 
Riener\",\"doi\":\"10.1145/3409120.3410659\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent research indicates that transparent information on the behavior of automated vehicles positively affects trust, but how such feedback should be composed and if user trust influences the amount of desired feedback is relatively unexplored. Consequently, we conducted an interview study with (N=56) participants, who were presented different videos of an automated vehicle from the ego-perspective. Subjects rated their trust in the vehicle in these situations and could arbitrarily select objects in the driving environment that should be included in augmented reality feedback systems, so that they are able to trust the vehicle and understand its actions. The results show an inverse correlation between situational trust and participants’ desire for feedback and further reveal reasons why certain objects should be included in feedback systems. The study also highlights the need for more adaptive in-vehicle interfaces for trust calibration and outlines necessary steps for automatically generating feedback in the future.\",\"PeriodicalId\":373501,\"journal\":{\"name\":\"12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications\",\"volume\":\"40 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"32\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"12th International Conference on Automotive User Interfaces and Interactive Vehicular 
Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3409120.3410659\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3409120.3410659","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Explainable Automation: Personalized and Adaptive UIs to Foster Trust and Understanding of Driving Automation Systems
Recent research indicates that transparent information about the behavior of automated vehicles positively affects trust, but how such feedback should be composed, and whether user trust influences the amount of feedback desired, remains relatively unexplored. Consequently, we conducted an interview study with N=56 participants, who were shown different videos of an automated vehicle from the ego perspective. Subjects rated their trust in the vehicle in these situations and could freely select objects in the driving environment that should be included in augmented reality feedback systems, so that they could trust the vehicle and understand its actions. The results show an inverse correlation between situational trust and participants’ desire for feedback, and further reveal why certain objects should be included in feedback systems. The study also highlights the need for more adaptive in-vehicle interfaces for trust calibration and outlines the steps necessary to generate such feedback automatically in the future.