To Err is Automation: Can Trust be Repaired by the Automated Driving System After its Failure?

Authors: Peng Liu; Yueying Chu; Guanqun Wang; Zhigang Xu
DOI: 10.1109/THMS.2024.3434680
Journal: IEEE Transactions on Human-Machine Systems (JCR Q2, Computer Science, Artificial Intelligence; Impact Factor 3.5)
Publication date: 2024-08-13
URL: https://ieeexplore.ieee.org/document/10636033/
Failures of the automated driving system (ADS) in automated vehicles (AVs) can damage driver–ADS cooperation (e.g., by undermining trust) and traffic safety. Researchers have suggested infusing a human-like ability, active trust repair, into automated systems to mitigate the broken trust and other negative impacts resulting from their failures, and trust repair is regarded as a key ergonomic design feature in automated systems. Trust repair strategies (e.g., apology) have been examined and supported by some evidence in controlled environments; however, they have rarely been subjected to empirical evaluation in more naturalistic settings. To fill this gap, we conducted a test track study in which participants (N = 257) experienced an ADS failure, and we tested the influence of the ADS's trust repair on trust and other psychological responses. Half of the participants (n = 128) received a verbal message from the ADS (consisting of an apology, an explanation, and a promise) after its failure, delivered either by a human voice (n = 63) or by Apple's Siri (n = 65). We measured seven psychological responses to AVs and the ADS (e.g., trust and behavioral intention (BI)). We found that neither strategy repaired the damaged trust. The human-voice repair strategy mitigated, to some degree, other detrimental effects of the ADS failure (e.g., reductions in BI), but this effect was notable only among participants without substantial driving experience. These findings point to the importance of conducting ecologically valid and field studies to validate human-like trust repair strategies in human–automation interaction, and of developing trust repair strategies specific to safety-critical situations.
Journal introduction:
The IEEE Transactions on Human-Machine Systems covers human–machine systems, including human systems and human–organizational interactions: cognitive ergonomics, system test and evaluation, and human information processing concerns in systems and organizations.