Human-Machine Trust and Calibration Based on Human-in-the-Loop Experiment

Yifan Wang, Jianbin Guo, S. Zeng, Qirui Mao, Zhenping Lu, Zengkai Wang
2022 4th International Conference on System Reliability and Safety Engineering (SRSE), published 2022-12-15. DOI: 10.1109/SRSE56746.2022.10067635
Cited by: 0

Abstract

While automation systems bring efficiency improvements, operators' trust in automation has become an important factor in the safety of human-machine systems. An operator's inappropriate trust in the automation system (undertrust or overtrust) means that the human and the automation are not always well matched. In this paper, we took an aircraft engine fire alarm system as the research scenario, carried out a human-in-the-loop simulation experiment by injecting engine fire alarms, and measured subjects' trust levels with a subjective report method. Based on the experimental data, we then studied regularities of human-machine trust, including trust anchoring (when subjects are anchored with a known false alarm rate, their trust fluctuates over a smaller range than when the false alarm rate is unknown), trust elasticity, and the primacy effect. A human-machine trust calibration method was proposed to prevent undertrust and overtrust during human-machine interaction, and different forms of the calibration method were verified. Reminding subjects when the human error probability (HEP) ≥ 0.3, while also stating whether the source of the human error is overtrust or undertrust, was found to be the more effective calibration method and generally reduced the human error probability.
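The calibration rule described in the abstract, a reminder triggered when the estimated HEP reaches 0.3 together with a statement of the error source, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the trust/reliability comparison used to classify the error source, and all parameters other than the 0.3 threshold are assumptions.

```python
# Hypothetical sketch of the calibration reminder from the abstract:
# remind the operator when estimated human error probability (HEP) >= 0.3,
# and state whether the likely source is overtrust or undertrust.
# Only the 0.3 threshold comes from the paper; the rest is illustrative.

HEP_THRESHOLD = 0.3  # reminder trigger reported in the abstract


def calibration_reminder(hep, trust, reliability):
    """Return a reminder message, or None if no calibration is needed.

    hep         -- estimated human error probability, in [0, 1]
    trust       -- operator's reported trust in the alarm system, in [0, 1]
    reliability -- the system's actual reliability (e.g. 1 - false alarm rate)
    """
    if hep < HEP_THRESHOLD:
        return None
    # Assumed classification: trusting the system more than its actual
    # reliability warrants is overtrust; trusting it less is undertrust.
    source = "overtrust" if trust > reliability else "undertrust"
    return (f"Warning: HEP = {hep:.2f} >= {HEP_THRESHOLD}; "
            f"likely source of error: {source}")
```

On this sketch, a subject who trusts an alarm system well beyond its reliability would receive an "overtrust" reminder once their error rate crosses the threshold, matching the paper's finding that naming the error source makes the reminder more effective.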