Human-Machine Trust and Calibration Based on Human-in-the-Loop Experiment
Yifan Wang, Jianbin Guo, S. Zeng, Qirui Mao, Zhenping Lu, Zengkai Wang
2022 4th International Conference on System Reliability and Safety Engineering (SRSE), 15 December 2022
DOI: 10.1109/SRSE56746.2022.10067635
While automation systems bring efficiency improvements, operators' trust in automation has become an important factor affecting the safety of human-machine systems. An operator's miscalibrated trust in the automation system (undertrust or overtrust) means that the human-automation system is not always well matched. In this paper, we took an aircraft engine fire alarm system as the research scenario, carried out a human-in-the-loop simulation experiment in which aircraft engine fire alarms were injected, and measured each subject's trust level by subjective report. Based on the experimental data, we then studied regularities of human-machine trust, including trust anchoring (when subjects are anchored with a known false alarm rate, their trust fluctuates over a smaller range than when the false alarm rate is unknown), trust elasticity, and the primacy effect. We proposed a human-machine trust calibration method to prevent undertrust and overtrust during human-machine interaction, and verified several forms of the method. We found that reminding subjects when the human error probability (HEP) ≥ 0.3, while also stating whether the source of the human error is overtrust or undertrust, is the more effective calibration form and generally reduces the human error probability.
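For illustration only, here is a minimal Python sketch of the calibration rule described in the abstract. Only the trigger condition (HEP ≥ 0.3) and the idea of declaring the error source (overtrust vs. undertrust) come from the paper; the data structures, the running HEP estimator, and the heuristic used to classify the error source are assumptions invented for this sketch, not the authors' method.

```python
# Hypothetical sketch of the trust-calibration reminder from the abstract.
# Assumptions: the HEP estimator, TrialRecord fields, and the
# overtrust/undertrust classification heuristic are illustrative only.

from dataclasses import dataclass

HEP_THRESHOLD = 0.3  # reminder trigger from the abstract: HEP >= 0.3

@dataclass
class TrialRecord:
    alarm_was_true: bool     # was the injected alarm a real fire (vs. a false alarm)?
    operator_complied: bool  # did the subject act as if the alarm were true?

def is_error(t: TrialRecord) -> bool:
    """An error: complying with a false alarm, or ignoring a true one."""
    return t.operator_complied != t.alarm_was_true

def estimate_hep(history: list[TrialRecord]) -> float:
    """Naive running HEP estimate: error rate over the recorded trials."""
    if not history:
        return 0.0
    return sum(is_error(t) for t in history) / len(history)

def classify_error_source(history: list[TrialRecord]) -> str:
    """Illustrative heuristic: overtrust = complying with false alarms,
    undertrust = ignoring true alarms; report the dominant kind."""
    overtrust = sum(t.operator_complied and not t.alarm_was_true for t in history)
    undertrust = sum((not t.operator_complied) and t.alarm_was_true for t in history)
    return "overtrust" if overtrust >= undertrust else "undertrust"

def calibration_reminder(history: list[TrialRecord]) -> str | None:
    """Return a reminder message when HEP crosses the threshold, else None."""
    hep = estimate_hep(history)
    if hep >= HEP_THRESHOLD:
        source = classify_error_source(history)
        return (f"Reminder: estimated HEP = {hep:.2f} (>= {HEP_THRESHOLD}); "
                f"recent errors suggest {source} in the alarm system.")
    return None
```

In the experiment such a reminder would presumably be presented to the subject during the simulation; here it is simply returned as a string so the trigger logic can be tested in isolation.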