A methodology to detect pilot perception of warning information by eye movement data and deep residual shrinkage networks

C.-Q. Yan, Y.-C. Sun, X. Zhang, H. Mao, J.-Y. Jiang
{"title":"A methodology to detect pilot perception of warning information by eye movement data and deep residual shrinkage networks","authors":"C.-Q. Yan, Y.-C. Sun, X. Zhang, H. Mao, J.-Y. Jiang","doi":"10.1017/aer.2022.101","DOIUrl":null,"url":null,"abstract":"Abstract This paper studied the use of eye movement data to form criteria for judging whether pilots perceive emergency information such as cockpit warnings. In the experiment, 12 subjects randomly encountered different warning information while flying a simulated helicopter, and their eye movement data were collected synchronously. Firstly, the importance of the eye movement features was calculated by ANOVA (analysis of variance). According to the sorting of the importance and the Euclidean distance of each eye movement feature, the warning information samples with different eye movement features were obtained. Secondly, the residual shrinkage network modules were added to CNN (convolutional neural network) to construct a DRSN (deep residual shrinkage networks) model. Finally, the processed warning information samples were used to train and test the DRSN model. In order to verify the superiority of this method, the DRSN model was compared with three machine learning models, namely SVM (support vector machine), RF (radom forest) and BPNN (backpropagation neural network). Among the four models, the DRSN model performed the best. When all eye movement features were selected, this model detected pilot perception of warning information with an average accuracy of 90.4%, of which the highest detection accuracy reached 96.4%. Experiments showed that the DRSN model had advantages in detecting pilot perception of warning information.","PeriodicalId":22567,"journal":{"name":"The Aeronautical Journal (1968)","volume":"1 1","pages":"1219 - 1233"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Aeronautical Journal (1968)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/aer.2022.101","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This paper studied the use of eye movement data to form criteria for judging whether pilots perceive emergency information such as cockpit warnings. In the experiment, 12 subjects randomly encountered different warning information while flying a simulated helicopter, and their eye movement data were collected synchronously. First, the importance of the eye movement features was calculated by ANOVA (analysis of variance). Warning-information samples with different eye movement features were then obtained according to the importance ranking and the Euclidean distance of each feature. Second, residual shrinkage network modules were added to a CNN (convolutional neural network) to construct a DRSN (deep residual shrinkage network) model. Finally, the processed warning-information samples were used to train and test the DRSN model. To verify the superiority of this method, the DRSN model was compared with three machine learning models: SVM (support vector machine), RF (random forest) and BPNN (backpropagation neural network). Among the four models, the DRSN model performed the best. When all eye movement features were selected, it detected pilot perception of warning information with an average accuracy of 90.4%, with the highest detection accuracy reaching 96.4%. The experiments showed that the DRSN model has advantages in detecting pilot perception of warning information.
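
The abstract's first step ranks eye movement features by ANOVA importance. The sketch below illustrates that kind of per-feature F-test ranking with scikit-learn; the placeholder data, feature count and labels are illustrative assumptions, not the paper's dataset.

```python
# Hedged sketch: ranking eye movement features by one-way ANOVA F-score.
# The data here is random placeholder data, not the authors' experiment.
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # (n_samples, n_features) eye movement features
y = rng.integers(0, 2, size=200)     # 1 = warning perceived, 0 = not perceived

f_scores, p_values = f_classif(X, y) # one-way ANOVA per feature
ranking = np.argsort(f_scores)[::-1] # most important feature first
print(ranking, f_scores[ranking])
```

The core model adds residual shrinkage modules to a CNN. A minimal sketch of one such building unit is shown below, following the general idea of deep residual shrinkage networks (channel-wise soft thresholding with thresholds learned by a small gating sub-network); the 1-D convolutional layout, layer sizes and class name are assumptions for illustration, not the authors' exact architecture.

```python
# Hedged sketch of a residual shrinkage building unit (channel-wise thresholds).
import torch
import torch.nn as nn

class ResidualShrinkageBlock1d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )
        # Small sub-network that learns a per-channel threshold from the
        # average absolute activation (SE-style gating with a sigmoid).
        self.gate = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.body(x)                    # (N, C, L)
        abs_mean = out.abs().mean(dim=2)      # (N, C)
        tau = abs_mean * self.gate(abs_mean)  # per-channel thresholds
        tau = tau.unsqueeze(2)                # (N, C, 1) for broadcasting
        # Soft thresholding: shrink small, noise-like activations toward zero.
        out = torch.sign(out) * torch.clamp(out.abs() - tau, min=0.0)
        return out + x                        # residual connection

# Usage sketch: a batch of 1-D eye movement feature sequences.
block = ResidualShrinkageBlock1d(channels=8)
features = torch.randn(4, 8, 32)
print(block(features).shape)  # torch.Size([4, 8, 32])
```

The soft-thresholding step is what distinguishes this unit from a plain residual block: activations below the learned threshold are zeroed, which is intended to suppress noise-like components in the eye movement signal before classification.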