Zhiyuan Shen, Xueyan Li, Junqi Bai, Kai Wang, Yifan Xu
DOI: 10.1049/bme2/7626919
Journal: IET Biometrics, vol. 2025, no. 1 (impact factor 1.8; JCR Q3, Computer Science, Artificial Intelligence; CAS Region 4, Computer Science)
Published: 2025-08-22 (journal article)
Article page: https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/bme2/7626919
PDF: https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/7626919
A Dynamic Interactive Fusion Model for Extracting Fatigue Features Based on the Audiovisual Data Flow of Air Traffic Controllers
Fatigue among air traffic controllers is a contributing factor in civil aviation accidents. Existing methods for extracting and fusing fatigue features face two main challenges: (1) the low accuracy of traditional single-modality fatigue recognition methods, and (2) the neglect of cross-modal correlations in traditional multimodal feature concatenation and fusion. This paper proposes an interactive algorithm for fusing and recognizing multimodal fatigue features that combines multihead attention (MHA) and cross-attention (XATTN), built on improved speech and facial fatigue recognition models. First, an improved Conformer model, which combines a convolutional module with a transformer encoder, is applied to controllers' radiotelephony communication data, using the filter-bank method to extract deep speech features. Second, controllers' facial data are processed with pointwise convolutions in a stack of inverted residual layers, which facilitates the extraction of facial features. Third, speech and facial features are fused interactively by combining MHA and XATTN, achieving high accuracy in recognizing the fatigue state of controllers working in complex operational environments. A series of experiments was conducted on audiovisual data sets collected from actual air traffic control (ATC) missions. Compared with four competing multimodal fusion methods, the proposed method achieved a recognition accuracy of 99.2%, which is 8.9% higher than that of the speech-only model and 0.4% higher than that of the face-only model.
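The abstract does not reproduce the paper's implementation, but the XATTN idea it describes, each modality attending to the other before fusion, can be illustrated with a minimal NumPy sketch. All array shapes, feature dimensions, and function names below are hypothetical, not the authors' actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Scaled dot-product cross-attention: queries come from one
    modality, keys and values from the other."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)   # (Tq, Tkv) similarity
    return softmax(scores, axis=-1) @ kv_feats   # (Tq, d) attended context

rng = np.random.default_rng(0)
d = 32                                   # hypothetical feature dimension
speech = rng.standard_normal((50, d))    # stand-in for frame-level speech features
face = rng.standard_normal((20, d))      # stand-in for facial features

# Bidirectional interaction: each modality queries the other,
# so the fusion reflects cross-modal correlations rather than
# plain feature concatenation.
speech_ctx = cross_attention(speech, face)   # speech attends to facial evidence
face_ctx = cross_attention(face, speech)     # face attends to speech evidence

# Pool over time and concatenate for a downstream fatigue classifier.
fused = np.concatenate([speech_ctx.mean(axis=0), face_ctx.mean(axis=0)])
print(fused.shape)  # (64,)
```

In the paper this bidirectional exchange is combined with multihead attention and feeds a fatigue classifier; the sketch above only shows why cross-attention captures inter-modality correlations that simple concatenation discards.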
IET Biometrics (JCR category: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
CiteScore: 5.90
Self-citation rate: 0.00%
Articles per year: 46
Review turnaround: 33 weeks
About the journal:
The field of biometric recognition - automated recognition of individuals based on their behavioural and biological characteristics - has now reached a level of maturity where viable practical applications are both possible and increasingly available. The biometrics field is characterised especially by its interdisciplinarity since, while focused primarily around a strong technological base, effective system design and implementation often requires a broad range of skills encompassing, for example, human factors, data security and database technologies, psychological and physiological awareness, and so on. Also, the technology focus itself embraces diversity, since the engineering of effective biometric systems requires integration of image analysis, pattern recognition, sensor technology, database engineering, security design and many other strands of understanding.
The scope of the journal is intentionally relatively wide. While focusing on core technological issues, it is recognised that these may be inherently diverse and in many cases may cross traditional disciplinary boundaries. The scope of the journal will therefore include any topics where it can be shown that a paper can increase our understanding of biometric systems, signal future developments and applications for biometrics, or promote greater practical uptake for relevant technologies:
Development and enhancement of individual biometric modalities including the established and traditional modalities (e.g. face, fingerprint, iris, signature and handwriting recognition) and also newer or emerging modalities (gait, ear-shape, neurological patterns, etc.)
Multibiometrics, theoretical and practical issues, implementation of practical systems, multiclassifier and multimodal approaches
Soft biometrics and information fusion for identification, verification and trait prediction
Human factors and the human-computer interface issues for biometric systems, exception handling strategies
Template construction and template management, ageing factors and their impact on biometric systems
Usability and user-oriented design, psychological and physiological principles and system integration
Sensors and sensor technologies for biometric processing
Database technologies to support biometric systems
Implementation of biometric systems, security engineering implications, smartcard and associated technologies in implementation, implementation platforms, system design and performance evaluation
Trust and privacy issues, security of biometric systems and supporting technological solutions, biometric template protection
Biometric cryptosystems, security and biometrics-linked encryption
Links with forensic processing and cross-disciplinary commonalities
Core underpinning technologies (e.g. image analysis, pattern recognition, computer vision, signal processing, etc.), where the specific relevance to biometric processing can be demonstrated
Applications and application-led considerations
Position papers on technology or on the industrial context of biometric system development
Adoption and promotion of standards in biometrics, improving technology acceptance, deployment and interoperability, avoiding cross-cultural and cross-sector restrictions
Relevant ethical and social issues