An Efficient Human-Computer Interaction in Battlefield Environment via Multi-stream Learning

Peizhuo Li, Chen Li, Guanlin Li, Kuo Guo, Jian Yang, Zexiang Liu

2022 5th International Conference on Pattern Recognition and Artificial Intelligence (PRAI), published 2022-08-19
DOI: 10.1109/PRAI55851.2022.9904202
Citations: 0
Abstract
Human-computer interaction is fundamental to the increasingly intelligent and automated battlefield. The task is challenging, however, owing to the vibration and uneven illumination conditions around weapon equipment, and traditional interaction methods cannot meet the needs of fast-paced, highly flexible, long-distance operation. In this paper, we propose a multi-stream learning algorithm that uses gesture recognition to realize human-computer interaction. Our method employs Time Domain Aggregation and Domain Adaptation Fusion modules to address error-prone recognition under uneven illumination and vibration, respectively. Experiments on 10 gesture classes and more than 3000 infrared and visible-light images collected in a real mobile-vehicle environment demonstrate that our method is more robust, accurate, and efficient than previous state-of-the-art gesture recognition methods.
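The abstract does not give implementation details for the two modules, so the following is only a minimal sketch of what a two-stream pipeline of this general shape might look like: per-stream temporal aggregation over a window of frame features, followed by a weighted fusion of the infrared and visible streams. The function names, the feature dimensions, the fixed fusion weight `alpha`, and the toy linear classifier are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def temporal_aggregate(frames):
    """Temporal aggregation (sketch): average per-frame feature vectors
    over the time axis to smooth per-frame noise.  `frames` has shape
    (num_frames, feat_dim); the result has shape (feat_dim,)."""
    return frames.mean(axis=0)

def fuse_streams(vis_feat, ir_feat, alpha=0.5):
    """Stream fusion (sketch): convex combination of the visible and
    infrared feature vectors; in a real model `alpha` (or a richer
    fusion module) would be learned rather than fixed."""
    return alpha * vis_feat + (1.0 - alpha) * ir_feat

def classify(feat, weights):
    """Toy linear classifier: score each gesture class and return the
    index of the highest-scoring one."""
    scores = weights @ feat
    return int(np.argmax(scores))

# Toy example: 8 frames of 16-dim features per stream, 10 gesture classes.
rng = np.random.default_rng(0)
vis = rng.normal(size=(8, 16))       # visible-light stream features
ir = rng.normal(size=(8, 16))        # infrared stream features
weights = rng.normal(size=(10, 16))  # stand-in classifier weights

fused = fuse_streams(temporal_aggregate(vis), temporal_aggregate(ir), alpha=0.6)
pred = classify(fused, weights)
```

In this sketch the two streams are kept separate through aggregation and only merged at the feature level, which is the usual motivation for multi-stream designs: each modality (infrared vs. visible) degrades under different conditions, so fusing them lets one compensate for the other.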