{"title":"Few-label aerial target intention recognition based on self-supervised contrastive learning","authors":"Zihao Song, Yan Zhou, Yichao Cai, Wei Cheng, Changfei Wu, Jianguo Yin","doi":"10.1049/rsn2.12695","DOIUrl":null,"url":null,"abstract":"<p>Identifying the intentions of aerial targets is crucial for air situation understanding and decision making. Deep learning, with its powerful feature learning and representation capability, has become a key means to achieve higher performance in aerial target intention recognition (ATIR). However, conventional supervised deep learning methods rely on abundant labelled samples for training, which are difficult to quickly obtain in practical scenarios, posing a significant challenge to the effectiveness of training deep learning models. To address this issue, this paper proposes a novel few-label ATIR method based on deep contrastive learning, which combines the advantages of self-supervised learning and semi-supervised learning. Specifically, leveraging unlabelled samples, we first employ strong and weak data augmentation views and the temporal contrasting module to capture temporally relevant features, whereas the contextual contrasting module is utilised to learn discriminative representations. Subsequently, the network is fine-tuned with a limited set of labelled samples to further refine the learnt representations. 
Experimental results on an ATIR dataset demonstrate that our method significantly outperforms other few-label classification baselines in terms of recognition accuracy and Macro F1 score when the proportion of labelled samples is as low as 1% and 5%.</p>","PeriodicalId":50377,"journal":{"name":"Iet Radar Sonar and Navigation","volume":"19 1","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/rsn2.12695","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Iet Radar Sonar and Navigation","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/rsn2.12695","RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Identifying the intentions of aerial targets is crucial for air situation understanding and decision making. Deep learning, with its powerful feature learning and representation capability, has become a key means to achieve higher performance in aerial target intention recognition (ATIR). However, conventional supervised deep learning methods rely on abundant labelled samples for training, which are difficult to quickly obtain in practical scenarios, posing a significant challenge to the effectiveness of training deep learning models. To address this issue, this paper proposes a novel few-label ATIR method based on deep contrastive learning, which combines the advantages of self-supervised learning and semi-supervised learning. Specifically, leveraging unlabelled samples, we first employ strong and weak data augmentation views and the temporal contrasting module to capture temporally relevant features, whereas the contextual contrasting module is utilised to learn discriminative representations. Subsequently, the network is fine-tuned with a limited set of labelled samples to further refine the learnt representations. Experimental results on an ATIR dataset demonstrate that our method significantly outperforms other few-label classification baselines in terms of recognition accuracy and Macro F1 score when the proportion of labelled samples is as low as 1% and 5%.
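The abstract names two self-supervised ingredients: strong and weak augmentation views of each unlabelled sequence, and a contrastive objective that pulls the two views of the same sample together while pushing other samples apart. As a rough illustration of those ideas (not the paper's exact configuration), the sketch below uses NumPy: the specific augmentations (jitter/scaling for the weak view, segment permutation for the strong view), the temperature, and the embedding dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_aug(x):
    """Weak view: mild scaling plus jitter (an assumed, common choice)."""
    return x * rng.normal(1.0, 0.1) + rng.normal(0.0, 0.01, x.shape)

def strong_aug(x, n_segments=4):
    """Strong view: permute temporal segments, then add jitter (assumed choice)."""
    segs = np.array_split(x, n_segments)
    rng.shuffle(segs)  # shuffle segment order in place
    return np.concatenate(segs) + rng.normal(0.0, 0.05, x.shape)

def nt_xent(z1, z2, tau=0.2):
    """NT-Xent-style contrastive loss over a batch of paired embeddings.

    z1[i] and z2[i] are embeddings of two views of sample i (positives);
    every other embedding in the batch acts as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    logits = sim - sim.max(axis=1, keepdims=True)      # stable log-softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    return -log_prob[np.arange(2 * n), pos].mean()
```

In a full pipeline, an encoder would map the weak and strong views to embeddings, the temporal contrasting module would predict future timesteps of one view from the other, and a loss like `nt_xent` would supply the contextual contrast before fine-tuning on the few labelled samples.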
Journal introduction:
IET Radar, Sonar & Navigation covers the theory and practice of systems and signals for radar, sonar, radiolocation, navigation, and surveillance purposes, in aerospace and terrestrial applications.
Examples include advances in waveform design, clutter and detection, electronic warfare, adaptive array and superresolution methods, tracking algorithms, synthetic aperture, and target recognition techniques.