Knowledge-Aware Geometric Contourlet Semantic Learning for Hyperspectral Image Classification
Xueli Geng; Lingling Li; Licheng Jiao; Xu Liu; Fang Liu; Shuyuan Yang
IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 1, pp. 698-712
DOI: 10.1109/TCSVT.2024.3459009
Published: 2024-09-12
Citation count: 0
Abstract
Hyperspectral images (HSIs) provide detailed spectral and spatial information that is essential for precise earth observation and a wide range of applications. Deep learning has advanced HSI classification, but the scarcity of labeled data and the large number of model parameters make semi-supervised methods necessary for improving performance and generalization. In this paper, we propose a novel semi-supervised framework, Knowledge-Aware Geometric Contourlet Semantic Learning (KGCSL), which aims to achieve high-precision HSI classification with limited samples by leveraging geometric and semantic knowledge. Specifically, to fully exploit geometric knowledge, KGCSL incorporates the multi-scale, multi-directional representations of the contourlet transform into the neural network, enhancing the robustness and interpretability of feature extraction. Furthermore, to fully utilize semantic knowledge, an entropy-weighted prototype loss function is designed that exploits the attribute relationships between labeled and unlabeled samples to guide the optimization of unlabeled samples, promoting comprehensive semantic learning. Comprehensive evaluations on three public HSI datasets show that KGCSL outperforms existing state-of-the-art HSI classification methods and exhibits excellent generalization in limited-sample scenarios. The source code is available at https://github.com/ShirlySmile/KGCSL.
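To make the entropy-weighted prototype idea concrete, the sketch below shows one plausible form of such a loss: class prototypes are computed from labeled features, unlabeled samples are softly assigned to prototypes by distance, and each sample's contribution is down-weighted by the entropy of its assignment so that uncertain samples influence the optimization less. This is an illustrative reconstruction under stated assumptions, not the paper's exact formulation; the function name, temperature parameter `tau`, and the specific weighting `1 - normalized entropy` are hypothetical choices.

```python
import numpy as np

def entropy_weighted_prototype_loss(labeled_feats, labels, unlabeled_feats,
                                    num_classes, tau=1.0):
    """Illustrative sketch (not the paper's exact loss): pull unlabeled
    features toward class prototypes, weighting each sample by how
    confident (low-entropy) its soft prototype assignment is."""
    # Class prototypes: mean of labeled features per class.
    protos = np.stack([labeled_feats[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    # Squared distances from each unlabeled sample to each prototype.
    d2 = ((unlabeled_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    # Soft assignment: softmax over negative distances (temperature tau).
    logits = -d2 / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    # Normalized entropy in [0, 1]; confident samples get weight near 1.
    ent = -(p * np.log(p + 1e-12)).sum(axis=1) / np.log(num_classes)
    w = 1.0 - ent
    # Entropy-weighted distance to the nearest prototype, averaged.
    return (w * d2.min(axis=1)).mean()
```

In this form, confident unlabeled samples are pulled strongly toward their nearest prototype while ambiguous ones are nearly ignored, which is one common way to exploit labeled-unlabeled relationships in semi-supervised prototype learning.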
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.