{"title":"高光谱图像跨域分类的平移减少样本扩展域泛化网络","authors":"Yunxiao Qi;Dongyang Liu;Junping Zhang","doi":"10.1109/LGRS.2025.3598295","DOIUrl":null,"url":null,"abstract":"In practical applications, the variations in imaging conditions along with changes in ground object states cause spectral shifts within the same class across different domains of hyperspectral images (HSIs), resulting in substantial domain distribution discrepancies. Additionally, the annotation process for HSIs is time-consuming, yielding an insufficient amount of labeled data relative to the needs of strong models, making them prone to overfitting during training. To address these issues, the shift-reduced sample expansion domain generalization network (SSEDGnet) is proposed. Sample diversity is first enhanced by generating expanded domain (ED) samples. Then, feature extraction is jointly performed on multiple source-domain (SD) samples and ED samples to learn domain-invariant representations, which enhances adaptability to unseen target domains (TDs). Specifically, by modeling the full imaging process from stimulation to response, including signal transmission and ground object reflection, the ground object reflection is separately extracted and used to directly generate ED samples through stimulation, thereby obtaining samples with reduced domain shift. Subsequently, feature extraction and fusion at different levels are carried out on both the SDs and EDs. Finally, the classifier conducts the classification. The experimental results on four public HSI datasets show that the proposed method effectively learns a model with superior generalization ability and stability, outperforming state-of-the-art methods. The code will be released soon on the site of <uri>https://github.com/Cherrieqi/SSEDGnet</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":4.4000,"publicationDate":"2025-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Shift-Reduced Sample Expansion Domain Generalization Network for Hyperspectral Image Cross-Domain Classification\",\"authors\":\"Yunxiao Qi;Dongyang Liu;Junping Zhang\",\"doi\":\"10.1109/LGRS.2025.3598295\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In practical applications, the variations in imaging conditions along with changes in ground object states cause spectral shifts within the same class across different domains of hyperspectral images (HSIs), resulting in substantial domain distribution discrepancies. Additionally, the annotation process for HSIs is time-consuming, yielding an insufficient amount of labeled data relative to the needs of strong models, making them prone to overfitting during training. To address these issues, the shift-reduced sample expansion domain generalization network (SSEDGnet) is proposed. Sample diversity is first enhanced by generating expanded domain (ED) samples. Then, feature extraction is jointly performed on multiple source-domain (SD) samples and ED samples to learn domain-invariant representations, which enhances adaptability to unseen target domains (TDs). 
Specifically, by modeling the full imaging process from stimulation to response, including signal transmission and ground object reflection, the ground object reflection is separately extracted and used to directly generate ED samples through stimulation, thereby obtaining samples with reduced domain shift. Subsequently, feature extraction and fusion at different levels are carried out on both the SDs and EDs. Finally, the classifier conducts the classification. The experimental results on four public HSI datasets show that the proposed method effectively learns a model with superior generalization ability and stability, outperforming state-of-the-art methods. The code will be released soon on the site of <uri>https://github.com/Cherrieqi/SSEDGnet</uri>\",\"PeriodicalId\":91017,\"journal\":{\"name\":\"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society\",\"volume\":\"22 \",\"pages\":\"1-5\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2025-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11124263/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11124263/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Shift-Reduced Sample Expansion Domain Generalization Network for Hyperspectral Image Cross-Domain Classification
In practical applications, variations in imaging conditions together with changes in ground object states cause spectral shifts within the same class across different domains of hyperspectral images (HSIs), resulting in substantial domain distribution discrepancies. Additionally, the annotation process for HSIs is time-consuming, so the amount of labeled data is insufficient for strong models, which are therefore prone to overfitting during training. To address these issues, the shift-reduced sample expansion domain generalization network (SSEDGnet) is proposed. Sample diversity is first enhanced by generating expanded domain (ED) samples. Then, feature extraction is performed jointly on multiple source-domain (SD) samples and ED samples to learn domain-invariant representations, which enhances adaptability to unseen target domains (TDs). Specifically, by modeling the full imaging process from stimulation to response, including signal transmission and ground object reflection, the ground object reflection is separately extracted and used to directly generate ED samples through stimulation, thereby obtaining samples with reduced domain shift. Subsequently, feature extraction and fusion at different levels are carried out on both the SDs and EDs. Finally, the classifier performs the classification. Experimental results on four public HSI datasets show that the proposed method effectively learns a model with superior generalization ability and stability, outperforming state-of-the-art methods. The code will be released soon at https://github.com/Cherrieqi/SSEDGnet.
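The ED generation step described in the abstract, separating the ground object reflection from the imaging chain and re-applying a stimulation to synthesize expanded-domain samples, can be illustrated with a minimal sketch. The multiplicative imaging model, the helper names (estimate_reflectance, generate_ed_sample), and the shared reference stimulation below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch: assume a simplified multiplicative imaging model
#   observed(band) ≈ stimulation(band) * transmission(band) * reflectance(band)
# where "stimulation" and "signal transmission" stand in for the imaging-chain
# terms named in the abstract. All quantities here are illustrative.

def estimate_reflectance(observed, stimulation, transmission, eps=1e-6):
    """Separate the ground-object reflection from one source-domain spectrum."""
    return observed / (stimulation * transmission + eps)

def generate_ed_sample(observed, stimulation, transmission, ref_stimulation):
    """Re-apply a reference stimulation to the extracted reflectance to obtain
    an expanded-domain (ED) sample with reduced domain shift."""
    reflectance = estimate_reflectance(observed, stimulation, transmission)
    return ref_stimulation * reflectance

# Toy usage with random spectra over n_bands hyperspectral bands.
n_bands = 100
rng = np.random.default_rng(0)
stimulation = rng.uniform(0.5, 1.5, n_bands)       # source-domain imaging conditions
transmission = rng.uniform(0.8, 1.0, n_bands)      # signal transmission term
reflectance_true = rng.uniform(0.0, 1.0, n_bands)  # ground-object reflection
observed_sd = stimulation * transmission * reflectance_true

ref_stimulation = np.ones(n_bands)                 # shared reference stimulation
ed_sample = generate_ed_sample(observed_sd, stimulation, transmission, ref_stimulation)
```

Under these assumptions, every ED sample is re-rendered under the same reference stimulation, so same-class spectra from different source domains become more alike, which is one way to read the "reduced domain shift" the abstract refers to.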