Xing Hu, Zhixuan Li, Lingkun Luo, Hamid Reza Karimi, Dawei Zhang
{"title":"用于高光谱异常检测的字典训练注意力约束低等级稀疏自动编码器","authors":"Xing Hu , Zhixuan Li , Lingkun Luo , Hamid Reza Karimi , Dawei Zhang","doi":"10.1016/j.neunet.2024.106797","DOIUrl":null,"url":null,"abstract":"<div><div>Dictionary representations and deep learning Autoencoder (AE) models have proven effective in hyperspectral anomaly detection. Dictionary representations offer self-explanation but struggle with complex scenarios. Conversely, autoencoders can capture details in complex scenes but lack self-explanation. Complex scenarios often involve extensive spatial information, making its utilization crucial in hyperspectral anomaly detection. To effectively combine the advantages of both methods and address the insufficient use of spatial information, we propose an attention constrained low-rank and sparse autoencoder for hyperspectral anomaly detection. This model includes two encoders: an attention constrained low-rank autoencoder (AClrAE) trained with a background dictionary and incorporating a Global Self-Attention Module (GAM) to focus on global spatial information, resulting in improved background reconstruction; and an attention constrained sparse autoencoder (ACsAE) trained with an anomaly dictionary and incorporating a Local Self-Attention Module (LAM) to focus on local spatial information, resulting in enhanced anomaly reconstruction. Finally, to merge the detection results from both encoders, a nonlinear fusion scheme is employed. Experiments on multiple real and synthetic datasets demonstrate the effectiveness and feasibility of the proposed method.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":null,"pages":null},"PeriodicalIF":6.0000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Dictionary trained attention constrained low rank and sparse autoencoder for hyperspectral anomaly detection\",\"authors\":\"Xing Hu , Zhixuan Li , Lingkun Luo , Hamid Reza Karimi , Dawei Zhang\",\"doi\":\"10.1016/j.neunet.2024.106797\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Dictionary representations and deep learning Autoencoder (AE) models have proven effective in hyperspectral anomaly detection. Dictionary representations offer self-explanation but struggle with complex scenarios. Conversely, autoencoders can capture details in complex scenes but lack self-explanation. Complex scenarios often involve extensive spatial information, making its utilization crucial in hyperspectral anomaly detection. To effectively combine the advantages of both methods and address the insufficient use of spatial information, we propose an attention constrained low-rank and sparse autoencoder for hyperspectral anomaly detection. This model includes two encoders: an attention constrained low-rank autoencoder (AClrAE) trained with a background dictionary and incorporating a Global Self-Attention Module (GAM) to focus on global spatial information, resulting in improved background reconstruction; and an attention constrained sparse autoencoder (ACsAE) trained with an anomaly dictionary and incorporating a Local Self-Attention Module (LAM) to focus on local spatial information, resulting in enhanced anomaly reconstruction. Finally, to merge the detection results from both encoders, a nonlinear fusion scheme is employed. 
Experiments on multiple real and synthetic datasets demonstrate the effectiveness and feasibility of the proposed method.</div></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2024-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608024007214\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608024007214","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Dictionary trained attention constrained low rank and sparse autoencoder for hyperspectral anomaly detection
Dictionary representations and deep-learning autoencoder (AE) models have both proven effective for hyperspectral anomaly detection. Dictionary representations are self-explanatory but struggle in complex scenarios; autoencoders, conversely, capture the details of complex scenes but offer little interpretability. Complex scenarios also carry extensive spatial information, so exploiting that information is crucial for hyperspectral anomaly detection. To combine the advantages of both approaches and remedy the underuse of spatial information, we propose an attention-constrained low-rank and sparse autoencoder for hyperspectral anomaly detection. The model comprises two encoders: an attention-constrained low-rank autoencoder (AClrAE), trained with a background dictionary and equipped with a Global Self-Attention Module (GAM) that attends to global spatial information, yielding improved background reconstruction; and an attention-constrained sparse autoencoder (ACsAE), trained with an anomaly dictionary and equipped with a Local Self-Attention Module (LAM) that attends to local spatial information, yielding enhanced anomaly reconstruction. Finally, a nonlinear fusion scheme merges the detection results of the two encoders. Experiments on multiple real and synthetic datasets demonstrate the effectiveness and feasibility of the proposed method.
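To make the dual-encoder shape described above concrete, here is a minimal PyTorch sketch: two attention-equipped autoencoders, one with an unrestricted (global) attention span and one with a windowed (local) span, whose per-pixel reconstruction errors are combined by a nonlinear rule. All names (AttentionAutoencoder, fuse), the masking scheme, the fusion formula, and every size are illustrative assumptions, not the authors' implementation; in particular, the paper's dictionary-based training and the low-rank/sparsity constraints are omitted.

```python
# Sketch of the dual-encoder idea in the abstract. Everything here (names,
# mask construction, fusion rule, sizes) is an illustrative assumption, NOT
# the authors' implementation; dictionary-based training is omitted.
import torch
import torch.nn as nn

class AttentionAutoencoder(nn.Module):
    """Autoencoder with a self-attention front end over pixels.

    window=None -> unrestricted attention (global, AClrAE/GAM-like)
    window=k    -> each pixel attends only to its k nearest neighbors
                   in the flattened pixel sequence (local, ACsAE/LAM-like)
    """
    def __init__(self, n_bands, latent_dim, window=None):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(embed_dim=n_bands, num_heads=1,
                                          batch_first=True)
        self.encoder = nn.Sequential(nn.Linear(n_bands, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, n_bands)

    def forward(self, x):                    # x: (batch, n_pixels, n_bands)
        mask = None
        if self.window is not None:          # boolean mask: True = blocked
            idx = torch.arange(x.shape[1])
            mask = (idx[None, :] - idx[:, None]).abs() > self.window
        ctx, _ = self.attn(x, x, x, attn_mask=mask)
        return self.decoder(self.encoder(ctx))

def fuse(err_bg, err_anom):
    """Nonlinear fusion (assumed form): a pixel scores high when the
    background AE reconstructs it poorly AND the anomaly AE fits it well."""
    return torch.sigmoid(err_bg) * (1.0 - torch.sigmoid(err_anom))

# Toy usage: one image flattened to 64 pixels with 100 spectral bands.
x = torch.randn(1, 64, 100)
bg_ae = AttentionAutoencoder(100, 16, window=None)  # global attention branch
an_ae = AttentionAutoencoder(100, 16, window=4)     # local attention branch
err_bg = (x - bg_ae(x)).pow(2).mean(dim=-1)         # per-pixel MSE, (1, 64)
err_an = (x - an_ae(x)).pow(2).mean(dim=-1)
score = fuse(err_bg, err_an)                        # (1, 64) anomaly scores
```

In this sketch the attention mask is the only difference between the two branches; in the paper each branch additionally carries its own low-rank or sparsity constraint and is trained against its own dictionary (background vs. anomaly), which this toy forward pass does not model.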
Journal introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically inspired artificial intelligence.