Feature enhanced non-line-of-sight imaging using graph model in latent space

Weihao Xu, Songmao Chen, Dingjie Wang, Yuyuan Tian, Ning Zhang, Wei Hao, Xiuqin Su

Optics & Laser Technology, published 2024-09-05. DOI: 10.1016/j.optlastec.2024.111538
Non-line-of-sight (NLoS) imaging reveals hidden scenes from indirectly diffused signals. However, balancing noise suppression, detail preservation, and reconstruction efficiency remains challenging. This work proposes a robust framework centered on a feature extractor and a feature-enhancement stage. The extractor exploits a graph model in latent space for efficient noise suppression and detail preservation, while the enhancement stage collaboratively learns feature and data statistics, using the extractor to define the regularization. Reconstruction results on publicly accessible datasets show that the proposed framework outperforms state-of-the-art methods in both quality and efficiency.
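The abstract does not specify the architecture, so the following is only an illustrative sketch of one common way a "graph model in latent space" can suppress noise while preserving structure: build a k-nearest-neighbour graph over latent feature vectors and apply graph-Laplacian smoothing. All function names and parameters here are hypothetical, not the paper's method.

```python
# Hypothetical sketch: graph-Laplacian smoothing of latent features.
# Not the paper's actual extractor; a generic latent-space graph model.
import numpy as np

def knn_graph(z, k=4):
    """Symmetric k-nearest-neighbour adjacency over latent vectors z of shape (n, d)."""
    n = z.shape[0]
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip index 0 (the point itself)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (d2[i, nbrs].mean() + 1e-12))
    return np.maximum(W, W.T)  # symmetrize the adjacency

def laplacian_smooth(z, k=4, lam=1.0):
    """Denoise latent features by solving (I + lam * L) z_smooth = z."""
    W = knn_graph(z, k)
    L = np.diag(W.sum(axis=1)) - W  # unnormalized graph Laplacian
    return np.linalg.solve(np.eye(z.shape[0]) + lam * L, z)

# Toy latent features: 30 vectors in 3 clusters, corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.repeat(np.eye(3), 10, axis=0)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
smooth = laplacian_smooth(noisy, k=5, lam=2.0)
print(np.abs(noisy - clean).mean(), np.abs(smooth - clean).mean())
```

Because the kNN edges connect points within the same cluster, the smoothing averages noise away along graph edges while leaving the cluster structure (the "detail") intact, which is the intuition behind graph regularizers for denoising.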