{"title":"基于物理成像的虚拟传感器构建网络,实现图像超级分辨率","authors":"Guozhi Tang;Hongwei Ge;Liang Sun;Yaqing Hou;Mingde Zhao","doi":"10.1109/TIP.2024.3472494","DOIUrl":null,"url":null,"abstract":"Image imaging in the real world is based on physical imaging mechanisms. Existing super-resolution methods mainly focus on designing complex network structures to extract and fuse image features more effectively, but ignore the guiding role of physical imaging mechanisms for model design, and cannot mine features from a physical perspective. Inspired by the mechanism of physical imaging, we propose a novel network architecture called Virtual-Sensor Construction network (VSCNet) to simulate the sensor array inside the camera. Specifically, VSCNet first generates different splitting directions to distribute photons to construct virtual sensors, and then performs a multi-stage adaptive fine-tuning operation to fine-tune the number of photons on the virtual sensors to increase the photosensitive area and eliminate photon cross-talk, and finally converts the obtained photon distributions into RGB images. These operations can naturally be regarded as the virtual expansion of the camera’s sensor array in the feature space, which makes our VSCNet bridge the physical space and feature space, and uses their complementarity to mine more effective features to improve performance. Extensive experiments on various datasets show that the proposed VSCNet achieves state-of-the-art performance with fewer parameters. Moreover, we perform experiments to validate the connection between the proposed VSCNet and the physical imaging mechanism. The implementation code is available at \n<uri>https://github.com/GZ-T/VSCNet</uri>\n.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"5864-5877"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Virtual-Sensor Construction Network Based on Physical Imaging for Image Super-Resolution\",\"authors\":\"Guozhi Tang;Hongwei Ge;Liang Sun;Yaqing Hou;Mingde Zhao\",\"doi\":\"10.1109/TIP.2024.3472494\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Image imaging in the real world is based on physical imaging mechanisms. Existing super-resolution methods mainly focus on designing complex network structures to extract and fuse image features more effectively, but ignore the guiding role of physical imaging mechanisms for model design, and cannot mine features from a physical perspective. Inspired by the mechanism of physical imaging, we propose a novel network architecture called Virtual-Sensor Construction network (VSCNet) to simulate the sensor array inside the camera. Specifically, VSCNet first generates different splitting directions to distribute photons to construct virtual sensors, and then performs a multi-stage adaptive fine-tuning operation to fine-tune the number of photons on the virtual sensors to increase the photosensitive area and eliminate photon cross-talk, and finally converts the obtained photon distributions into RGB images. These operations can naturally be regarded as the virtual expansion of the camera’s sensor array in the feature space, which makes our VSCNet bridge the physical space and feature space, and uses their complementarity to mine more effective features to improve performance. 
Extensive experiments on various datasets show that the proposed VSCNet achieves state-of-the-art performance with fewer parameters. Moreover, we perform experiments to validate the connection between the proposed VSCNet and the physical imaging mechanism. The implementation code is available at \\n<uri>https://github.com/GZ-T/VSCNet</uri>\\n.\",\"PeriodicalId\":94032,\"journal\":{\"name\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"volume\":\"33 \",\"pages\":\"5864-5877\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-10-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10709890/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10709890/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Virtual-Sensor Construction Network Based on Physical Imaging for Image Super-Resolution
Imaging in the real world is governed by physical imaging mechanisms. Existing super-resolution methods focus mainly on designing complex network structures to extract and fuse image features more effectively, but they ignore the guidance that physical imaging mechanisms can offer for model design and therefore cannot mine features from a physical perspective. Inspired by the physical imaging mechanism, we propose a novel network architecture, the Virtual-Sensor Construction network (VSCNet), which simulates the sensor array inside a camera. Specifically, VSCNet first generates different splitting directions to distribute photons and construct virtual sensors; it then performs a multi-stage adaptive fine-tuning operation that adjusts the number of photons on the virtual sensors to enlarge the photosensitive area and eliminate photon cross-talk; finally, it converts the resulting photon distributions into RGB images. These operations can naturally be regarded as a virtual expansion of the camera's sensor array in the feature space, so VSCNet bridges the physical space and the feature space and exploits their complementarity to mine more effective features and improve performance. Extensive experiments on various datasets show that the proposed VSCNet achieves state-of-the-art performance with fewer parameters. Moreover, we perform experiments to validate the connection between the proposed VSCNet and the physical imaging mechanism. The implementation code is available at https://github.com/GZ-T/VSCNet.
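The abstract outlines a three-step pipeline: splitting features along different directions into virtual sensors, multi-stage adaptive fine-tuning of the per-sensor photon counts, and conversion of the photon distributions into an RGB output. The following is a minimal PyTorch-style sketch of that pipeline, not the authors' implementation; every module name, layer choice, and hyper-parameter here (VirtualSensorSplit, AdaptiveFineTune, the gated residual update, the pixel-shuffle head) is an assumption introduced purely for illustration. The actual architecture is available in the official repository at https://github.com/GZ-T/VSCNet.

```python
# Conceptual sketch only: all module names and layer choices are hypothetical,
# not the paper's actual design. See https://github.com/GZ-T/VSCNet for the real code.
import torch
import torch.nn as nn


class VirtualSensorSplit(nn.Module):
    """Distribute input features ("photons") along several splitting directions,
    one branch per virtual sensor."""
    def __init__(self, channels: int, num_sensors: int = 4):
        super().__init__()
        # One 3x3 conv branch per splitting direction (an assumed choice).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_sensors)
        )

    def forward(self, x: torch.Tensor) -> list:
        return [branch(x) for branch in self.branches]


class AdaptiveFineTune(nn.Module):
    """Multi-stage refinement of a virtual sensor's photon counts; each stage
    applies a gated residual update (a stand-in for the paper's fine-tuning)."""
    def __init__(self, channels: int, num_stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.Sigmoid())
            for _ in range(num_stages)
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        for stage in self.stages:
            s = s + s * stage(s)  # gated residual adjustment of "photon counts"
        return s


class VSCNetSketch(nn.Module):
    """End-to-end sketch: split -> per-sensor fine-tuning -> fuse -> upsample to RGB."""
    def __init__(self, channels: int = 64, num_sensors: int = 4, scale: int = 4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.split = VirtualSensorSplit(channels, num_sensors)
        self.refine = AdaptiveFineTune(channels)
        self.fuse = nn.Conv2d(channels * num_sensors, channels, 1)
        # Pixel-shuffle upsampling converts the fused features into the HR RGB image.
        self.to_rgb = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        feat = self.head(lr)
        sensors = [self.refine(s) for s in self.split(feat)]
        fused = self.fuse(torch.cat(sensors, dim=1))
        return self.to_rgb(fused)


if __name__ == "__main__":
    model = VSCNetSketch()
    sr = model(torch.randn(1, 3, 48, 48))
    print(sr.shape)  # torch.Size([1, 3, 192, 192]) for a 4x scale factor
```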