Salman Siddique Khan, V. Boominathan, A. Veeraraghavan, K. Mitra
{"title":"超薄高速无镜头相机的光学与算法设计","authors":"Salman Siddique Khan, V. Boominathan, A. Veeraraghavan, K. Mitra","doi":"10.1109/ICME55011.2023.00273","DOIUrl":null,"url":null,"abstract":"There is a growing demand for small, light-weight and low-latency cameras in the robotics and AR/VR community. Mask-based lensless cameras, by design, provide a combined advantage of form-factor, weight and speed. They do so by replacing the classical lens with a thin optical mask and computation. Recent works have explored deep learning based post-processing operations on lensless captures that allow high quality scene reconstruction. However, the ability of deep learning to find the optimal optics for thin lensless cameras has not been explored. In this work, we propose a learning based framework for designing the optics of thin lensless cameras. To highlight the effectiveness of our framework, we learn the optical phase mask for multiple tasks using physics-based neural networks. Specifically, we learn the optimal mask using a weighted loss defined for the following tasks-2D scene reconstructions, optical flow estimation and face detection. We show that mask learned through this framework is better than heuristically designed masks especially for small sensors sizes that allow lower bandwidth and faster readout. Finally, we verify the performance of our learned phase-mask on real data.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Designing Optics and Algorithm for Ultra-Thin, High-Speed Lensless Cameras\",\"authors\":\"Salman Siddique Khan, V. Boominathan, A. Veeraraghavan, K. Mitra\",\"doi\":\"10.1109/ICME55011.2023.00273\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There is a growing demand for small, light-weight and low-latency cameras in the robotics and AR/VR community. Mask-based lensless cameras, by design, provide a combined advantage of form-factor, weight and speed. They do so by replacing the classical lens with a thin optical mask and computation. Recent works have explored deep learning based post-processing operations on lensless captures that allow high quality scene reconstruction. However, the ability of deep learning to find the optimal optics for thin lensless cameras has not been explored. In this work, we propose a learning based framework for designing the optics of thin lensless cameras. To highlight the effectiveness of our framework, we learn the optical phase mask for multiple tasks using physics-based neural networks. Specifically, we learn the optimal mask using a weighted loss defined for the following tasks-2D scene reconstructions, optical flow estimation and face detection. We show that mask learned through this framework is better than heuristically designed masks especially for small sensors sizes that allow lower bandwidth and faster readout. 
Finally, we verify the performance of our learned phase-mask on real data.\",\"PeriodicalId\":321830,\"journal\":{\"name\":\"2023 IEEE International Conference on Multimedia and Expo (ICME)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Multimedia and Expo (ICME)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICME55011.2023.00273\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Multimedia and Expo (ICME)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME55011.2023.00273","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Designing Optics and Algorithm for Ultra-Thin, High-Speed Lensless Cameras
There is a growing demand for small, lightweight, low-latency cameras in the robotics and AR/VR community. Mask-based lensless cameras, by design, provide a combined advantage in form factor, weight, and speed: they replace the classical lens with a thin optical mask and computation. Recent works have explored deep-learning-based post-processing of lensless captures that allows high-quality scene reconstruction. However, the ability of deep learning to find the optimal optics for thin lensless cameras has not been explored. In this work, we propose a learning-based framework for designing the optics of thin lensless cameras. To highlight the effectiveness of our framework, we learn the optical phase mask for multiple tasks using physics-based neural networks. Specifically, we learn the optimal mask using a weighted loss defined over the following tasks: 2D scene reconstruction, optical flow estimation, and face detection. We show that the mask learned through this framework outperforms heuristically designed masks, especially for small sensor sizes that allow lower bandwidth and faster readout. Finally, we verify the performance of our learned phase mask on real data.
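To make the abstract's end-to-end idea concrete, below is a minimal sketch (not the authors' implementation) of jointly optimizing a phase mask and a reconstruction network with a weighted multi-task loss. It assumes a far-field Fourier-optics approximation for the phase-to-PSF mapping, a convolutional forward model for the lensless capture, and placeholder networks and loss weights; the paper's actual physics model, architectures, and task terms may differ.

```python
# Minimal sketch: jointly learn a phase mask and a reconstruction network.
# Assumptions (not from the paper): far-field phase-to-PSF mapping, 2D
# convolutional sensor model, placeholder reconstruction network and weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnablePhaseMask(nn.Module):
    def __init__(self, size=63):
        super().__init__()
        # Learnable phase profile of the thin optical mask.
        self.phase = nn.Parameter(torch.zeros(size, size))

    def psf(self):
        # Far-field approximation: PSF is the squared magnitude of the
        # Fourier transform of the complex aperture exp(i * phase).
        field = torch.exp(1j * self.phase)
        psf = torch.fft.fftshift(torch.fft.fft2(field)).abs() ** 2
        return psf / psf.sum()  # normalize total energy

def simulate_capture(scene, psf):
    # Physics-based forward model: convolve each color channel with the PSF.
    c = scene.shape[1]
    kernel = psf.unsqueeze(0).unsqueeze(0).repeat(c, 1, 1, 1)
    return F.conv2d(scene, kernel, padding=psf.shape[-1] // 2, groups=c)

mask = LearnablePhaseMask(size=63)
recon_net = nn.Sequential(  # placeholder reconstruction network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 3, 3, padding=1))
opt = torch.optim.Adam(list(mask.parameters()) + list(recon_net.parameters()), lr=1e-3)

scene = torch.rand(4, 3, 64, 64)                    # dummy batch of RGB scenes
measurement = simulate_capture(scene, mask.psf())   # differentiable lensless capture
recon = recon_net(measurement)

# Weighted loss; optical-flow and face-detection terms would be added
# analogously with their own task networks, labels, and weights (omitted here).
w_recon = 1.0
loss = w_recon * F.mse_loss(recon, scene)

opt.zero_grad()
loss.backward()  # gradients flow through the optics model into the phase mask
opt.step()
```

In the paper's setting, the weighted loss would combine the reconstruction term with task-specific terms for optical flow estimation and face detection; the sketch keeps only the reconstruction term to show how gradients from a downstream objective reach the learnable optics.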