Structure recovery from single omnidirectional image with distortion-aware learning
Ming Meng, Yi Zhou, Dongshi Zuo, Zhaoxin Li, Zhong Zhou
Journal of King Saud University - Computer and Information Sciences, Volume 36, Issue 7, Article 102151
DOI: 10.1016/j.jksuci.2024.102151
Published: 2024-08-08
Citations: 0
Abstract
Recovering structures from images with 180° or 360° FoV is pivotal in computer vision and computational photography, particularly for VR/AR/MR and autonomous robotics applications. Due to varying distortions and the complexity of indoor scenes, recovering flexible structures from a single image is challenging. We introduce OmniSRNet, a comprehensive deep learning framework that merges distortion-aware learning with bidirectional LSTM. Utilizing a curated dataset with optimized panorama and expanded fisheye images, our framework features a distortion-aware module (DAM) for feature extraction and a horizontal and vertical step module (HVSM) of LSTM for contextual predictions. OmniSRNet excels in applications such as VR-based house viewing and MR-based video surveillance, achieving leading results on cuboid and non-cuboid datasets. The code and dataset can be accessed at https://github.com/mmlph/OmniSRNet/.
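The abstract describes the HVSM only at a high level: a bidirectional LSTM that sweeps over the image so each position's prediction draws on context from both directions. As a rough illustration of that idea (not the paper's implementation), the toy NumPy sketch below runs a plain tanh recurrence, standing in for the LSTM, forward and backward over a sequence of column features and concatenates the two passes; all shapes and parameter names here are made up for the example.

```python
import numpy as np

def rnn_scan(xs, Wx, Wh):
    """Simple tanh recurrence over a sequence of column feature vectors."""
    T, _ = xs.shape
    H = Wh.shape[0]
    h = np.zeros(H)
    out = np.zeros((T, H))
    for t in range(T):
        # Each hidden state mixes the current column with the running context.
        h = np.tanh(xs[t] @ Wx + h @ Wh)
        out[t] = h
    return out

def bidirectional_context(xs, Wx_f, Wh_f, Wx_b, Wh_b):
    """Concatenate a forward and a backward scan, so every column's
    output depends on context from both sides of the panorama."""
    fwd = rnn_scan(xs, Wx_f, Wh_f)
    bwd = rnn_scan(xs[::-1], Wx_b, Wh_b)[::-1]
    return np.concatenate([fwd, bwd], axis=-1)

# Toy usage: 8 panorama columns with 4-dim features, 3-dim hidden state.
rng = np.random.default_rng(0)
cols = rng.standard_normal((8, 4))
params = [rng.standard_normal(s) * 0.1
          for s in [(4, 3), (3, 3), (4, 3), (3, 3)]]
ctx = bidirectional_context(cols, *params)
print(ctx.shape)  # (8, 6): hidden states from both directions, concatenated
```

In the actual OmniSRNet, such scans are reportedly applied along both the horizontal and vertical axes with LSTM cells rather than this simplified recurrence.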
Journal introduction:
In 2022, the Journal of King Saud University - Computer and Information Sciences will become an author-paid open access journal. Authors who submit their manuscript after October 31st, 2021 will be asked to pay an Article Processing Charge (APC) after acceptance of their paper, to make their work immediately, permanently, and freely accessible to all. The Journal of King Saud University - Computer and Information Sciences is a refereed, international journal that covers all aspects of both the foundations of computing and its practical applications.