{"title":"多视角多曝光立体","authors":"Alejandro J. Troccoli, S. B. Kang, S. Seitz","doi":"10.1109/3DPVT.2006.98","DOIUrl":null,"url":null,"abstract":"Multi-view stereo algorithms typically rely on same-exposure images as inputs due to the brightness constancy assumption. While state-of-the-art depth results are excellent, they do not produce high-dynamic range textures required for high-quality view reconstruction. In this paper, we propose a technique that adapts multi-view stereo for different exposure inputs to simultaneously recover reliable dense depth and high dynamic range textures. In our technique, we use an exposure-invariant similarity statistic to establish correspondences, through which we robustly extract the camera radiometric response function and the image exposures. This enables us to then convert all images to radiance space and selectively use the radiance data for dense depth and high dynamic range texture recovery. We show results for synthetic and real scenes.","PeriodicalId":346673,"journal":{"name":"Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"44","resultStr":"{\"title\":\"Multi-View Multi-Exposure Stereo\",\"authors\":\"Alejandro J. Troccoli, S. B. Kang, S. Seitz\",\"doi\":\"10.1109/3DPVT.2006.98\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multi-view stereo algorithms typically rely on same-exposure images as inputs due to the brightness constancy assumption. While state-of-the-art depth results are excellent, they do not produce high-dynamic range textures required for high-quality view reconstruction. In this paper, we propose a technique that adapts multi-view stereo for different exposure inputs to simultaneously recover reliable dense depth and high dynamic range textures. In our technique, we use an exposure-invariant similarity statistic to establish correspondences, through which we robustly extract the camera radiometric response function and the image exposures. This enables us to then convert all images to radiance space and selectively use the radiance data for dense depth and high dynamic range texture recovery. 
We show results for synthetic and real scenes.\",\"PeriodicalId\":346673,\"journal\":{\"name\":\"Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06)\",\"volume\":\"67 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-06-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"44\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/3DPVT.2006.98\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/3DPVT.2006.98","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi-view stereo algorithms typically rely on same-exposure images as inputs because of the brightness constancy assumption. While state-of-the-art depth results are excellent, they do not produce the high dynamic range textures required for high-quality view reconstruction. In this paper, we propose a technique that adapts multi-view stereo to different-exposure inputs in order to simultaneously recover reliable dense depth and high dynamic range textures. We use an exposure-invariant similarity statistic to establish correspondences, from which we robustly extract the camera radiometric response function and the image exposures. This allows us to convert all images to radiance space and selectively use the radiance data for dense depth and high dynamic range texture recovery. We show results for synthetic and real scenes.
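The pipeline the abstract outlines has two computational ingredients: an exposure-invariant patch similarity for matching across differently exposed views, and a mapping from pixel values to radiance once the response function and exposures are known. The sketch below is illustrative only; it uses zero-mean normalized cross-correlation (invariant to affine gain and offset changes) as a stand-in for the paper's similarity statistic, and a toy linear inverse response for the radiance conversion. Neither choice is taken from the paper itself.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Zero-mean normalized cross-correlation between two image patches.

    NCC is invariant to affine brightness changes (gain and offset), which is
    why statistics of this kind can match patches across different exposures.
    """
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + eps
    return float(np.dot(a, b) / denom)

def to_radiance(image, inv_response, exposure_time):
    """Map 8-bit pixel values to (relative) scene radiance.

    `inv_response` is a length-256 lookup table giving f^{-1}(z), the sensor
    irradiance times exposure time for pixel value z; dividing by the exposure
    time yields a quantity proportional to scene radiance.
    """
    image = np.asarray(image)
    return inv_response[image] / exposure_time

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patch = rng.integers(20, 200, size=(7, 7)).astype(np.float64)
    brighter = np.clip(1.2 * patch + 10.0, 0, 255)  # simulated exposure change
    print("NCC across exposures:", ncc(patch, brighter))  # ~1.0 for an affine change

    # Toy inverse response: identity lookup table (linear 8-bit sensor).
    inv_response = np.arange(256, dtype=np.float64)
    radiance = to_radiance(patch.astype(np.uint8), inv_response, exposure_time=1 / 60)
```

In the method described by the abstract, the inverse response and the exposures are estimated from the cross-exposure correspondences rather than assumed linear as in this toy example; the radiance images produced by that calibration are then what feed dense depth and HDR texture recovery.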