4D light field segmentation with spatial and angular consistencies
H. Mihara, Takuya Funatomi, Kenichiro Tanaka, Hiroyuki Kubo, Y. Mukaigawa, H. Nagahara
2016 IEEE International Conference on Computational Photography (ICCP), published 2016-05-13
DOI: 10.1109/ICCPHOT.2016.7492872
Citations: 29
Abstract
In this paper, we describe a supervised four-dimensional (4D) light field segmentation method that uses a graph-cut algorithm. Since 4D light field data carries implicit depth information and contains redundancy, it differs from a simple 4D hyper-volume. To preserve this redundancy, we define two types of neighboring rays (spatial and angular) in the light field data. To obtain higher segmentation accuracy, we also design a learning-based likelihood, called objectness, which utilizes appearance and disparity cues. We show the effectiveness of our method via numerical evaluation and light field editing applications using both synthetic and real-world light fields.
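The abstract's core construction — a graph over rays with spatial edges (adjacent pixels in one view) and angular edges (corresponding rays across views), segmented by min-cut — can be sketched on a toy example. This is a hypothetical illustration, not the authors' implementation: it uses a 2D slice (one angular axis, one spatial axis) instead of a full 4D light field, replaces the learned objectness likelihood with simple intensity-based unary costs, and approximates the disparity-shifted angular correspondence with a zero-disparity link. The min-cut is computed with SciPy's `maximum_flow`.

```python
import numpy as np
from collections import deque
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Toy "light field": S angular views of a 1-D spatial line (W rays each).
# A real 4D light field L(s, t, u, v) has two angular and two spatial axes;
# one of each is enough to show the graph construction.
S, W = 3, 8
L = np.zeros((S, W))
L[:, W // 2:] = 1.0          # right half is the bright "object" in every view

n = S * W                    # one graph node per ray
src, snk = n, n + 1          # terminal nodes: foreground / background labels

def node(s, u):
    return s * W + u

SCALE = 100                  # maximum_flow requires integer capacities
rows, cols, caps = [], [], []

def add_edge(a, b, c):
    rows.append(a); cols.append(b); caps.append(int(c))

lam_spatial, lam_angular = 50, 50
for s in range(S):
    for u in range(W):
        v = node(s, u)
        # Unary terms (stand-in for the paper's learned objectness):
        # bright rays prefer foreground, dark rays prefer background.
        add_edge(src, v, SCALE * L[s, u])          # paid if labeled background
        add_edge(v, snk, SCALE * (1.0 - L[s, u]))  # paid if labeled foreground
        # Spatial neighbor: same view, adjacent pixel.
        if u + 1 < W:
            add_edge(v, node(s, u + 1), lam_spatial)
            add_edge(node(s, u + 1), v, lam_spatial)
        # Angular neighbor: adjacent view, same pixel (zero-disparity proxy
        # for the disparity-shifted correspondence used in the paper).
        if s + 1 < S:
            add_edge(v, node(s + 1, u), lam_angular)
            add_edge(node(s + 1, u), v, lam_angular)

graph = csr_matrix((caps, (rows, cols)), shape=(n + 2, n + 2))
res = maximum_flow(graph, src, snk)

# Recover the min-cut labeling: the foreground is the set of nodes still
# reachable from the source in the residual graph.
residual = graph - res.flow
reach = np.zeros(n + 2, dtype=bool)
q = deque([src]); reach[src] = True
indptr, indices, data = residual.indptr, residual.indices, residual.data
while q:
    a = q.popleft()
    for idx in range(indptr[a], indptr[a + 1]):
        b = indices[idx]
        if data[idx] > 0 and not reach[b]:
            reach[b] = True; q.append(b)

labels = reach[:n].reshape(S, W).astype(int)  # 1 = foreground ray
print(labels)
```

In this toy instance the cheapest cut severs the three spatial boundary edges rather than mislabeling any ray, so every view is split consistently into a dark left half and a bright right half. The angular edges are what couple the views: raising `lam_angular` forces corresponding rays across views to share a label, which is the "angular consistency" the title refers to.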