{"title":"Depth from Coupled Optical Differentiation","authors":"Junjie Luo, Yuxuan Liu, Emma Alexander, Qi Guo","doi":"arxiv-2409.10725","DOIUrl":null,"url":null,"abstract":"We propose depth from coupled optical differentiation, a low-computation\npassive-lighting 3D sensing mechanism. It is based on our discovery that\nper-pixel object distance can be rigorously determined by a coupled pair of\noptical derivatives of a defocused image using a simple, closed-form\nrelationship. Unlike previous depth-from-defocus (DfD) methods that leverage\nspatial derivatives of the image to estimate scene depths, the proposed\nmechanism's use of only optical derivatives makes it significantly more robust\nto noise. Furthermore, unlike many previous DfD algorithms with requirements on\naperture code, this relationship is proved to be universal to a broad range of\naperture codes. We build the first 3D sensor based on depth from coupled optical\ndifferentiation. Its optical assembly includes a deformable lens and a\nmotorized iris, which enables dynamic adjustments to the optical power and\naperture radius. The sensor captures two pairs of images: one pair with a\ndifferential change of optical power and the other with a differential change\nof aperture scale. From the four images, a depth and confidence map can be\ngenerated with only 36 floating point operations per output pixel (FLOPOP),\nmore than ten times lower than the previous lowest passive-lighting depth\nsensing solution to our knowledge. Additionally, the depth map generated by the\nproposed sensor demonstrates more than twice the working range of previous DfD\nmethods while using significantly lower computation.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10725","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We propose depth from coupled optical differentiation, a low-computation passive-lighting 3D sensing mechanism. It is based on our discovery that per-pixel object distance can be rigorously determined from a coupled pair of optical derivatives of a defocused image using a simple, closed-form relationship. Unlike previous depth-from-defocus (DfD) methods that leverage spatial derivatives of the image to estimate scene depth, the proposed mechanism uses only optical derivatives, which makes it significantly more robust to noise. Furthermore, unlike many previous DfD algorithms that impose requirements on the aperture code, this relationship is proven to hold for a broad range of aperture codes. We build the first 3D sensor based on depth from coupled optical differentiation. Its optical assembly includes a deformable lens and a motorized iris, which together enable dynamic adjustment of the optical power and aperture radius. The sensor captures two pairs of images: one pair with a differential change of optical power and the other with a differential change of aperture scale. From the four images, a depth and confidence map can be generated with only 36 floating-point operations per output pixel (FLOPOP), which is, to our knowledge, more than ten times lower than the previous lowest-computation passive-lighting depth sensing solution. Additionally, the depth map generated by the proposed sensor demonstrates more than twice the working range of previous DfD methods while requiring significantly less computation.
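
The abstract does not state the closed-form relationship itself, but the capture-and-compute pipeline it describes can be sketched schematically. The sketch below is a minimal illustration only: the finite-difference approximation of the two optical derivatives follows directly from the described four-image capture, while the final mapping from the per-pixel derivative ratio to depth and the confidence measure are hypothetical placeholders standing in for the paper's derived relationship, not the authors' actual formulas.

```python
import numpy as np

def depth_and_confidence(I_sigma_minus, I_sigma_plus, I_alpha_minus, I_alpha_plus,
                         d_sigma, d_alpha, eps=1e-6):
    """Schematic sketch of the four-image pipeline described in the abstract.

    I_sigma_minus / I_sigma_plus : images captured at optical power sigma -/+ d_sigma/2
    I_alpha_minus / I_alpha_plus : images captured at aperture scale alpha -/+ d_alpha/2

    The two finite differences approximate the coupled optical derivatives
    dI/dsigma and dI/dalpha. The depth mapping and confidence measure below
    are HYPOTHETICAL placeholders, not the paper's closed-form relationship.
    """
    # Finite-difference approximations of the two optical derivatives,
    # one per captured image pair.
    dI_dsigma = (I_sigma_plus - I_sigma_minus) / d_sigma
    dI_dalpha = (I_alpha_plus - I_alpha_minus) / d_alpha

    # Per-pixel ratio of the coupled derivatives; only a few multiplies,
    # adds, and one divide per pixel, consistent with a very low FLOPOP budget.
    ratio = dI_dsigma / (dI_dalpha + eps)

    # Placeholder closed-form mapping (illustrative only): in the paper this
    # step would apply the derived relationship between the derivative ratio
    # and the per-pixel object distance.
    depth = 1.0 / np.clip(np.abs(ratio), eps, None)

    # Placeholder confidence: treat pixels with a weak aperture-derivative
    # signal as unreliable, using its magnitude as a simple proxy.
    confidence = np.abs(dI_dalpha)
    return depth, confidence
```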