{"title":"Quasiconvex alignment of multimodal skin images for quantitative dermatology","authors":"S. Madan, Kristin J. Dana, G. O. Cula","doi":"10.1109/CVPRW.2009.5204346","DOIUrl":null,"url":null,"abstract":"In quantitative dermatology, high resolution sensors provide images that capture fine scale features like pores, birthmarks, and moles. Breathing and minute movements result in misregistration of micro level features. Many computer vision methods for dermatology such as change detection, appearance capture, and multi sensor fusion require high accuracy point-wise registration of micro level features. However, most computer vision algorithms are based on macro level features such as eyes, nose, and lips, and aren't suitable for registering micro level features. In this paper, we develop a practical robust algorithm to align face regions using skin texture with mostly indistinct micro level features. In computer vision, these regions would typically be considered featureless regions. Our method approximates the face surface as a collection of quasi-planar skin patches and uses quasiconvex optimization and the L∞ norm for estimation of spatially varying homographies. We have assembled a unique dataset of high resolution dermatology images comprised of over 100 human subjects. The image pairs vary in imaging modality (crossed, parallel and no polarization) and are misregistered due to the natural non-rigid human movement between image capture. This method of polarization based image capture is commonly used in dermatology to image surface and subsurface structure. Using this dataset, we show high quality alignment of “featureless” regions and demonstrate that the algorithm works robustly over a large set of subjects with different skin texture appearance, not just a few test images.","PeriodicalId":431981,"journal":{"name":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW.2009.5204346","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
In quantitative dermatology, high-resolution sensors provide images that capture fine-scale features such as pores, birthmarks, and moles. Breathing and minute movements result in misregistration of these micro-level features. Many computer vision methods for dermatology, such as change detection, appearance capture, and multi-sensor fusion, require high-accuracy point-wise registration of micro-level features. However, most computer vision algorithms are based on macro-level features such as eyes, nose, and lips, and are not suitable for registering micro-level features. In this paper, we develop a practical, robust algorithm to align face regions using skin texture with mostly indistinct micro-level features; in computer vision, these regions would typically be considered featureless. Our method approximates the face surface as a collection of quasi-planar skin patches and uses quasiconvex optimization with the L∞ norm to estimate spatially varying homographies. We have assembled a unique dataset of high-resolution dermatology images comprising over 100 human subjects. The image pairs vary in imaging modality (crossed, parallel, and no polarization) and are misregistered due to natural non-rigid human movement between captures. Polarization-based image capture is commonly used in dermatology to image surface and subsurface structure. Using this dataset, we show high-quality alignment of “featureless” regions and demonstrate that the algorithm works robustly over a large set of subjects with differing skin texture appearance, not just a few test images.
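The abstract describes the method only at a high level. As a hedged illustration of the technique it names, the sketch below estimates a single homography under the L∞ (minimax) reprojection error by bisecting on the error bound, where each feasibility test is a second-order cone program; this follows the standard quasiconvex-optimization formulation rather than the authors' exact implementation, and the function name, the use of cvxpy, and the scale-fixing constraint H[2,2] = 1 are assumptions made for illustration. In the paper's setting, one such estimate would be computed per quasi-planar skin patch to obtain spatially varying homographies.

```python
# Illustrative sketch only: L-infinity homography estimation via bisection
# over SOCP feasibility problems (quasiconvex formulation). Not the authors'
# implementation; names and tolerances are placeholders.
import numpy as np
import cvxpy as cp

def linf_homography(src, dst, tol=1e-4, gamma_hi=10.0):
    """Fit a homography minimizing the maximum reprojection error.

    src, dst: (N, 2) arrays of corresponding points (ideally in normalized
    coordinates for good conditioning). Returns a 3x3 matrix or None.
    """
    N = src.shape[0]
    X = np.hstack([src, np.ones((N, 1))])   # homogeneous source points

    def feasible(gamma):
        # Feasibility test: does a homography exist with all reprojection
        # errors <= gamma? Each residual bound is a second-order cone
        # constraint in the 9 entries of H (row-major in h).
        h = cp.Variable(9)
        constraints = [X @ h[6:9] >= 1e-6,   # positive projective depth
                       h[8] == 1.0]          # fix scale (assumes H[2,2] != 0)
        for i in range(N):
            depth = X[i] @ h[6:9]
            num = cp.hstack([X[i] @ h[0:3] - dst[i, 0] * depth,
                             X[i] @ h[3:6] - dst[i, 1] * depth])
            constraints.append(cp.norm(num, 2) <= gamma * depth)
        prob = cp.Problem(cp.Minimize(0), constraints)
        prob.solve()                          # needs an SOCP-capable solver
        ok = prob.status in ("optimal", "optimal_inaccurate")
        return ok, (h.value if ok else None)

    # Bisection on gamma: the max-error objective is quasiconvex, so the
    # feasible set shrinks monotonically as gamma decreases.
    ok, best = feasible(gamma_hi)
    if not ok:
        return None
    lo, hi = 0.0, gamma_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        ok, h = feasible(mid)
        if ok:
            hi, best = mid, h
        else:
            lo = mid
    return best.reshape(3, 3)
```

Bisection is used because the maximum reprojection error is quasiconvex in the homography parameters: each sublevel set is an intersection of second-order cones, so the global minimax solution is found to tolerance without local minima, which is the property the L∞ formulation trades for over least-squares fitting.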