Claudio Ferrari, Stefano Berretti, Pietro Pala, Alberto Del Bimbo
Virtual Reality Intelligent Hardware, published 2022-08-01. DOI: 10.1016/j.vrih.2022.05.004
Measuring 3D face deformations from RGB images of expression rehabilitation exercises
Background
The accurate, quantitative analysis of 3D face deformation is a problem of growing interest in many applications. In particular, fitting a deformable 3D model of the face to a 2D target image so as to capture local and asymmetric deformations remains a challenge in the existing literature. A measure of such local deformations may be a relevant index for monitoring the rehabilitation exercises of patients suffering from Parkinson's or Alzheimer's disease, or of those recovering from a stroke.
Methods
In this paper, a complete framework is presented for constructing a 3D morphable shape model (3DMM) of the face and fitting it to a target RGB image. The model is distinctive in being built from localized deformation components. The fitting transformation maps from 3D to 2D and is guided by the correspondence between landmarks detected in the target image and those manually annotated on the average 3DMM. The fitting is also performed in two steps, so as to disentangle face deformations related to the identity of the target subject from those induced by facial actions.
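The landmark-guided 3D-to-2D fitting described above can be sketched as a regularized least-squares problem over the coefficients of the localized deformation components. The function below is an illustrative sketch only, not the authors' implementation: the orthographic projection, the ridge regularization, and all parameter names are assumptions.

```python
import numpy as np

def fit_3dmm_to_landmarks(mean_shape, components, landmarks_2d, landmark_idx,
                          scale=1.0, reg=0.1):
    """Estimate deformation coefficients so that the projected model
    landmarks match the 2D landmarks detected in the target image.

    mean_shape:   (N, 3) vertices of the average 3DMM
    components:   (K, N, 3) localized deformation components
    landmarks_2d: (L, 2) landmarks detected in the RGB image
    landmark_idx: indices of the L annotated vertices on the average model
    """
    # Orthographic projection: keep x and y, drop depth (an assumption).
    P = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])

    # Residual between image landmarks and projected mean-model landmarks.
    target = landmarks_2d - scale * (mean_shape[landmark_idx] @ P.T)

    # Each column of A is one component's effect on the projected landmarks.
    A = np.stack([scale * (components[k][landmark_idx] @ P.T).ravel()
                  for k in range(components.shape[0])], axis=1)

    # Ridge-regularized least squares keeps the coefficients small.
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]),
                             A.T @ target.ravel())

    # Deform the full model with the estimated coefficients.
    fitted = mean_shape + np.tensordot(coeffs, components, axes=1)
    return coeffs, fitted
```

Because the components are localized, each coefficient moves only a small face region, which is what lets a fit of this kind express the local and asymmetric deformations the paper targets.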
Results
The method was experimentally validated on the MICC-3D dataset, which includes 11 subjects. Each subject was imaged in a neutral pose and while performing 18 facial actions that deform the face in localized and asymmetric ways. For each acquisition, the 3DMM was fitted to the RGB frames at the apex of the facial action and at the neutral pose, and the extent of the deformation was computed from the two fits. The results indicate that the proposed approach can accurately capture face deformations, even when they are localized and asymmetric.
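Because both fits come from the same 3DMM topology, the apex and neutral meshes are in dense vertex correspondence, so the extent of a deformation can be measured as per-vertex displacement, and asymmetry as a left/right contrast. This is a hypothetical sketch of such a measure, not the paper's exact metric; the left/right vertex-index arguments are assumptions.

```python
import numpy as np

def deformation_extent(neutral_vertices, apex_vertices):
    """Per-vertex deformation magnitude between the fit at the neutral
    pose and the fit at the apex of the facial action. Both inputs are
    (N, 3) vertex arrays in dense correspondence (same 3DMM topology)."""
    return np.linalg.norm(apex_vertices - neutral_vertices, axis=1)

def asymmetry_index(disp, left_idx, right_idx):
    """Contrast mean displacement over mirrored left/right vertex sets;
    values near +/-1 flag strongly asymmetric facial actions, values
    near 0 symmetric ones."""
    left, right = disp[left_idx].mean(), disp[right_idx].mean()
    return (left - right) / max(left + right, 1e-9)
```

A summary statistic of the displacement field (mean or maximum over a region of interest) then gives the single deformation score that a rehabilitation exercise can be monitored against.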
Conclusion
The proposed framework demonstrated that it is possible to measure deformations of a reconstructed 3D face model to monitor facial actions performed in response to a set of targets. Interestingly, these results were obtained using only RGB targets, without the need for 3D scans captured with costly devices. This paves the way for the use of the proposed tool in remote medical rehabilitation monitoring.