Dorothea Obwegeser, Radu Timofte, Christoph Mayer, Michael M. Bornstein, Marc A. Schätzle, Raphael Patcas
{"title":"利用深度卷积神经网络为面部吸引力评分:在标准化图像上进行训练如何减少面部表情的偏差。","authors":"Dorothea Obwegeser, Radu Timofte, Christoph Mayer, Michael M. Bornstein, Marc A. Schätzle, Raphael Patcas","doi":"10.1111/ocr.12820","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Objective</h3>\n \n <p>In many medical disciplines, facial attractiveness is part of the diagnosis, yet its scoring might be confounded by facial expressions. The intent was to apply deep convolutional neural networks (CNN) to identify how facial expressions affect facial attractiveness and to explore whether a dedicated training of the CNN is able to reduce the bias of facial expressions.</p>\n </section>\n \n <section>\n \n <h3> Materials and Methods</h3>\n \n <p>Frontal facial images (<i>n</i> = 840) of 40 female participants (mean age 24.5 years) were taken adapting a neutral facial expression and the six universal facial expressions. Facial attractiveness was computed by means of a face detector, deep convolutional neural networks, standard support vector regression for facial beauty, visual regularized collaborative filtering and a regression technique for handling visual queries without rating history. CNN was first trained on random facial photographs from a dating website and then further trained on the Chicago Face Database (CFD) to increase its suitability to medical conditions. Both algorithms scored every image for attractiveness.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>Facial expressions affect facial attractiveness scores significantly. Scores from CNN additionally trained on CFD had less variability between the expressions (range 54.3–60.9 compared to range: 32.6–49.5) and less variance within the scores (<i>P</i> ≤ .05), but also caused a shift in the ranking of the expressions' facial attractiveness.</p>\n </section>\n \n <section>\n \n <h3> Conclusion</h3>\n \n <p>Facial expressions confound attractiveness scores. Training on norming images generated scores less susceptible to distortion, but more difficult to interpret. Scoring facial attractiveness based on CNN seems promising, but AI solutions must be developed on CNN trained to recognize facial expressions as distractors.</p>\n </section>\n </div>","PeriodicalId":19652,"journal":{"name":"Orthodontics & Craniofacial Research","volume":"27 S2","pages":"25-32"},"PeriodicalIF":2.4000,"publicationDate":"2024-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ocr.12820","citationCount":"0","resultStr":"{\"title\":\"Scoring facial attractiveness with deep convolutional neural networks: How training on standardized images reduces the bias of facial expressions\",\"authors\":\"Dorothea Obwegeser, Radu Timofte, Christoph Mayer, Michael M. Bornstein, Marc A. Schätzle, Raphael Patcas\",\"doi\":\"10.1111/ocr.12820\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Objective</h3>\\n \\n <p>In many medical disciplines, facial attractiveness is part of the diagnosis, yet its scoring might be confounded by facial expressions. 
The intent was to apply deep convolutional neural networks (CNN) to identify how facial expressions affect facial attractiveness and to explore whether a dedicated training of the CNN is able to reduce the bias of facial expressions.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Materials and Methods</h3>\\n \\n <p>Frontal facial images (<i>n</i> = 840) of 40 female participants (mean age 24.5 years) were taken adapting a neutral facial expression and the six universal facial expressions. Facial attractiveness was computed by means of a face detector, deep convolutional neural networks, standard support vector regression for facial beauty, visual regularized collaborative filtering and a regression technique for handling visual queries without rating history. CNN was first trained on random facial photographs from a dating website and then further trained on the Chicago Face Database (CFD) to increase its suitability to medical conditions. Both algorithms scored every image for attractiveness.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>Facial expressions affect facial attractiveness scores significantly. Scores from CNN additionally trained on CFD had less variability between the expressions (range 54.3–60.9 compared to range: 32.6–49.5) and less variance within the scores (<i>P</i> ≤ .05), but also caused a shift in the ranking of the expressions' facial attractiveness.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusion</h3>\\n \\n <p>Facial expressions confound attractiveness scores. Training on norming images generated scores less susceptible to distortion, but more difficult to interpret. Scoring facial attractiveness based on CNN seems promising, but AI solutions must be developed on CNN trained to recognize facial expressions as distractors.</p>\\n </section>\\n </div>\",\"PeriodicalId\":19652,\"journal\":{\"name\":\"Orthodontics & Craniofacial Research\",\"volume\":\"27 S2\",\"pages\":\"25-32\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-06-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ocr.12820\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Orthodontics & Craniofacial Research\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/ocr.12820\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Orthodontics & Craniofacial Research","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/ocr.12820","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Scoring facial attractiveness with deep convolutional neural networks: How training on standardized images reduces the bias of facial expressions
Objective
In many medical disciplines, facial attractiveness is part of the diagnosis, yet its scoring might be confounded by facial expressions. The intent was to apply deep convolutional neural networks (CNNs) to identify how facial expressions affect facial attractiveness and to explore whether dedicated training of a CNN can reduce the bias introduced by facial expressions.
Materials and Methods
Frontal facial images (n = 840) of 40 female participants (mean age 24.5 years) were taken while each participant adopted a neutral facial expression and the six universal facial expressions. Facial attractiveness was computed by means of a face detector, deep convolutional neural networks, standard support vector regression for facial beauty, visual regularized collaborative filtering, and a regression technique for handling visual queries without rating history. The CNN was first trained on random facial photographs from a dating website and then further trained on the Chicago Face Database (CFD) to increase its suitability for medical settings. Both the original and the additionally trained algorithm scored every image for attractiveness.
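The study's full pipeline (face detection, support vector regression for beauty, regularized collaborative filtering) is not reproduced here; as a minimal sketch only, the snippet below illustrates the core idea of a CNN that maps a face image to a single attractiveness score through a regression head. The backbone choice (ResNet-18), input size, and all function names are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' pipeline): a pretrained CNN backbone with a
# single-output regression head mapping a face crop to an attractiveness score.
# Backbone, input size and preprocessing are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image


class AttractivenessScorer(nn.Module):
    def __init__(self):
        super().__init__()
        # ImageNet-pretrained backbone; in the study the CNN was instead trained on
        # dating-site photographs and then fine-tuned on the Chicago Face Database.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # regression head

    def forward(self, x):
        return self.backbone(x).squeeze(1)  # one scalar score per image


preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def score_image(model: AttractivenessScorer, path: str) -> float:
    """Return an attractiveness score for one (ideally face-cropped) image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        return model(img).item()
```

In this framing, reducing expression bias amounts to fine-tuning such a scorer on standardized (norming) images so that the regression output depends less on the displayed expression.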
Results
Facial expressions significantly affect facial attractiveness scores. Scores from the CNN additionally trained on the CFD showed less variability between the expressions (range: 54.3–60.9 vs. 32.6–49.5) and less variance within the scores (P ≤ .05), but the additional training also shifted the ranking of the expressions' facial attractiveness.
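As a small illustrative sketch, the snippet below shows how the reported variability figures could be computed from per-image scores. The data are synthetic, and Levene's test is one plausible choice for the within-expression variance comparison; the abstract does not specify which statistical test was used.

```python
# Illustrative only: computing between-expression range and within-expression
# variance from per-image scores. The numbers are synthetic, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
expressions = ["neutral", "happy", "sad", "angry", "fearful", "disgusted", "surprised"]

# Hypothetical scores (n = 40 per expression) for the baseline CNN and the
# CFD-fine-tuned CNN; the fine-tuned model is given tighter, higher scores.
baseline = {e: rng.normal(m, 8.0, 40) for e, m in zip(expressions, np.linspace(32, 50, 7))}
finetuned = {e: rng.normal(m, 3.0, 40) for e, m in zip(expressions, np.linspace(54, 61, 7))}


def between_expression_range(scores: dict) -> float:
    """Spread of per-expression mean scores (the 'range' reported in the Results)."""
    means = np.array([scores[e].mean() for e in expressions])
    return float(means.max() - means.min())


print("baseline range :", round(between_expression_range(baseline), 1))
print("finetuned range:", round(between_expression_range(finetuned), 1))

# Within-expression variability, compared per expression with Levene's test.
for e in expressions:
    stat, p = stats.levene(baseline[e], finetuned[e])
    print(f"{e:>10}: SD {baseline[e].std():.1f} -> {finetuned[e].std():.1f} (Levene P = {p:.3f})")
```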
Conclusion
Facial expressions confound attractiveness scores. Training on norming images produced scores that were less susceptible to this distortion but more difficult to interpret. Scoring facial attractiveness with CNNs seems promising, but AI solutions must be built on CNNs trained to recognize facial expressions as distractors.
Journal Introduction:
Orthodontics & Craniofacial Research - Genes, Growth and Development is published to serve its readers as an international forum for the presentation and critical discussion of issues pertinent to the advancement of the specialty of orthodontics and the evidence-based knowledge of craniofacial growth and development. This forum is based on scientifically supported information, but also includes minority and conflicting opinions.
The objective of the journal is to facilitate effective communication between the research community and practicing clinicians. Original papers of high scientific quality that report the findings of clinical trials, clinical epidemiology, and novel therapeutic or diagnostic approaches are appropriate submissions. Similarly, we welcome papers in genetics, developmental biology, syndromology, surgery, speech and hearing, and other biomedical disciplines related to clinical orthodontics and normal and abnormal craniofacial growth and development. In addition to original and basic research, the journal publishes concise reviews, case reports of substantial value, invited essays, letters, and announcements.
The journal is published quarterly. The review of submitted papers will be coordinated by the editor and members of the editorial board. It is policy to review manuscripts within 3 to 4 weeks of receipt and to publish within 3 to 6 months of acceptance.