Head poses and grimaces: Challenges for automated face identification algorithms?
Petra Urbanova, Tomas Goldmann, Dominik Cerny, Martin Drahansky
Science & Justice, July 2024. DOI: 10.1016/j.scijus.2024.06.002
https://www.sciencedirect.com/science/article/pii/S1355030624000522
Abstract
In today’s biometric and commercial settings, state-of-the-art image processing relies solely on artificial intelligence and machine learning, which provide a high level of accuracy. However, these methods are deeply rooted in abstract, complex “black-box systems”. When they are applied to forensic image identification, concerns about transparency and accountability emerge. This study explores the impact of two challenging factors in automated facial identification: facial expressions and head poses. The sample comprised 3D faces with nine prototype expressions, collected from 41 participants (13 males, 28 females) of European descent aged 19.96 to 50.89 years. Pre-processing involved converting the 3D models to 2D color images (256 × 256 px). The probes comprised a set of 9 images per individual with a neutral expression and head poses varying by 5° in both the left-to-right (yaw) and up-and-down (pitch) directions. A second set of 3,610 images per individual, covering viewpoints in 5° increments from −45° to 45° for head movements combined with the different facial expressions, formed the targets. Pair-wise comparisons using ArcFace, a state-of-the-art face identification algorithm, yielded 54,615,690 dissimilarity scores. Results indicate that minor head deviations in the probes have minimal impact. However, performance diminished as the targets deviated from the frontal position. Side-to-side (yaw) movements were less influential than up-and-down (pitch) movements, with downward pitch showing less impact than upward pitch. The lowest accuracy was for upward pitch at 45°. Dissimilarity scores were consistently higher for males than for females across all studied factors, and performance diverged particularly in upward movements, starting at 15°. Among the tested facial expressions, happiness and contempt performed best, while disgust exhibited the lowest AUC values.
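The scale of the experiment follows directly from the sampling described above: 41 subjects × 9 probes and 41 subjects × 3,610 targets. The sketch below (Python) illustrates that arithmetic and how a pair-wise ArcFace dissimilarity score might be computed. It is a minimal illustration, not the authors' pipeline: the embedding function is a hypothetical placeholder for an ArcFace model, and cosine distance is an assumption, since the abstract does not state the exact metric used.

```python
import numpy as np

# Figures reported in the abstract.
N_SUBJECTS = 41            # 13 males, 28 females
PROBES_PER_SUBJECT = 9     # neutral expression, small yaw/pitch offsets
TARGETS_PER_SUBJECT = 3610 # 19 x 19 yaw/pitch grid (-45..45 deg, 5 deg steps) = 361
                           # viewpoints, repeated over the expression conditions

n_probes = N_SUBJECTS * PROBES_PER_SUBJECT    # 369
n_targets = N_SUBJECTS * TARGETS_PER_SUBJECT  # 148,010
print(f"{n_probes * n_targets:,} pair-wise comparisons")  # 54,615,690


def arcface_embedding(image: np.ndarray) -> np.ndarray:
    """Placeholder for an ArcFace feature extractor returning an
    L2-normalised embedding; replace with a real model (e.g. the
    insightface package)."""
    raise NotImplementedError


def dissimilarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine distance between two L2-normalised embeddings.
    Assumption: the study's exact dissimilarity metric is not given in
    the abstract; cosine distance is a common choice for ArcFace."""
    return float(1.0 - np.dot(emb_a, emb_b))
```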
Journal description
Science & Justice provides a forum to promote communication and publication of original articles, reviews and correspondence on subjects that spark debates within the Forensic Science Community and the criminal justice sector. The journal provides a medium whereby all aspects of applying science to legal proceedings can be debated and progressed. Science & Justice is published six times a year, and will be of interest primarily to practising forensic scientists and their colleagues in related fields. It is chiefly concerned with the publication of formal scientific papers, in keeping with its international learned status, but will not accept any article describing experimentation on animals which does not meet strict ethical standards.
To promote communication and informed debate within the Forensic Science Community and the criminal justice sector.
To promote the publication of learned and original research findings from all areas of the forensic sciences and, by so doing, to advance the profession.
To promote the publication of case-based material by way of case reviews.
To promote the publication of conference proceedings which are of interest to the forensic science community.
To provide a medium whereby all aspects of applying science to legal proceedings can be debated and progressed.
To appeal to all those with an interest in the forensic sciences.