The false promise of firearms examination validation studies: Lay controls, simplistic comparisons, and the failure to soundly measure misidentification rates
Authors: Richard E. Gutierrez JD, Emily J. Prokesch JD
Journal: Journal of Forensic Sciences (Q2, Medicine, Legal; impact factor 1.5)
DOI: 10.1111/1556-4029.15531
Published: 2024-04-29 (Journal Article)
Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/1556-4029.15531
Citations: 0
Abstract
Several studies have recently attempted to estimate practitioner accuracy when comparing fired ammunition. But whether this research has included sufficiently challenging comparisons dependent upon expertise for accurate conclusions regarding source remains largely unexplored in the literature. Control groups of lay people comprise one means of vetting this question, of assessing whether comparison samples were at least challenging enough to distinguish between experts and novices. This article therefore utilizes such a group, specifically 82 attorneys, as a post hoc control and juxtaposes their performance on a comparison set of cartridge case images from one commonly cited study (Duez et al. in J Forensic Sci. 2018;63:1069–1084) with that of the original participant pool of professionals. Despite lacking the kind of formalized training and experience common to the latter, our lay participants displayed an ability, generally, to distinguish between cartridge cases fired by the same versus different guns in the 327 comparisons they performed. And while their accuracy rates lagged substantially behind those of the original participant pool of professionals on same-source comparisons, their performance on different-source comparisons was essentially indistinguishable from that of trained examiners. This indicates that although the study we vetted may provide useful information about professional accuracy when performing same-source comparisons, it has little to offer in terms of measuring examiners' ability to distinguish between cartridge cases fired by different guns. If similar issues pervade other accuracy studies, then there is little reason to rely on the false-positive rates they have generated.
About the journal
The Journal of Forensic Sciences (JFS) is the official publication of the American Academy of Forensic Sciences (AAFS). It is devoted to the publication of original investigations, observations, scholarly inquiries and reviews in various branches of the forensic sciences. These include anthropology, criminalistics, digital and multimedia sciences, engineering and applied sciences, pathology/biology, psychiatry and behavioral science, jurisprudence, odontology, questioned documents, and toxicology. Similar submissions dealing with forensic aspects of other sciences and the social sciences are also accepted, as are submissions dealing with scientifically sound emerging science disciplines. The content and/or views expressed in the JFS are not necessarily those of the AAFS, the JFS Editorial Board, the organizations with which authors are affiliated, or the publisher of JFS. All manuscript submissions are double-blind peer-reviewed.