{"title":"Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part I: Statistical Issues.","authors":"Christoph Leonhard","doi":"10.1007/s11065-023-09601-7","DOIUrl":null,"url":null,"abstract":"<p><p>Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published aimed at developing and validating methods to diagnose malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering, including (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. In the Part II companion paper, three closely related research methodological issues will be examined. Statistical issues are explored through conceptual analysis, statistical simulations, and through reanalysis of findings from prior validation studies. Findings suggest extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) the use of logistic regression, and (c) adoption of the consensus model on the \"transparent reporting of multivariate prediction models for individual prognosis or diagnosis\" (TRIPOD) in the malingering literature.</p>","PeriodicalId":49754,"journal":{"name":"Neuropsychology Review","volume":null,"pages":null},"PeriodicalIF":5.4000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuropsychology Review","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1007/s11065-023-09601-7","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/8/24 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 1
Abstract
Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published aimed at developing and validating methods to diagnose malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering, including (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. In the Part II companion paper, three closely related research methodological issues will be examined. Statistical issues are explored through conceptual analysis, statistical simulations, and through reanalysis of findings from prior validation studies. Findings suggest extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) the use of logistic regression, and (c) adoption of the consensus model on the "transparent reporting of multivariate prediction models for individual prognosis or diagnosis" (TRIPOD) in the malingering literature.
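As an illustration of recommendation (a), reporting a full set of confusion table statistics with 95% confidence intervals, the following minimal Python sketch computes common classification accuracy statistics from a 2x2 confusion table using Wilson score intervals. The counts, function names, and choice of interval method are illustrative assumptions for this sketch, not details taken from the paper itself.

```python
import numpy as np
from scipy import stats

def wilson_ci(successes, total, alpha=0.05):
    """Wilson score confidence interval for a proportion (default 95%)."""
    if total == 0:
        return (0.0, 1.0)
    z = stats.norm.ppf(1 - alpha / 2)
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * np.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (centre - half, centre + half)

def confusion_table_stats(tp, fp, fn, tn):
    """Classification accuracy statistics with 95% CIs from a 2x2 table."""
    cells = {
        "sensitivity": (tp, tp + fn),   # true positives / all malingering cases
        "specificity": (tn, tn + fp),   # true negatives / all genuine cases
        "PPV":         (tp, tp + fp),   # positive predictive value
        "NPV":         (tn, tn + fn),   # negative predictive value
        "accuracy":    (tp + tn, tp + fp + fn + tn),
    }
    return {name: {"estimate": round(num / den, 3),
                   "95% CI": tuple(round(x, 3) for x in wilson_ci(num, den))}
            for name, (num, den) in cells.items()}

# Hypothetical counts from a validity-test diagnostic trial (assumed data)
print(confusion_table_stats(tp=40, fp=10, fn=15, tn=135))
```

Reporting the interval alongside each point estimate makes clear how much sampling error surrounds figures such as sensitivity and specificity, which is the substance of the paper's first recommendation.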
Journal Introduction:
Neuropsychology Review is a quarterly, refereed publication devoted to integrative review papers on substantive content areas in neuropsychology, with particular focus on populations with endogenous or acquired conditions affecting brain function and on translational research providing a mechanistic understanding of clinical problems. Publication of new data is not the purview of the journal. Articles are written by international specialists in the field, discussing such complex issues as distinctive functional features of central nervous system disease and injury; challenges in early diagnosis; the impact of genes and environment on function; risk factors for functional impairment; treatment efficacy of neuropsychological rehabilitation; the role of neuroimaging, neuroelectrophysiology, and other neurometric modalities in explicating function; clinical trial design; neuropsychological function and its substrates characteristic of normal development and aging; and neuropsychological dysfunction and its substrates in neurological, psychiatric, and medical conditions. The journal's broad perspective is supported by an outstanding, multidisciplinary editorial review board guided by the aim to provide students and professionals, clinicians and researchers with scholarly articles that critically and objectively summarize and synthesize the strengths and weaknesses in the literature and propose novel hypotheses, methods of analysis, and links to other fields.