Robert L Denney, Sundeep Thinda, Patrick M Finn, Rachel L Fazio, Michelle J Chen, Michael R Walsh
Development of a measure for assessing malingered incompetency in criminal proceedings: Denney Competency Related Test (D-CRT).
Journal of Clinical and Experimental Neuropsychology, March 2024. DOI: 10.1080/13803395.2024.2314731

Introduction: Experts frequently assess competency in criminal settings, where the rate of feigned cognitive deficit is demonstrably elevated. We describe the construction and validation of the Denney Competency Related Test (D-CRT) for assessing feigned incompetency of defendants in the criminal adjudicative setting. The D-CRT was expected to prove effective at identifying feigned incompetence based on its two-alternative forced-choice format and performance curve characteristics.

Method: Development and validation of the D-CRT occurred in phases. Items were developed to measure competency based upon expert review. Item analysis and adjustments were completed with 304 young teenage volunteers to obtain a proper spread of item difficulty in preparation for eventual performance curve analysis (PCA). Test-retest reliability was assessed with 44 adult community volunteers. Validation included an analog simulation design with 101 jail detainees, using the MacArthur Competency Assessment Test-Criminal Adjudication and the Word Memory Test as criterion measures. Effects of racial/ethnic demographic differences were examined in a separate study of 208 undergraduate volunteers. D-CRT specificity was examined with 46 elderly clinic referrals diagnosed with mild cognitive impairment or dementia.

Results: Item development, adjustment, and repeat analysis resulted in item probabilities evenly spread from .28 to 1.0. Test-retest reliability was good (.83), and internal consistency was excellent (KR-20 > .91). The D-CRT demonstrated convergent validity with regard to measuring competency-related information as well as malingering. The test successfully differentiated between jail inmates asked to perform their best and inmates asked to simulate incompetency (AUC = .945). No statistically significant differences in performance were found across racial/ethnic backgrounds. D-CRT specificity remained excellent among elderly clinic referrals with significant cognitive compromise at the recommended total-score cutoff.

Conclusions: The D-CRT is an effective measure of feigned criminal incompetency in the context of potential cognitive deficiency, and PCA assists in the determination. Additional validation using known-groups designs with various mental health-related conditions is needed.
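On a two-alternative forced-choice test like the one described above, a purely guessing examinee scores near 50%, so scoring significantly below chance is one classic sign of deliberate wrong answering. A minimal sketch of that binomial logic (hypothetical item count and score, not the D-CRT scoring procedure):

```python
from math import comb

def below_chance_p(correct: int, items: int, p_chance: float = 0.5) -> float:
    """One-tailed binomial probability of getting `correct` or fewer items
    right on `items` two-alternative forced-choice trials by guessing alone."""
    return sum(comb(items, k) * p_chance**k * (1 - p_chance)**(items - k)
               for k in range(correct + 1))

# Hypothetical example: 18 of 60 correct, well below the ~30 expected by chance.
p = below_chance_p(18, 60)
print(f"P(score <= 18 | guessing) = {p:.4f}")  # a very small p suggests intentional failure
```

This captures only the below-chance criterion; performance curve analysis, as used in the study, additionally compares accuracy against item difficulty.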
Kaley Boress, Owen Gaasedelen, Jeong Hye Kim, Michael R Basso, Douglas M Whiteside
Examination of the relationship between symptom and performance validity measures across referral subtypes.
Journal of Clinical and Experimental Neuropsychology, March 2024. DOI: 10.1080/13803395.2023.2261633

Introduction: The extent to which performance validity tests (PVTs) and symptom validity tests (SVTs) measure separate constructs is unclear. Prior research using the Minnesota Multiphasic Personality Inventory (MMPI-2 and MMPI-2-RF) suggested that PVTs and SVTs are separate but related constructs. However, the relationship between Personality Assessment Inventory (PAI) SVTs and PVTs has not been explored. This study aimed to replicate previous MMPI research using the PAI, exploring the relationship between PVTs and overreporting SVTs across three subsamples: neurodevelopmental (attention-deficit/hyperactivity disorder (ADHD)/learning disorder), psychiatric, and mild traumatic brain injury (mTBI).

Methods: Participants were 561 consecutive referrals who completed the Test of Memory Malingering (TOMM) and the PAI. Three subgroups were created based on referral question. The relationship between PAI SVTs and the PVT was evaluated through multiple regression analysis.

Results: The relationship between PAI symptom-overreporting SVTs (Negative Impression Management [NIM], Malingering Index [MAL], and Cognitive Bias Scale [CBS]) and PVT performance varied by referral subgroup. Specifically, overreporting on the CBS, but not the NIM or MAL, significantly predicted poorer PVT performance in the full sample and the mTBI subsample. In contrast, none of the overreporting SVTs significantly predicted PVT performance in the ADHD/learning disorder subsample, whereas all SVTs predicted PVT performance in the psychiatric subsample.

Conclusions: The results partially replicated prior research comparing SVTs and PVTs and suggest that the constructs measured by SVTs and PVTs vary by population. The results support the necessity of both PVTs and SVTs in clinical neuropsychological practice.
Robert D Shura, Alison Sapp, Paul B Ingram, Timothy W Brearly
Evaluation of telehealth administration of MMPI symptom validity scales.
Journal of Clinical and Experimental Neuropsychology, March 2024. DOI: 10.1080/13803395.2024.2314734

Introduction: Telehealth assessment (TA) is a quickly emerging practice, offered with increasing frequency across many clinical contexts. TA is well received by most patients, and numerous guidelines and training opportunities support effective telehealth practice. Although recommended practices are extensive, these guidelines have rarely been evaluated empirically, particularly for personality measures. While existing research is limited, it generally supports the idea that TA and in-person assessment (IA) produce broadly equivalent test scores. The MMPI-3, a recently released and widely used personality and psychopathology measure, has been the subject of several such experimental or student-based (non-clinical) studies; however, no study to date has evaluated these trends within a clinical sample. This study empirically tested for differences between TA and IA scores on the MMPI-3 validity scales when recommended administration procedures were followed.

Method: Data were drawn from a retrospective chart review. Veterans (n = 550) who underwent psychological assessment in a Veterans Affairs Medical Center ADHD evaluation clinic were compared across in-person and telehealth administration modalities on the MMPI-2-RF and MMPI-3. Groups were compared using t tests, chi-square tests, and base rates.

Results: There were minimal differences in elevation rates or mean scores across modalities, supporting the use of TA.

Conclusions: The findings support use of the MMPI via TA in ADHD evaluations, with veterans, and in neuropsychological evaluation settings more generally. Observed elevation rates and mean scores differed notably from those seen in other VA clinics sampled nationally, which is an area for future investigation.
John-Christopher A Finley, Brian M Cerny, Julia M Brooks, Maximillian A Obolsky, Aya Haneda, Gabriel P Ovsiew, Devin M Ulrich, Zachary J Resch, Jason R Soble
Cross-validating the Clinical Assessment of Attention Deficit-Adult symptom validity scales for assessment of attention-deficit/hyperactivity disorder in adults.
Journal of Clinical and Experimental Neuropsychology, March 2024. DOI: 10.1080/13803395.2023.2283940

Introduction: The Clinical Assessment of Attention Deficit-Adult is among the few questionnaires that offer validity indicators (i.e., Negative Impression [NI], Infrequency [IF], and Positive Impression [PI]) for classifying underreporting and overreporting of attention-deficit/hyperactivity disorder (ADHD) symptoms. This is the first study to cross-validate the NI, IF, and PI scales in a sample of adults with suspected or known ADHD.

Method: Univariate and multivariate analyses examined the independent and combined value of NI, IF, and PI scores in predicting invalid symptom reporting and neurocognitive performance in 543 adults undergoing ADHD evaluation.

Results: The NI scale demonstrated better classification accuracy than the IF scale in discriminating patients with and without valid scores on measures of overreporting. Only NI scores significantly predicted validity status when used in combination with IF scores. Optimal cut-scores for the NI (≤51; 30% sensitivity / 90% specificity) and IF (≥4; 18% sensitivity / 90% specificity) scales were consistent with those reported in the original manual; however, these indicators poorly discriminated patients with invalid versus valid neurocognitive performance. The PI scale demonstrated acceptable classification accuracy in discriminating patients with invalid and valid scores on measures of underreporting, albeit with an optimal cut-score (≥27; 36% sensitivity / 90% specificity) lower than that described in the manual.

Conclusion: Findings provide preliminary evidence of construct validity for these scales as embedded validity indicators of symptom overreporting and underreporting. However, these scales should not be used to guide clinical judgment regarding the validity of neurocognitive test performance.
Michael R Basso, Daniel Guzman, Jordan Hoffmeister, Ryan Mulligan, Douglas M Whiteside, Dennis Combs
Use of perceptual memory as a performance validity indicator: initial validation with simulated mild traumatic brain injury.
Journal of Clinical and Experimental Neuropsychology, February 2024. DOI: 10.1080/13803395.2024.2314991

Introduction: Many commonly employed performance validity tests (PVTs) are several decades old and vulnerable to compromise, creating a need for novel instruments. Because implicit/non-declarative memory may be robust to brain damage, tasks that rely on such memory may serve as effective PVTs. Using a simulation design, this experiment evaluated whether novel tasks relying on perceptual memory hold promise as PVTs.

Method: Sixty healthy participants were instructed to simulate symptoms of mild traumatic brain injury (TBI) and were compared with 20 honestly responding individuals. Simulator groups received varying levels of information concerning TBI symptoms, yielding naïve, sophisticated, and test-coached groups. The Word Memory Test, Test of Memory Malingering, and California Verbal Learning Test-II Forced Choice Recognition Test were administered. To assess perceptual memory, selected images from the Gollin Incomplete Figures and Mooney Closure Test were presented as visual perception tasks. After brief delays, memory for the images was assessed.

Results: No group differences emerged on the perception trials of the Gollin and Mooney figures, but simulators remembered fewer images than honest responders. Simulator groups differed on the standard PVTs but performed equivalently on the Gollin and Mooney figures, implying robustness to coaching. At a criterion of 90% specificity, the Gollin and Mooney figures achieved at least 90% sensitivity, comparing favorably with the standard PVTs.

Conclusions: The Gollin and Mooney figures hold promise as novel PVTs. As perceptual memory tests, they may be relatively robust to brain damage, but future research involving clinical samples is necessary to substantiate this assertion.
Martin L Rohling, George J Demakis, Jennifer Langhinrichsen-Rohling
Lowered cutoffs to reduce false positives on the Word Memory Test.
Journal of Clinical and Experimental Neuropsychology, February 2024. DOI: 10.1080/13803395.2024.2314736

Objective: To adjust the decision criterion for the Word Memory Test (WMT; Green, 2003) to minimize the frequency of false positives.

Method: Archival data were combined into a database (n = 3,210) to examine the best cut score for the WMT. We compared results based on the original scoring rules with results based on adjusted scoring rules, using a criterion composed of 16 performance validity tests (PVTs) exclusive of the WMT. Cutoffs based on peer-reviewed publications and test manuals were used. The resulting PVT composite was considered the best estimate of validity status. We targeted a specificity of .90, with a false-positive rate of less than .10 across multiple samples.

Results: Each examinee was administered the WMT as well as an average of 5.5 (SD = 2.5) other PVTs. Under the original WMT scoring rules, 31.8% of examinees failed. Using a single failure on the criterion PVT composite (C-PVT), the base rate of failure was 45.9%; requiring two or more C-PVT failures dropped the failure rate to 22.8%. A contingency analysis (i.e., χ²) of the two-failure C-PVT model against the original WMT rules yielded only 65.3% agreement. However, using our adjusted rules for the WMT, which rely on only the IR and DR subtest scores with a cutoff of 77.5%, agreement with the C-PVT criterion equaled 80.8%, an improvement of 12.1%. The adjustment produced a 49.2% reduction in false positives while preserving a sensitivity of 53.6%. Specificity under the new rules was 88.8%, for a false-positive rate of 11.2%.

Conclusions: Results supported lowering the cut score for correct responding from 82.5% to 77.5%. We also recommend discontinuing use of the Consistency subtest score in determining WMT failure.
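Classification statistics like those reported above follow directly from a 2x2 confusion matrix of test result against criterion validity status. A generic sketch of the computation (hypothetical percent-correct scores and criterion flags, not WMT data):

```python
def classification_stats(scores, invalid_flags, cutoff):
    """Treat score <= cutoff as a 'fail' (a positive sign of invalidity) and
    compare against a criterion validity flag for each examinee."""
    tp = sum(s <= cutoff and f for s, f in zip(scores, invalid_flags))      # hits
    fn = sum(s > cutoff and f for s, f in zip(scores, invalid_flags))       # misses
    fp = sum(s <= cutoff and not f for s, f in zip(scores, invalid_flags))  # false positives
    tn = sum(s > cutoff and not f for s, f in zip(scores, invalid_flags))   # correct passes
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, 1 - specificity  # last value = false-positive rate

# Hypothetical examinees: percent-correct scores with criterion validity status
scores = [95, 90, 72, 60, 85, 55, 99, 70]
invalid = [False, False, True, True, False, True, False, False]
sens, spec, fpr = classification_stats(scores, invalid, cutoff=77.5)
```

Lowering the cutoff moves cases from "fail" to "pass", which is exactly the specificity-for-sensitivity trade the study quantifies.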
Ashley M Peak, Janice C Marceaux, Cammy Chicota-Carroll, Jason R Soble
Cross-validation of the Trail Making Test as a non-memory-based embedded performance validity test among veterans with and without cognitive impairment.
Journal of Clinical and Experimental Neuropsychology, February 2024. DOI: 10.1080/13803395.2023.2287784

Objective: This study cross-validated multiple Trail Making Test (TMT) Parts A and B scores as non-memory-based embedded performance validity tests (PVTs) for detecting invalid neuropsychological performance among veterans with and without cognitive impairment.

Method: Data were collected from a demographically and diagnostically diverse mixed clinical sample of 100 veterans undergoing outpatient neuropsychological evaluation at a Southwestern VA Medical Center. As part of a larger neuropsychological battery, all veterans completed TMT A and B and four independent criterion PVTs, which were used to classify veterans into valid (n = 75) and invalid (n = 25) groups. Within the valid group, 47% (n = 35) were cognitively impaired.

Results: In the overall sample, all embedded PVTs derived from TMT A and B raw and demographically corrected T-scores significantly differed between validity groups (ηp² = .21-.31), with significant areas under the curve (AUCs) of .72-.78 and 32-48% sensitivity (≥91% specificity) at optimal cut-scores. When subdivided by cognitive impairment status (i.e., valid-unimpaired vs. invalid; valid-impaired vs. invalid), all TMT scores yielded significant AUCs of .80-.88 and 56-72% sensitivity (≥90% specificity) at optimal cut-scores. Among veterans with cognitive impairment, neither TMT A nor B raw scores significantly differentiated the invalid group from the valid-cognitively impaired group; demographically corrected T-scores did differentiate the groups, but with poor classification accuracy (AUCs = .66-.68) and reduced sensitivity of 28-44% (≥91% specificity).

Conclusions: Embedded PVTs derived from TMT Parts A and B raw and T-scores accurately differentiated valid from invalid neuropsychological performance among veterans without cognitive impairment; the demographically corrected T-scores were generally more robust and more consistent with prior studies than raw scores. By contrast, TMT embedded PVTs had poor accuracy and low sensitivity among veterans with cognitive impairment, suggesting limited utility as PVTs in populations with cognitive dysfunction.
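The AUC values reported above summarize how well a continuous score separates two groups. It can be computed nonparametrically as the Mann-Whitney probability that a randomly chosen invalid case scores worse than a randomly chosen valid case. A self-contained sketch with made-up completion times (longer = worse; these are not the study's data):

```python
from itertools import product

def auc(valid_scores, invalid_scores):
    """Probability that an invalid case scores higher (i.e., worse, for a
    timed test like TMT) than a valid case; ties count as half."""
    pairs = list(product(invalid_scores, valid_scores))
    wins = sum(1.0 if i > v else 0.5 if i == v else 0.0 for i, v in pairs)
    return wins / len(pairs)

# Hypothetical TMT-A completion times in seconds
valid = [25, 30, 28, 35, 40]
invalid = [55, 60, 38, 70]
print(round(auc(valid, invalid), 3))  # → 0.95
```

An AUC of .5 means the score is uninformative and 1.0 means perfect separation, which is why the .66-.68 values in the impaired subgroup indicate poor classification accuracy.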
Sarah D Patrick, Lisa J Rapport, Robin A Hanks, Robert J Kanser
Detecting feigned cognitive impairment using pupillometry on the Warrington Recognition Memory Test for Words.
Journal of Clinical and Experimental Neuropsychology, February 2024. DOI: 10.1080/13803395.2024.2312624
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11087194/pdf/

Objective: Pupillometry provides information about physiological and psychological processes related to cognitive load, familiarity, and deception, and it lies outside conscious control. This study examined pupillary dilation patterns during a performance validity test (PVT) among adults with true and feigned impairment from traumatic brain injury (TBI).

Participants and methods: Participants were 214 adults in three groups: adults with bona fide moderate to severe TBI (TBI; n = 51), healthy comparisons instructed to perform their best (HC; n = 72), and healthy adults instructed and incentivized to simulate cognitive impairment due to TBI (SIM; n = 91). The Recognition Memory Test (RMT) was administered within a comprehensive neuropsychological battery. Three pupillary indices were evaluated. Two pure pupil dilation (PD) indices assessed a simple measure of baseline arousal (PD-Baseline) and a nuanced measure of dynamic engagement (PD-Range). A pupillary-behavioral index, dilation-response inconsistency (DRI), captured the frequency with which examinees displayed a pupillary familiarity response to the correct answer but selected the unfamiliar stimulus (the incorrect answer).

Results: All three indices differed significantly among the groups, with medium-to-large effect sizes. PD-Baseline appeared sensitive to oculomotor dysfunction due to TBI: adults with TBI displayed significantly lower chronic arousal than the two groups of healthy adults (SIM, HC). Dynamic engagement (PD-Range) showed a hierarchical pattern, with SIM more dynamically engaged than TBI, followed by HC. As predicted, simulators engaged in DRI significantly more frequently than the other groups. Moreover, subgroup analyses indicated that DRI differed significantly between simulators who scored in the invalid range on the RMT (n = 45) and adults with genuine TBI who scored invalidly (n = 15).

Conclusions: The findings support continued research on the application of pupillometry to performance validity assessment and highlight the promise of biometric indices in multimethod assessments of performance validity.
John H Denning, Michael David Horner
The impact of race and other demographic factors on the false positive rates of five embedded Performance Validity Tests (PVTs) in a Veteran sample.
Journal of Clinical and Experimental Neuropsychology, February 2024. DOI: 10.1080/13803395.2024.2314737

Introduction: It is common to use race-based normative adjustments to maintain accuracy when interpreting cognitive test results during neuropsychological assessment. However, embedded performance validity tests (PVTs) do not adjust for these racial differences and may produce elevated false-positive rates in African American/Black (AA) samples compared with European American/White (EA) samples.

Methods: Veterans without Major Neurocognitive Disorder completed an outpatient neuropsychological assessment and were deemed to be performing validly (i.e., passing both the Test of Memory Malingering Trial 1 (TOMM1) and the Medical Symptom Validity Test (MSVT); n = 531, EA = 473, AA = 58). Five embedded PVTs were administered to all patients: WAIS-III/IV Processing Speed Index (PSI), Brief Visuospatial Memory Test-Revised Discrimination Index (BVMT-R), TMT-A (seconds), California Verbal Learning Test-II (CVLT-II) Forced Choice, and WAIS-III/IV Digit Span scaled score. Individual PVT false-positive rates, as well as the rate of failing two or more embedded PVTs, were calculated.

Results: Failure rates on two embedded PVTs (PSI, TMT-A), and the total number of PVTs failed, were higher in the AA sample. The PSI and TMT-A remained significantly affected by race after accounting for age, education, sex, and presence of Mild Neurocognitive Disorder. PVT failure rates exceeded 10% (and were considered false positives) in both groups (AA: PSI, TMT-A, and BVMT-R, 12-24%; EA: BVMT-R, 17%). Failing two or more PVTs (AA = 9%, EA = 4%) was affected by education and Mild Neurocognitive Disorder but not by race.

Conclusions: Individual (timed) PVTs showed higher false-positive rates in the AA sample even after accounting for demographic factors and diagnosis of Mild Neurocognitive Disorder. Requiring failure on two or more embedded PVTs reduced false-positive rates to acceptable levels (10% or less) in both groups and was not significantly influenced by race.
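The protective effect of requiring two or more failures can be illustrated with simple binomial arithmetic. The sketch below assumes independent PVTs, an assumption real batteries violate (failures correlate), so it only shows the direction of the effect:

```python
from math import comb

def p_at_least_k_failures(n_tests: int, k: int, fp_rate: float) -> float:
    """Chance that a genuinely valid examinee fails at least k of n
    independent PVTs, each with the given per-test false-positive rate."""
    return sum(comb(n_tests, j) * fp_rate**j * (1 - fp_rate)**(n_tests - j)
               for j in range(k, n_tests + 1))

# Five PVTs at a 10% false-positive rate each (independence assumed):
print(round(p_at_least_k_failures(5, 1, 0.10), 3))  # → 0.41
print(round(p_at_least_k_failures(5, 2, 0.10), 3))  # → 0.081
```

Even under the worst-case independence assumption, moving from a one-failure to a two-failure rule cuts the aggregate false-positive rate from roughly 41% to 8%, consistent with the study's observed 4-9% rates.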
Savanna M Tierney, Anastasia Matchanova, Brian I Miller, Maya Troyanskaya, Jennifer Romesser, Anita Sim, Nicholas J Pastorek
Cognitive "success" in the setting of performance validity test failure.
Journal of Clinical and Experimental Neuropsychology, February 2024. DOI: 10.1080/13803395.2023.2244161

Background: Although studies have shown unique variance contributions from performance invalidity, cognitive data are difficult to interpret in the setting of performance validity test (PVT) failure. The current study examined cognitive outcomes in this context.

Method: Two hundred twenty-two veterans with a history of mild traumatic brain injury referred for clinical evaluation completed cognitive and performance validity measures. Standardized scores were characterized as within normal limits (≥16th normative percentile) or below normal limits (<16th percentile). Cognitive outcomes were examined across four commonly used PVTs. Self-reported employment and student status served as indicators of "productivity" to assess potential functional differences related to lower cognitive performance.

Results: Among participants who performed in the invalid range on Test of Memory Malingering Trial 1, the Word Memory Test, the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span age-corrected scaled score, or the California Verbal Learning Test-Second Edition Forced Choice index, 16-88% earned broadly within-normal-limits scores across cognitive testing. Depending on which PVT was applied, the average number of cognitive performances below the 16th percentile ranged from 5 to 7 of 14 tasks. There were no differences in the total number of below-normal-limits performances on cognitive measures between "productive" and "non-productive" participants (T = 1.65, p = 1.00).

Conclusions: The range of within-normal-limits cognitive performance in the context of failed PVTs varies greatly. Importantly, neurocognitive data may still provide practically useful information about cognitive abilities despite poor PVT outcomes. Further, given that rates of below-normal-limits cognitive performance did not differ between "productivity" groups, the results have implications for functional abilities and recommendations in clinical settings.