Assessment. Pub Date: 2026-05-08. DOI: 10.1177/10731911261442022
Paul T Cirino, Cristina E Boada, Cassidy M Salentine
The Executive Function RPM Parent Rating Scale for Children: Purpose and Properties.
The goal of this study is to provide the psychometric properties of a new measure of executive function (EF), based on the representation, planning, monitoring/execution (RPM) process framework. Two versions of the EF RPM scale are contrasted with a different rating scale of EF, and further related to relevant associated outcomes. Results show positive distributional, reliability, and validity evidence for the EF RPM scale. Confirmatory factor analysis showed that items aligned with their assigned factor, and the factors were strongly related to one another, as expected with a unitary EF process. The two versions of the EF RPM scale showed good convergent validity with another EF scale and expected relations with functional outcomes and demographic characteristics. The EF RPM scale also showed stronger discriminant validity than a different EF scale with respect to non-EF behavioral symptomatology. Future work needs to gather additional evidence for these scales in different contexts and settings (e.g., in specific populations, in clinics) and in relation to other outcomes (e.g., achievement, daily life outcomes, cognitive variables).
Assessment. Pub Date: 2026-04-24. DOI: 10.1177/10731911261436687
Brett T Litz, Hannah E Walker, Luke Rusowicz-Orazem, Zoe R Styler, Elliot Fielstein, Benjamin Darnell, Keith G Meador, Jason A Nieuwsma
Establishing Clinically Significant Change Benchmarks for the Moral Injury Outcome Scale in VA Behavioral Health Settings.
This study aimed to establish benchmarks for clinically significant change on the Moral Injury Outcome Scale (MIOS) using national data from Veterans treated in U.S. Department of Veterans Affairs (VA) behavioral health settings. We analyzed archival electronic health record data from 2,521 Veterans administered the MIOS between July 2022 and March 2025. A subset of 361 Veterans who completed at least two MIOS administrations within 4 months constituted the episode-of-care cohort. Reliable change indices (RCIs) and functional recovery thresholds were calculated using the Jacobson and Truax method. A change of 13 points on the MIOS indicated clinically significant improvement, and the critical endpoint score suggesting functional recovery was ≤9. Most Veterans were unchanged (81%), with 11.9% showing reliable improvement, 4.2% probable recovery, and 2.8% deterioration. In the larger cohort, nearly half met the criterion for probable moral injury. MIOS administration was most common in general mental health and post-traumatic stress disorder (PTSD) specialty care clinics. These initial findings provide the first clinically significant change benchmarks for the MIOS, supporting its integration into measurement-based care and routine outcome monitoring for moral injury in Veterans.
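The Jacobson and Truax reliable-change method cited above reduces to a short calculation. Below is a minimal sketch in Python; the baseline standard deviation and test-retest reliability values are hypothetical placeholders for illustration, not figures reported by the study:

```python
import math

def reliable_change_index(pre, post, sd_baseline, reliability):
    """Jacobson & Truax (1991) reliable change index.

    RCI = (post - pre) / S_diff, where S_diff = sqrt(2 * SE^2)
    and SE = SD * sqrt(1 - r_xx). |RCI| > 1.96 indicates change
    unlikely to be due to measurement error alone.
    """
    se = sd_baseline * math.sqrt(1.0 - reliability)
    s_diff = math.sqrt(2.0 * se ** 2)
    return (post - pre) / s_diff

# Hypothetical parameters (not from the study): baseline SD = 10,
# test-retest reliability r_xx = .85.
rci = reliable_change_index(pre=30, post=15, sd_baseline=10.0, reliability=0.85)
print(round(rci, 2))  # about -2.74, past the 1.96 criterion
```

With these placeholder parameters, a 15-point drop exceeds the reliable-change criterion; the study's 13-point benchmark comes from applying the same logic to the MIOS's actual reliability and variance.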
Assessment. Pub Date: 2026-04-21. DOI: 10.1177/10731911261436691
GuiQuan Huo, Ruixue Zhao, Jiayi Li, Jingjie Wang, Jiameng Wang
Development of a Heart Rate Variability-Based Predictive Model for Depressive Symptoms in Chinese University Students.
Current depression assessment tools are limited by subjectivity and potential bias. This study investigated the relationship between heart rate variability (HRV), body composition, and self-reported depressive symptoms to develop an objective depression screening model for Chinese university students. Data from 2,094 students, including demographics, body composition, Self-Rating Depression Scale (SDS) scores, and HRV indicators, were analyzed using SPSS 26.0 to construct a predictive regression model, with accuracy validated in GraphPad Prism 9.4.1. Subsequently, a subgroup of 359 students with depressive symptoms was screened using the model. The results showed no significant differences between the predicted and actual SDS scores (p > .05), with over 91% of the predicted scores falling within the 95% confidence interval of the actual scores. The strong correlation between HRV and SDS scores supports the use of HRV as a reliable indicator for depression screening. Overall, the model demonstrated a prediction accuracy of 92.61%, highlighting its potential for objective mental health assessment among university populations.
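The pipeline described above, regressing SDS scores on HRV and body-composition indicators and then comparing predicted with actual scores, can be sketched with ordinary least squares. This is a minimal analogue on synthetic data; the predictor set, coefficients, and noise level are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study's predictors (hypothetical):
# two HRV indices and one body-composition variable, standardized.
n = 200
X = rng.normal(size=(n, 3))
true_beta = np.array([-4.0, -2.5, 1.5])   # e.g., lower HRV -> higher SDS
sds_actual = 45 + X @ true_beta + rng.normal(scale=3.0, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, sds_actual, rcond=None)
sds_predicted = A @ beta_hat

# Agreement check analogous to the study's predicted-vs-actual comparison.
residual_sd = float((sds_actual - sds_predicted).std())
print(residual_sd < 3.5)
```

In-sample residuals stay near the simulated noise level, which is the kind of predicted-versus-actual agreement the abstract reports (no significant difference, >91% within the confidence interval).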
Assessment. Pub Date: 2026-04-16. DOI: 10.1177/10731911261437561
Bassam Khoury, Rodrigo C Vergara
Suffering Experiences Questionnaires: Scales Development and Validation.
Suffering is a universal, subjective experience distinct from symptoms or diagnoses. Most existing measures reduce it to single items or symptom checklists, and even recent advances fail to capture its full breadth or the dimension of overcoming suffering. The Suffering Experiences Questionnaire (SEQ) provides a comprehensive, theory-agnostic instrument assessing both suffering and overcoming across contexts. The SEQ was developed in four steps. First, an initial pool of over 100 items was generated from clinical experience. Second, graduate students reduced it to 91 items. Third, an expert panel of 14 scholars and clinicians refined it to 64 items divided between suffering and overcoming. Finally, three graduate students reviewed the items for readability. Two validation studies assessed psychometric properties and utility. Study 1 reduced the SEQ to 20 items and established its structure and internal consistency, along with a 10-item short form (SEQ-10). Study 2 confirmed the structure, reliability, and convergent and concurrent validity. Both versions converged with and predicted symptomatology and well-being outcomes and were meaningful from both Western and Eastern perspectives. The SEQ and SEQ-10 are the first non-symptom-based measures integrating suffering and overcoming. Both demonstrate strong psychometric properties and clinical utility. Strengths, limitations, implications, and future directions are discussed.
Assessment. Pub Date: 2026-04-16. DOI: 10.1177/10731911261436688
Kevin G Stephenson, Soo Youn Kim, Moira J Wendel, Eric M Butter, Eric A Youngstrom
Construct Validity of IQ in the Presence of Scatter: A Measurement Invariance Approach.
Practitioners who administer IQ tests are often taught that patterns of scatter among IQ subtests can lead to noncohesive full-scale IQ (FSIQ) scores. This notion persists despite a lack of empirical support and despite evidence for the robustness of FSIQ in the presence of scatter. To date, however, no study has directly tested measurement invariance across groups varying in the amount of subtest scatter. We used multigroup confirmatory factor analysis to test the measurement invariance of the Stanford-Binet Intelligence Scales, Fifth Edition (SB-5) for individuals with high vs. low scatter in a clinical sample of 5,352 individuals (ages 2-22) referred for comprehensive evaluations due to concerns for neurodevelopmental conditions. There was overall evidence for invariance in our sample, suggesting a limited impact of subtest scatter on the construct validity of IQ as measured by the SB-5. In addition, the explained variance of specific factors was low, even for individuals with high scatter among subtests. This study provides additional support for the robustness of FSIQ, even in cases of high scatter.
Assessment. Pub Date: 2026-04-06. DOI: 10.1177/10731911261423124
Michael D Barnett, Harrison G Boynton
Prospective Memory for Virtual Cooking Tasks in Relation to Adaptive Functioning.
Prospective memory (PM), the ability to remember to carry out future intentions, may be critical for everyday independence. The extent to which performance on virtual reality (VR) measures of prospective memory correlates with effective everyday functioning (i.e., adaptive functioning) remains debated. This study examined the association between a function-led measure of prospective memory in a VR environment that closely resembles everyday tasks (the Virtual Kitchen Protocol for Prospective Memory; VKP-PM) and commonly used clinic-based measures of adaptive functioning among clinic-referred adults. A clinical sample of adults (N = 115; mean age = 70.1) was administered the VKP-PM, three self-report measures (Instrumental Activities of Daily Living Scale [IADLS], Functional Activities Questionnaire [FAQ], Measurement of Everyday Cognition [ECOG]), and a performance-based measure of adaptive functioning (Texas Functional Living Scale [TFLS]). A subset (n = 50-56) had an informant who completed an identical version of each self-report measure (see Methods limitations). Multivariate general linear models tested the association between the VKP-PM and the four dependent variables (representing adaptive functioning), adjusting for demographic characteristics (age and gender), depression (Geriatric Depression Scale-15; GDS), and VR-related comfort. Primary analyses revealed that higher VKP-PM scores were associated with better performance-based adaptive functioning, beyond covariates. Associations between the VKP-PM and self-reported adaptive functioning were weaker and less consistent, with only the IADLS showing a significant association. Post hoc moderation analyses indicated that the GDS exerted broad negative effects on self-report measures. The informant model showed stronger convergence with the VKP-PM. Across analytic frameworks, the VKP-PM, a function-led, ecologically oriented VR measure of prospective memory, was reliably associated with adaptive functioning: most strongly with performance-based measures, more weakly and inconsistently with self-reported measures, and moderately with informant-reported measures. These findings support the ecological veridicality of the VKP-PM assessment and underscore the influence of affective and methodological factors on self-perceived functioning, emphasizing the need for a multimodal approach to assessing adaptive functioning in clinical samples referred for neuropsychological evaluation.
Assessment (pp. 458-470). Pub Date: 2026-04-01 (Epub 2025-03-29). DOI: 10.1177/10731911251327255
Martin Hochheimer, Justin C Strickland, Jennifer D Ellis, Jill A Rabinowitz, J Gregory Hobelmann, Andrew S Huhn
Normative Values and Psychometric Properties of the Penn State Worry Questionnaire in Substance Use Disorder Treatment Population.
This study evaluated the Penn State Worry Questionnaire (PSWQ) as a tool for measuring worry and anxiety levels among individuals entering treatment for substance use disorders (SUDs). The sample included 75,047 individuals admitted to SUD treatment centers, with assessments conducted weekly. Individuals entering SUD treatment exhibited higher baseline levels of worry; however, worry levels declined over the course of treatment. The PSWQ demonstrated good internal consistency, high test-retest reliability, and good discriminant validity when correlated with measures of depression and stress. The factor structure analysis confirmed that the PSWQ measures the same underlying construct of worry in the SUD treatment population, with a single-factor model showing satisfactory fit. This extends the reach of the PSWQ to the SUD treatment population by reaffirming its reliability, validity, and factor structure, with the expectation of higher levels of worry compared to a non-SUD population at the beginning of treatment, which decline over time.
Assessment (pp. 424-438). Pub Date: 2026-04-01 (Epub 2025-03-29). DOI: 10.1177/10731911251321930
Jack T Waddell, Scott E King, William R Corbin
Initial Development and Preliminary Validation of the Physical Drinking Contexts Scale.
Literature on the location and contextual features of drinking events (i.e., physical context) remains scant and underdeveloped. This study developed and provided preliminary validation of a measure of typical physical drinking contexts. Participants (total N = 1,642, across three samples) self-reported their typical physical drinking context (via the generated items), their drinking behavior, demographics, and typical social drinking context. The three samples were used to assess factor structure, measurement invariance, and validity. Factor analyses suggested a four-factor structure comprising high arousal private (e.g., at a large house party), high arousal public (e.g., at a concert), low arousal private (e.g., at home), and low arousal public (e.g., on a date) contexts. Measurement invariance was established across sex, race/ethnicity, and drinking frequency, and convergent and discriminant validity was evaluated via bivariate correlations with social/solitary drinking frequency. High arousal contexts were associated with heavier/binge drinking, whereas high arousal private contexts and low arousal contexts were associated with greater negative consequences. Relations between high arousal contexts and heavier drinking held above and beyond overall drinking frequency and the social context items. Findings lay the framework for future validation and longitudinal/diary studies testing how (and for whom) relations between physical drinking contexts and drinking behavior operate.
Assessment (pp. 323-338). Pub Date: 2026-04-01 (Epub 2025-04-16). DOI: 10.1177/10731911251333315
Ella M Dickison, Martin Sellbom
Operationalizing Psychopathy Through a Multi-Method Approach.
We examined the operationalization of psychopathy through a multi-method framework in a community sample of 250 participants, who were oversampled for psychopathic traits. Psychopathy was operationalized through clinician-rated measures, including the Psychopathy Checklist: Screening Version and the Comprehensive Assessment of Psychopathic Personality (CAPP): Symptom Rating Scale, as well as the Triarchic Psychopathy Measure and the CAPP-Self Report. Using exploratory structural equation modeling and controlling for self-report and clinical rating method variances, a four-factor model of psychopathy emerged with factors representing Boldness, Disinhibition, Affective, and Interpersonal traits. We examined the validity of the four-factor model by investigating associations between each factor and conceptually relevant scales, and the results generally supported construct validity. The Interpersonal factor contributed to the model theoretically in the factor analysis, but the incremental validity of this factor above and beyond the Boldness and Affective factors was not supported by the available criterion measures. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12924883/pdf/
Assessment (pp. 388-407). Pub Date: 2026-04-01 (Epub 2025-04-15). DOI: 10.1177/10731911251329977
Guyin Zhang, Amanda J Fairchild, Bo Zhang, Dingjing Shi, Dexin Shi
Comparing Likert and Slider Response Formats in Clinical Assessment: Evidence From Measuring Depression Symptoms Using CES-D 8.
This study compared various response formats in fitting confirmatory factor analysis models. Participants responded to the eight-item Center for Epidemiologic Studies Depression Scale (CES-D 8) across five different response formats in a within-subjects experimental design: a Likert-type scale, three types of slider response formats, and a number-entry response format. We compared the response formats on item-level scores, the factor structure and psychometric properties of the scale, mean comparisons across groups, and individuals' sum scores. Similar results were observed across the response formats with respect to factor structure, measurement invariance, reliability, and validity of test scores. However, inconsistent results were found for group mean comparisons. Individuals' item scores and sum scores also varied across response formats, as did participants' subjective evaluations of the formats in terms of perceived accuracy, enjoyment, difficulty, and mental exhaustion. Based on the study findings, we provide recommendations and discuss implications for researchers designing and conducting clinical assessments. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12924885/pdf/
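Reliability comparisons like those reported above are typically made with coefficient alpha. A minimal sketch for an eight-item scale such as the CES-D 8, using simulated responses (not the study's data); the single-latent-factor generating model is an assumption for illustration:

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(sum scores))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Simulated responses: 300 respondents, 8 items driven by one latent factor.
rng = np.random.default_rng(42)
latent = rng.normal(size=(300, 1))
items = latent + rng.normal(scale=0.8, size=(300, 8))
alpha = cronbach_alpha(items)
print(0.85 < alpha < 0.97)  # strongly inter-correlated items yield high alpha
```

Running the same computation on each response format's item matrix is one way to produce the kind of format-by-format reliability comparison the abstract describes.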