{"title":"Reliable and clinically significant change in Suicide Cognitions Scale-Revised (SCS-R) scores among high-risk psychiatric outpatients.","authors":"Craig J Bryan, Christina Rose Bauder, Jaryd Hiser, M David Rudd, Justin C Baker","doi":"10.1037/pas0001456","DOIUrl":"10.1037/pas0001456","url":null,"abstract":"<p><p>A key challenge for clinical practitioners is the lack of assessment methods that can be used to determine if a high-risk patient has experienced meaningful reductions in suicide risk. The Suicide Cognitions Scale-Revised has been shown to differentiate patients who will attempt suicide, but its utility for monitoring treatment response remains unknown. We used data from three independent samples, including two samples of psychiatric outpatients reporting recent suicidal ideation and/or behaviors (Study 1, <i>n</i> = 96; Study 2, <i>n</i> = 44) and one primary care sample (Study 3, <i>n</i> = 2,744) to calculate the reliable change index and clinically significant change thresholds for the Suicide Cognitions Scale-Revised. In both Studies 1 and 2, change scores ≥20 indicated reliable change, and total scores ≤21 indicated patients were more likely to belong to the nonsuicidal population than the suicidal population. Participants meeting clinically significant change criteria had significantly lower suicide attempt rates in Studies 1 and 3 and reported significantly better social-occupational functioning in Studies 1 and 2. Results suggest the clinically significant change threshold is a useful marker of reduced suicide risk among high-risk patients. (PsycInfo Database Record (c) 2026 APA, all rights reserved).</p>","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":" ","pages":"343-349"},"PeriodicalIF":3.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146053432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
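The record above turns on the Jacobson–Truax reliable change index and a clinically significant change cutoff. A minimal sketch of that standard machinery follows; the pretest SD and reliability values here are hypothetical placeholders, not the SCS-R figures from the study (whose data-derived thresholds were a change score ≥20 and a total score ≤21):

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax RCI: change score divided by the standard error
    of the difference between two administrations."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)
    se_diff = math.sqrt(2) * se_measurement
    return (post - pre) / se_diff

def reliable_change_threshold(sd_pre, reliability, z=1.96):
    """Smallest raw change exceeding measurement error at ~95% confidence."""
    return z * math.sqrt(2) * sd_pre * math.sqrt(1 - reliability)

# Hypothetical example: a drop from 40 to 10 on a scale with
# pretest SD = 12 and reliability = .90 is a reliable improvement.
rci = reliable_change_index(40, 10, sd_pre=12, reliability=0.90)
```

An |RCI| greater than 1.96 marks change unlikely to be measurement error alone; clinically significant change additionally requires the posttest score to fall on the "nonclinical" side of a population-based cutoff, which is how the paper's two-part criterion is built.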
{"title":"Further validation of the MMPI-3 personality disorder syndrome scales in a community mental health sample.","authors":"Janelle M Tinker, Martin Sellbom","doi":"10.1037/pas0001448","DOIUrl":"10.1037/pas0001448","url":null,"abstract":"<p><p>The purpose of the present study was to replicate and extend research on the Minnesota Multiphasic Personality Inventory-3 (MMPI-3) Personality Disorder (PD) Syndrome scales. A total of 289 participants were recruited from the community in Dunedin, New Zealand, all of whom were engaged in mental health treatment. We evaluated the validity of the PD Syndrome scales against various measures of traditional PDs, personality traits, and personality dysfunction using self-report, clinical-rating, and informant-report criteria. The findings provide support for the criterion validity of the MMPI-3 PD scales, as most scales demonstrated strong and meaningful correlations with their corresponding latent PD factors, with the exception of the Schizotypal PD scale. Convergent validity was also supported, with most scales positively correlating with personality impairment and aligning with expected maladaptive personality trait domains. Discriminant validity was generally supported; however, several scales also showed notable correlations with nontarget PD factors and nonhypothesized trait domains, some of which were larger than the correlations with their intended target constructs. Overall, the MMPI-3 PD Syndrome scales can assist clinicians with generating diagnostic hypotheses about traditional PDs, which will ultimately enhance clinical understanding and outcomes for patients during the transition to dimensional frameworks. (PsycInfo Database Record (c) 2026 APA, all rights reserved).</p>","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":" ","pages":"283-294"},"PeriodicalIF":3.3,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145934275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The distracted participant? Experience sampling response behavior and participant disturbance in social situations.","authors":"Gudrun Eisele,Robin Achterhof,Aleksandra M Lachowicz,Joana De Calheiros Velozo,Thomas Vaessen,Lisa Peeters,Inez Myin-Germeys","doi":"10.1037/pas0001463","DOIUrl":"https://doi.org/10.1037/pas0001463","url":null,"abstract":"Clinical assessment using self-reports hinges on the assumption that participants pay sufficient attention to questionnaires to provide valid data. This assumption is particularly tenuous in experience sampling method studies, where participants complete questionnaires in daily life across a range of potentially distracting situations. Previous research suggests that participants may be particularly distracted when responding to experience sampling method questionnaires in social situations, especially when engaging in social interactions. Yet, the effects of these environmental distractions on response behavior and, consequently, data quality remain poorly understood. We investigated the effects of distracting environments on disturbance and response behavior across various social and nonsocial situations. Experience sampling method data from three young adult samples (combined N = 293) and a general population youth sample (N = 1,903) were analyzed with multilevel (logistic) regressions. In line with previous research, adults were significantly more disturbed by assessments when in company compared to when alone, especially when also interacting with their company. In addition, we found significant differences in response behavior between social settings in adults, with changes pointing toward lower data quality when in company. Interestingly, patterns were different, in some cases even reversed, in school-going adolescents. While our findings suggest that the distraction of social settings affects participant burden and response behavior, the influence on data quality seemed minor. Differences across samples suggest that the setting of the social experience needs to be considered. Preparing participants for sampling in distracting (social) environments may help safeguard data quality and reduce participant burden in ambulatory clinical assessment. (PsycInfo Database Record (c) 2026 APA, all rights reserved).","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":"15 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2026-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147495253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated scoring of the Ambiguous Intentions Hostility Questionnaire with fine-tuned large language models.","authors":"Yizhou Lyu, Dennis Combs, Dawn Neumann, Yuan Chang Leong","doi":"10.1037/pas0001466","DOIUrl":"https://doi.org/10.1037/pas0001466","url":null,"abstract":"<p><p>Hostile attribution bias is the tendency to interpret social interactions as intentionally hostile. The Ambiguous Intentions Hostility Questionnaire (AIHQ) is a commonly used instrument to measure hostile attribution bias and includes open-ended questions where participants describe the perceived intentions behind a negative social situation and how they would respond. While these questions provide insights into the contents of hostile attributions, they require time-intensive scoring by human raters. In this study, we assessed whether large language models can automate the scoring of AIHQ open-ended responses. We used a previously collected data set in which individuals with traumatic brain injury (TBI) and non-TBI controls completed the AIHQ and had their open-ended responses rated by trained human raters. We used half of these responses to fine-tune the two models on human-generated ratings and tested the fine-tuned models on the remaining half of AIHQ responses. Results showed that model-generated ratings aligned with human ratings for both attributions of hostility and aggression responses, with fine-tuned models showing higher alignment. This alignment was consistent across ambiguous, intentional, and accidental scenario types and replicated previous findings on group differences in attributions of hostility and aggression responses between TBI and non-TBI groups. The fine-tuned models also generalized well to an independent nonclinical data set. To support broader adoption, we provide an accessible scoring interface that includes both local and cloud-based options. Together, our findings suggest that large language models can streamline AIHQ scoring in both research and clinical contexts, revealing their potential to facilitate psychological assessments across different populations. (PsycInfo Database Record (c) 2026 APA, all rights reserved).</p>","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":3.3,"publicationDate":"2026-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147475125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
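The AIHQ record above reports "alignment" between model-generated and human ratings. The abstract does not state which alignment statistic was used, so as a hedged sketch, one common check is a Pearson correlation between the two rating vectors; the rating values below are made up for illustration:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical human vs. model AIHQ hostility ratings (1-5 scale).
human = [2, 4, 5, 1, 3, 4]
model = [2, 4, 4, 1, 3, 5]
alignment = pearson_r(human, model)
```

In practice, agreement studies often prefer an intraclass correlation or weighted kappa over Pearson's r, since r is insensitive to systematic over- or under-rating; the choice here is purely for a self-contained illustration.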
{"title":"Measurement invariance of the Strengths and Difficulties Questionnaire (SDQ) across age groups in a German representative sample: An application of confirmatory factor analysis using k-fold cross-validation.","authors":"Claudia Lazarides, Claudia Niessner, Simon Kolb, Jannik H. Orzek, Alexander Woll, Stephen G. West, Manuel C. Voelkle","doi":"10.1037/pas0001446","DOIUrl":"https://doi.org/10.1037/pas0001446","url":null,"abstract":"","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":"26 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2026-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147380532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The development and psychometric validation of the Assessment of Sexual Behavior–Child version (ASB-C).","authors":"Brian Allen, Jane F. Silovsky, David J. Kolko, Lucy Berliner, Rachel Wamser","doi":"10.1037/pas0001461","DOIUrl":"https://doi.org/10.1037/pas0001461","url":null,"abstract":"","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":"76 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2026-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147380923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Taking measurement-based care to school: Evaluating teacher-report versions of the behavior and feelings survey and the top problems assessment.","authors":"Spencer C. Evans, Ashley R. Karlovich, Katherine A. Corteselli, Amanda Jensen-Doss, John R. Weisz","doi":"10.1037/pas0001447","DOIUrl":"https://doi.org/10.1037/pas0001447","url":null,"abstract":"","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":"25 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2026-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147380920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring parenting interactions in resource-constrained settings: Evidence from an observational tool implemented in Andean Peru.","authors":"Dana Charles McCoy, Marta Dormal, Kristen Hinckley, Milagros Alvarado, Lena Jäggi, Daniel Mäusezahl, Stella Maria Hartinger, Günther Fink","doi":"10.1037/pas0001462","DOIUrl":"https://doi.org/10.1037/pas0001462","url":null,"abstract":"","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":"54 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2026-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147380534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do the Beck Depression Inventory-II and Beck Hopelessness Scale reliably capture systematic change and within-person variation? Evidence from generalizability theory.","authors":"Sofie Glatt, Ashley L. Greene, Chi C. Chan, Gregory K. Brown, Marianne Goodman","doi":"10.1037/pas0001457","DOIUrl":"https://doi.org/10.1037/pas0001457","url":null,"abstract":"","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":"264 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2026-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147380919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating the impact of missed cases on the accuracy of autism screening tools.","authors":"Brian Barger,Terri Deocampo Pigott,R Christopher Sheldrick,Jonathan Campbell,Alexa Gonzalez Laca,Jon Starnes,Betsy Davis,Rachel Waford,John Olmstead","doi":"10.1037/pas0001445","DOIUrl":"https://doi.org/10.1037/pas0001445","url":null,"abstract":"A common problem for assessing psychiatric screening tools is that initial diagnostic accuracy estimates are conducted on cross-sectional data and later found to be lower than expected when longitudinal analyses are conducted. This article uses prevalence estimates to identify potentially missed cases and adjust diagnostic accuracy metrics. To display this approach, we meta-analyze and assess 27 population-level autism screening studies identified via an umbrella review and contrast four population adjustments (none, national, U.S.-centric, and world prevalence). Studies with no positive screen adjustments displayed adequate sensitivity (.75), but poor sensitivity resulted when applying national (.52), world (.50), and U.S.-centric (.33) population adjustments. We also address missing positive screen cases due to the common problem of attrition (Sheldrick et al., 2023). Positive screen adjustments suggested a range of sensitivity impacts ranging from very poor (.32) to good (.89). Across analyses, positive predictive value estimates also ranged from .20 to .73 depending on population and missed case assumptions. The strengths and benefits of blending epidemiology and psychometric perspectives to identify screening tools that are weaker than expected are discussed. (PsycInfo Database Record (c) 2026 APA, all rights reserved).","PeriodicalId":20770,"journal":{"name":"Psychological Assessment","volume":"67 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147359397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
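The adjustment logic in the last abstract, recomputing sensitivity after counting prevalence-implied cases the screening study never confirmed, can be sketched generically. This is an illustration of the idea only, not the authors' meta-analytic procedure, and every number below is hypothetical:

```python
def adjusted_sensitivity(tp, fn, n_screened, assumed_prevalence):
    """Sensitivity after treating prevalence-implied undetected cases as
    additional false negatives (generic sketch, not the paper's exact method)."""
    expected_cases = assumed_prevalence * n_screened
    # Cases the prevalence estimate implies exist but the study never found.
    missed = max(0.0, expected_cases - (tp + fn))
    return tp / (tp + fn + missed)

# With 75 true positives and 25 false negatives among 10,000 screened,
# unadjusted sensitivity is .75. Assuming 2% prevalence implies 100
# additional unconfirmed cases, which halves sensitivity to .375.
unadjusted = adjusted_sensitivity(75, 25, 10_000, 0.01)
adjusted = adjusted_sensitivity(75, 25, 10_000, 0.02)
```

The same move can be applied to positive predictive value, which is why the abstract's accuracy estimates fall as the assumed population prevalence rises.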