Sarah L Kopelovich, Rachel M Brian, Mike Tanana, Roisín Slevin, Brian Pace, Shannon K Stewart, Victoria Shepard, Dror Ben-Zeev, Scott A Baldwin, Christina S Soma, Sarah Stanco, Zac Imel
Title: Development and validation of a cognitive behavioral therapy for psychosis online training with automated feedback.
Journal: Psychotherapy, 62(1), 1-11 (March 2025)
DOI: 10.1037/pst0000548
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11921910/pdf/
Citations: 0
Abstract
The accessibility of training and fidelity assessment is critical to implementing and sustaining empirically supported psychotherapies like cognitive behavioral therapy for psychosis (CBTp). We describe the development of an online CBTp training tool that incorporates behavioral rehearsal tasks to enable deliberate practice of cognitive and behavioral techniques for psychosis. The development process consisted of designing content, inclusive of didactics, client profiles, and learner prompts; constructing standardized performance tasks and metrics; collecting responses to learner prompts; establishing intraclass correlation (ICC) of responses among trained raters; and training a transformer-based machine learning (ML) model to meet or surpass human ICC. Authenticity ratings of each simulated client surpassed benchmarks. CBTp trainers (n = 12), clinicians (n = 78), and nonclinicians (n = 119) generated 3,958 unique verbal responses to 28 unique prompts (7 skills × 4 simulated clients), of which the coding team rated 1,961. Human ICC across all skills was high (mean ICC = 0.77). On average, there was a high correlation between ML and human ratings of fidelity (rs = .74). Similarly, the average percentage of human agreement was high at 96% (range = 87%-102%), where values greater than 100 indicate that the ML model agreed with a human rater more than two human raters agreed with each other. Results suggest that it is possible to reliably measure discrete CBTp skills in response to simulated client vignettes while capturing expected variation in skill utilization across participants. These findings pave the way for a standardized, asynchronous training that incorporates automated feedback on learners' rehearsal of CBTp skills. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
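The abstract's normalized agreement metric can be sketched as follows: the ML model's exact agreement with each human rater, expressed as a percentage of the agreement between the two human raters, so that values above 100 mean the model matched a human rater more often than the humans matched each other. This is a minimal illustrative sketch; the function names, the 0-2 rating scale, and the toy data are all invented, and the study's actual rating scheme and computation may differ.

```python
def exact_agreement(a, b):
    """Fraction of items on which two raters gave identical ratings."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def relative_ml_agreement(human1, human2, ml):
    """ML-human agreement as a percentage of human-human agreement.

    Values above 100 indicate the model agreed with a human rater more
    often than the two human raters agreed with each other.
    """
    human_human = exact_agreement(human1, human2)
    # Average the model's agreement with each of the two human raters.
    ml_human = (exact_agreement(ml, human1) + exact_agreement(ml, human2)) / 2
    return 100 * ml_human / human_human

# Toy fidelity ratings (hypothetical 0-2 scale) for 10 learner responses.
h1 = [0, 1, 2, 1, 0, 2, 1, 1, 2, 0]
h2 = [0, 1, 2, 0, 0, 2, 1, 1, 2, 1]   # humans disagree on two items
ml = [0, 1, 2, 1, 0, 2, 1, 1, 2, 0]   # model matches h1 on every item

print(round(relative_ml_agreement(h1, h2, ml), 1))  # above 100
```

Here the humans agree on 8 of 10 items (0.8) while the model averages 0.9 agreement with them, yielding 112.5 — the kind of over-100 value the abstract describes.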
About the journal:
Psychotherapy: Theory, Research, Practice, Training publishes a wide variety of articles relevant to the field of psychotherapy. The journal strives to foster interactions among individuals involved with training, practice, theory, and research, since all of these areas are essential to psychotherapy. The journal is an invaluable resource for practicing clinical and counseling psychologists, social workers, and other mental health professionals.