Asghar Ahmadi, M. Noetel, M. Schellekens, P. Parker, D. Antczak, M. Beauchamp, Theresa Dicke, Carmel M. Diezmann, A. Maeder, N. Ntoumanis, A. Yeung, C. Lonsdale
Title: A Systematic Review of Machine Learning for Assessment and Feedback of Treatment Fidelity
Journal: Psychosocial Intervention (Q1, Psychology, Multidisciplinary; impact factor 3.6)
DOI: 10.5093/pi2021a4
Published: 2021-07-05
Citations: 3
Abstract
Many psychological treatments have been shown to be cost-effective and efficacious, as long as they are implemented faithfully. Assessing fidelity and providing feedback is expensive and time-consuming. Machine learning has been used to assess treatment fidelity, but its reliability and generalisability are unclear. We collated and critiqued all implementations of machine learning for assessing the verbal behaviour of helping professionals, with particular emphasis on treatment fidelity for therapists. We searched nine electronic databases for automated approaches to coding verbal behaviour in therapy and similar contexts. We completed screening, extraction, and quality assessment in duplicate. Fifty-two studies met our inclusion criteria (65.3% in psychotherapy). Automated coding methods performed better than chance, and some showed near human-level performance; performance tended to be better with larger data sets, a smaller number of codes, conceptually simple codes, and when predicting session-level ratings rather than utterance-level ones. Few studies adhered to best-practice machine learning guidelines. Machine learning demonstrated promising results, particularly where there are large, annotated datasets and a modest number of concrete features to code. These methods are novel, cost-effective, scalable ways of assessing fidelity and providing therapists with individualised, prompt, and objective feedback.
Journal description:
Psychosocial Intervention is a peer-reviewed journal that publishes papers in all areas relevant to psychosocial intervention at the individual, family, social-network, organization, community, and population levels. The journal emphasizes an evidence-based perspective and welcomes papers reporting original basic and applied research, program evaluation, and intervention results. It also features integrative reviews and specialized papers on theoretical advances and methodological issues. Psychosocial Intervention is committed to advancing knowledge and to providing scientific evidence that informs psychosocial interventions tackling social and community problems and promoting social welfare and quality of life. The journal welcomes contributions from all areas of psychology and allied disciplines, such as sociology, social work, social epidemiology, and public health. Psychosocial Intervention aims to be international in scope and publishes papers in both Spanish and English.