Title: Scalable training for child sexual abuse interviews in Japan: Using AI-driven avatars to test multiple behavioral modeling interventions
Authors: Shumpei Haginoya, Tatsuro Ibe, Shota Yamamoto, Naruyo Yoshimoto, Hazuki Mizushi, Pekka Santtila
Journal: Child Protection and Practice, Volume 5, Article 100188
DOI: 10.1016/j.chipro.2025.100188
Publication date: 2025-05-28
URL: https://www.sciencedirect.com/science/article/pii/S2950193825000956
Citations: 0
Abstract
Background
Interviewer training using automated avatars and accompanying interventions has emerged as a potentially scalable approach to improving questioning skills in child sexual abuse interviews. Although behavioral modeling has been shown to be an effective part of this training, the efficacy of its individual components remains unexplored.
Objective
We aimed to demonstrate the scalability of an interviewer training approach using AI-driven avatars and to examine the effectiveness of different components of modeling in improving the use of open questions.
Participants and setting
A total of 1168 lay participants recruited via crowdsourcing platforms were randomly assigned to one of 28 conditions that varied the combination of modeling components.
Methods
Each participant conducted one simulated child sexual abuse interview online after receiving one combination of the modeling components. The modeling components consisted of reading learning points about good and bad interview approaches, watching example videos of good and bad interviews, and reading the case outcomes (i.e., what had happened to the avatars interviewed in the example videos).
Results
Correlation and regression analyses showed a positive impact of videos demonstrating good interview practices on the quality of participants' subsequent interviews, whereas the learning points and case outcomes had little effect. Surprisingly, videos showing bad interview practices had a negative impact on the quality of participants' interviews.
Conclusions
The results demonstrated the scalability of interviewer training using automated avatars and the effectiveness of some modeling components in improving interviewer behavior. Overall, interviewers tended to follow the modeled behaviors regardless of whether these were positive or negative, resulting in improved interview skills after positive models but detrimental effects after negative models. However, the negative impact of bad modeling on the reproduction of learned behaviors in interview simulations should still be investigated in the context of transfer to real interviews.