{"title":"Artificial Intelligence-based Psychotherapy: A Qualitative Exploration of Usability, Personalization, and the Perception of Therapeutic Progress.","authors":"Mirza Jahanzeb Beg, Manish Kumar Verma","doi":"10.1177/02537176251357477","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>AI-based psychotherapy apps offer accessibility and structured interventions but face challenges regarding emotional depth, personalization, engagement, and ethical concerns. This study critically examines user experiences, identifying key advantages, limitations, and areas for refinement.</p><p><strong>Methods: </strong>A qualitative approach was employed, using thematic analysis of semi-structured interviews with 17 participants (aged 18-45) who had used AI-based psychotherapy apps for at least four weeks. Ten participants had prior clinical diagnoses (e.g., anxiety, depression, adjustment disorder), while others reported subclinical psychological distress. Engagement duration ranged from 2 to 11 months, with most using the apps two to five times per week.</p><p><strong>Results: </strong>Ten core themes emerged, revealing a paradox of accessibility versus therapeutic depth. While users valued immediacy and anonymity, they struggled with fragmented therapeutic narratives, scripted empathy, and algorithmic stagnation in personalization. The over-reliance on CBT frameworks limited adaptability to diverse emotional needs, while linguistic and cultural microaggressions led to disengagement. Privacy concerns stemmed from a mismatch between perceived and actual risks, and AI-induced dependence raised ethical questions about user autonomy.</p><p><strong>Conclusions: </strong>The AI psychotherapy must evolve beyond static, standardized interventions by integrating emotionally responsive, culturally adaptive, and ethically responsible AI models. Enhancing therapeutic continuity, adaptive learning, and human-AI hybrid models can bridge the gap between accessibility and authentic engagement. These findings inform future AI-driven mental health innovations, ensuring they align with psychological, ethical, and cultural expectations.</p>","PeriodicalId":13476,"journal":{"name":"Indian Journal of Psychological Medicine","volume":" ","pages":"02537176251357477"},"PeriodicalIF":2.0000,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12274205/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Indian Journal of Psychological Medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/02537176251357477","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PSYCHIATRY","Score":null,"Total":0}
Citations: 0
Abstract
Background: AI-based psychotherapy apps offer accessibility and structured interventions but face challenges related to emotional depth, personalization, engagement, and ethics. This study critically examines user experiences, identifying key advantages, limitations, and areas for refinement.
Methods: A qualitative approach was employed, using thematic analysis of semi-structured interviews with 17 participants (aged 18-45) who had used AI-based psychotherapy apps for at least four weeks. Ten participants had prior clinical diagnoses (e.g., anxiety, depression, adjustment disorder), while others reported subclinical psychological distress. Engagement duration ranged from 2 to 11 months, with most using the apps two to five times per week.
Results: Ten core themes emerged, revealing a paradox of accessibility versus therapeutic depth. While users valued immediacy and anonymity, they struggled with fragmented therapeutic narratives, scripted empathy, and algorithmic stagnation in personalization. The over-reliance on CBT frameworks limited adaptability to diverse emotional needs, while linguistic and cultural microaggressions led to disengagement. Privacy concerns stemmed from a mismatch between perceived and actual risks, and AI-induced dependence raised ethical questions about user autonomy.
Conclusions: AI-based psychotherapy must evolve beyond static, standardized interventions by integrating emotionally responsive, culturally adaptive, and ethically responsible AI models. Enhancing therapeutic continuity, adaptive learning, and human-AI hybrid models can bridge the gap between accessibility and authentic engagement. These findings inform future AI-driven mental health innovations, ensuring they align with psychological, ethical, and cultural expectations.
Journal Introduction:
The Indian Journal of Psychological Medicine (ISSN 0253-7176) was started in 1978 as the official publication of the Indian Psychiatric Society South Zonal Branch. The journal provides open access to all content and is published bimonthly. It includes, but is not limited to, review articles, original research, opinions, and letters. The editor and publisher accept no legal responsibility for any opinions, omissions, or errors by the authors, nor do they endorse any product advertised in the journal.