{"title":"AI and adolescent Relationships: Bridging emotional intelligence and practical guidance","authors":"Tsameret Ricon","doi":"10.1016/j.chbr.2025.100752","DOIUrl":null,"url":null,"abstract":"<div><div>This study comparatively explores how generative AI models respond to adolescents disclosing experiences of romantic and sexual violence. Using five hypothetical vignettes depicting coercive sexting, psychological abuse, sexual assault, image-based sexual abuse (IBSA), and digital control, we evaluated the responses of four leading AI models (ChatGPT, Gemini, Claude, and LLaMA) in terms of emotional empathy, practical guidance, and ethical awareness. Each vignette was presented three times per model to examine consistency and variability. Prompts were standardized to adolescent disclosures and instructed the AI to respond.</div><div>A qualitative Thematic analysis was employed to examine the tone, content, and values of each response. ChatGPT and Claude consistently demonstrated stronger emotional resonance and contextual sensitivity. Gemini emphasized legal considerations but often used a detached tone. LLaMA responses were generally minimal and lacked both emotional and ethical attunement. These findings contribute empirically to the evolving discourse on AI in mental health and education by illustrating how specific model architectures align with or fall short of ethical, emotional, and developmental standards.</div><div>Anchored in Communication Privacy Management Theory and Relational Ethics, this study offers an applied ethical framework for assessing AI responsiveness to sensitive disclosures. While current models show potential, they require development in trauma-informed design and moral reasoning. The findings provide empirical insight for improving AI tools that engage with adolescents in emotionally charged and ethically complex digital contexts.</div></div>","PeriodicalId":72681,"journal":{"name":"Computers in human behavior reports","volume":"19 ","pages":"Article 100752"},"PeriodicalIF":5.8000,"publicationDate":"2025-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in human behavior reports","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2451958825001678","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Abstract
This study comparatively explores how generative AI models respond to adolescents disclosing experiences of romantic and sexual violence. Using five hypothetical vignettes depicting coercive sexting, psychological abuse, sexual assault, image-based sexual abuse (IBSA), and digital control, we evaluated the responses of four leading AI models (ChatGPT, Gemini, Claude, and LLaMA) in terms of emotional empathy, practical guidance, and ethical awareness. Each vignette was presented three times per model to examine consistency and variability. Prompts were standardized as adolescent disclosures, and each model was instructed to respond to them.
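The sampling design (five vignettes × four models × three repetitions under one standardized prompt) can be made concrete with a short sketch. The vignette texts, prompt wording, and the query_model helper below are illustrative placeholders under assumed names, not the study's actual materials or code.

```python
from itertools import product

# Illustrative protocol: 5 vignettes x 4 models x 3 repetitions,
# all issued through one standardized prompt template.
VIGNETTES = {
    "coercive_sexting": "...",            # vignette texts elided
    "psychological_abuse": "...",
    "sexual_assault": "...",
    "image_based_sexual_abuse": "...",
    "digital_control": "...",
}
MODELS = ["ChatGPT", "Gemini", "Claude", "LLaMA"]
REPETITIONS = 3

PROMPT_TEMPLATE = (
    "The following message was written by an adolescent disclosing a "
    "difficult experience. Respond to them:\n\n{vignette}"
)

def query_model(model: str, prompt: str) -> str:
    """Placeholder for a call to the given model's API (assumption)."""
    return f"[{model} response collected here]"

def collect_responses() -> list[dict]:
    """Gather every (vignette, model, run) response for later coding."""
    records = []
    for (name, text), model in product(VIGNETTES.items(), MODELS):
        for run in range(1, REPETITIONS + 1):
            reply = query_model(model, PROMPT_TEMPLATE.format(vignette=text))
            records.append({"vignette": name, "model": model,
                            "run": run, "response": reply})
    return records  # 5 x 4 x 3 = 60 responses for thematic analysis
```

Holding the prompt constant across models and repeating each vignette three times is what lets within-model variability be separated from between-model differences in the subsequent analysis.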
A qualitative thematic analysis was employed to examine the tone, content, and values of each response. ChatGPT and Claude consistently demonstrated stronger emotional resonance and contextual sensitivity. Gemini emphasized legal considerations but often adopted a detached tone. LLaMA's responses were generally minimal and lacked both emotional and ethical attunement. These findings contribute empirically to the evolving discourse on AI in mental health and education by illustrating how specific models align with or fall short of ethical, emotional, and developmental standards.
Anchored in Communication Privacy Management Theory and Relational Ethics, this study offers an applied ethical framework for assessing AI responsiveness to sensitive disclosures. While current models show potential, they require further development in trauma-informed design and moral reasoning. The findings provide empirical insight for improving AI tools that engage with adolescents in emotionally charged and ethically complex digital contexts.