Simulating Empathic Interactions with Synthetic LLM-Generated Cancer Patient Personas.

Rezaur Rashid, Saba Kheirinejad, Brianna M White, Soheil Hashtarkhani, Parnian Kheirkhah Rahimabad, Fekede A Kumsa, Lokesh Chinthala, Janet A Zink, Christopher L Brett, Robert L Davis, David L Schwartz, Arash Shaban-Nejad
{"title":"模拟共情互动与合成法学硕士生成的癌症患者人物角色。","authors":"Rezaur Rashid, Saba Kheirinejad, Brianna M White, Soheil Hashtarkhani, Parnian Kheirkhah Rahimabad, Fekede A Kumsa, Lokesh Chinthala, Janet A Zink, Christopher L Brett, Robert L Davis, David L Schwartz, Arash Shaban-Nejad","doi":"10.3233/SHTI251498","DOIUrl":null,"url":null,"abstract":"<p><p>Unplanned interruptions in radiation therapy (RT) increase clinical risks, yet proactive, personalized psychosocial support remains limited. This study presents a proof-of-concept framework that simulates and evaluates Empathic AI-patient interactions using large language models (LLMs) and synthetic oncology patient personas. Leveraging a de-identified dataset of patient demographics, clinical features, and social determinants of health (SDoH), we created realistic personas that interact with an empathic AI assistant in simulated dialogues. The system uses dual LLMs, one for persona generation and another for empathic response, which engage in multi-turn dialogue pairs per persona. We evaluated the outputs using statistical similarity tests, quantitative metrics (BERTScore, SDoH relevance, empathy, persona distinctness), and qualitative human assessment. The results demonstrate the feasibility of scalable, secure, and context-aware dialogue for early-stage AI development. This HIPAA/GDPR compliant framework supports ethical testing of empathic clinical support tools and lays the groundwork for AI-driven interventions to improve RT adherence.</p>","PeriodicalId":94357,"journal":{"name":"Studies in health technology and informatics","volume":"332 ","pages":"72-76"},"PeriodicalIF":0.0000,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Simulating Empathic Interactions with Synthetic LLM-Generated Cancer Patient Personas.\",\"authors\":\"Rezaur Rashid, Saba Kheirinejad, Brianna M White, Soheil Hashtarkhani, Parnian Kheirkhah Rahimabad, Fekede A Kumsa, Lokesh Chinthala, Janet A Zink, Christopher L Brett, Robert L Davis, David L Schwartz, Arash Shaban-Nejad\",\"doi\":\"10.3233/SHTI251498\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Unplanned interruptions in radiation therapy (RT) increase clinical risks, yet proactive, personalized psychosocial support remains limited. This study presents a proof-of-concept framework that simulates and evaluates Empathic AI-patient interactions using large language models (LLMs) and synthetic oncology patient personas. Leveraging a de-identified dataset of patient demographics, clinical features, and social determinants of health (SDoH), we created realistic personas that interact with an empathic AI assistant in simulated dialogues. The system uses dual LLMs, one for persona generation and another for empathic response, which engage in multi-turn dialogue pairs per persona. We evaluated the outputs using statistical similarity tests, quantitative metrics (BERTScore, SDoH relevance, empathy, persona distinctness), and qualitative human assessment. The results demonstrate the feasibility of scalable, secure, and context-aware dialogue for early-stage AI development. 
This HIPAA/GDPR compliant framework supports ethical testing of empathic clinical support tools and lays the groundwork for AI-driven interventions to improve RT adherence.</p>\",\"PeriodicalId\":94357,\"journal\":{\"name\":\"Studies in health technology and informatics\",\"volume\":\"332 \",\"pages\":\"72-76\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-10-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Studies in health technology and informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3233/SHTI251498\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Studies in health technology and informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/SHTI251498","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Unplanned interruptions in radiation therapy (RT) increase clinical risks, yet proactive, personalized psychosocial support remains limited. This study presents a proof-of-concept framework that simulates and evaluates empathic AI-patient interactions using large language models (LLMs) and synthetic oncology patient personas. Leveraging a de-identified dataset of patient demographics, clinical features, and social determinants of health (SDoH), we created realistic personas that interact with an empathic AI assistant in simulated dialogues. The system uses dual LLMs, one for persona generation and another for empathic response, which engage in multi-turn dialogue pairs per persona. We evaluated the outputs using statistical similarity tests, quantitative metrics (BERTScore, SDoH relevance, empathy, persona distinctness), and qualitative human assessment. The results demonstrate the feasibility of scalable, secure, and context-aware dialogue for early-stage AI development. This HIPAA/GDPR-compliant framework supports ethical testing of empathic clinical support tools and lays the groundwork for AI-driven interventions to improve RT adherence.
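To make the dual-LLM design concrete, the sketch below wires a persona role-play model and an empathic-assistant model into a multi-turn simulation loop. This is a minimal illustration, not the authors' implementation: it assumes an OpenAI-compatible chat API via the `openai` Python client, and the model name, prompts, persona fields, and the `simulate_dialogue` helper are all hypothetical.

```python
# Minimal sketch (assumed setup): one LLM role-plays a synthetic cancer patient
# persona, a second LLM acts as the empathic assistant, and the two alternate
# turns to produce multi-turn dialogue pairs per persona.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A synthetic, de-identified persona record (demographics, clinical features, SDoH).
# Field names and values are illustrative, not taken from the study's dataset.
persona = {
    "age": 58,
    "diagnosis": "stage II breast cancer",
    "treatment": "radiation therapy, week 3 of 6",
    "sdoh": ["lives alone", "limited transportation", "financial strain"],
}

PERSONA_SYSTEM = (
    "You are role-playing a cancer patient with the following profile: "
    f"{persona}. Speak in first person, voice realistic concerns, and stay in character."
)
ASSISTANT_SYSTEM = (
    "You are an empathic AI assistant supporting a patient undergoing radiation "
    "therapy. Acknowledge emotions, address social needs, and encourage adherence."
)

def chat(system_prompt: str, history: list[dict], model: str = "gpt-4o-mini") -> str:
    """Generate one turn for either agent; the model name is a placeholder."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system_prompt}] + history,
        temperature=0.7,
    )
    return response.choices[0].message.content

def simulate_dialogue(n_turns: int = 4) -> list[tuple[str, str]]:
    """Run n_turns of (patient utterance, assistant reply) pairs for one persona."""
    pairs = []
    # Each agent keeps its own view of the conversation: its replies are
    # "assistant" messages, the other agent's replies arrive as "user" messages.
    patient_history: list[dict] = [
        {"role": "user", "content": "Begin by describing how you feel about today's session."}
    ]
    assistant_history: list[dict] = []
    for _ in range(n_turns):
        patient_msg = chat(PERSONA_SYSTEM, patient_history)
        assistant_history.append({"role": "user", "content": patient_msg})
        assistant_msg = chat(ASSISTANT_SYSTEM, assistant_history)
        assistant_history.append({"role": "assistant", "content": assistant_msg})
        patient_history.append({"role": "assistant", "content": patient_msg})
        patient_history.append({"role": "user", "content": assistant_msg})
        pairs.append((patient_msg, assistant_msg))
    return pairs

if __name__ == "__main__":
    for patient_msg, assistant_msg in simulate_dialogue():
        print("PATIENT:  ", patient_msg)
        print("ASSISTANT:", assistant_msg)
```

Each (patient utterance, assistant reply) pair produced this way could then be scored with the metrics named in the abstract, for example BERTScore via the `bert_score` package, alongside SDoH-relevance, empathy, and persona-distinctness measures, before qualitative human review.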