ExDoRA: enhancing the transferability of large language models for depression detection using free-text explanations.

IF 3.0 · Q2 · Computer Science, Artificial Intelligence
Frontiers in Artificial Intelligence · Pub Date: 2025-05-21 · eCollection Date: 2025-01-01 · DOI: 10.3389/frai.2025.1564828
Y H P P Priyadarshana, Zilu Liang, Ian Piumarta
{"title":"<i>ExDoRA</i>: enhancing the transferability of large language models for depression detection using free-text explanations.","authors":"Y H P P Priyadarshana, Zilu Liang, Ian Piumarta","doi":"10.3389/frai.2025.1564828","DOIUrl":null,"url":null,"abstract":"<p><p>Few-shot prompting in large language models (LLMs) significantly improves performance across various tasks, including both in-domain and previously unseen natural language tasks, by learning from limited in-context examples. How these examples enhance transferability and contribute to achieving state-of-the-art (SOTA) performance in downstream tasks remains unclear. To address this, we propose <i>ExDoRA</i>, a novel LLM transferability framework designed to clarify the selection of the most relevant examples using synthetic free-text explanations. Our novel hybrid method ranks LLM-generated explanations by selecting the most semantically relevant examples closest to the input query while balancing diversity. The top-ranked explanations, along with few-shot examples, are then used to enhance LLMs' knowledge transfer in multi-party conversational modeling for previously unseen depression detection tasks. Evaluations using the IMHI corpus demonstrate that <i>ExDoRA</i> consistently produces high-quality free-text explanations. Extensive experiments on depression detection tasks, including depressed utterance classification (DUC) and depressed speaker identification (DSI), show that <i>ExDoRA</i> achieves SOTA performance. The evaluation results indicate significant improvements, with up to 20.59% in recall for DUC and 21.58% in F1 scores for DSI, using 5-shot examples with top-ranked explanations in the RSDD and eRisk 18 T2 corpora. These findings underscore <i>ExDoRA</i>'s potential as an effective screening tool for digital mental health applications.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"8 ","pages":"1564828"},"PeriodicalIF":3.0000,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12133835/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/frai.2025.1564828","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Cited by: 0

Abstract

Few-shot prompting in large language models (LLMs) significantly improves performance across various tasks, including both in-domain and previously unseen natural language tasks, by learning from limited in-context examples. How these examples enhance transferability and contribute to achieving state-of-the-art (SOTA) performance in downstream tasks remains unclear. To address this, we propose ExDoRA, a novel LLM transferability framework designed to clarify the selection of the most relevant examples using synthetic free-text explanations. Our hybrid method ranks LLM-generated explanations by semantic relevance to the input query while balancing diversity among the selected examples. The top-ranked explanations, along with few-shot examples, are then used to enhance LLMs' knowledge transfer in multi-party conversational modeling for previously unseen depression detection tasks. Evaluations using the IMHI corpus demonstrate that ExDoRA consistently produces high-quality free-text explanations. Extensive experiments on depression detection tasks, including depressed utterance classification (DUC) and depressed speaker identification (DSI), show that ExDoRA achieves SOTA performance. The evaluation results show improvements of up to 20.59% in recall for DUC and 21.58% in F1 score for DSI when using 5-shot examples with top-ranked explanations on the RSDD and eRisk 18 T2 corpora. These findings underscore ExDoRA's potential as an effective screening tool for digital mental health applications.

Source Journal
CiteScore: 6.10
Self-citation rate: 2.50%
Articles published: 272
Review time: 13 weeks