Stench of Errors or the Shine of Potential: The Challenge of (Ir)Responsible Use of ChatGPT in Speech-Language Pathology

IF 2.1 | CAS Tier 3 (Medicine) | Q2 AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY
Mytsyk Hanna, Suchikova Yana
{"title":"Stench of Errors or the Shine of Potential: The Challenge of (Ir)Responsible Use of ChatGPT in Speech-Language Pathology","authors":"Mytsyk Hanna,&nbsp;Suchikova Yana","doi":"10.1111/1460-6984.70088","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Integrating large language models (LLMs), such as ChatGPT, into speech-language pathology (SLP) presents promising opportunities and notable challenges. While these tools can support diagnostics, streamline documentation and assist in therapy planning, they also raise concerns related to misinformation, cultural insensitivity, overreliance and ethical ambiguity. Current discourse often centres on technological capabilities, overlooking how future speech-language pathologists (SLPs) are being prepared to use such tools responsibly.</p>\n </section>\n \n <section>\n \n <h3> Aims</h3>\n \n <p>This paper examines the pedagogical, ethical and professional implications of integrating LLMs into SLP. It emphasizes the need to cultivate professional responsibility, ethical awareness and critical engagement amongst student SLPs, ensuring that such technologies are applied thoughtfully, appropriately and in accordance with evidence-based and contextually relevant therapeutic standards.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>The paper combines a review of recent interdisciplinary research with reflective insights from academic practice. It presents documented cases of student SLPs’ overreliance on ChatGPT, analyzes common pitfalls through a structured table of examples and synthesizes perspectives from SLP, education, data ethics and linguistics.</p>\n </section>\n \n <section>\n \n <h3> Main Contribution</h3>\n \n <p>Reflective examples presented in the article illustrate challenges that arise when LLMs are used without sufficient oversight or a clear understanding of their limitations. Rather than questioning the value of LLMs, these cases emphasize the importance of ensuring that student SLPs are guided towards thoughtful, ethical and clinically sound use. To support this, the paper offers a set of pedagogical recommendations—including ethics integration, reflective assignments, case-based learning, peer critique and interdisciplinary collaboration—aimed at embedding critical engagement with tools such as ChatGPT into professional training.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>LLMs are becoming an integral part of SLP. Their impact, however, will depend on how effectively student SLPs are trained to balance technological innovation with professional responsibility. Higher education institutions (HEIs) must take an active role in embedding responsible engagement with LLMs into pre-service training and SLP curricula. Through intentional and early preparation, the field can move beyond the risks associated with automation and towards a future shaped by reflective, informed and ethically grounded use of generative tools.</p>\n </section>\n \n <section>\n \n <h3> WHAT THIS PAPER ADDS</h3>\n \n <div><i>What is already known on this subject</i>\n \n <ul>\n \n <li>Large language models (LLMs), including ChatGPT, are increasingly used in speech-language pathology (SLP) for tasks such as diagnostic support, therapy material generation and documentation. 
While prior research acknowledges both their utility and risks, limited attention has been paid to how student SLPs engage with these tools and how educational institutions prepare them for responsible use.</li>\n </ul>\n </div>\n \n <div><i>What this paper adds to existing knowledge</i>\n \n <ul>\n \n <li>This paper identifies key challenges in how student SLPs interact with ChatGPT, including overreliance, lack of critical evaluation and ethical blind spots. It emphasizes the role of higher education in developing critical AI literacy aligned with clinical and ethical standards. The study offers specific, practice-oriented recommendations for embedding responsibility-focused engagement with LLMs into SLP curricula. These include ethics integration, reflective assignments, peer feedback and interdisciplinary dialogue.</li>\n </ul>\n </div>\n \n <div><i>What are the potential or actual clinical implications of this work?</i>\n \n <ul>\n \n <li>Without structured guidance, future SLPs may misuse LLMs in ways that compromise diagnostic accuracy, cultural appropriateness or therapeutic quality. Embedding reflective, ethics-focused training into SLP curricula can reduce these risks and ensure that generative tools like ChatGPT support rather than undermine clinical decision-making and patient care.</li>\n </ul>\n </div>\n </section>\n </div>","PeriodicalId":49182,"journal":{"name":"International Journal of Language & Communication Disorders","volume":"60 4","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Language & Communication Disorders","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/1460-6984.70088","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"AUDIOLOGY & SPEECH-LANGUAGE PATHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Background

Integrating large language models (LLMs), such as ChatGPT, into speech-language pathology (SLP) presents promising opportunities and notable challenges. While these tools can support diagnostics, streamline documentation and assist in therapy planning, they also raise concerns related to misinformation, cultural insensitivity, overreliance and ethical ambiguity. Current discourse often centres on technological capabilities, overlooking how future speech-language pathologists (SLPs) are being prepared to use such tools responsibly.

Aims

This paper examines the pedagogical, ethical and professional implications of integrating LLMs into SLP. It emphasizes the need to cultivate professional responsibility, ethical awareness and critical engagement amongst student SLPs, ensuring that such technologies are applied thoughtfully, appropriately and in accordance with evidence-based and contextually relevant therapeutic standards.

Methods

The paper combines a review of recent interdisciplinary research with reflective insights from academic practice. It presents documented cases of student SLPs’ overreliance on ChatGPT, analyzes common pitfalls through a structured table of examples and synthesizes perspectives from SLP, education, data ethics and linguistics.

Main Contribution

Reflective examples presented in the article illustrate challenges that arise when LLMs are used without sufficient oversight or a clear understanding of their limitations. Rather than questioning the value of LLMs, these cases emphasize the importance of ensuring that student SLPs are guided towards thoughtful, ethical and clinically sound use. To support this, the paper offers a set of pedagogical recommendations—including ethics integration, reflective assignments, case-based learning, peer critique and interdisciplinary collaboration—aimed at embedding critical engagement with tools such as ChatGPT into professional training.

Conclusions

LLMs are becoming an integral part of SLP. Their impact, however, will depend on how effectively student SLPs are trained to balance technological innovation with professional responsibility. Higher education institutions (HEIs) must take an active role in embedding responsible engagement with LLMs into pre-service training and SLP curricula. Through intentional and early preparation, the field can move beyond the risks associated with automation and towards a future shaped by reflective, informed and ethically grounded use of generative tools.

WHAT THIS PAPER ADDS

What is already known on this subject
  • Large language models (LLMs), including ChatGPT, are increasingly used in speech-language pathology (SLP) for tasks such as diagnostic support, therapy material generation and documentation. While prior research acknowledges both their utility and risks, limited attention has been paid to how student SLPs engage with these tools and how educational institutions prepare them for responsible use.
What this paper adds to existing knowledge
  • This paper identifies key challenges in how student SLPs interact with ChatGPT, including overreliance, lack of critical evaluation and ethical blind spots. It emphasizes the role of higher education in developing critical AI literacy aligned with clinical and ethical standards. The study offers specific, practice-oriented recommendations for embedding responsibility-focused engagement with LLMs into SLP curricula. These include ethics integration, reflective assignments, peer feedback and interdisciplinary dialogue.
What are the potential or actual clinical implications of this work?
  • Without structured guidance, future SLPs may misuse LLMs in ways that compromise diagnostic accuracy, cultural appropriateness or therapeutic quality. Embedding reflective, ethics-focused training into SLP curricula can reduce these risks and ensure that generative tools like ChatGPT support rather than undermine clinical decision-making and patient care.
Source journal
International Journal of Language & Communication Disorders (Audiology & Speech-Language Pathology; Rehabilitation)
CiteScore: 3.30
Self-citation rate: 12.50%
Articles published per year: 116
Review time: 6-12 weeks
Journal description: The International Journal of Language & Communication Disorders (IJLCD) is the official journal of the Royal College of Speech & Language Therapists. The Journal welcomes submissions on all aspects of speech, language, communication disorders and speech and language therapy. It provides a forum for the exchange of information and discussion of issues of clinical or theoretical relevance in the above areas.