Assessing the Quality, Usefulness, and Reliability of Large Language Models (ChatGPT, DeepSeek, and Gemini) in Answering General Questions Regarding Dyslexia and Dyscalculia.
{"title":"Assessing the Quality, Usefulness, and Reliability of Large Language Models (ChatGPT, DeepSeek, and Gemini) in Answering General Questions Regarding Dyslexia and Dyscalculia.","authors":"Abdullah Alrubaian","doi":"10.1007/s11126-025-10170-6","DOIUrl":null,"url":null,"abstract":"<p><p>The current study aimed to evaluate the quality, usefulness, and reliability of three large language models (LLMs)-ChatGPT-4, DeepSeek, and Gemini-in answering general questions about specific learning disorders (SLDs), specifically dyslexia and dyscalculia. For each learning disorder subtype, 15 questions were developed through expert review of social media, forums, and professional input. Responses from the LLMs were evaluated using the Global Quality Scale (GQS) and a seven-point Likert scale to assess usefulness and reliability. Statistical analyses were conducted to compare model performance, including descriptive statistics and one-way ANOVA. Results revealed no statistically significant differences in quality or usefulness across models for both disorders. However, ChatGPT-4 demonstrated superior reliability for dyscalculia (p < 0.05), outperforming Gemini and DeepSeek. For dyslexia, DeepSeek achieved 100% maximum reliability scores, while GPT-4 and Gemini scored 60%. All models provided high-quality responses, with mean GQS scores ranging from 4.20 to 4.60 for dyslexia and 3.93 to 4.53 for dyscalculia, although variability existed in their practical utility. While LLMs show promise in delivering dyslexia and dyscalculia-related information, GPT-4's reliability for dyscalculia highlights its potential as a supplementary educational tool. Further validation by professionals remains critical.</p>","PeriodicalId":520814,"journal":{"name":"The Psychiatric quarterly","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The Psychiatric quarterly","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11126-025-10170-6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The current study aimed to evaluate the quality, usefulness, and reliability of three large language models (LLMs), namely ChatGPT-4, DeepSeek, and Gemini, in answering general questions about specific learning disorders (SLDs), specifically dyslexia and dyscalculia. For each learning disorder subtype, 15 questions were developed through expert review of social media posts, forums, and professional input. Responses from the LLMs were evaluated using the Global Quality Scale (GQS) and seven-point Likert scales for usefulness and reliability. Statistical analyses, including descriptive statistics and one-way ANOVA, were conducted to compare model performance. Results revealed no statistically significant differences in quality or usefulness across models for either disorder. However, ChatGPT-4 demonstrated superior reliability for dyscalculia (p < 0.05), outperforming Gemini and DeepSeek. For dyslexia, DeepSeek achieved the maximum reliability score 100% of the time, while ChatGPT-4 and Gemini did so 60% of the time. All models provided high-quality responses, with mean GQS scores ranging from 4.20 to 4.60 for dyslexia and from 3.93 to 4.53 for dyscalculia, although their practical utility varied. While LLMs show promise in delivering dyslexia- and dyscalculia-related information, ChatGPT-4's reliability for dyscalculia highlights its potential as a supplementary educational tool. Further validation by professionals remains critical.
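To illustrate the kind of comparison the abstract describes, the minimal sketch below runs a one-way ANOVA over per-question ratings for the three models using SciPy. The choice of scipy.stats.f_oneway and the rating values are assumptions for illustration only; the study does not report its software, and the scores shown are hypothetical placeholders, not the study's data.

```python
# Minimal sketch, assuming SciPy: one-way ANOVA comparing hypothetical
# 5-point GQS ratings of three LLMs' answers (15 questions per disorder
# subtype, as in the study). Values are illustrative placeholders only.
from scipy.stats import f_oneway

# One hypothetical GQS rating (1-5) per question for each model.
gqs_chatgpt4 = [5, 4, 5, 4, 5, 4, 4, 5, 5, 4, 5, 4, 5, 4, 5]
gqs_deepseek = [4, 4, 5, 5, 4, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4]
gqs_gemini   = [4, 5, 4, 4, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 4]

# f_oneway tests whether the group means differ significantly.
f_stat, p_value = f_oneway(gqs_chatgpt4, gqs_deepseek, gqs_gemini)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value >= 0.05 would indicate no statistically significant difference
# in quality across the three models, matching the GQS result reported above.
```

The same procedure would be repeated for the usefulness and reliability Likert ratings and for each disorder separately, which is how a single significant difference (ChatGPT-4's reliability for dyscalculia) can emerge while the other comparisons remain non-significant.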