{"title":"ChatGPT-4的皮肤病学知识水平基于委员会审查问题和布鲁姆的分类。","authors":"Hansen Tai, Carrie Kovarik","doi":"10.2196/74085","DOIUrl":null,"url":null,"abstract":"<p><strong>Unlabelled: </strong>Our study demonstrated the ability of ChatGPT-4 to answer 77.5% of all sampled text-based board review type questions correctly. Questions requiring the recall of factual information were answered correctly most often, with slight decreases in correctness as higher-order thinking requirements increased. Improvements to ChatGPT's visual diagnostics capabilities will be required before it can be used reliably for clinical decision-making and visual diagnostics.</p>","PeriodicalId":73553,"journal":{"name":"JMIR dermatology","volume":"8 ","pages":"e74085"},"PeriodicalIF":0.0000,"publicationDate":"2025-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12331215/pdf/","citationCount":"0","resultStr":"{\"title\":\"ChatGPT-4's Level of Dermatological Knowledge Based on Board Examination Review Questions and Bloom's Taxonomy.\",\"authors\":\"Hansen Tai, Carrie Kovarik\",\"doi\":\"10.2196/74085\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Unlabelled: </strong>Our study demonstrated the ability of ChatGPT-4 to answer 77.5% of all sampled text-based board review type questions correctly. Questions requiring the recall of factual information were answered correctly most often, with slight decreases in correctness as higher-order thinking requirements increased. Improvements to ChatGPT's visual diagnostics capabilities will be required before it can be used reliably for clinical decision-making and visual diagnostics.</p>\",\"PeriodicalId\":73553,\"journal\":{\"name\":\"JMIR dermatology\",\"volume\":\"8 \",\"pages\":\"e74085\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-08-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12331215/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"JMIR dermatology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2196/74085\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Medicine\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR dermatology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/74085","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}
ChatGPT-4's Level of Dermatological Knowledge Based on Board Examination Review Questions and Bloom's Taxonomy.
Unlabelled: Our study demonstrated that ChatGPT-4 answered 77.5% of all sampled text-based board examination review questions correctly. Questions requiring recall of factual information were answered correctly most often, with correctness decreasing slightly as higher-order thinking requirements increased. ChatGPT's visual diagnostic capabilities will need to improve before it can be used reliably for clinical decision-making and visual diagnosis.