{"title":"Artificial intelligence versus human analysis: Interpreting data in elderly fat reduction study","authors":"Piotr Sporek, Mariusz Konieczny","doi":"10.1016/j.aimed.2024.12.011","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>Artificial intelligence (AI) has emerged as a transformative tool in scientific research, playing an increasingly significant role in data analysis and interpretation. This study aimed to evaluate the efficiency and accuracy of AI-driven interpretations of health-related data in comparison to those provided by human experts.</div></div><div><h3>Methods</h3><div>The analysis utilized a paid version of Chat GPT-4 (AI) to interpret study results, relying solely on table titles and data extracted from the authors' previously published manuscript. The dataset encompassed body composition and health parameters within the context of a dietary intervention. Data from a prior publication by the authors were referenced, as detailed in the methods section. The evaluation focused on comparing word count and descriptive content across interpretations of three tables from the original manuscript.</div></div><div><h3>Results</h3><div>The human expert's data interpretation was succinct, comprising 160 words, while AI-generated descriptions extended to 426 words. Similarly, the AI provided a more verbose analysis of the pre/post-intervention parameter significance, with 374 words compared to the human's 108 words. Group interactions were described in 44 words by the human expert and 486 words by the AI. Notably, the AI's analysis was accurate, though more detailed.</div></div><div><h3>Conclusions</h3><div>Chat GPT-4 necessitates precise table titles and well-defined data inputs to generate comprehensive analyses, as it does not autonomously account for all parameters within the dataset. While the AI demonstrated factual accuracy and valid conclusions, its descriptions lacked precision, underscoring the importance of human oversight in ensuring interpretative clarity and relevance.</div></div>","PeriodicalId":7343,"journal":{"name":"Advances in integrative medicine","volume":"12 1","pages":"Pages 13-18"},"PeriodicalIF":1.7000,"publicationDate":"2025-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advances in integrative medicine","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2212958824001666","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Medicine","Score":null,"Total":0}
Citations: 0
Abstract
Background
Artificial intelligence (AI) has emerged as a transformative tool in scientific research, playing an increasingly significant role in data analysis and interpretation. This study aimed to evaluate the efficiency and accuracy of AI-driven interpretations of health-related data in comparison to those provided by human experts.
Methods
The analysis used a paid version of Chat GPT-4 (AI) to interpret study results, relying solely on table titles and data extracted from the authors' previously published manuscript, as detailed in the methods section. The dataset encompassed body composition and health parameters collected in the context of a dietary intervention. The evaluation compared word count and descriptive content across interpretations of three tables from the original manuscript.
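As a rough illustration of this workflow (the study used the paid ChatGPT-4 web interface, not the API, so the client setup, model name, prompt wording, and table text below are assumptions for the sketch, not the authors' actual inputs), each table title and its raw data could be submitted to the model and the generated interpretation collected for comparison:

```python
# Minimal sketch: send a table title and its data to GPT-4 and collect the
# generated interpretation. Assumes the official OpenAI Python client (v1.x)
# and an OPENAI_API_KEY in the environment; all inputs are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def interpret_table(table_title: str, table_data: str) -> str:
    """Ask the model to interpret one table, given only its title and raw data."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a researcher interpreting study results."},
            {"role": "user",
             "content": f"Interpret the following table.\n\n"
                        f"Title: {table_title}\n\nData:\n{table_data}"},
        ],
    )
    return response.choices[0].message.content

# Example call with placeholder inputs (not the study's data)
ai_text = interpret_table(
    "Table 1. Body composition before and after the dietary intervention",
    "Parameter | Pre | Post | p-value\nBody fat (%) | ... | ... | ...",
)
```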
Results
The human expert's data interpretation was succinct, comprising 160 words, while AI-generated descriptions extended to 426 words. Similarly, the AI provided a more verbose analysis of the pre/post-intervention parameter significance, with 374 words compared to the human's 108 words. Group interactions were described in 44 words by the human expert and 486 words by the AI. Notably, the AI's analysis was accurate, though more detailed.
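The reported word counts amount to a simple whitespace split of each interpretation; a minimal sketch of that comparison is shown below (the text variables are placeholders, not the study's actual interpretations):

```python
# Count words in plain-text interpretations by splitting on whitespace.
def word_count(text: str) -> int:
    return len(text.split())

human_text = "..."  # human expert's interpretation of a table (placeholder)
ai_text = "..."     # AI-generated interpretation of the same table (placeholder)

print(f"Human: {word_count(human_text)} words; AI: {word_count(ai_text)} words")
```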
Conclusions
Chat GPT-4 necessitates precise table titles and well-defined data inputs to generate comprehensive analyses, as it does not autonomously account for all parameters within the dataset. While the AI demonstrated factual accuracy and valid conclusions, its descriptions lacked precision, underscoring the importance of human oversight in ensuring interpretative clarity and relevance.
About the journal:
Advances in Integrative Medicine (AIMED) is an international peer-reviewed, evidence-based research and review journal that is multi-disciplinary within the fields of Integrative and Complementary Medicine. The journal focuses on rigorous quantitative and qualitative research, including systematic reviews, clinical trials and surveys, while also welcoming medical hypotheses and clinically relevant articles and case studies disclosing practical learning tools for the consulting practitioner. By promoting research and practice excellence in the field, and cross-collaboration between relevant practitioner groups and associations, the journal aims to advance the practice of integrative medicine, identify areas for future research, and improve patient health outcomes. International networking is encouraged through clinical innovation, the establishment of best practice, and by providing opportunities for cooperation between organisations and communities.