ChatGPT isn't an author, but a contribution taxonomy is needed.
Y. Suchikova, N. Tsybuliak
Accountability in Research: Policies and Quality Assurance, pp. 1-6
DOI: 10.1080/08989621.2024.2405039
Published: 2024-09-18 (Journal Article)
Citations: 0
Abstract
Purpose: The increasing use of AI tools, particularly large language models like ChatGPT, in academic research has raised significant questions about authorship and transparency. This commentary emphasizes the need for a standardized AI contributions taxonomy to clarify AI's role in producing and publishing research outputs, ensuring ethical standards and maintaining academic integrity.
Approach: We propose adapting the NIST AI Use Taxonomy and incorporating categories that reflect AI's use in tasks such as hypothesis generation, data analysis, manuscript preparation, and ethical oversight.
Findings: Establishing an AI contributions taxonomy for the production and publication of research output would address inconsistencies in AI disclosure, enhance transparency, and uphold accountability in research. It would help differentiate between AI-assisted and human-led tasks, providing more explicit attribution of contributions.
Practical implications: The proposed taxonomy would offer researchers and journals a standardized method for disclosing AI's role in academic work, promoting responsible and transparent reporting aligned with ethical guidelines from COPE and ICMJE.
Value: A well-defined AI contributions taxonomy for the production and publication of research output would foster transparency and trust in using AI in research, ensuring that AI's role is appropriately acknowledged while preserving academic integrity.
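To make the idea of a standardized disclosure concrete, the sketch below shows one hypothetical way a taxonomy-based AI contribution statement could be represented and rendered. The category names are taken from the tasks listed in the abstract (hypothesis generation, data analysis, manuscript preparation, ethical oversight); the record structure, field names, and output format are illustrative assumptions, not the schema proposed in the paper or defined by the NIST AI Use Taxonomy.

```python
from dataclasses import dataclass

@dataclass
class AIContribution:
    """One disclosed AI-assisted task (hypothetical record structure)."""
    category: str         # taxonomy category, e.g. "data analysis"
    tool: str             # AI tool used, e.g. "ChatGPT"
    human_oversight: str  # how the authors verified the AI output

def disclosure_statement(contributions: list[AIContribution]) -> str:
    """Render contributions as a plain-text disclosure for a manuscript."""
    lines = [
        f"- {c.category}: {c.tool} was used; {c.human_oversight}."
        for c in contributions
    ]
    return "AI contribution disclosure:\n" + "\n".join(lines)

example = [
    AIContribution(
        category="manuscript preparation",
        tool="ChatGPT",
        human_oversight="all generated text was reviewed and edited by the authors",
    ),
]
print(disclosure_statement(example))
```

A structured record like this would let journals check disclosures mechanically (e.g., require at least one human-oversight note per category), which is one practical way the taxonomy could "differentiate between AI-assisted and human-led tasks" as the abstract describes.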
About the journal:
Accountability in Research: Policies and Quality Assurance is devoted to the examination and critical analysis of systems for maximizing integrity in the conduct of research. It provides an interdisciplinary, international forum for the development of ethics, procedures, standards, policies, and concepts to encourage the ethical conduct of research and to enhance the validity of research results.
The journal welcomes views on advancing the integrity of research in the fields of general and multidisciplinary sciences, medicine, law, economics, statistics, management studies, public policy, politics, sociology, history, psychology, philosophy, ethics, and information science.
All submitted manuscripts are subject to initial appraisal by the Editor, and if found suitable for further consideration, to peer review by independent, anonymous expert referees.