Fiona McGowan Martha Morrison, Nima Rezaei, Amanuel Godana Arero, Vasko Graklanov, Sevan Iritsyan, Mariya Ivanovska, Rangariari Makuku, Leander Penaso Marquez, Kseniia Minakova, Lindelwa Phakamile Mmema, Piotr Rzymski, Ganna Zavolodko

Journal of medical artificial intelligence, published 2023-11-01. DOI: 10.21037/jmai-23-63
Maintaining scientific integrity and high research standards against the backdrop of rising artificial intelligence use across fields
Abstract: Artificial intelligence (AI) technologies have already played a revolutionary role in scientific research, from diagnostics to text-generative AI used in scientific writing. The use of AI in science needs transparent regulation, especially given its long history: the first AI technologies in science were developed in the 1950s. Since then, AI has progressed from altering existing texts to generating accurate, natural-sounding texts with models built on billions of parameters. However, scientific work requires high ethical and professional standards, and the rise of AI use in the field has led many institutions and journals to release statements and restrictions on its use. Because AI is reliant on its users, it can exacerbate and amplify existing biases in the field without being able to take accountability for them. AI responses can also lack specificity and depth. Nevertheless, it is important not to condemn the use of AI in scientific work as a whole. This article makes partial use of an AI large language model (LLM), specifically the Chat Generative Pre-trained Transformer (ChatGPT), to illustrate its arguments with clear examples. Several recommendations at both the strategic and regulatory levels are formulated in this paper to enable the complementary use of AI alongside ethically conducted scientific research and for educational purposes, where it shows great potential as a transformative force in interactive work. Policymakers should create wide-reaching, clear guidelines and legal frameworks for using AI, removing the burden of these considerations from educators and senior researchers. Caution in the scientific community is advised, though further understanding and work to improve AI use are encouraged.