Artificial intelligence and the Journal of Research in Science Teaching

Troy D. Sadler, Felicia Moore Mensah, Jonathan Tam

Journal of Research in Science Teaching, 61(4), 739–743 (2024). https://doi.org/10.1002/tea.21933
{"title":"Artificial intelligence and the Journal of Research in Science Teaching","authors":"Troy D. Sadler, Felicia Moore Mensah, Jonathan Tam","doi":"10.1002/tea.21933","DOIUrl":null,"url":null,"abstract":"<p>Artificial Intelligence (AI) is a transformative technology that promises to impact many aspects of society including research, education, and publishing. We, the editors of the <i>Journal of Research in Science Teaching</i> (JRST), think that the journal has a responsibility to contribute to the ongoing dialogues about the use of AI in research and publishing with particular attention to the field of science education. We use this editorial to share our current ideas about the opportunities and challenges associated with AI in science education research and to sketch out new journal guidelines related to the use of AI for the production of JRST articles. We also extend an invitation to scholars to submit research articles and commentaries that advance the field's understanding of the intersections of AI and science education.</p><p>Establishing foundations for an AI revolution has been in progress since the mid-twentieth century (Adamopoulou & Moussiades, <span>2020</span>), and a giant step in public engagement with AI was taken in November 2022 when OpenAI released ChatGPT. This tool along with other large language models (LLM) such as Google Bard, and Microsoft's Copilot, provide platforms that are easy to use and can generate content such as text, images, computer code, audio, and video. It has quickly become apparent that these <i>generative</i> AI tools have the potential to change education in substantial ways. There is already evidence that students and teachers are actively using AI in ways that will push the field of education to reconsider what it means to construct learning artifacts, how to assess the work of learners, and the nature of learner-technology interactions (e.g., Prather et al., <span>2023</span>). Of course, generative AI will not just impact the work of students, teachers, and other educational practitioners, it will affect how research is conducted and reported. As journal editors, we are particularly interested in the use of AI in the sharing of research and publication processes.</p><p>Across the field of education research, and science education research more specifically, scholars use a host of technologies to support their work. For example, researchers regularly use statistical packages to derive quantitative patterns in data, qualitative software to organize and represent coded themes in data, grammar, and spelling check software embedded in word processors and online (i.e., Grammarly), and reference managers to find and cite literature. Technologies such as these examples are ubiquitous across our field, and new generative AI presents another set of tools that researchers might leverage for the sharing of their scholarship. However, the now widely available LLMs seem, to us, to represent a fundamental shift in technological capacity for producing research publications. The users of software for data analysis, reference management, and grammar checks exert levels of control and supervision over these technologies, which is not the case when using an LLM. There is a much greater degree of opaqueness and uncertainty when it comes to generating content with an LLM as compared to generating regression coefficients with data analysis software. 
Given these distinctions between AI and other technologies used by researchers, we think AI presents a unique challenge for academic publishing and therefore warrants the additional attention called for in this editorial.</p><p>In considering the role of AI in publishing research, we think it is important to highlight two fundamental tensions. First, the research enterprise is about the creation of new knowledge. Researchers conduct and write about studies and other forms of scholarship as a means of generating new ideas and insights about the foci of their inquiries. We argue that AI, at least the LLMs that are currently prevalent, cannot achieve the goal of trustworthy knowledge creation. LLMs necessarily work from existing source material—they can repeat, reword, and summarize what already exists, but they do not create new knowledge. AI can be generative in the sense that it can generate content such as text, but AI is not generative from a research perspective. Second, an important hallmark of science and research is a commitment to openness and transparency. The set of social practices employed by research communities is a fundamental dimension of science itself, and open sharing and critique of methods, findings, and interpretations are some of these critical social practices (Osborne et al., <span>2022</span>). The processes underlying generative AI tools in common use are not open or transparent. It is not always clear what the sources for AI generation are, how the sources are being analyzed, or why some ideas are highlighted and others are not. The phenomenon of AI hallucination, wherein an LLM generates false information based on patterns that do not exist in the source material, provides evidence of this problem. Why AI tools create content that is false or misleading is not fully understood and reflects an underlying degree of uncertainty (Athaluri et al., <span>2023</span>).</p><p>Despite these concerns, we are not arguing that AI has no place in conducting and publishing research. As authors of a recent JRST commentary suggest, “the [AI] train… has indeed left the station” (Zhai & Nehm, <span>2023</span>, p. 1395). Although this statement was written specifically in response to AI's role in formative assessment, its point about the inevitability of AI extends to other aspects of our field including publishing. We can imagine ways in which AI might be used (and is already being used) responsibly for conducting research and preparing manuscripts. For example, AI can help researchers review existing literature, generate code for analyzing data, create outlines for organizing manuscripts, and assist brainstorming processes. (In the interest of full disclosure, as we thought about what to claim that AI could do for researchers, we posed the following questions to ChatGPT: “How can generative AI be used responsibly for conducting research and publishing?” and “What things can AI do for researchers trying to publish their work?” Some of the responses were helpful to jump-start our thinking, but we created the final list shared above.)</p><p>We also think that it is critically important for users of AI to be aware of its limitations and problems. Some of those limitations and problems include bias, inaccuracy, and, as we highlighted above, limited transparency. Generative AI is biased by the data corpus that it reviews. Models trained on biased data sets produce biased results including the propagation of gender stereotypes and racial discrimination (Heaven, <span>2023</span>). 
These platforms can also produce inaccurate results—the output can be outdated, factually inaccurate, and occasionally nonsensical. In addition, generative AI tends not to provide citations for the products that it creates, and when asked specifically to do so, may create fictitious references (Stokel-Walker & Van Noorden, <span>2023</span>). Over time, the models will improve, and the users of this technology will get better at using it. However, these concerns will not simply go away, and it is essential for scholars using generative AI as well as those consuming AI-generated content to be aware of these issues.</p><p>Given both the challenges and potential associated with AI, we are not in favor of the use of generative AI to produce text for writing manuscripts. However, as stewards of JRST, we recognize that AI technologies are rapidly evolving as are the ways in which science education scholars use them, and setting overly restrictive guidelines regarding the use of AI for JRST publications could be detrimental to the journal and the JRST community. We think that it would be inappropriate for a research team to use AI to generate the full text for a JRST manuscript. At this moment, we do not think that it would even be possible to do this in a way that yields a product that meets the standards for JRST publication. However, we can also imagine circumstances in which a team employs AI in a manner consistent with the uses we presented above, and that some aspect of the AI-generated content ends up in the manuscript. Despite our acknowledged skepticism of the role of AI in publishing scholarship generally, we see this hypothetical case as one of likely numerous situations in which AI-generated content is quite appropriately included in a JRST article. In all situations in which authors employ AI, they should thoroughly review and edit the AI-generated content to check for accuracy and ensure that ethical standards for research, including proper attribution of sources and the avoidance of plagiarism, are met.</p><p>In terms of guidelines for the journal regarding AI, transparency is our key principle. When authors choose to use AI in their research and creation of manuscripts to be considered in JRST, they should openly disclose what AI tools were used and how they were used. Authors should make it clear at the time of submission what, if any, text, or other content (e.g., images or data displays) included in the manuscript was the product of an AI tool. These disclosures should be made in a manuscript's Methods section, when AI use relates to the design, enactment, or analysis of the research, or in an acknowledgments section. Ultimately, the authors are responsible for the information presented in their manuscripts. This includes accuracy of the information, proper citation of sources, and insurance of academic integrity. The editors, associate editors, and reviewers of JRST will consider AI declarations as a part of the process for publication decisions.</p><p>Whereas the use of AI tools for the preparation of manuscripts should be clearly acknowledged, these tools cannot be included as coauthors in JRST. Authorship carries with it responsibilities related to integrity, accuracy, and agreement to the journal's terms of use. AI cannot assume these responsibilities and, therefore, should not be listed as an author for JRST manuscripts. 
Human authors who submit a manuscript to JRST are responsible for all of the content presented in their manuscript regardless of the ways AI might have been used to support the process of generating the research or preparing the manuscript. The guidelines that we have outlined for JRST regarding author responsibilities, use and declaration of AI, and authorship are consistent with Wiley's guidelines for research integrity and publishing ethics. Wiley, the Publisher of JRST, includes an explicit statement on AI-generated content in their statement on ethics (https://authorservices.wiley.com/ethics-guidelines/index.html). The guidelines we share are also consistent with the Committee on Publication Ethics (COPE) position statement on AI tools (https://publicationethics.org/) and align with prevailing trends among academic publishers and journals (e.g., Flanagin et al., <span>2023</span>).</p><p>Of course, there is potential for employing AI in publication processes that go beyond conducting research and preparing manuscripts. For example, JRST regularly uses software to detect how similar newly accepted manuscripts are to previously published reports. In this case, we use a form of AI to guard against plagiarism. However, at this time, JRST does not approve of the use of generative AI in the review of manuscripts or the determination of publication decisions. Furthermore, reviewers should not upload any content from submitted manuscripts to generative AI tools. Uploading manuscripts to an AI model violates the confidentiality assumed in the JRST review process. The editorial team sends manuscripts to reviewers to read and provide feedback based on their expertise, and we expect the feedback provided to be the product of the expert reviewers and not AI. We think that reviewing and making publication decisions on science education research manuscripts requires specialized knowledge and that current AI tools cannot complete these tasks well nor do they currently have the capacity to do so.</p><p>AI holds exciting potential for many dimensions of modern life; and research, education, and publishing are certainly some of the areas that might be dramatically impacted. Just as it is exciting to consider the possibilities of AI, there are ample reasons for concern. As the editors of JRST, we think it is important for the journal to present clear guidelines for the use of AI in JRST publications and review processes. In this editorial, we have attempted to outline such a set of guidelines. As AI technologies change, these guidelines will need to be reviewed and when appropriate revised; but for now, we hope that these guidelines provide help for researchers and authors trying to navigate the current environment for science education research in which AI is clearly a part.</p><p>In addition to presenting guidelines for AI use in JRST, we hope this editorial contributes to a burgeoning conversation in the science education community about AI more generally. As nearly all commentators about AI have suggested, AI is potentially transformative, but there are many uncertainties about how we should use AI and what problems could be generated through that use. AI is already an important part of science learning environments and a tool being used in many different ways by learners and teachers (e.g., Cross, <span>2023</span>). 
While there are certainly some science education researchers responding to the AI revolution (e.g., Antonenko & Bramowitz, <span>2023</span>), we think, that as a whole, the science education research community is not as far along as it needs to be in terms of understanding, theorizing, and studying the intersections of AI and science education.</p><p>To help advance this discourse, we invite scholars to submit their research related to AI in science education to JRST. Authors of empirical manuscripts, literature reviews, or explorations of theory related to the use of AI in science education are invited to submit manuscripts to the journal. In addition, we are very interested in hosting a series of commentaries that advance positions regarding what AI technologies are being used in science education, how AI should be used (or not used) to support science learning and teaching, the pitfalls and potential of AI in our field, how the field should respond to developments in AI, and so forth. Commentaries are much shorter than full article submissions (1000–2000 words) and are reviewed by the editorial team as opposed to the full review process used for other types of manuscripts. We invite scholars to send inquiries regarding the appropriateness of particular themes or purposes of potential commentaries to the JRST editors via email: <span>[email protected]</span>. Commentaries related to AI (or other topics) should be submitted through the journal's online submission platform (https://mc.manuscriptcentral.com/jrst) as a “Comment” (when asked to select article type). We look forward to conversations in the pages of JRST that can help shape the future of science education and science education research and the role of AI in that future.</p>","PeriodicalId":48369,"journal":{"name":"Journal of Research in Science Teaching","volume":"61 4","pages":"739-743"},"PeriodicalIF":3.6000,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/tea.21933","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Research in Science Teaching","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/tea.21933","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
引用次数: 0
Abstract
Artificial Intelligence (AI) is a transformative technology that promises to impact many aspects of society including research, education, and publishing. We, the editors of the Journal of Research in Science Teaching (JRST), think that the journal has a responsibility to contribute to the ongoing dialogues about the use of AI in research and publishing with particular attention to the field of science education. We use this editorial to share our current ideas about the opportunities and challenges associated with AI in science education research and to sketch out new journal guidelines related to the use of AI for the production of JRST articles. We also extend an invitation to scholars to submit research articles and commentaries that advance the field's understanding of the intersections of AI and science education.
Work to establish the foundations for an AI revolution has been in progress since the mid-twentieth century (Adamopoulou & Moussiades, 2020), and a giant step in public engagement with AI was taken in November 2022 when OpenAI released ChatGPT. This tool, along with other large language models (LLMs) such as Google's Bard and Microsoft's Copilot, provides easy-to-use platforms that can generate content such as text, images, computer code, audio, and video. It has quickly become apparent that these generative AI tools have the potential to change education in substantial ways. There is already evidence that students and teachers are actively using AI in ways that will push the field of education to reconsider what it means to construct learning artifacts, how to assess the work of learners, and the nature of learner-technology interactions (e.g., Prather et al., 2023). Of course, generative AI will not just impact the work of students, teachers, and other educational practitioners; it will affect how research is conducted and reported. As journal editors, we are particularly interested in the use of AI in the sharing of research and in publication processes.
Across the field of education research, and science education research more specifically, scholars use a host of technologies to support their work. For example, researchers regularly use statistical packages to derive quantitative patterns in data, qualitative software to organize and represent coded themes in data, grammar and spelling check software embedded in word processors and available online (e.g., Grammarly), and reference managers to find and cite literature. Technologies such as these are ubiquitous across our field, and new generative AI presents another set of tools that researchers might leverage for the sharing of their scholarship. However, the now widely available LLMs seem, to us, to represent a fundamental shift in technological capacity for producing research publications. Users of software for data analysis, reference management, and grammar checking exert levels of control and supervision over these technologies that are not possible when using an LLM. There is a much greater degree of opaqueness and uncertainty when it comes to generating content with an LLM as compared to generating regression coefficients with data analysis software. Given these distinctions between AI and other technologies used by researchers, we think AI presents a unique challenge for academic publishing and therefore warrants the additional attention called for in this editorial.
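To make this contrast concrete, here is a minimal sketch, in Python with hypothetical data and variable names, of the kind of transparent, user-controlled analysis we have in mind: every input, model specification, and resulting coefficient is open to inspection, in a way that LLM-generated content is not.

```python
# A minimal sketch of deterministic, inspectable analysis (hypothetical data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=42)          # fixed seed: fully reproducible
hours_studied = rng.uniform(0, 10, size=100)  # hypothetical predictor
test_score = 50 + 4 * hours_studied + rng.normal(0, 5, size=100)

X = sm.add_constant(hours_studied)            # explicit model specification
model = sm.OLS(test_score, X).fit()
print(model.params)     # coefficients follow deterministically from the data
print(model.summary())  # full diagnostics, open to scrutiny and critique
```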
In considering the role of AI in publishing research, we think it is important to highlight two fundamental tensions. First, the research enterprise is about the creation of new knowledge. Researchers conduct and write about studies and other forms of scholarship as a means of generating new ideas and insights about the foci of their inquiries. We argue that AI, at least the LLMs that are currently prevalent, cannot achieve the goal of trustworthy knowledge creation. LLMs necessarily work from existing source material—they can repeat, reword, and summarize what already exists, but they do not create new knowledge. AI can be generative in the sense that it can generate content such as text, but AI is not generative from a research perspective. Second, an important hallmark of science and research is a commitment to openness and transparency. The set of social practices employed by research communities is a fundamental dimension of science itself, and open sharing and critique of methods, findings, and interpretations are some of these critical social practices (Osborne et al., 2022). The processes underlying generative AI tools in common use are not open or transparent. It is not always clear what the sources for AI generation are, how the sources are being analyzed, or why some ideas are highlighted and others are not. The phenomenon of AI hallucination, wherein an LLM generates false information based on patterns that do not exist in the source material, provides evidence of this problem. Why AI tools create content that is false or misleading is not fully understood and reflects an underlying degree of uncertainty (Athaluri et al., 2023).
Despite these concerns, we are not arguing that AI has no place in conducting and publishing research. As the authors of a recent JRST commentary suggest, “the [AI] train… has indeed left the station” (Zhai & Nehm, 2023, p. 1395). Although this statement was written specifically in response to AI's role in formative assessment, its point about the inevitability of AI extends to other aspects of our field, including publishing. We can imagine ways in which AI might be used (and is already being used) responsibly for conducting research and preparing manuscripts. For example, AI can help researchers review existing literature, generate code for analyzing data, create outlines for organizing manuscripts, and assist brainstorming processes. (In the interest of full disclosure, as we thought about what AI could do for researchers, we posed the following questions to ChatGPT: “How can generative AI be used responsibly for conducting research and publishing?” and “What things can AI do for researchers trying to publish their work?” Some of the responses helped jump-start our thinking, but we created the final list shared above.)
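For illustration only, and not how we actually posed these questions, the sketch below shows how a researcher might send such a brainstorming prompt to an LLM programmatically, here using OpenAI's Python client; the model name is an assumption, and an API key is required.

```python
# A hedged sketch of posing a brainstorming question to an LLM via
# OpenAI's Python client (requires the OPENAI_API_KEY environment variable).
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice, not an endorsement
    messages=[
        {"role": "user",
         "content": "How can generative AI be used responsibly for "
                    "conducting research and publishing?"},
    ],
)
# The output is a starting point for human thinking, not a final product.
print(response.choices[0].message.content)
```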
We also think that it is critically important for users of AI to be aware of its limitations and problems, which include bias, inaccuracy, and, as we highlighted above, limited transparency. Generative AI is biased by the data corpus on which it is trained: models trained on biased data sets produce biased results, including the propagation of gender stereotypes and racial discrimination (Heaven, 2023). These platforms can also produce inaccurate results; the output can be outdated, factually inaccurate, and occasionally nonsensical. In addition, generative AI tends not to provide citations for the products that it creates and, when asked specifically to do so, may create fictitious references (Stokel-Walker & Van Noorden, 2023). Over time, the models will improve, and the users of this technology will get better at using it. However, these concerns will not simply go away, and it is essential for scholars using generative AI, as well as those consuming AI-generated content, to be aware of these issues.
Given both the challenges and potential associated with AI, we are not in favor of using generative AI to produce the text of manuscripts. However, as stewards of JRST, we recognize that AI technologies are rapidly evolving, as are the ways in which science education scholars use them, and setting overly restrictive guidelines regarding the use of AI for JRST publications could be detrimental to the journal and the JRST community. We think that it would be inappropriate for a research team to use AI to generate the full text of a JRST manuscript. At this moment, we do not think it would even be possible to do so in a way that yields a product meeting the standards for JRST publication. However, we can also imagine circumstances in which a team employs AI in a manner consistent with the uses we presented above, and some aspect of the AI-generated content ends up in the manuscript. Despite our acknowledged skepticism about the role of AI in publishing scholarship generally, we see this hypothetical case as one of what are likely numerous situations in which AI-generated content is quite appropriately included in a JRST article. In all situations in which authors employ AI, they should thoroughly review and edit the AI-generated content to check for accuracy and to ensure that ethical standards for research, including proper attribution of sources and the avoidance of plagiarism, are met.
In terms of journal guidelines regarding AI, transparency is our key principle. When authors choose to use AI in their research and in the creation of manuscripts to be considered by JRST, they should openly disclose what AI tools were used and how they were used. Authors should make clear at the time of submission what text or other content (e.g., images or data displays), if any, included in the manuscript was the product of an AI tool. These disclosures should be made in a manuscript's Methods section, when AI use relates to the design, enactment, or analysis of the research, or in an acknowledgments section. Ultimately, the authors are responsible for the information presented in their manuscripts, including the accuracy of the information, proper citation of sources, and assurance of academic integrity. The editors, associate editors, and reviewers of JRST will consider AI declarations as a part of the process for publication decisions.
Whereas the use of AI tools for the preparation of manuscripts should be clearly acknowledged, these tools cannot be included as coauthors in JRST. Authorship carries with it responsibilities related to integrity, accuracy, and agreement to the journal's terms of use. AI cannot assume these responsibilities and, therefore, should not be listed as an author on JRST manuscripts. Human authors who submit a manuscript to JRST are responsible for all of the content presented in their manuscript, regardless of the ways AI might have been used to support the process of generating the research or preparing the manuscript. The guidelines that we have outlined for JRST regarding author responsibilities, use and declaration of AI, and authorship are consistent with Wiley's guidelines for research integrity and publishing ethics. Wiley, the publisher of JRST, includes an explicit statement on AI-generated content in its ethics guidelines (https://authorservices.wiley.com/ethics-guidelines/index.html). The guidelines we share are also consistent with the Committee on Publication Ethics (COPE) position statement on AI tools (https://publicationethics.org/) and align with prevailing trends among academic publishers and journals (e.g., Flanagin et al., 2023).
Of course, there is potential for employing AI in publication processes beyond conducting research and preparing manuscripts. For example, JRST regularly uses software to detect how similar newly accepted manuscripts are to previously published reports; in this case, we use a form of AI to guard against plagiarism. However, at this time, JRST does not approve of the use of generative AI in the review of manuscripts or the determination of publication decisions. Furthermore, reviewers should not upload any content from submitted manuscripts to generative AI tools. Uploading manuscripts to an AI model violates the confidentiality assumed in the JRST review process. The editorial team sends manuscripts to reviewers to read and provide feedback based on their expertise, and we expect the feedback provided to be the product of the expert reviewers, not AI. We think that reviewing science education research manuscripts and making publication decisions about them require specialized knowledge, and current AI tools do not have the capacity to complete these tasks well.
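As a simple illustration of how such similarity screening can work, the sketch below compares a submission against prior reports using TF-IDF vectors and cosine similarity. This is an assumed, generic approach, not JRST's actual tooling, and the example texts are hypothetical.

```python
# A minimal illustration of document-similarity screening: represent each
# text as a TF-IDF vector, then compare vectors by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submission = "Students' model-based reasoning improved after the intervention."
prior_work = [
    "Model-based reasoning among students improved following the intervention.",
    "We report teacher beliefs about inquiry instruction in middle school.",
]

vectors = TfidfVectorizer().fit_transform([submission] + prior_work)
scores = cosine_similarity(vectors[0:1], vectors[1:]).ravel()
for text, score in zip(prior_work, scores):
    # Higher scores flag passages for human review; no automated decision is made.
    print(f"{score:.2f}  {text[:60]}")
```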
AI holds exciting potential for many dimensions of modern life, and research, education, and publishing are certainly among the areas that might be dramatically impacted. As exciting as it is to consider the possibilities of AI, there are also ample reasons for concern. As the editors of JRST, we think it is important for the journal to present clear guidelines for the use of AI in JRST publications and review processes. In this editorial, we have attempted to outline such a set of guidelines. As AI technologies change, these guidelines will need to be reviewed and, when appropriate, revised; but for now, we hope that they help researchers and authors navigate the current environment for science education research, in which AI is clearly a part.
In addition to presenting guidelines for AI use in JRST, we hope this editorial contributes to a burgeoning conversation in the science education community about AI more generally. As nearly all commentators on AI have suggested, AI is potentially transformative, but there are many uncertainties about how we should use it and what problems could be generated through that use. AI is already an important part of science learning environments and a tool being used in many different ways by learners and teachers (e.g., Cross, 2023). While there are certainly some science education researchers responding to the AI revolution (e.g., Antonenko & Bramowitz, 2023), we think that, as a whole, the science education research community is not as far along as it needs to be in understanding, theorizing, and studying the intersections of AI and science education.
To help advance this discourse, we invite scholars to submit their research related to AI in science education to JRST. Authors of empirical manuscripts, literature reviews, or explorations of theory related to the use of AI in science education are invited to submit manuscripts to the journal. In addition, we are very interested in hosting a series of commentaries that advance positions regarding what AI technologies are being used in science education, how AI should be used (or not used) to support science learning and teaching, the pitfalls and potential of AI in our field, how the field should respond to developments in AI, and so forth. Commentaries are much shorter than full article submissions (1000–2000 words) and are reviewed by the editorial team as opposed to the full review process used for other types of manuscripts. We invite scholars to send inquiries regarding the appropriateness of particular themes or purposes of potential commentaries to the JRST editors via email: [email protected]. Commentaries related to AI (or other topics) should be submitted through the journal's online submission platform (https://mc.manuscriptcentral.com/jrst) as a “Comment” (when asked to select article type). We look forward to conversations in the pages of JRST that can help shape the future of science education and science education research and the role of AI in that future.
About the Journal
Journal of Research in Science Teaching, the official journal of NARST: A Worldwide Organization for Improving Science Teaching and Learning Through Research, publishes reports for science education researchers and practitioners on issues of science teaching and learning and science education policy. Scholarly manuscripts within the domain of the Journal of Research in Science Teaching include, but are not limited to, investigations employing qualitative, ethnographic, historical, survey, philosophical, case study research, quantitative, experimental, quasi-experimental, data mining, and data analytics approaches; position papers; policy perspectives; critical reviews of the literature; and comments and criticism.