Can AI help authors prepare better risk science manuscripts?

Louis Anthony Cox, Terje Aven, Seth Guikema, Charles N Haas, James H Lambert, Karen Lowrie, George Maldonado, Felicia Wu

Risk Analysis, published 2025-06-06. DOI: 10.1111/risa.70055
Journal Article; JCR Q1 (Mathematics, Interdisciplinary Applications); Impact Factor 3.0
Citations: 0
Abstract
Scientists, publishers, and journal editors are wondering how, whether, and to what extent artificial intelligence (AI) tools might soon help to advance the rigor, efficiency, and value of scientific peer review. Will AI provide timely, useful feedback that helps authors improve their manuscripts while avoiding the biases and inconsistencies of human reviewers? Or might it instead generate low-quality verbiage, add noise and errors, reinforce flawed reasoning, and erode trust in the review process? This perspective reports on evaluations of two experimental AI systems: (i) a "Screener" available at http://screener.riskanalysis.cloud/ that gives authors feedback on whether a draft paper (or abstract, proposal, etc.) appears to be a fit for the journal Risk Analysis, based on the guidance to authors provided by the journal (https://www.sra.org/journal/what-makes-a-good-risk-analysis-article/); and (ii) a more ambitious "Reviewer" (http://aia1.moirai-solutions.com/) that gives substantive technical feedback and recommends how to improve the clarity of methodology and the interpretation of results. The evaluations were conducted by a convenience sample of Risk Analysis Area Editors (AEs) and authors, including two authors of manuscripts in progress and four authors of papers that had already been published. The Screener was generally rated as useful. It has been deployed at Risk Analysis since January of 2025. On the other hand, the Reviewer had mixed ratings, ranging from strongly positive to strongly negative. This perspective describes both the lessons learned and potential next steps in making AI tools useful to authors prior to peer review by human experts.
Journal overview:
Published on behalf of the Society for Risk Analysis, Risk Analysis is ranked among the top 10 journals in the ISI Journal Citation Reports under the social sciences, mathematical methods category, and provides a focal point for new developments in the field of risk analysis. This international peer-reviewed journal is committed to publishing critical empirical research and commentaries dealing with risk issues. The topics covered include:
• Human health and safety risks
• Microbial risks
• Engineering
• Mathematical modeling
• Risk characterization
• Risk communication
• Risk management and decision-making
• Risk perception, acceptability, and ethics
• Laws and regulatory policy
• Ecological risks