{"title":"Are there 34,000 human emotions? Deconstructing patterns of scientific misinformation.","authors":"Jonas Polfuß","doi":"10.1080/08989621.2024.2393813","DOIUrl":"https://doi.org/10.1080/08989621.2024.2393813","url":null,"abstract":"<p><strong>Background: </strong>Scientific misinformation is a much-discussed topic, and the COVID-19 crisis has highlighted the importance of reliability in science and research. However, limiting misinformation is complicated because of the growing number of communication channels, in which scientific and nonscientific content are often mixed.</p><p><strong>Methods: </strong>This case study combines the examination of references, online observation, and a content and frequency analysis to investigate the dissemination of scientific misinformation in the interplay of different genres and media.</p><p><strong>Results: </strong>Using the example of the claimed existence of 34,000 human emotions, this study demonstrates how questionable statements are spread in science, popular science, and pseudoscience, making it particularly challenging to track and correct them.</p><p><strong>Conclusions: </strong>The findings highlight epistemic authority, trust, and injustice within and between scientific and nonscientific communities. The author argues that, in the digital age, researchers should defend and monitor scientific principles beyond academia.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-20"},"PeriodicalIF":2.8,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prevalence of plagiarism in hijacked journals: A text similarity analysis.","authors":"Anna Abalkina","doi":"10.1080/08989621.2024.2387210","DOIUrl":"https://doi.org/10.1080/08989621.2024.2387210","url":null,"abstract":"<p><strong>Background: </strong>The study examines the prevalence of plagiarism in hijacked journals, a category of problematic journals that have proliferated over the past decade.</p><p><strong>Methods: </strong>A quasi-random sample of 936 papers published in 58 hijacked journals that provided free access to their archive as of June 2021 was selected for the analysis. The study utilizes Urkund (Ouriginal) software and manual verification to investigate plagiarism and finds a significant prevalence of plagiarism in hijacked journals.</p><p><strong>Results: </strong>Out of the analyzed sample papers, 618 (66%) were found to contain instances of plagiarism, and 28% of papers from the sample (n = 259) displayed text similarities of 25% or more. The analysis reveals that a majority of authors originate from developing and ex-Soviet countries, with limited affiliation ties to developed countries and scarce international cooperation in papers submitted to hijacked journals. The absence of rigorous publication requirements, peer review processes, and plagiarism checks in hijacked journals creates an environment where authors can publish texts with a significant amount of plagiarism.</p><p><strong>Conclusions: </strong>These findings suggest a tendency for fraudulent journals to attract authors who do not uphold scientific integrity principles. The legitimization of papers from hijacked journals in bibliographic databases, along with their citation, poses significant challenges to scientific integrity.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-19"},"PeriodicalIF":2.8,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141996901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The PubPeer conundrum: Administrative challenges in research misconduct proceedings.","authors":"Minal M Caron, Carolyn T Lye, Barbara E Bierer, Mark Barnes","doi":"10.1080/08989621.2024.2390007","DOIUrl":"10.1080/08989621.2024.2390007","url":null,"abstract":"<p><p>The founders of PubPeer envisioned their website as an online form of a \"journal club\" that would facilitate post-publication peer review. Recently, PubPeer comments have led to a significant number of research misconduct proceedings - a development that could not have been anticipated when the current federal research misconduct regulations were developed two decades ago. Yet the number, frequency, and velocity of PubPeer comments identifying data integrity concerns, and institutional and government practices that treat all such comments as potential research misconduct allegations, have overwhelmed institutions and threaten to divert attention and resources away from other research integrity initiatives. Recent, high profile research misconduct cases accentuate the increasing public interest in research integrity and make it inevitable that the use of platforms such as PubPeer to challenge research findings will intensify. This article examines the origins of PubPeer and its central role in the modern era of online-based scouring of scientific publications for potential problems and outlines the challenges that institutions must manage in addressing issues identified on PubPeer. In conclusion, we discuss some potential enhancements to the investigatory process specified under federal regulations that could, if implemented, allow institutions to manage some of these challenges more efficiently.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-19"},"PeriodicalIF":2.8,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141977141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Replication studies in the Netherlands: Lessons learned and recommendations for funders, publishers and editors, and universities.","authors":"Maarten Derksen, Stephanie Meirmans, Jonna Brenninkmeijer, Jeannette Pols, Annemarijn de Boer, Hans van Eyghen, Surya Gayet, Rolf Groenwold, Dennis Hernaus, Pim Huijnen, Nienke Jonker, Renske de Kleijn, Charlotte F Kroll, Angelos-Miltiadis Krypotos, Nynke van der Laan, Kim Luijken, Ewout Meijer, Rachel S A Pear, Rik Peels, Robin Peeters, Charlotte C S Rulkens, Christin Scholz, Nienke Smit, Rombert Stapel, Joost de Winter","doi":"10.1080/08989621.2024.2383349","DOIUrl":"https://doi.org/10.1080/08989621.2024.2383349","url":null,"abstract":"<p><p>Drawing on our experiences conducting replications, we describe the lessons we learned about replication studies and formulate recommendations for researchers, policy makers, and funders about the role of replication in science and how it should be supported and funded. We first identify a variety of benefits of doing replication studies. Next, we argue that it is often necessary to improve aspects of the original study, even if that means deviating from the original protocol. Thirdly, we argue that replication studies highlight the importance of and need for more transparency of the research process, but also make clear how difficult that is. Fourthly, we underline that it is worth trying out replication in the humanities. We finish by formulating recommendations regarding reproduction and replication research, aimed specifically at funders, editors and publishers, and universities and other research institutes.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-19"},"PeriodicalIF":2.8,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141972304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is AI my co-author? The ethics of using artificial intelligence in scientific publishing.","authors":"Barton Moffatt, Alicia Hall","doi":"10.1080/08989621.2024.2386285","DOIUrl":"10.1080/08989621.2024.2386285","url":null,"abstract":"<p><p>The recent emergence of Large Language Models (LLMs) and other forms of Artificial Intelligence (AI) has led people to wonder whether they could act as an author on a scientific paper. This paper argues that AI systems should not be included on the author by-line. We agree with current commentators that LLMs are incapable of taking responsibility for their work and thus do not meet current authorship guidelines. We identify other problems with responsibility and authorship. In addition, the problems go deeper as AI tools also do not write in a meaningful sense nor do they have persistent identities. From a broader publication ethics perspective, adopting AI authorship would have detrimental effects on an already overly competitive and stressed publishing ecosystem. Deterrence is possible as backward-looking tools will likely be able to identify past AI usage. Finally, we question the value of using AI to produce more research simply for publication's sake.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-17"},"PeriodicalIF":2.8,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141898856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Training, networking, and support infrastructure for ombudspersons for good research practice: A survey of the status quo in the Berlin research area.","authors":"Simona Olivieri, Viktor Ullmann","doi":"10.1080/08989621.2024.2376644","DOIUrl":"https://doi.org/10.1080/08989621.2024.2376644","url":null,"abstract":"<p><p>Recent developments in the German academic landscape have seen a shifting approach to promoting research integrity. In 2019, the German Research Foundation (DFG) incentivized all research and higher education institutions to appoint ombudspersons who advise members of their institution in matters of good research practice or suspected research misconduct. These ombudspersons for good research practice, usually professors who act in this function on a voluntary basis, need institutional support to be prepared for and fulfill their diverse duties. The Ombuds-Modelle@BUA (2020) and OBUA - Ombudswesen@BUA (2021-2023) projects worked to advance the professionalization of ombudspersons in the Berlin research area by first investigating the current situation and then offering a meta-level of support in training, networking, and knowledge exchange. Furthermore, the OBUA project engaged in meta-research, investigating the status quo of local ombuds systems and demands for support. The project findings, discussed in this contribution, show that the professionalization of local ombuds systems has been evolving in past years, especially in the areas of training and networking. Infrastructural support measures, however, remain largely underdeveloped.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-20"},"PeriodicalIF":2.8,"publicationDate":"2024-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141890860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A systematic scoping review of the ethics of Contributor Role Ontologies and Taxonomies.","authors":"Mohammad Hosseini, Bert Gordijn, Q Eileen Wafford, Kristi L Holmes","doi":"10.1080/08989621.2022.2161049","DOIUrl":"10.1080/08989621.2022.2161049","url":null,"abstract":"<p><p>Contributor Role Ontologies and Taxonomies (CROTs) provide a standard list of roles to specify individual contributions to research. CROTs' most common application has been their inclusion alongside author bylines in scholarly publications. With the recent uptake of CROTs among publishers - particularly the Contributor Role Taxonomy (CRediT) - some have anticipated a positive impact on ethical issues regarding the attribution of credit and responsibilities, but others have voiced concerns about CROTs' shortcomings and ways they could be misunderstood or have unintended consequences. Since these discussions have never been consolidated, this review collated and explored published viewpoints about the ethics of CROTs. After searching Ovid Medline, Scopus, Web of Science, and Google Scholar, 30 papers met the inclusion criteria and were analyzed. We identified eight themes and 20 specific issues related to the ethics of CROTs and provided four recommendations for CROT developers, custodians, or others seeking to use CROTs in their workflows, policy and practice: 1) Compile comprehensive instructions that explain how CROTs should be used; 2) Improve the coherence of the terms used; 3) Translate roles into languages other than English; 4) Communicate a clear vision about future development plans and be transparent about CROTs' strengths and weaknesses. We conclude that CROTs are not a panacea for unethical attributions and should be complemented with initiatives that support the social and infrastructural transformation of scholarly publications.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"678-705"},"PeriodicalIF":2.8,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10533075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using co-creation methods for research integrity guideline development - how, what, why and when?","authors":"Krishma Labib, Daniel Pizzolato, Pieter Jan Stappers, Natalie Evans, Iris Lechner, Guy Widdershoven, Lex Bouter, Kris Dierickx, Katinka Bergema, Joeri Tijdink","doi":"10.1080/08989621.2022.2154154","DOIUrl":"10.1080/08989621.2022.2154154","url":null,"abstract":"<p><p>Existing research integrity (RI) guideline development methods are limited in including various perspectives. While co-creation methods could help to address this, there is little information available to researchers and practitioners on how, why and when to use co-creation for developing RI guidelines, nor what the outcomes of co-creation methods are. In this paper, we aim to address this gap. First, we discuss <i>how</i> co-creation methods can be used for RI guideline development, based on our experience of developing RI guidelines. We elaborate on steps including preparation of the aims and design; participant sensitization; organizing and facilitating workshops; and analyzing data and translating them into guidelines. Secondly, we present the resulting RI guidelines, to show <i>what</i> the outcome of co-creation methods are. Thirdly, we reflect on <i>why</i> and <i>when</i> researchers might want to use co-creation methods for developing RI guidelines. We discuss that stakeholder engagement and inclusion of diverse perspectives are key strengths of co-creation methods. We also reflect that co-creation methods have the potential to make guidelines implementable if followed by additional steps such as revision working groups. We conclude that co-creation methods are a valuable approach to creating new RI guidelines when used together with additional methods.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"531-556"},"PeriodicalIF":2.8,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9090783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research data mismanagement - from questionable research practice to research misconduct.","authors":"Nicole Shu Ling Yeo-Teh, Bor Luen Tang","doi":"10.1080/08989621.2022.2157268","DOIUrl":"10.1080/08989621.2022.2157268","url":null,"abstract":"<p><p>Good record-keeping practice and research data management underlie responsible research conduct and promote reproducibility of research findings in the sciences. In many cases of research misconduct, inadequate research data management frequently appears as an accompanying finding. Findings of disorganized or otherwise poor data archival or loss of research data are, on their own, not usually considered as indicative of research misconduct. Focusing on the availability of raw/primary data and the replicability of research based on these, we posit that most, if not all, instances of research data mismanagement (RDMM) could be considered a questionable research practice (QRP). Furthermore, instances of RDMM at their worst could indeed be viewed as acts of research misconduct. Here, we analyze with postulated scenarios the contexts and circumstances under which RDMM could be viewed as a significant misrepresentation of research (i.e., falsification), or data fabrication. We further explore how RDMM might potentially be adjudicated as research misconduct based on intent and consequences. Defining how RDMM could constitute QRP or research misconduct would aid the formulation of relevant institutional research integrity policies to mitigate undesirable events stemming from RDMM.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"706-713"},"PeriodicalIF":2.8,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10532026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing the acceptability of individual studies that use deception: A systematic review of normative guidance documents.","authors":"Kamiel Verbeke, Tomasz Krawczyk, Dieter Baeyens, Jan Piasecki, Pascal Borry","doi":"10.1080/08989621.2022.2153675","DOIUrl":"10.1080/08989621.2022.2153675","url":null,"abstract":"<p><p>Research participants are often deceived for methodological reasons. However, assessing the ethical acceptability of an individual study that uses deception is not straightforward. The academic literature is scattered on the subject and several aspects of the acceptability assessment are only scarcely addressed, which parallels reports of inconsistent ethics review. Therefore, we aimed to investigate where normative guidance documents agree and disagree about this assessment. A PRISMA-Ethics-guided systematic review of normative guidance documents that discuss deception of research participants was conducted. Our search strategy resulted in 55 documents that were subsequently analyzed through abductive thematic analysis. While guidance documents mention little about specific risks and opportunities of deception, our analysis describes a rich picture of the thresholds for acceptability of the risks and benefits of deception and their integration, the comparison with the risk-benefit analysis of alternative non-deceptive methods, and the bodies of people who are positioned to do the review. Our review reveals an agreement on the general process of assessing the acceptability of studies that use deception, although significant variability remains in the details and several topics are largely or completely unaddressed in guidance documents.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"655-677"},"PeriodicalIF":2.8,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10693454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}