Research integrity and peer review: latest articles

SANRA-a scale for the quality assessment of narrative review articles.
Research integrity and peer review Pub Date: 2019-03-26 eCollection Date: 2019-01-01 DOI: 10.1186/s41073-019-0064-8
Christopher Baethge, Sandra Goldbeck-Wood, Stephan Mertens

Background: Narrative reviews are the commonest type of article in the medical literature. However, unlike systematic reviews and randomized controlled trial (RCT) articles, for which formal instruments exist to evaluate quality, there is currently no instrument available to assess the quality of narrative reviews. In response to this gap, we developed SANRA, the Scale for the Assessment of Narrative Review Articles.

Methods: A team of three experienced journal editors modified or deleted items in an earlier SANRA version based on face validity, item-total correlations, and reliability scores from previous tests. We deleted an item addressing a manuscript's writing and accessibility due to poor inter-rater reliability. The six items that form the revised scale are rated from 0 (low standard) to 2 (high standard) and cover the following topics: explanation of (1) the importance and (2) the aims of the review, (3) literature search, (4) referencing, and presentation of (5) evidence level and (6) relevant endpoint data. For all items, we developed anchor definitions and examples to guide users in filling out the form. The revised scale was tested by the same editors (blinded to each other's ratings) on a group of 30 consecutive non-systematic review manuscripts submitted to a general medical journal.

Results: Raters confirmed that completing the scale is feasible in everyday editorial work. The mean sum score across all 30 manuscripts was 6.0 out of 12 possible points (SD 2.6, range 1-12). Corrected item-total correlations ranged from 0.33 (item 3) to 0.58 (item 6), and Cronbach's alpha was 0.68 (internal consistency). The intra-class correlation coefficient (average measure) was 0.77 [95% CI 0.57, 0.88] (inter-rater reliability). Raters often disagreed on items 1 and 4.

Conclusions: SANRA's feasibility, inter-rater reliability, homogeneity of items, and internal consistency are sufficient for a scale of six items. Further field testing, particularly of validity, is desirable. We recommend rater training based on the "explanations and instructions" document provided with SANRA. In editorial decision-making, SANRA may complement journal-specific evaluation of manuscripts (pertaining to, e.g., audience, originality, or difficulty) and may contribute to improving the standard of non-systematic reviews.

Citations: 581
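The scoring scheme above is straightforward to operationalize: six items, each rated 0-2, summed to a 0-12 total, with Cronbach's alpha as the internal-consistency statistic. The following is a minimal sketch of both computations; the rating matrix is invented for illustration and is not data from the study.

```python
import numpy as np

def sanra_sum_score(item_scores):
    """Sum the six SANRA item ratings (each 0-2) into a 0-12 total."""
    assert len(item_scores) == 6 and all(0 <= s <= 2 for s in item_scores)
    return sum(item_scores)

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_manuscripts x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of sum scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings: 5 manuscripts x 6 SANRA items (0-2 each).
ratings = [
    [2, 1, 1, 2, 0, 1],
    [1, 1, 0, 1, 0, 1],
    [2, 2, 1, 2, 1, 2],
    [0, 1, 0, 1, 0, 0],
    [2, 2, 2, 2, 1, 1],
]
print([sanra_sum_score(r) for r in ratings])  # per-manuscript totals out of 12
print(round(cronbach_alpha(ratings), 2))      # internal consistency
```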
Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature
Research integrity and peer review Pub Date: 2019-03-22 DOI: 10.1101/581892
C. F. D. Carneiro, Victor G. S. Queiroz, T. Moulin, Carlos A. M. Carvalho, C. Haas, Danielle Rayêe, D. Henshall, Evandro A. De-Souza, F. E. Amorim, Flávia Z. Boos, G. Guercio, Igor R. Costa, K. Hajdu, L. V. van Egmond, M. Modrák, Pedro B. Tan, Richard J. Abdill, S. Burgess, Sylvia F. S. Guerra, V. T. Bortoluzzi, O. Amaral

Background: Preprint usage is growing rapidly in the life sciences; however, questions remain about the quality of preprints relative to published articles. An objective, readily measurable dimension of quality is completeness of reporting, as transparency can improve the reader's ability to independently interpret data and reproduce findings.

Methods: In this observational study, we initially compared independent samples of articles published in bioRxiv and in PubMed-indexed journals in 2016 using a quality-of-reporting questionnaire. We then performed paired comparisons of bioRxiv preprints with their own peer-reviewed journal versions.

Results: Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent-samples and paired-sample comparisons, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Changes in reporting from preprint to peer-reviewed version did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication.

Conclusions: Our results suggest that, on average, publication in a peer-reviewed journal is associated with improved quality of reporting. They also show that quality of reporting in life-science preprints is within a similar range to that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.

Citations: 59
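The headline statistic above is a mean paired difference in the percentage of reported items, with a 95% confidence interval. A minimal sketch of such a paired comparison follows; the data are invented placeholders, and a t-based interval is one reasonable choice where the abstract does not restate the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def paired_mean_difference(preprint_pct, journal_pct, alpha=0.05):
    """Mean paired difference (journal - preprint) with a t-based 95% CI."""
    diffs = np.asarray(journal_pct, float) - np.asarray(preprint_pct, float)
    n = len(diffs)
    mean = diffs.mean()
    sem = diffs.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return mean, (mean - t_crit * sem, mean + t_crit * sem)

# Hypothetical % of reporting items fulfilled in paired preprint/journal versions.
preprint = [60, 72, 55, 80, 66, 70, 58, 75]
journal = [66, 75, 60, 82, 70, 76, 61, 78]
mean, ci = paired_mean_difference(preprint, journal)
print(f"mean difference {mean:.1f} pp, 95% CI [{ci[0]:.1f}, {ci[1]:.1f}]")
```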
Guidelines for open peer review implementation. 
Research integrity and peer review Pub Date: 2019-02-27 DOI: 10.1186/s41073-019-0063-9
Tony Ross-Hellauer, Edit Görögh

Open peer review (OPR) is moving into the mainstream, but it is often poorly understood, and surveys of researcher attitudes show important barriers to implementation. As more journals move to implement and experiment with the myriad innovations covered by this term, there is a clear need for best-practice guidelines to guide implementation. This brief article aims to address this knowledge gap, reporting work based on an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR. Although the advice is aimed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable to the implementation of OPR in other areas (e.g., books, conference submissions).

Citations: 38
Quality of reports of investigations of research integrity by academic institutions.
Research integrity and peer review Pub Date: 2019-02-19 DOI: 10.1186/s41073-019-0062-x
Andrew Grey, Mark Bolland, Greg Gamble, Alison Avenell

Background: Academic institutions play important roles in protecting and preserving research integrity. Concerns have been expressed about the objectivity, adequacy, and transparency of institutional investigations of potentially compromised research integrity. We assessed the reports provided to us of investigations by three academic institutions into a large body of overlapping research with potentially compromised integrity.

Methods: In 2017, we raised concerns with four academic institutions about the integrity of more than 200 publications co-authored by an overlapping set of researchers. Each institution initiated an investigation. By November 2018, three had reported the results of their investigations to us, but only one report was publicly available. Two investigators independently assessed each available report using a published 26-item checklist designed to determine the quality and adequacy of institutional investigations of research integrity. Each assessor recorded additional comments ad hoc.

Results: Concerns raised with the institutions were overlapping, wide-ranging, and included both general and publication-specific issues. The number of potentially affected publications at individual institutions ranged from 34 to 200. The duration of investigation by the three institutions that provided reports was 8-17 months. These investigations covered 14%, 15%, and 77%, respectively, of the potentially affected publications. Between-assessor agreement using the quality checklist was 0.68, 0.72, and 0.65 for each report. Only 4/78 individual checklist items were addressed adequately; a further 14 could not be assessed. Each report was graded inadequate overall. Reports failed to address publication-specific concerns and focussed more strongly on determining research misconduct than on evaluating the integrity of the publications.

Conclusions: Our analyses identify important deficiencies in the quality and reporting of institutional investigations of concerns about the integrity of a large body of research reported by an overlapping set of researchers. They reinforce disquiet about the ability of institutions to rigorously and objectively oversee the integrity of research conducted by their own employees.

Citations: 17
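The abstract reports between-assessor agreement of 0.68, 0.72, and 0.65 per report without naming the statistic. One standard choice for two raters applying a categorical checklist is Cohen's kappa, sketched below with invented item-level ratings; this is an illustrative assumption, not the study's actual method or data.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    # Chance agreement from the raters' marginal category frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of the 26 checklist items by two assessors.
a = ["adequate"] * 2 + ["inadequate"] * 18 + ["not assessable"] * 6
b = ["adequate"] * 3 + ["inadequate"] * 17 + ["not assessable"] * 6
print(round(cohen_kappa(a, b), 2))
```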
Reporting in the abstracts presented at the 5th AfriNEAD (African Network for Evidence-to-Action in Disability) Conference in Ghana.
Research integrity and peer review Pub Date: 2019-01-16 eCollection Date: 2019-01-01 DOI: 10.1186/s41073-018-0061-3
Eric Badu, Paul Okyere, Diane Bell, Naomi Gyamfi, Maxwell Peprah Opoku, Peter Agyei-Baffour, Anthony Kwaku Edusei

Introduction: Conference abstracts are important for informing participants about the results being communicated. However, reporting in conference abstracts in disability research is poor. This paper assesses the reporting in the abstracts presented at the 5th African Network for Evidence-to-Action in Disability (AfriNEAD) Conference in Ghana.

Methods: This descriptive study extracted information from the abstracts presented at the 5th AfriNEAD Conference. Three reviewers independently reviewed all included abstracts using a predefined data extraction form. Descriptive statistics were used to analyze the extracted information, using Stata version 15.

Results: Of the 76 abstracts assessed, 54 met the inclusion criteria and 22 were excluded. More than half of the included abstracts (32/54; 59.26%) reported studies conducted in Ghana. Many of the included abstracts did not report the study design (37/54; 68.5%), the type of analysis performed (30/54; 55.56%), the sampling (27/54; 50%), or the sample size (18/54; 33.33%). Almost none of the included abstracts reported the age distribution or the gender of the participants.

Conclusion: The findings confirm that methods and findings are poorly reported in conference abstracts. Future conference organizers should critically examine abstracts to ensure that these issues are adequately addressed, so that findings are effectively communicated to participants.

Citations: 1
Replicability and replication in the humanities.
Research integrity and peer review Pub Date: 2019-01-09 eCollection Date: 2019-01-01 DOI: 10.1186/s41073-018-0060-4
Rik Peels

A large number of scientists and several news platforms have, over the last few years, been speaking of a replication crisis in various academic disciplines, especially the biomedical and social sciences. This paper answers the novel question of whether we should also pursue replication in the humanities. First, I create more conceptual clarity by defining, in addition to the term "humanities," various key terms in the debate on replication, such as "reproduction" and "replicability." In doing so, I pay attention to what is supposed to be the object of replication: certain studies, particular inferences, or specific results. After that, I spell out three reasons for thinking that replication in the humanities is not possible and argue that they are unconvincing. Subsequently, I give a more detailed case for thinking that replication in the humanities is possible. Finally, I explain why such replication is not only possible but also desirable.

Citations: 0
Professional medical writing support and the quality, ethics and timeliness of clinical trial reporting: a systematic review
Research integrity and peer review Pub Date: 2018-12-20 DOI: 10.1186/s41073-019-0073-7
O. Evuarherhe, W. Gattrell, Richard White, C. Winchester

Citations: 6
Protocol for the development of a CONSORT extension for RCTs using cohorts and routinely collected health data.
Research integrity and peer review Pub Date: 2018-10-29 eCollection Date: 2018-01-01 DOI: 10.1186/s41073-018-0053-3
Linda Kwakkenbos, Edmund Juszczak, Lars G Hemkens, Margaret Sampson, Ole Fröbert, Clare Relton, Chris Gale, Merrick Zwarenstein, Sinéad M Langan, David Moher, Isabelle Boutron, Philippe Ravaud, Marion K Campbell, Kimberly A Mc Cord, Tjeerd P van Staa, Lehana Thabane, Rudolf Uher, Helena M Verkooijen, Eric I Benchimol, David Erlinge, Maureen Sauvé, David Torgerson, Brett D Thombs

Background: Randomized controlled trials (RCTs) are often complex and expensive to perform. Less than one third achieve planned recruitment targets, follow-up can be labor-intensive, and many have limited real-world generalizability. Designs for RCTs conducted using cohorts and routinely collected health data, including registries, electronic health records, and administrative databases, have been proposed to address these challenges and are being rapidly adopted. These designs, however, are relatively recent innovations, and published RCT reports often do not describe important aspects of their methodology in a standardized way. Our objective is to extend the Consolidated Standards of Reporting Trials (CONSORT) statement with a consensus-driven reporting guideline for RCTs using cohorts and routinely collected health data.

Methods: The development of this CONSORT extension will consist of five phases. Phase 1 (completed) consisted of the project launch, including fundraising, the establishment of a research team, and development of a conceptual framework. In phase 2, a systematic review will be performed to identify publications (1) that describe methods or reporting considerations for RCTs conducted using cohorts and routinely collected health data or (2) that are protocols of, or report results from, such RCTs. An initial "long list" of possible modifications to CONSORT checklist items and possible new items for the reporting guideline will be generated based on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) and the REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statements. Additional possible modifications and new items will be identified based on the results of the systematic review. Phase 3 will consist of a three-round Delphi exercise with methods and content experts to evaluate the "long list" and generate a "short list" of key items. In phase 4, these items will serve as the basis for an in-person consensus meeting to finalize a core set of items to be included in the reporting guideline and checklist. Phase 5 will involve drafting the checklist and elaboration-explanation documents, and dissemination and implementation of the guideline.

Discussion: Development of this CONSORT extension will contribute to more transparent reporting of RCTs conducted using cohorts and routinely collected health data.

Citations: 0
Designing integrated research integrity training: authorship, publication, and peer review
Research integrity and peer review Pub Date: 2018-02-26 DOI: 10.1186/s41073-018-0046-2
Mark Hooper, Virginia Barbour, Anne Walsh, Stephanie Bradbury, Jane Jacobs

This paper describes the experience of an academic institution, the Queensland University of Technology (QUT), in developing training courses about research integrity practices in authorship, publication, and journal peer review. The importance of providing research integrity training in these areas is now widely accepted; however, it remains an open question how best to conduct this training, and it is therefore vital for institutions, journals, and peak bodies to share what they learn. We describe how we have collaborated across our institution to develop training that supports QUT's principles and is in line with insights from contemporary research on best practices in learning design, universal design, and faculty involvement. We also discuss how we have refined these courses iteratively over time, and consider potential mechanisms for evaluating their effectiveness more formally.

Citations: 0
Simple decision-tree tool to facilitate author identification of reporting guidelines during submission: a before-after study.
Research integrity and peer review Pub Date: 2017-12-18 eCollection Date: 2017-01-01 DOI: 10.1186/s41073-017-0044-9
Daniel R Shanahan, Ines Lopes de Sousa, Diana M Marshall

Background: There is evidence that direct journal endorsement of reporting guidelines can lead to important improvements in the quality and reliability of published research. However, over the last 20 years there has been a proliferation of reporting guidelines for different study designs, making it impractical for a journal to explicitly endorse them all. The objective of this study was to investigate whether a decision-tree tool made available during the submission process facilitates author identification of the relevant reporting guideline.

Methods: This was a prospective 14-week before-after study across four specialty medical research journals. During the submission process, authors were prompted to follow the relevant reporting guideline from the EQUATOR Network and asked to confirm that they had done so ('before'). After 7 weeks, this prompt was updated to include a direct link to the decision-tree tool and an additional prompt for authors who stated that 'no guidelines were applicable' ('after'). For each submitted article, we recorded the authors' response, which guideline they followed (if any), and which reporting guideline they should have followed (including none relevant).

Results: Overall, 590 manuscripts were included in this analysis: 300 in the before cohort and 290 in the after cohort. Relevant reporting guidelines existed for 75% of manuscripts in each group; STROBE was the most commonly applicable reporting guideline, relevant for 35% (n = 106) and 37% (n = 106) of manuscripts, respectively. Use of the tool was associated with an 8.4% improvement in the number of authors correctly identifying the relevant reporting guideline for their study (p < 0.0001), a 14% reduction in the number of authors incorrectly stating that there were no relevant reporting guidelines (p < 0.0001), and a 1.7% reduction in authors choosing a guideline (p = 0.10). However, the 'after' cohort also saw a significant increase in the number of authors stating that there were relevant reporting guidelines for their study but not specifying which (34% vs 29%; p = 0.04).

Conclusion: This study suggests that use of a decision-tree tool during manuscript submission is associated with improved author identification of the relevant reporting guidelines for their study type; however, the majority of authors still failed to correctly identify the relevant guidelines.

Citations: 14
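The decision-tree tool itself is not reproduced in the abstract, but its core idea (walking authors from a declared study design to a suggested EQUATOR guideline) can be sketched as follows. The design taxonomy and mapping below are illustrative assumptions, not the journals' actual tool; the guideline names themselves (CONSORT, STROBE, PRISMA, STARD, CARE, COREQ) are real EQUATOR-listed guidelines.

```python
# A minimal sketch of a reporting-guideline decision tree, assuming a simple
# study-design taxonomy; the real submission-system tool and its exact
# questions are not described in the abstract.
GUIDELINE_BY_DESIGN = {
    "randomized trial": "CONSORT",
    "observational study": "STROBE",
    "systematic review": "PRISMA",
    "diagnostic accuracy study": "STARD",
    "case report": "CARE",
    "qualitative study": "COREQ",
}

def suggest_guideline(study_design: str) -> str:
    """Map a declared study design to a suggested reporting guideline."""
    return GUIDELINE_BY_DESIGN.get(study_design.lower(),
                                   "check the EQUATOR Network catalogue")

print(suggest_guideline("Observational study"))  # -> STROBE
print(suggest_guideline("bench study"))          # -> fallback to EQUATOR search
```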