{"title":"A randomised controlled trial of an Intervention to Improve Compliance with the ARRIVE guidelines (IICARus).","authors":"Kaitlyn Hair, Malcolm R Macleod, Emily S Sena","doi":"10.1186/s41073-019-0069-3","DOIUrl":"10.1186/s41073-019-0069-3","url":null,"abstract":"<p><strong>Background: </strong>The ARRIVE (Animal Research: Reporting of In Vivo Experiments) guidelines are widely endorsed but compliance is limited. We sought to determine whether journal-requested completion of an ARRIVE checklist improves full compliance with the guidelines.</p><p><strong>Methods: </strong>In a randomised controlled trial, manuscripts reporting in vivo animal research submitted to PLOS ONE (March-June 2015) were randomly allocated to either requested completion of an ARRIVE checklist or current standard practice. Authors, academic editors, and peer reviewers were blinded to group allocation. Trained reviewers performed outcome adjudication in duplicate by assessing manuscripts against an operationalised version of the ARRIVE guidelines that consists 108 items. Our primary outcome was the between-group differences in the proportion of manuscripts meeting all ARRIVE guideline checklist subitems.</p><p><strong>Results: </strong>We randomised 1689 manuscripts (control: <i>n</i> = 844, intervention: <i>n</i> = 845), of which 1269 were sent for peer review and 762 (control: <i>n</i> = 340; intervention: <i>n</i> = 332) accepted for publication. No manuscript in either group achieved full compliance with the ARRIVE checklist. Details of animal husbandry (ARRIVE subitem 9b) was the only subitem to show improvements in reporting, with the proportion of compliant manuscripts rising from 52.1 to 74.1% (<i>X</i> <sup>2</sup> = 34.0, df = 1, <i>p</i> = 2.1 × 10<sup>-7</sup>) in the control and intervention groups, respectively.</p><p><strong>Conclusions: </strong>These results suggest that altering the editorial process to include requests for a completed ARRIVE checklist is not enough to improve compliance with the ARRIVE guidelines. Other approaches, such as more stringent editorial policies or a targeted approach on key quality items, may promote improvements in reporting.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0069-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37339641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Good Practice for Conference Abstracts and Presentations: GPCAP.","authors":"Cate Foster, Elizabeth Wager, Jackie Marchington, Mina Patel, Steve Banner, Nina C Kennard, Antonia Panayi, Rianne Stacey","doi":"10.1186/s41073-019-0070-x","DOIUrl":"10.1186/s41073-019-0070-x","url":null,"abstract":"<p><p>Research that has been sponsored by pharmaceutical, medical device and biotechnology companies is often presented at scientific and medical conferences. However, practices vary between organizations and it can be difficult to follow both individual conference requirements and good publication practice guidelines. Until now, no specific guidelines or recommendations have been available to describe best practice for conference presentations. This document was developed by a working group of publication professionals and uploaded to PeerJ Preprints for consultation prior to publication; an additional 67 medical societies, medical conference sites and conference companies were also asked to comment. The resulting recommendations aim to complement current good publication practice and authorship guidelines, outline the general principles of best practice for conference presentations and provide recommendations around authorship, contributorship, financial transparency, prior publication and copyright, to conference organizers, authors and industry professionals. While the authors of this document recognize that individual conference guidelines should be respected, they urge organizers to consider authorship criteria and data transparency when designing submission sites and setting parameters around word/character count and content for abstracts. It is also important to recognize that conference presentations have different limitations to full journal publications, for example, in the case of limited audiences that necessitate refocused abstracts, or where lead authors do not speak the local language, and these have been acknowledged accordingly. The authors also recognize the need for further clarity regarding copyright of previously published abstracts and have made recommendations to assist with best practice. By following Good Practice for Conference Abstracts and Presentations: GPCAP recommendations, industry professionals, authors and conference organizers will improve consistency, transparency and integrity of publications submitted to conferences worldwide.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0070-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37315202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The advantages of peer review over arbitration for resolving authorship disputes.","authors":"Zubin Master, Evelyn Tenenbaum","doi":"10.1186/s41073-019-0071-9","DOIUrl":"https://doi.org/10.1186/s41073-019-0071-9","url":null,"abstract":"<p><p>A recent commentary argued for arbitration to resolve authorship disputes within academic research settings explaining that current mechanisms to resolve conflicts result in unclear outcomes and institutional power vested in senior investigators could compromise fairness. We argue here that arbitration is not a suitable means to resolve disputes among researchers in academia because it remains unclear who will assume the costs of arbitration, the rules of evidence do not apply to arbitration, and decisions are binding and very difficult to appeal. Instead of arbitration, we advocate for peer-based approaches involving a peer review committee and research ethics consultation to help resolve authorship disagreements. We describe the composition of an institutional peer review committee to address authorship disputes. Both of these mechanisms are found, or can be formed, within academic institutions and offer several advantages to researchers who are likely to shy away from legalistic processes and gravitate towards those handled by their peers. Peer-based approaches are cheaper than arbitration and the experts involved have knowledge about academic publishing and the culture of research in the specific field. Decisions by knowledgeable and neutral experts could reduce bias, have greater authority, and could be appealed. Not only can peer-based approaches be leveraged to resolve authorship disagreements, but they may also enhance collegiality and promote a healthy team environment.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0071-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37302995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring the data gap: inclusion of sex and gender reporting in diabetes research.","authors":"Suzanne Day, Wei Wu, Robin Mason, Paula A Rochon","doi":"10.1186/s41073-019-0068-4","DOIUrl":"https://doi.org/10.1186/s41073-019-0068-4","url":null,"abstract":"<p><strong>Background: </strong>Important sex and gender differences have been found in research on diabetes complications and treatment. Reporting on whether and how sex and gender impact research findings is crucial for developing tailored diabetes care strategies. To analyze the extent to which this information is available in current diabetes research, we examined original investigations on diabetes for the integration of sex and gender in study reporting.</p><p><strong>Methods: </strong>We examined original investigations on diabetes published between January 1 and December 31, 2015, in the top five general medicine journals and top five diabetes-specific journals (by 2015 impact factor). Data were extracted on sex and gender integration across seven article sections: title, abstract, introduction, methods, results, discussion, and limitations.</p><p><strong>Results: </strong>We identified 155 original investigations on diabetes, including 115 randomized controlled trials (RCTs) and 40 observational studies. Sex and gender were rarely incorporated in article titles, abstracts and introductions. Most methods sections did not describe plans for sex/gender analyses; 47 (30.3%) articles described plans to control for sex/gender in the analysis and 12 (7.7%) described plans to stratify results by sex/gender. While most articles (151, 97.4%) reported the sex/gender of study participants, only 10 (6.5%) of all articles reported all study outcomes separately by sex/gender. Discussion of sex-related issues was incorporated into 21 (13.5%) original investigations; however, just 1 (0.6%) discussed gender-related issues. Comparison by journal type (general medicine vs. diabetes specific) yielded only minor differences from the overall integration results. In contrast, RCTs performed more poorly on multiple sex/gender assessment metrics compared to observational studies.</p><p><strong>Conclusions: </strong>Sex and gender are poorly integrated in current diabetes original investigations, suggesting that substantial improvements in sex and gender data reporting are needed to inform the evidence to support sex- and gender-specific diabetes care.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0068-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37233205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Policy driven changes in animal research practices: mapping researchers' attitudes towards animal-free innovations using the Netherlands as an example.","authors":"S Bressers, H van den Elzen, C Gräwe, D van den Oetelaar, P H A Postma, S K Schoustra","doi":"10.1186/s41073-019-0067-5","DOIUrl":"https://doi.org/10.1186/s41073-019-0067-5","url":null,"abstract":"<p><strong>Background: </strong>Reducing the number of animals used in experiments has become a priority for the governments of many countries. For these reductions to occur, animal-free alternatives must be made more available and, crucially, must be embraced by researchers.</p><p><strong>Methods: </strong>We conducted an international online survey for academics in the field of animal science (<i>N</i> = 367) to explore researchers' attitudes towards the implementation of animal-free innovations. Through this survey, we address three key questions. The first question is whether scientists who use animals in their research consider governmental goals for animal-free innovations achievable and whether they would support such goals. Secondly, responders were asked to rank the importance of ten roadblocks that could hamper the implementation of animal-free innovations. Finally, responders were asked whether they would migrate (either themselves or their research) if increased animal research regulations in their country of residence restricted their research.</p><p><strong>Results: </strong>While nearly half (40%) of the responders support governmental goals, the majority (71%) of researchers did not consider such goals achievable in their field within the near future. In terms of roadblocks for implementation of animal-free methods, ~ 80% of the responders considered 'reliability' as important, making it the most highly ranked roadblock. However, all other roadblocks were reported by most responders as somewhat important, suggesting that they must also be considered when addressing animal-free innovations. Importantly, a majority reported that they would consider migration to another country in response to a restrictive animal research policy. Thus, governments must consider the risk of researchers migrating to other institutes, states or countries, leading to a 'brain-drain' if policies are too strict or suitable animal-free alternatives are not available.</p><p><strong>Conclusion: </strong>Our findings suggest that development and implementation of animal-free innovations are hampered by multiple factors. We outline three pillars concerning education, governmental influence and data sharing, the implementation of which may help to overcome these roadblocks to animal-free innovations.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0067-5","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37187339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personally perceived publication pressure: revising the Publication Pressure Questionnaire (PPQ) by using work stress models.","authors":"Tamarinde L Haven, Marije Esther Evalien de Goede, Joeri K Tijdink, Frans Jeroen Oort","doi":"10.1186/s41073-019-0066-6","DOIUrl":"https://doi.org/10.1186/s41073-019-0066-6","url":null,"abstract":"<p><strong>Background: </strong>The emphasis on impact factors and the quantity of publications intensifies competition between researchers. This competition was traditionally considered an incentive to produce high-quality work, but there are unwanted side-effects of this competition like publication pressure. To measure the effect of publication pressure on researchers, the Publication Pressure Questionnaire (PPQ) was developed. Upon using the PPQ, some issues came to light that motivated a revision.</p><p><strong>Method: </strong>We constructed two new subscales based on work stress models using the facet method. We administered the revised PPQ (PPQr) to a convenience sample together with the Maslach Burnout Inventory (MBI) and the Work Design Questionnaire (WDQ). To assess which items best measured publication pressure, we carried out a principal component analysis (PCA). Reliability was sufficient when Cronbach's alpha > 0.7. Finally, we administered the PPQr in a larger, independent sample of researchers to check the reliability of the revised version.</p><p><strong>Results: </strong>Three components were identified as 'stress', 'attitude', and 'resources'. We selected 3 × 6 = 18 items with high loadings in the three-component solution. Based on the convenience sample, Cronbach's alphas were 0.83 for stress, 0.80 for attitude, and 0.76 for resources. We checked the validity of the PPQr by inspecting the correlations with the MBI and the WDQ. Stress correlated 0.62 with MBI's emotional exhaustion. Resources correlated 0.50 with relevant WDQ subscales. To assess the internal structure of the PPQr in the independent reliability sample, we conducted the principal component analysis. The three-component solution explains 50% of the variance. Cronbach's alphas were 0.80, 0.78, and 0.75 for stress, attitude, and resources, respectively.</p><p><strong>Conclusion: </strong>We conclude that the PPQr is a valid and reliable instrument to measure publication pressure in academic researchers from all disciplinary fields. The PPQr strongly relates to burnout and could also be beneficial for policy makers and research institutions to assess the degree of publication pressure in their institute.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0066-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37347931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Selective citation in scientific literature on the human health effects of bisphenol A.","authors":"M J E Urlings, B Duyx, G M H Swaen, L M Bouter, M P Zeegers","doi":"10.1186/s41073-019-0065-7","DOIUrl":"https://doi.org/10.1186/s41073-019-0065-7","url":null,"abstract":"<p><strong>Introduction: </strong>Bisphenol A is highly debated and studied in relation to a variety of health outcomes. This large variation in the literature makes BPA a topic that is prone to selective use of literature, in order to underpin one's own findings and opinion. Over time, selective use of literature, by means of citations, can lead to a skewed knowledge development and a biased scientific consensus. In this study, we assess which factors drive citation and whether this results in the overrepresentation of harmful health effects of BPA.</p><p><strong>Methods: </strong>A citation network analysis was performed to test various determinants of citation. A systematic search identified all relevant publications on the human health effect of BPA. Data were extracted on potential determinants of selective citation, such as study outcome, study design, sample size, journal impact factor, authority of the author, self-citation, and funding source. We applied random effect logistic regression to assess whether these determinants influence the likelihood of citation.</p><p><strong>Results: </strong>One hundred sixty-nine publications on BPA were identified, with 12,432 potential citation pathways of which 808 citations occurred. The network consisted of 63 cross-sectional studies, 34 cohort studies, 29 case-control studies, 35 narrative reviews, and 8 systematic reviews. Positive studies have a 1.5 times greater chance of being cited compared to negative studies. Additionally, the authority of the author and self-citation are consistently found to be positively associated with the likelihood of being cited. Overall, the network seems to be highly influenced by two highly cited publications, whereas 60 out of 169 publications received no citations.</p><p><strong>Conclusion: </strong>In the literature on BPA, citation is mostly driven by positive study outcome and author-related factors, such as high authority within the network. Interpreting the impact of these factors and the big influence of a few highly cited publications, it can be questioned to which extent the knowledge development in human literature on BPA is actually evidence-based.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0065-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37144238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SANRA-a scale for the quality assessment of narrative review articles.","authors":"Christopher Baethge, Sandra Goldbeck-Wood, Stephan Mertens","doi":"10.1186/s41073-019-0064-8","DOIUrl":"https://doi.org/10.1186/s41073-019-0064-8","url":null,"abstract":"<p><strong>Background: </strong>Narrative reviews are the commonest type of articles in the medical literature. However, unlike systematic reviews and randomized controlled trials (RCT) articles, for which formal instruments exist to evaluate quality, there is currently no instrument available to assess the quality of narrative reviews. In response to this gap, we developed SANRA, the Scale for the Assessment of Narrative Review Articles.</p><p><strong>Methods: </strong>A team of three experienced journal editors modified or deleted items in an earlier SANRA version based on face validity, item-total correlations, and reliability scores from previous tests. We deleted an item which addressed a manuscript's writing and accessibility due to poor inter-rater reliability. The six items which form the revised scale are rated from 0 (low standard) to 2 (high standard) and cover the following topics: explanation of (1) the importance and (2) the aims of the review, (3) literature search and (4) referencing and presentation of (5) evidence level and (6) relevant endpoint data. For all items, we developed anchor definitions and examples to guide users in filling out the form. The revised scale was tested by the same editors (blinded to each other's ratings) in a group of 30 consecutive non-systematic review manuscripts submitted to a general medical journal.</p><p><strong>Results: </strong>Raters confirmed that completing the scale is feasible in everyday editorial work. The mean sum score across all 30 manuscripts was 6.0 out of 12 possible points (SD 2.6, range 1-12). Corrected item-total correlations ranged from 0.33 (item 3) to 0.58 (item 6), and Cronbach's alpha was 0.68 (internal consistency). The intra-class correlation coefficient (average measure) was 0.77 [95% CI 0.57, 0.88] (inter-rater reliability). Raters often disagreed on items 1 and 4.</p><p><strong>Conclusions: </strong>SANRA's feasibility, inter-rater reliability, homogeneity of items, and internal consistency are sufficient for a scale of six items. Further field testing, particularly of validity, is desirable. We recommend rater training based on the \"explanations and instructions\" document provided with SANRA. In editorial decision-making, SANRA may complement journal-specific evaluation of manuscripts-pertaining to, e.g., audience, originality or difficulty-and may contribute to improving the standard of non-systematic reviews.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0064-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37309816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature","authors":"C. F. D. Carneiro, Victor G. S. Queiroz, T. Moulin, Carlos A. M. Carvalho, C. Haas, Danielle Rayêe, D. Henshall, Evandro A. De-Souza, F. E. Amorim, Flávia Z. Boos, G. Guercio, Igor R. Costa, K. Hajdu, L. V. van Egmond, M. Modrák, Pedro B. Tan, Richard J. Abdill, S. Burgess, Sylvia F. S. Guerra, V. T. Bortoluzzi, O. Amaral","doi":"10.1101/581892","DOIUrl":"https://doi.org/10.1101/581892","url":null,"abstract":"Background Preprint usage is growing rapidly in the life sciences; however, questions remain on the relative quality of preprints when compared to published articles. An objective dimension of quality that is readily measurable is completeness of reporting, as transparency can improve the reader’s ability to independently interpret data and reproduce findings. Methods In this observational study, we initially compared independent samples of articles published in bioRxiv and in PubMed-indexed journals in 2016 using a quality of reporting questionnaire. After that, we performed paired comparisons between preprints from bioRxiv to their own peer-reviewed versions in journals. Results Peer-reviewed articles had, on average, higher quality of reporting than preprints, although the difference was small, with absolute differences of 5.0% [95% CI 1.4, 8.6] and 4.7% [95% CI 2.4, 7.0] of reported items in the independent samples and paired sample comparison, respectively. There were larger differences favoring peer-reviewed articles in subjective ratings of how clearly titles and abstracts presented the main findings and how easy it was to locate relevant reporting information. Changes in reporting from preprints to peer-reviewed versions did not correlate with the impact factor of the publication venue or with the time lag from bioRxiv to journal publication. Conclusions Our results suggest that, on average, publication in a peer-reviewed journal is associated with improvement in quality of reporting. They also show that quality of reporting in preprints in the life sciences is within a similar range as that of peer-reviewed articles, albeit slightly lower on average, supporting the idea that preprints should be considered valid scientific contributions.","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41784313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guidelines for open peer review implementation.","authors":"Tony Ross-Hellauer, Edit Görögh","doi":"10.1186/s41073-019-0063-9","DOIUrl":"10.1186/s41073-019-0063-9","url":null,"abstract":"<p><p>Open peer review (OPR) is moving into the mainstream, but it is often poorly understood and surveys of researcher attitudes show important barriers to implementation. As more journals move to implement and experiment with the myriad of innovations covered by this term, there is a clear need for best practice guidelines to guide implementation. This brief article aims to address this knowledge gap, reporting work based on an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR. Although the advice is aimed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable for the implementation of OPR in other areas (e.g., books, conference submissions).</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0063-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37045643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}