{"title":"Designing integrated research integrity training: authorship, publication, and peer review","authors":"Mark Hooper, Virginia Barbour, Anne Walsh, Stephanie Bradbury, Jane Jacobs","doi":"10.1186/s41073-018-0046-2","DOIUrl":"https://doi.org/10.1186/s41073-018-0046-2","url":null,"abstract":"This paper describes the experience of an academic institution, the Queensland University of Technology (QUT), developing training courses about research integrity practices in authorship, publication, and Journal Peer Review. The importance of providing research integrity training in these areas is now widely accepted; however, it remains an open question how best to conduct this training. For this reason, it is vital for institutions, journals, and peak bodies to share learnings.We describe how we have collaborated across our institution to develop training that supports QUT’s principles and which is in line with insights from contemporary research on best practices in learning design, universal design, and faculty involvement. We also discuss how we have refined these courses iteratively over time, and consider potential mechanisms for evaluating the effectiveness of the courses more formally.","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simple decision-tree tool to facilitate author identification of reporting guidelines during submission: a before-after study.","authors":"Daniel R Shanahan, Ines Lopes de Sousa, Diana M Marshall","doi":"10.1186/s41073-017-0044-9","DOIUrl":"https://doi.org/10.1186/s41073-017-0044-9","url":null,"abstract":"<p><strong>Background: </strong>There is evidence that direct journal endorsement of reporting guidelines can lead to important improvements in the quality and reliability of the published research. However, over the last 20 years, there has been a proliferation of reporting guidelines for different study designs, making it impractical for a journal to explicitly endorse them all. The objective of this study was to investigate whether a decision tree tool made available during the submission process facilitates author identification of the relevant reporting guideline.</p><p><strong>Methods: </strong>This was a prospective 14-week before-after study across four speciality medical research journals. During the submission process, authors were prompted to follow the relevant reporting guideline from the EQUATOR Network and asked to confirm that they followed the guideline ('before'). After 7 weeks, this prompt was updated to include a direct link to the decision-tree tool and an additional prompt for those authors who stated that 'no guidelines were applicable' ('after'). For each article submitted, the authors' response, what guideline they followed (if any) and what reporting guideline they should have followed (including none relevant) were recorded.</p><p><strong>Results: </strong>Overall, 590 manuscripts were included in this analysis-300 in the before cohort and 290 in the after. There were relevant reporting guidelines for 75% of manuscripts in each group; STROBE was the most commonly applicable reporting guideline, relevant for 35% (<i>n</i> = 106) and 37% (<i>n</i> = 106) of manuscripts, respectively. 
Use of the tool was associated with an 8.4% improvement in the number of authors correctly identifying the relevant reporting guideline for their study (<i>p</i> < 0.0001), a 14% reduction in the number of authors incorrectly stating that there were no relevant reporting guidelines (<i>p</i> < 0.0001), and a 1.7% reduction in authors choosing a guideline (<i>p</i> = 0.10). However, the 'after' cohort also saw a significant increase in the number of authors stating that there were relevant reporting guidelines for their study, but not specifying which (34 vs 29%; <i>p</i> = 0.04).</p><p><strong>Conclusion: </strong>This study suggests that use of a decision-tree tool during submission of a manuscript is associated with improved author identification of the relevant reporting guidelines for their study type; however, the majority of authors still failed to correctly identify the relevant guidelines.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"20"},"PeriodicalIF":0.0,"publicationDate":"2017-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-017-0044-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35837675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
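The core of a submission-time tool like the one studied above is a mapping from study design to the matching EQUATOR Network guideline. A minimal sketch of that idea, using a simplified, illustrative subset of the guideline catalogue (this is not the study's actual tool or its full decision tree):

```python
# Hypothetical reduced mapping from study design to reporting guideline.
# The real EQUATOR decision tree covers many more designs and extensions.
GUIDELINE_BY_DESIGN = {
    "randomized trial": "CONSORT",
    "observational study": "STROBE",
    "systematic review": "PRISMA",
    "diagnostic accuracy study": "STARD",
    "case report": "CARE",
}

def suggest_guideline(study_design: str) -> str:
    """Return the reporting guideline for a study design, or a fallback."""
    return GUIDELINE_BY_DESIGN.get(
        study_design.strip().lower(), "no specific guideline identified"
    )

print(suggest_guideline("Randomized trial"))  # CONSORT
```

In the study's "after" arm, authors who claimed no guideline applied were re-prompted via such a tool, which is what drove the 14% drop in incorrect "no relevant guideline" responses.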
{"title":"'Are you siding with a personality or the grant proposal?': observations on how peer review panels function.","authors":"John Coveney, Danielle L Herbert, Kathy Hill, Karen E Mow, Nicholas Graves, Adrian Barnett","doi":"10.1186/s41073-017-0043-x","DOIUrl":"10.1186/s41073-017-0043-x","url":null,"abstract":"<p><strong>Background: </strong>In Australia, the peer review process for competitive funding is usually conducted by a peer review group in conjunction with prior assessment from external assessors. This process is quite mysterious to those outside it. The purpose of this research was to throw light on grant review panels (sometimes called the 'black box') through an examination of the impact of panel procedures, panel composition and panel dynamics on the decision-making in the grant review process. A further purpose was to compare the experience of a simplified review process with that of more conventional processes used in assessing grant proposals in Australia.</p><p><strong>Methods: </strong>This project was one aspect of a larger study into the costs and benefits of a simplified peer review process. The simplified process of the Queensland University of Technology (QUT) was compared with the more complex process of the National Health and Medical Research Council (NHMRC). Grant review panellists involved in both processes were interviewed about their experience of the decision-making process that assesses the excellence of an application. All interviews were recorded and transcribed. Each transcription was de-identified and returned to the respondent for review. Final transcripts were read repeatedly and coded, and similar codes were amalgamated into categories that were used to build themes. Final themes were shared with the research team for feedback.</p><p><strong>Results: </strong>Two major themes arose from the research: (1) assessing grant proposals and (2) factors influencing the fairness, integrity and objectivity of review. Issues discussed included the quality of writing in a grant proposal, comparison of the two review methods, the purpose and use of the rebuttal, assessing the financial value of funded projects, the importance of the experience of the panel membership, the role of track record, and the impact of group dynamics on the review process. The research also examined the influence of research culture on decision-making in grant review panels. One of the aims of this study was to compare a simplified review process with more conventional processes. Generally, participants were supportive of the simplified process.</p><p><strong>Conclusions: </strong>Transparency in the grant review process will result in better appreciation of the outcome. Despite the provision of clear guidelines for peer review, reviewing processes are likely to be subjective to the extent that different reviewers apply different rules. The peer review process will come under more scrutiny as funding for research becomes even more competitive. There is justification for further research on the process, especially of a kind that taps more deeply into the 'black box' of peer review.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"19"},"PeriodicalIF":0.0,"publicationDate":"2017-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5803633/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35838151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Percentage-based Author Contribution Index: a universal measure of author contribution to scientific articles.","authors":"Stéphane Boyer, Takayoshi Ikeda, Marie-Caroline Lefort, Jagoba Malumbres-Olarte, Jason M Schmidt","doi":"10.1186/s41073-017-0042-y","DOIUrl":"10.1186/s41073-017-0042-y","url":null,"abstract":"<p><strong>Background: </strong>Deciphering the amount of work provided by different co-authors of a scientific paper has been a recurrent problem in science. Despite the myriad of metrics available, the scientific community still largely relies on the position in the list of authors to evaluate contributions, a metric that attributes subjective and unfounded credit to co-authors. We propose an easy to apply, universally comparable and fair metric to measure and report co-authors contribution in the scientific literature.</p><p><strong>Methods: </strong>The proposed Author Contribution Index (ACI) is based on contribution percentages provided by the authors, preferably at the time of submission. Researchers can use ACI to compare the contributions of different authors, describe the contribution profile of a particular researcher or analyse how contribution changes through time. We provide such an analysis based on contribution percentages provided by 97 scientists from the field of ecology who voluntarily responded to an online anonymous survey.</p><p><strong>Results: </strong>ACI is simple to understand and to implement because it is based solely on percentage contributions and the number of co-authors. It provides a continuous score that reflects the contribution of one author as compared to the average contribution of all other authors. For example, ACI(i) = 3, means that author i contributed three times more than what the other authors contributed on average. 
Our analysis comprised 836 papers published in 2014-2016 and revealed patterns of ACI values that relate to career advancement.</p><p><strong>Conclusion: </strong>There are many examples of author contribution indices that have been proposed but none has really been adopted by scientific journals. Many of the proposed solutions are either too complicated, not accurate enough or not comparable across articles, authors and disciplines. The author contribution index presented here addresses these three major issues and has the potential to contribute to more transparency in the science literature. If adopted by scientific journals, it could provide job seekers, recruiters and evaluating bodies with a tool to gather information that is essential to them and cannot be easily and accurately obtained otherwise. We also suggest that scientists use the index regardless of whether it is implemented by journals or not.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"18"},"PeriodicalIF":0.0,"publicationDate":"2017-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5803580/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35837677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
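The abstract's definition (ACI(i) = 3 means author i contributed three times the average of the other co-authors) pins down the index up to notation: it is author i's percentage divided by the mean percentage of the remaining authors. A short sketch under the assumption that percentages sum to 100 (one consistent reading of the abstract, not a transcription of the paper's formula):

```python
def aci(contributions: list[float], i: int) -> float:
    """Author Contribution Index for author i.

    `contributions` are percentage contributions summing to 100.
    ACI(i) = c_i / (mean contribution of the other authors)
           = c_i * (n - 1) / (100 - c_i)
    """
    n = len(contributions)
    c_i = contributions[i]
    others_mean = (100 - c_i) / (n - 1)
    return c_i / others_mean

# An author who did 60% of a three-author paper contributed three times
# the average of the other two authors (20% each):
print(aci([60, 20, 20], 0))  # 3.0
```

ACI(i) > 1 thus marks an above-average contributor and ACI(i) < 1 a below-average one, and the score is comparable across papers with different author counts.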
{"title":"Selective citation in the literature on swimming in chlorinated water and childhood asthma: a network analysis.","authors":"Bram Duyx, Miriam J E Urlings, Gerard M H Swaen, Lex M Bouter, Maurice P Zeegers","doi":"10.1186/s41073-017-0041-z","DOIUrl":"https://doi.org/10.1186/s41073-017-0041-z","url":null,"abstract":"<p><strong>Background: </strong>Knowledge development depends on an unbiased representation of the available evidence. Selective citation may distort this representation. Recently, some controversy emerged regarding the possible impact of swimming on childhood asthma, raising the question about the role of selective citation in this field. Our objective was to assess the occurrence and determinants of selective citation in scientific publications on the relationship between swimming in chlorinated pools and childhood asthma.</p><p><strong>Methods: </strong>We identified scientific journal articles on this relationship via a systematic literature search. The following factors were taken into account: study outcome (authors' conclusion, data-based conclusion), other content-related article characteristics (article type, sample size, research quality, specificity), content-unrelated article characteristics (language, publication title, funding source, number of authors, number of affiliations, number of references, journal impact factor), author characteristics (gender, country, affiliation), and citation characteristics (time to citation, authority, self-citation). To assess the impact of these factors on citation, we performed a series of univariate and adjusted random-effects logistic regressions, with potential citation path as unit of analysis.</p><p><strong>Results: </strong>Thirty-six articles were identified in this network, consisting of 570 potential citation paths of which 191 (34%) were realized. 
There was strong evidence that articles with at least one author in common, cited each other more often than articles that had no common authors (odds ratio (OR) 5.2, 95% confidence interval (CI) 3.1-8.8). Similarly, the chance of being cited was higher for articles that were empirical rather than narrative (OR 4.2, CI 2.6-6.7), that reported a large sample size (OR 5.8, CI 2.9-11.6), and that were written by authors with a high authority within the network (OR 4.1, CI 2.1-8.0). Further, there was some evidence for citation bias: articles that confirmed the relation between swimming and asthma were cited more often (OR 1.8, CI 1.1-2.9), but this finding was not robust.</p><p><strong>Conclusions: </strong>There is clear evidence of selective citation in this research field, but the evidence for citation bias is not very strong.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"17"},"PeriodicalIF":0.0,"publicationDate":"2017-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-017-0041-z","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35838150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
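The odds ratios above come from random-effects logistic regressions over the 570 potential citation paths, but the underlying quantity is the familiar one from a 2x2 table: the odds of a path being realized under one condition divided by the odds under the other. A sketch with hypothetical cell counts (the abstract does not report the per-factor tables):

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio for a 2x2 table of citation paths:

                     cited   not cited
    shared author      a         b
    no shared author   c         d
    """
    return (a / b) / (c / d)

# Hypothetical counts, chosen only to total 570 potential paths:
print(round(odds_ratio(30, 20, 161, 359), 2))  # 3.34
```

The regression adjusts such raw ratios for the other article, author, and citation characteristics listed in the Methods, which is why the reported OR of 5.2 cannot be reproduced from a single table.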
{"title":"Using democracy to award research funding: an observational study.","authors":"Adrian G Barnett, Philip Clarke, Cedryck Vaquette, Nicholas Graves","doi":"10.1186/s41073-017-0040-0","DOIUrl":"10.1186/s41073-017-0040-0","url":null,"abstract":"<p><strong>Background: </strong>Winning funding for health and medical research usually involves a lengthy application process. With success rates under 20%, much of the time spent by 80% of applicants could have been better used on actual research. An alternative funding system that could save time is using democracy to award the most deserving researchers based on votes from the research community. We aimed to pilot how such a system could work and examine some potential biases.</p><p><strong>Methods: </strong>We used an online survey with a convenience sample of Australian researchers. Researchers were asked to name the 10 scientists currently working in Australia that they thought most deserved funding for future research. For comparison, we used recent winners from large national fellowship schemes that used traditional peer review.</p><p><strong>Results: </strong>Voting took a median of 5 min (inter-quartile range 3 to 10 min). Extrapolating to a national voting scheme, we estimate 599 working days of voting time (95% CI 490 to 728), compared with 827 working days for the current peer review system for fellowships. The gender ratio in the votes was a more equal 45:55 (female to male) compared with 34:66 in recent fellowship winners, although this could be explained by Simpson's paradox. Voters were biased towards their own institution, with an additional 1.6 votes per ballot (inter-quartile range 0.8 to 2.2) above the expected number. 
Respondents raised many concerns about the idea of using democracy to fund research, including vote rigging, lobbying and it becoming a popularity contest.</p><p><strong>Conclusions: </strong>This is a preliminary study of using voting that does not investigate many of the concerns about how a voting system would work. We were able to show that voting would take less time than traditional peer review and would spread the workload over many more reviewers. Further studies of alternative funding systems are needed as well as a wide discussion with the research community about potential changes.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5803583/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35837673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
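The workload comparison above is simple arithmetic: total community time is voters multiplied by minutes per ballot, converted into working days. A sketch of that extrapolation with hypothetical inputs (the abstract gives neither the voter count nor the working-day length behind the 599-day estimate, so the numbers below are illustrative only):

```python
def voting_workload_days(n_voters: int, minutes_per_ballot: float,
                         minutes_per_working_day: float = 450) -> float:
    """Total community workload of one voting round, in working days.

    450 min = a 7.5-hour working day (an assumption, not the paper's figure).
    """
    return n_voters * minutes_per_ballot / minutes_per_working_day

# e.g. a hypothetical 50,000 voters spending the reported median 5 minutes each:
print(round(voting_workload_days(50_000, 5)))  # 556
```

The key point survives any choice of inputs: a few minutes per voter, spread across the whole community, totals less than the concentrated days of panel and assessor time in conventional peer review.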
{"title":"Reporting of sex and gender in randomized controlled trials in Canada: a cross-sectional methods study.","authors":"V Welch, M Doull, M Yoganathan, J Jull, M Boscoe, S E Coen, Z Marshall, J Pardo Pardo, A Pederson, J Petkovic, L Puil, L Quinlan, B Shea, T Rader, V Runnels, S Tudiver","doi":"10.1186/s41073-017-0039-6","DOIUrl":"10.1186/s41073-017-0039-6","url":null,"abstract":"<p><strong>Background: </strong>Accurate reporting on sex and gender in health research is integral to ensuring that health interventions are safe and effective. In Canada and internationally, governments, research organizations, journal editors, and health agencies have called for more inclusive research, provision of sex-disaggregated data, and the integration of sex and gender analysis throughout the research process. Sex and gender analysis is generally defined as an approach for considering how and why different subpopulations (e.g., of diverse genders, ages, and social locations) may experience health conditions and interventions in different or similar ways.The objective of this study was to assess the extent and nature of reporting about sex and/or gender, including whether sex and gender analysis (SGA) was carried out in a sample of Canadian randomized controlled trials (RCTs) with human participants.</p><p><strong>Methods: </strong>We searched MEDLINE from 01 January 2013 to 23 July 2014 using a validated filter for identification of RCTs, combined with terms related to Canada. Two reviewers screened the search results to identify the first 100 RCTs that were either identified in the trial publication as funded by a Canadian organization or which had a first or last author based in Canada. 
Data were independently extracted by two people from 10% of the RCTs during an initial training period; once agreement was reached on this sample, the remainder of the data extraction was completed by one person and verified by a second.</p><p><strong>Results: </strong>The search yielded 1433 records. We screened 256 records to identify 100 RCTs which met our eligibility criteria. The median sample size of the RCTs was 107 participants (range 12-6085). While 98% of studies described the demographic composition of their participants by sex, only 6% conducted a subgroup analysis across sex and 4% reported sex-disaggregated data. No article defined \"sex\" and/or \"gender.\" No publication carried out a comprehensive sex and gender analysis.</p><p><strong>Conclusions: </strong>Findings highlight poor uptake of sex and gender considerations in the Canadian RCT context and underscore the need for better articulated guidance on sex and gender analysis to improve reporting of evidence, inform policy development, and guide future research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5803639/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35839615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the process of research ethics review.","authors":"Stacey A Page, Jeffrey Nyeboer","doi":"10.1186/s41073-017-0038-7","DOIUrl":"https://doi.org/10.1186/s41073-017-0038-7","url":null,"abstract":"<p><strong>Background: </strong>Research Ethics Boards, or Institutional Review Boards, protect the safety and welfare of human research participants. These bodies are responsible for providing an independent evaluation of proposed research studies, ultimately ensuring that the research does not proceed unless standards and regulations are met.</p><p><strong>Main body: </strong>Concurrent with the growing volume of human participant research, the workload and responsibilities of Research Ethics Boards (REBs) have continued to increase. Dissatisfaction with the review process, particularly the time interval from submission to decision, is common within the research community, but there has been little systematic effort to examine REB processes that may contribute to inefficiencies. We offer a model illustrating REB workflow, stakeholders, and accountabilities.</p><p><strong>Conclusion: </strong>Better understanding of the components of the research ethics review will allow performance targets to be set, problems identified, and solutions developed, ultimately improving the process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"14"},"PeriodicalIF":0.0,"publicationDate":"2017-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-017-0038-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35837678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reviewer training to assess knowledge translation in funding applications is long overdue.","authors":"Gayle Scarrow, Donna Angus, Bev J Holmes","doi":"10.1186/s41073-017-0037-8","DOIUrl":"https://doi.org/10.1186/s41073-017-0037-8","url":null,"abstract":"<p><strong>Background: </strong>Health research funding agencies are placing a growing focus on knowledge translation (KT) plans, also known as dissemination and implementation (D&I) plans, in grant applications to decrease the gap between what we know from research and what we do in practice, policy, and further research. Historically, review panels have focused on the scientific excellence of applications to determine which should be funded; however, relevance to societal health priorities, the facilitation of evidence-informed practice and policy, or realizing commercialization opportunities all require a different lens.</p><p><strong>Discussion: </strong>While experts in their respective fields, grant reviewers may lack the competencies to rigorously assess the KT components of applications. Funders of health research-including health charities, non-profit agencies, governments, and foundations-have an obligation to ensure that these components of funding applications are as rigorously evaluated as the scientific components. In this paper, we discuss the need for a more rigorous evaluation of knowledge translation potential by review panels and propose how this may be addressed.</p><p><strong>Conclusion: </strong>We propose that reviewer training supported in various ways including guidelines and KT expertise on review panels and modalities such as online and face-to-face training will result in the rigorous assessment of all components of funding applications, thus increasing the relevance and use of funded research evidence. 
An unintended but highly welcome consequence of such training could be higher quality D&I or KT plans in subsequent funding applications from trained reviewers.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2017-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-017-0037-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35838251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Registration of randomized controlled trials in nursing journals.","authors":"Richard Gray, Ashish Badnapurkar, Eman Hassanein, Donna Thomas, Laileah Barguir, Charley Baker, Martin Jones, Daniel Bressington, Ellie Brown, Annie Topping","doi":"10.1186/s41073-017-0036-9","DOIUrl":"10.1186/s41073-017-0036-9","url":null,"abstract":"<p><strong>Background: </strong>Trial registration helps minimize publication and reporting bias. In leading medical journals, 96% of published trials are registered. The aim of this study was to determine the proportion of randomized controlled trials published in key nursing journals that met criteria for timely registration.</p><p><strong>Methods: </strong>We reviewed all RCTs published in three (two general, one mental health) nursing journals between August 2011 and September 2016. We classified the included trials as: 1. Not registered, 2. Registered but not reported in manuscript, 3. Registered retrospectively, 4. Registered prospectively (before the recruitment of the first subject into the trial). 5. Timely registration (as 4 but the trial identification number is reported in abstract).</p><p><strong>Results: </strong>We identified 135 trials published in the three included journals. The majority (<i>n</i> = 78, 58%) were not registered. Thirty-three (24%) were retrospectively registered. Of the 24 (18%) trials that were prospectively registered, 11 (8%) met the criteria for timely registration.</p><p><strong>Conclusions: </strong>There is an unacceptable difference in rates of trial registration between leading medical and nursing journals. 
Concerted effort is required by nurse researchers, reviewers and journal editors to ensure that all trials are registered in a timely way.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"8"},"PeriodicalIF":0.0,"publicationDate":"2017-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5803636/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35838092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
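The percentages in the Results above can be checked directly from the reported counts (n = 135 trials; note the 11 timely-registered trials are a subset of the 24 prospectively registered ones):

```python
# Counts taken from the abstract's Results section.
counts = {
    "not registered": 78,
    "registered retrospectively": 33,
    "registered prospectively": 24,  # includes the 11 timely-registered trials
}
total = sum(counts.values())  # 78 + 33 + 24 = 135

percentages = {k: round(100 * v / total) for k, v in counts.items()}
print(total, percentages)
# 135 {'not registered': 58, 'registered retrospectively': 24, 'registered prospectively': 18}
```

The rounded percentages (58%, 24%, 18%, and 11/135 = 8% timely) match the abstract, and the three mutually exclusive categories sum to the full sample of 135.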