{"title":"Reporting in the abstracts presented at the 5th AfriNEAD (African Network for Evidence-to-Action in Disability) Conference in Ghana.","authors":"Eric Badu, Paul Okyere, Diane Bell, Naomi Gyamfi, Maxwell Peprah Opoku, Peter Agyei-Baffour, Anthony Kwaku Edusei","doi":"10.1186/s41073-018-0061-3","DOIUrl":"https://doi.org/10.1186/s41073-018-0061-3","url":null,"abstract":"<p><strong>Introduction: </strong>The abstracts of a conference are important for informing the participants about the results that are communicated. However, there is poor reporting in conference abstracts in disability research. This paper aims to assess the reporting in the abstracts presented at the 5th African Network for Evidence-to-Action in Disability (AfriNEAD) Conference in Ghana.</p><p><strong>Methods: </strong>This descriptive study extracted information from the abstracts presented at the 5th AfriNEAD Conference. Three reviewers independently reviewed all the included abstracts using a predefined data extraction form. Descriptive statistics were used to analyze the extracted information, using Stata version 15.</p><p><strong>Results: </strong>Of the 76 abstracts assessed, 54 met the inclusion criteria, while 22 were excluded. More than half of all the included abstracts (32/54; 59.26%) were studies conducted in Ghana. Some of the included abstracts did not report on the study design (37/54; 68.5%), the type of analysis performed (30/54; 55.56%), the sampling (27/54; 50%), and the sample size (18/54; 33.33%). Almost all the included abstracts did not report the age distribution and the gender of the participants.</p><p><strong>Conclusion: </strong>The study findings confirm that there is poor reporting of methods and findings in conference abstracts. 
Future conference organizers should critically examine abstracts to ensure that these issues are adequately addressed, so that findings are effectively communicated to participants.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2019-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-018-0061-3","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36939596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Replicability and replication in the humanities.","authors":"Rik Peels","doi":"10.1186/s41073-018-0060-4","DOIUrl":"10.1186/s41073-018-0060-4","url":null,"abstract":"<p><p>A large number of scientists and several news platforms have, over the last few years, been speaking of a replication crisis in various academic disciplines, especially the biomedical and social sciences. This paper answers the novel question of whether we should also pursue replication in the humanities. First, I create more conceptual clarity by defining, in addition to the term \"humanities,\" various key terms in the debate on replication, such as \"reproduction\" and \"replicability.\" In doing so, I pay attention to what is supposed to be the object of replication: certain studies, particular inferences, or specific results. After that, I spell out three reasons for thinking that replication in the humanities is not possible and argue that they are unconvincing. Subsequently, I give a more detailed case for thinking that replication in the humanities is possible. Finally, I explain why such replication in the humanities is not only possible, but also desirable.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2019-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6348612/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36918266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Professional medical writing support and the quality, ethics and timeliness of clinical trial reporting: a systematic review","authors":"O. Evuarherhe, W. Gattrell, Richard White, C. Winchester","doi":"10.1186/s41073-019-0073-7","DOIUrl":"https://doi.org/10.1186/s41073-019-0073-7","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0073-7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46108951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Protocol for the development of a CONSORT extension for RCTs using cohorts and routinely collected health data.","authors":"Linda Kwakkenbos, Edmund Juszczak, Lars G Hemkens, Margaret Sampson, Ole Fröbert, Clare Relton, Chris Gale, Merrick Zwarenstein, Sinéad M Langan, David Moher, Isabelle Boutron, Philippe Ravaud, Marion K Campbell, Kimberly A Mc Cord, Tjeerd P van Staa, Lehana Thabane, Rudolf Uher, Helena M Verkooijen, Eric I Benchimol, David Erlinge, Maureen Sauvé, David Torgerson, Brett D Thombs","doi":"10.1186/s41073-018-0053-3","DOIUrl":"10.1186/s41073-018-0053-3","url":null,"abstract":"<p><strong>Background: </strong>Randomized controlled trials (RCTs) are often complex and expensive to perform. Less than one third achieve planned recruitment targets, follow-up can be labor-intensive, and many have limited real-world generalizability. Designs for RCTs conducted using cohorts and routinely collected health data, including registries, electronic health records, and administrative databases, have been proposed to address these challenges and are being rapidly adopted. These designs, however, are relatively recent innovations, and published RCT reports often do not describe important aspects of their methodology in a standardized way. Our objective is to extend the Consolidated Standards of Reporting Trials (CONSORT) statement with a consensus-driven reporting guideline for RCTs using cohorts and routinely collected health data.</p><p><strong>Methods: </strong>The development of this CONSORT extension will consist of five phases. Phase 1 (completed) consisted of the project launch, including fundraising, the establishment of a research team, and development of a conceptual framework. In phase 2, a systematic review will be performed to identify publications (1) that describe methods or reporting considerations for RCTs conducted using cohorts and routinely collected health data or (2) that are protocols or report results from such RCTs. 
An initial \"long list\" of possible modifications to CONSORT checklist items and possible new items for the reporting guideline will be generated based on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) and The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statements. Additional possible modifications and new items will be identified based on the results of the systematic review. Phase 3 will consist of a three-round Delphi exercise with methods and content experts to evaluate the \"long list\" and generate a \"short list\" of key items. In phase 4, these items will serve as the basis for an in-person consensus meeting to finalize a core set of items to be included in the reporting guideline and checklist. Phase 5 will involve drafting the checklist and elaboration-explanation documents, and dissemination and implementation of the guideline.</p><p><strong>Discussion: </strong>Development of this CONSORT extension will contribute to more transparent reporting of RCTs conducted using cohorts and routinely collected health data.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"3 ","pages":"9"},"PeriodicalIF":0.0,"publicationDate":"2018-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6205772/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9105072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Designing integrated research integrity training: authorship, publication, and peer review","authors":"Mark Hooper, Virginia Barbour, Anne Walsh, Stephanie Bradbury, Jane Jacobs","doi":"10.1186/s41073-018-0046-2","DOIUrl":"https://doi.org/10.1186/s41073-018-0046-2","url":null,"abstract":"This paper describes the experience of an academic institution, the Queensland University of Technology (QUT), developing training courses about research integrity practices in authorship, publication, and journal peer review. The importance of providing research integrity training in these areas is now widely accepted; however, it remains an open question how best to conduct this training. For this reason, it is vital for institutions, journals, and peak bodies to share learnings. We describe how we have collaborated across our institution to develop training that supports QUT’s principles and which is in line with insights from contemporary research on best practices in learning design, universal design, and faculty involvement. We also discuss how we have refined these courses iteratively over time, and consider potential mechanisms for evaluating the effectiveness of the courses more formally.","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simple decision-tree tool to facilitate author identification of reporting guidelines during submission: a before-after study.","authors":"Daniel R Shanahan, Ines Lopes de Sousa, Diana M Marshall","doi":"10.1186/s41073-017-0044-9","DOIUrl":"https://doi.org/10.1186/s41073-017-0044-9","url":null,"abstract":"<p><strong>Background: </strong>There is evidence that direct journal endorsement of reporting guidelines can lead to important improvements in the quality and reliability of the published research. However, over the last 20 years, there has been a proliferation of reporting guidelines for different study designs, making it impractical for a journal to explicitly endorse them all. The objective of this study was to investigate whether a decision tree tool made available during the submission process facilitates author identification of the relevant reporting guideline.</p><p><strong>Methods: </strong>This was a prospective 14-week before-after study across four speciality medical research journals. During the submission process, authors were prompted to follow the relevant reporting guideline from the EQUATOR Network and asked to confirm that they followed the guideline ('before'). After 7 weeks, this prompt was updated to include a direct link to the decision-tree tool and an additional prompt for those authors who stated that 'no guidelines were applicable' ('after'). For each article submitted, the authors' response, what guideline they followed (if any) and what reporting guideline they should have followed (including none relevant) were recorded.</p><p><strong>Results: </strong>Overall, 590 manuscripts were included in this analysis-300 in the before cohort and 290 in the after. There were relevant reporting guidelines for 75% of manuscripts in each group; STROBE was the most commonly applicable reporting guideline, relevant for 35% (<i>n</i> = 106) and 37% (<i>n</i> = 106) of manuscripts, respectively. 
Use of the tool was associated with an 8.4% improvement in the number of authors correctly identifying the relevant reporting guideline for their study (<i>p</i> < 0.0001), a 14% reduction in the number of authors incorrectly stating that there were no relevant reporting guidelines (<i>p</i> < 0.0001), and a 1.7% reduction in authors choosing a guideline (<i>p</i> = 0.10). However, the 'after' cohort also saw a significant increase in the number of authors stating that there were relevant reporting guidelines for their study, but not specifying which (34 vs 29%; <i>p</i> = 0.04).</p><p><strong>Conclusion: </strong>This study suggests that use of a decision-tree tool during submission of a manuscript is associated with improved author identification of the relevant reporting guidelines for their study type; however, the majority of authors still failed to correctly identify the relevant guidelines.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"20"},"PeriodicalIF":0.0,"publicationDate":"2017-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-017-0044-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35837675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"'Are you siding with a personality or the grant proposal?': observations on how peer review panels function.","authors":"John Coveney, Danielle L Herbert, Kathy Hill, Karen E Mow, Nicholas Graves, Adrian Barnett","doi":"10.1186/s41073-017-0043-x","DOIUrl":"10.1186/s41073-017-0043-x","url":null,"abstract":"<p><strong>Background: </strong>In Australia, the peer review process for competitive funding is usually conducted by a peer review group in conjunction with prior assessment from external assessors. This process is quite mysterious to those outside it. The purpose of this research was to throw light on grant review panels (sometimes called the 'black box') through an examination of the impact of panel procedures, panel composition and panel dynamics on the decision-making in the grant review process. A further purpose was to compare experience of a simplified review process with more conventional processes used in assessing grant proposals in Australia.</p><p><strong>Methods: </strong>This project was one aspect of a larger study into the costs and benefits of a simplified peer review process. The Queensland University of Technology (QUT)-simplified process was compared with the National Health and Medical Research Council's (NHMRC) more complex process. Grant review panellists involved in both processes were interviewed about their experience of the decision-making process that assesses the excellence of an application. All interviews were recorded and transcribed. Each transcription was de-identified and returned to the respondent for review. Final transcripts were read repeatedly and coded, and similar codes were amalgamated into categories that were used to build themes. Final themes were shared with the research team for feedback.</p><p><strong>Results: </strong>Two major themes arose from the research: (1) assessing grant proposals and (2) factors influencing the fairness, integrity and objectivity of review. 
Issues such as the quality of writing in a grant proposal, comparison of the two review methods, the purpose and use of the rebuttal, assessing the financial value of funded projects, the importance of the experience of the panel membership and the role of track record and the impact of group dynamics on the review process were all discussed. The research also examined the influence of research culture on decision-making in grant review panels. One of the aims of this study was to compare a simplified review process with more conventional processes. Generally, participants were supportive of the simplified process.</p><p><strong>Conclusions: </strong>Transparency in the grant review process will result in better appreciation of the outcome. Despite the provision of clear guidelines for peer review, reviewing processes are likely to be subjective to the extent that different reviewers apply different rules. The peer review process will come under more scrutiny as funding for research becomes even more competitive. There is justification for further research on the process, especially of a kind that taps more deeply into the 'black box' of peer review.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"19"},"PeriodicalIF":0.0,"publicationDate":"2017-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5803633/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35838151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Percentage-based Author Contribution Index: a universal measure of author contribution to scientific articles.","authors":"Stéphane Boyer, Takayoshi Ikeda, Marie-Caroline Lefort, Jagoba Malumbres-Olarte, Jason M Schmidt","doi":"10.1186/s41073-017-0042-y","DOIUrl":"10.1186/s41073-017-0042-y","url":null,"abstract":"<p><strong>Background: </strong>Deciphering the amount of work provided by different co-authors of a scientific paper has been a recurrent problem in science. Despite the myriad of metrics available, the scientific community still largely relies on the position in the list of authors to evaluate contributions, a metric that attributes subjective and unfounded credit to co-authors. We propose an easy-to-apply, universally comparable and fair metric to measure and report co-authors' contributions in the scientific literature.</p><p><strong>Methods: </strong>The proposed Author Contribution Index (ACI) is based on contribution percentages provided by the authors, preferably at the time of submission. Researchers can use ACI to compare the contributions of different authors, describe the contribution profile of a particular researcher or analyse how contribution changes through time. We provide such an analysis based on contribution percentages provided by 97 scientists from the field of ecology who voluntarily responded to an online anonymous survey.</p><p><strong>Results: </strong>ACI is simple to understand and to implement because it is based solely on percentage contributions and the number of co-authors. It provides a continuous score that reflects the contribution of one author as compared to the average contribution of all other authors. For example, ACI(i) = 3 means that author i contributed three times more than what the other authors contributed on average. 
Our analysis comprised 836 papers published in 2014-2016 and revealed patterns of ACI values that relate to career advancement.</p><p><strong>Conclusion: </strong>There are many examples of author contribution indices that have been proposed but none has really been adopted by scientific journals. Many of the proposed solutions are either too complicated, not accurate enough or not comparable across articles, authors and disciplines. The author contribution index presented here addresses these three major issues and has the potential to contribute to more transparency in the science literature. If adopted by scientific journals, it could provide job seekers, recruiters and evaluating bodies with a tool to gather information that is essential to them and cannot be easily and accurately obtained otherwise. We also suggest that scientists use the index regardless of whether it is implemented by journals or not.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"18"},"PeriodicalIF":0.0,"publicationDate":"2017-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5803580/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35837677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
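The ACI described in the abstract above can be sketched in a few lines. This is a plausible formalization of the stated definition (author i's percentage contribution divided by the mean percentage contribution of the other co-authors), consistent with the worked example ACI(i) = 3, but it is an assumption rather than a transcription of the published formula.

```python
def aci(contributions, i):
    """Author Contribution Index for author i (a sketch, not the published code).

    `contributions` is a list of percentage contributions summing to 100,
    one entry per co-author. ACI(i) compares author i's share with the mean
    share of all other co-authors, so ACI(i) = 3 means author i contributed
    three times more than the other authors did on average.
    """
    n = len(contributions)
    if n < 2:
        raise ValueError("ACI needs at least two co-authors")
    c_i = contributions[i]
    mean_others = (sum(contributions) - c_i) / (n - 1)
    return c_i / mean_others

# An equal four-way split gives every author an ACI of 1.0:
print(aci([25, 25, 25, 25], 0))  # -> 1.0
# A lead author with 60% of a four-author paper:
print(aci([60, 20, 10, 10], 0))  # -> 4.5
```

Because the score depends only on the percentages and the number of co-authors, it is comparable across papers with different author-list lengths, which is the property the abstract emphasizes.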
{"title":"Selective citation in the literature on swimming in chlorinated water and childhood asthma: a network analysis.","authors":"Bram Duyx, Miriam J E Urlings, Gerard M H Swaen, Lex M Bouter, Maurice P Zeegers","doi":"10.1186/s41073-017-0041-z","DOIUrl":"https://doi.org/10.1186/s41073-017-0041-z","url":null,"abstract":"<p><strong>Background: </strong>Knowledge development depends on an unbiased representation of the available evidence. Selective citation may distort this representation. Recently, some controversy emerged regarding the possible impact of swimming on childhood asthma, raising the question about the role of selective citation in this field. Our objective was to assess the occurrence and determinants of selective citation in scientific publications on the relationship between swimming in chlorinated pools and childhood asthma.</p><p><strong>Methods: </strong>We identified scientific journal articles on this relationship via a systematic literature search. The following factors were taken into account: study outcome (authors' conclusion, data-based conclusion), other content-related article characteristics (article type, sample size, research quality, specificity), content-unrelated article characteristics (language, publication title, funding source, number of authors, number of affiliations, number of references, journal impact factor), author characteristics (gender, country, affiliation), and citation characteristics (time to citation, authority, self-citation). To assess the impact of these factors on citation, we performed a series of univariate and adjusted random-effects logistic regressions, with potential citation path as unit of analysis.</p><p><strong>Results: </strong>Thirty-six articles were identified in this network, consisting of 570 potential citation paths of which 191 (34%) were realized. 
There was strong evidence that articles with at least one author in common, cited each other more often than articles that had no common authors (odds ratio (OR) 5.2, 95% confidence interval (CI) 3.1-8.8). Similarly, the chance of being cited was higher for articles that were empirical rather than narrative (OR 4.2, CI 2.6-6.7), that reported a large sample size (OR 5.8, CI 2.9-11.6), and that were written by authors with a high authority within the network (OR 4.1, CI 2.1-8.0). Further, there was some evidence for citation bias: articles that confirmed the relation between swimming and asthma were cited more often (OR 1.8, CI 1.1-2.9), but this finding was not robust.</p><p><strong>Conclusions: </strong>There is clear evidence of selective citation in this research field, but the evidence for citation bias is not very strong.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"17"},"PeriodicalIF":0.0,"publicationDate":"2017-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-017-0041-z","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35838150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
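The odds ratios reported in the abstract above come from random-effects logistic regression over potential citation paths, but the basic quantity can be illustrated with a plain 2x2 table. The counts below are hypothetical (the study reports only the totals of 570 potential paths and 191 realized; the split by shared authorship is invented for illustration).

```python
def odds_ratio(cited_exposed, uncited_exposed, cited_unexposed, uncited_unexposed):
    """Unadjusted odds ratio from a 2x2 table of potential citation paths.

    'Exposed' here means the citing and cited articles share at least one
    author; 'cited' means the potential citation path was realized.
    """
    odds_exposed = cited_exposed / uncited_exposed
    odds_unexposed = cited_unexposed / uncited_unexposed
    return odds_exposed / odds_unexposed

# Hypothetical split: 40 of 60 shared-author paths realized,
# versus 151 of 510 paths with no shared author.
print(round(odds_ratio(40, 20, 151, 359), 2))  # -> 4.75
```

An OR well above 1, as in this sketch, is the pattern the authors interpret as evidence of selective citation; the published estimates additionally adjust for the other article and author characteristics listed in the Methods.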
{"title":"Using democracy to award research funding: an observational study.","authors":"Adrian G Barnett, Philip Clarke, Cedryck Vaquette, Nicholas Graves","doi":"10.1186/s41073-017-0040-0","DOIUrl":"10.1186/s41073-017-0040-0","url":null,"abstract":"<p><strong>Background: </strong>Winning funding for health and medical research usually involves a lengthy application process. With success rates under 20%, much of the time spent by 80% of applicants could have been better used on actual research. An alternative funding system that could save time is using democracy to award the most deserving researchers based on votes from the research community. We aimed to pilot how such a system could work and examine some potential biases.</p><p><strong>Methods: </strong>We used an online survey with a convenience sample of Australian researchers. Researchers were asked to name the 10 scientists currently working in Australia that they thought most deserved funding for future research. For comparison, we used recent winners from large national fellowship schemes that used traditional peer review.</p><p><strong>Results: </strong>Voting took a median of 5 min (inter-quartile range 3 to 10 min). Extrapolating to a national voting scheme, we estimate 599 working days of voting time (95% CI 490 to 728), compared with 827 working days for the current peer review system for fellowships. The gender ratio in the votes was a more equal 45:55 (female to male) compared with 34:66 in recent fellowship winners, although this could be explained by Simpson's paradox. Voters were biased towards their own institution, with an additional 1.6 votes per ballot (inter-quartile range 0.8 to 2.2) above the expected number. 
Respondents raised many concerns about the idea of using democracy to fund research, including vote rigging, lobbying and it becoming a popularity contest.</p><p><strong>Conclusions: </strong>This is a preliminary study of using voting that does not investigate many of the concerns about how a voting system would work. We were able to show that voting would take less time than traditional peer review and would spread the workload over many more reviewers. Further studies of alternative funding systems are needed as well as a wide discussion with the research community about potential changes.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"2 ","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5803583/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35837673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}