{"title":"Gender disparity in publication records: a qualitative study of women researchers in computing and engineering.","authors":"Mohammad Hosseini, Shiva Sharifzad","doi":"10.1186/s41073-021-00117-3","DOIUrl":"https://doi.org/10.1186/s41073-021-00117-3","url":null,"abstract":"<p><strong>Background: </strong>The current paper follows up on the results of an exploratory quantitative analysis that compared the publication and citation records of men and women researchers affiliated with the Faculty of Computing and Engineering at Dublin City University (DCU) in Ireland. Quantitative analysis of publications between 2013 and 2018 showed that women researchers had fewer publications, received fewer citations per person, and participated less often in international collaborations. Given the significance of publications for pursuing an academic career, we used qualitative methods to understand these differences and explore factors that, according to women researchers, have contributed to this disparity.</p><p><strong>Methods: </strong>Sixteen women researchers from DCU's Faculty of Computing and Engineering were interviewed using a semi-structured questionnaire. Once interviews were transcribed and anonymised, they were coded by both authors in two rounds using an inductive approach.</p><p><strong>Results: </strong>Interviewed women believed that their opportunities for research engagement and research funding, collaborations, publications and promotions are negatively impacted by gender roles, implicit gender biases, their own high professional standards, family responsibilities, nationality and negative perceptions of their expertise and accomplishments.</p><p><strong>Conclusions: </strong>Our study has found that women in DCU's Faculty of Computing and Engineering face challenges that, according to those interviewed, negatively affect their engagement in various research activities, and, therefore, have contributed to their lower publication record. We suggest that while affirmative programmes aiming to correct disparities are necessary, they are more likely to improve organisational culture if they are implemented in parallel with bottom-up initiatives that engage all parties, including men researchers and non-academic partners, to inform and sensitise them about the significance of gender equity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8632200/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39679575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Peer review reduces spin in PCORI research reports.","authors":"Evan Mayo-Wilson, Meredith L Phillips, Avonne E Connor, Kelly J Vander Ley, Kevin Naaman, Mark Helfand","doi":"10.1186/s41073-021-00119-1","DOIUrl":"https://doi.org/10.1186/s41073-021-00119-1","url":null,"abstract":"<p><strong>Background: </strong>The Patient-Centered Outcomes Research Institute (PCORI) is obligated to peer review and to post publicly \"Final Research Reports\" of all funded projects. PCORI peer review emphasizes adherence to PCORI's Methodology Standards and principles of ethical scientific communication. During the peer review process, reviewers and editors seek to ensure that results are presented objectively and interpreted appropriately, e.g., free of spin.</p><p><strong>Methods: </strong>Two independent raters assessed PCORI peer review feedback sent to authors. We calculated the proportion of reports in which spin was identified during peer review, and the types of spin identified. We included reports submitted by April 2018 with at least one associated journal article. The same raters then assessed whether authors addressed reviewers' comments about spin. The raters also assessed whether spin identified during PCORI peer review was present in related journal articles.</p><p><strong>Results: </strong>We included 64 PCORI-funded projects. Peer reviewers or editors identified spin in 55/64 (86%) submitted research reports. Types of spin included reporting bias (46/55; 84%), inappropriate interpretation (40/55; 73%), inappropriate extrapolation of results (15/55; 27%), and inappropriate attribution of causality (5/55; 9%). Authors addressed comments about spin related to 47/55 (85%) of the reports. Of 110 associated journal articles, PCORI comments about spin were potentially applicable to 44/110 (40%) articles, of which 27/44 (61%) contained the same spin that was identified in the PCORI research report. The proportion of articles with spin was similar for articles accepted before and after PCORI peer review (63% vs 58%).</p><p><strong>Discussion: </strong>Just as spin is common in journal articles and press releases, we found that most reports submitted to PCORI included spin. While most spin was mitigated during the funder's peer review process, we found no evidence that review of PCORI reports influenced spin in journal articles. Funders could explore interventions aimed at reducing spin in published articles of studies they support.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8638354/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39768548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transparency of peer review: a semi-structured interview study with chief editors from social sciences and humanities.","authors":"Veli-Matti Karhulahti, Hans-Joachim Backe","doi":"10.1186/s41073-021-00116-4","DOIUrl":"https://doi.org/10.1186/s41073-021-00116-4","url":null,"abstract":"<p><strong>Background: </strong>Open peer review practices are increasing in medicine and life sciences, but in social sciences and humanities (SSH) they are still rare. We aimed to map out how editors of respected SSH journals perceive open peer review, how they balance policy, ethics, and pragmatism in the review processes they oversee, and how they view their own power in the process.</p><p><strong>Methods: </strong>We conducted 12 pre-registered semi-structured interviews with editors of respected SSH journals. Interviews consisted of 21 questions and lasted an average of 67 min. Interviews were transcribed, descriptively coded, and organized into code families.</p><p><strong>Results: </strong>SSH editors saw the benefits of anonymized peer review as outweighing those of open peer review. They considered anonymized peer review the \"gold standard\" that authors and editors are expected to follow to respect institutional policies; moreover, anonymized review was also perceived as ethically superior due to the protection it provides, and as more pragmatic because it eases the search for reviewers. Finally, editors acknowledged their power in the publication process and reported strategies for keeping their work as unbiased as possible.</p><p><strong>Conclusions: </strong>Editors of SSH journals preferred the benefits of anonymized peer review over those of open peer review and acknowledged the power they hold in the publication process, during which authors are almost completely disclosed to editorial bodies. We recommend that journals communicate the transparency elements of their manuscript review processes by listing all bodies that contributed to the decision at every review stage.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2021-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8598274/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39721579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A billion-dollar donation: estimating the cost of researchers' time spent on peer review.","authors":"Balazs Aczel, Barnabas Szaszi, Alex O Holcombe","doi":"10.1186/s41073-021-00118-2","DOIUrl":"https://doi.org/10.1186/s41073-021-00118-2","url":null,"abstract":"<p><strong>Background: </strong>The amount and value of researchers' peer review work is critical for academia and journal publishing. However, this labor is under-recognized, its magnitude is unknown, and alternative ways of organizing peer review labor are rarely considered.</p><p><strong>Methods: </strong>Using publicly available data, we provide an estimate of researchers' time and the salary-based contribution to the journal peer review system.</p><p><strong>Results: </strong>We found that the total time reviewers globally worked on peer reviews was over 100 million hours in 2020, equivalent to over 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based, close to 400 million USD.</p><p><strong>Conclusions: </strong>By design, our results are very likely to be under-estimates as they reflect only a portion of the total number of journals worldwide. The numbers highlight the enormous amount of work and time that researchers provide to the publication system, and the importance of considering alternative ways of structuring, and paying for, peer review. We foster this process by discussing some alternative models that aim to boost the benefits of peer review, thus improving its cost-benefit ratio.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"14"},"PeriodicalIF":0.0,"publicationDate":"2021-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8591820/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39622221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Individual versus general structured feedback to improve agreement in grant peer review: a randomized controlled trial.","authors":"Jan-Ole Hesselberg, Knut Inge Fostervold, Pål Ulleberg, Ida Svege","doi":"10.1186/s41073-021-00115-5","DOIUrl":"10.1186/s41073-021-00115-5","url":null,"abstract":"<p><strong>Background: </strong>Vast sums are distributed based on grant peer review, but studies show that interrater reliability is often low. In this study, we tested the effect of receiving two short individual feedback reports compared to one short general feedback report on the agreement between reviewers.</p><p><strong>Methods: </strong>A total of 42 reviewers at the Norwegian Foundation Dam were randomly assigned to receive either a general feedback report or an individual feedback report. The general feedback group received one report before the start of the reviews that contained general information about the previous call in which the reviewers participated. In the individual feedback group, the reviewers received two reports, one before the review period (based on the previous call) and one during the period (based on the current call). In the individual feedback group, the reviewers were presented with detailed information on their scoring compared with the review committee as a whole, both before and during the review period. The main outcomes were the proportion of agreement in the eligibility assessment and the average difference in scores between pairs of reviewers assessing the same proposal. The outcomes were measured in 2017 and after the feedback was provided in 2018.</p><p><strong>Results: </strong>A total of 2398 paired reviews were included in the analysis. There was a significant difference between the two groups in the proportion of absolute agreement on whether the proposal was eligible for the funding programme, with the general feedback group demonstrating a higher rate of agreement. There was no difference between the two groups in terms of the average score difference. However, the agreement regarding the proposal score remained critically low for both groups.</p><p><strong>Conclusions: </strong>We did not observe changes in proposal score agreement between 2017 and 2018 in reviewers receiving different feedback. The low levels of agreement remain a major concern in grant peer review, and research to identify contributing factors as well as the development and testing of interventions to increase agreement rates are still needed.</p><p><strong>Trial registration: </strong>The study was preregistered at OSF.io/n4fq3 .</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"12"},"PeriodicalIF":0.0,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8485516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39474032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Strengthening the incentives for responsible research practices in Australian health and medical research funding.","authors":"Joanna Diong, Cynthia M Kroeger, Katherine J Reynolds, Adrian Barnett, Lisa A Bero","doi":"10.1186/s41073-021-00113-7","DOIUrl":"10.1186/s41073-021-00113-7","url":null,"abstract":"<p><strong>Background: </strong>Australian health and medical research funders support substantial research efforts, and incentives within grant funding schemes influence researcher behaviour. We aimed to determine to what extent Australian health and medical funders incentivise responsible research practices.</p><p><strong>Methods: </strong>We conducted an audit of instructions from research grant and fellowship schemes. Eight national research grants and fellowships were purposively sampled to select schemes that awarded the largest amount of funds. The funding scheme instructions were assessed against 9 criteria to determine to what extent they incentivised these responsible research and reporting practices: (1) publicly register study protocols before starting data collection, (2) register analysis protocols before starting data analysis, (3) make study data openly available, (4) make analysis code openly available, (5) make research materials openly available, (6) discourage use of publication metrics, (7) conduct quality research (e.g. adhere to reporting guidelines), (8) collaborate with a statistician, and (9) adhere to other responsible research practices. Each criterion was answered using one of the following responses: \"Instructed\", \"Encouraged\", or \"No mention\".</p><p><strong>Results: </strong>Across the 8 schemes from 5 funders, applicants were instructed or encouraged to address a median of 4 (range 0 to 5) of the 9 criteria. Three criteria received no mention in any scheme (register analysis protocols, make analysis code open, collaborate with a statistician). Importantly, most incentives did not seem strong as applicants were only instructed to register study protocols, discourage use of publication metrics and conduct quality research. Other criteria were encouraged but were not required.</p><p><strong>Conclusions: </strong>Funders could strengthen the incentives for responsible research practices by requiring grant and fellowship applicants to implement these practices in their proposals. Administering institutions could be required to implement these practices to be eligible for funding. Strongly rewarding researchers for implementing robust research practices could lead to sustained improvements in the quality of health and medical research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2021-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8328133/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39277405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to: Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs.","authors":"Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche","doi":"10.1186/s41073-021-00114-6","DOIUrl":"https://doi.org/10.1186/s41073-021-00114-6","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2021-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-021-00114-6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39086140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: the TRUST process for rating journal policies, procedures, and practices.","authors":"Evan Mayo-Wilson, Sean Grant, Lauren Supplee, Sina Kianersi, Afsah Amin, Alex DeHaven, David Mellor","doi":"10.1186/s41073-021-00112-8","DOIUrl":"10.1186/s41073-021-00112-8","url":null,"abstract":"<p><strong>Background: </strong>The Transparency and Openness Promotion (TOP) Guidelines describe modular standards that journals can adopt to promote open science. The TOP Factor is a metric to describe the extent to which journals have adopted the TOP Guidelines in their policies. Systematic methods and rating instruments are needed to calculate the TOP Factor. Moreover, implementation of these open science policies depends on journal procedures and practices, for which TOP provides no standards or rating instruments.</p><p><strong>Methods: </strong>We describe a process for assessing journal policies, procedures, and practices according to the TOP Guidelines. We developed this process as part of the Transparency of Research Underpinning Social Intervention Tiers (TRUST) Initiative to advance open science in the social intervention research ecosystem. We also provide new instruments for rating journal instructions to authors (policies), manuscript submission systems (procedures), and published articles (practices) according to standards in the TOP Guidelines. In addition, we describe how to determine the TOP Factor score for a journal, calculate reliability of journal ratings, and assess coherence among a journal's policies, procedures, and practices. As a demonstration of this process, we describe a protocol for studying approximately 345 influential journals that have published research used to inform evidence-based policy.</p><p><strong>Discussion: </strong>The TRUST Process includes systematic methods and rating instruments for assessing and facilitating implementation of the TOP Guidelines by journals across disciplines. Our study of journals publishing influential social intervention research will provide a comprehensive account of whether these journals have policies, procedures, and practices that are consistent with standards for open science and thereby facilitate the publication of trustworthy findings to inform evidence-based policy. Through this demonstration, we expect to identify ways to refine the TOP Guidelines and the TOP Factor. Refinements could include: improving templates for adoption in journal instructions to authors, manuscript submission systems, and published articles; revising explanatory guidance intended to enhance the use, understanding, and dissemination of the TOP Guidelines; and clarifying the distinctions among different levels of implementation. Research materials are available on the Open Science Framework: https://osf.io/txyr3/ .</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"9"},"PeriodicalIF":0.0,"publicationDate":"2021-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8173977/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39055385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-sectional study of medical advertisements in a national general medical journal: evidence, cost, and safe use of advertised versus comparative drugs.","authors":"Kim Boesen, Anders Lykkemark Simonsen, Karsten Juhl Jørgensen, Peter C Gøtzsche","doi":"10.1186/s41073-021-00111-9","DOIUrl":"https://doi.org/10.1186/s41073-021-00111-9","url":null,"abstract":"<p><strong>Background: </strong>Healthcare professionals are exposed to advertisements for prescription drugs in medical journals. Such advertisements may increase prescriptions of new drugs at the expense of older treatments even when they have no added benefits, are more harmful, and are more expensive. The publication of medical advertisements therefore raises ethical questions related to editorial integrity.</p><p><strong>Methods: </strong>We conducted a descriptive cross-sectional study of all medical advertisements published in the Journal of the Danish Medical Association in 2015. Drugs advertised 6 times or more were compared with older comparators: (1) comparative evidence of added benefit; (2) Defined Daily Dose cost; (3) regulatory safety announcements; and (4) completed and ongoing post-marketing studies 3 years after advertising.</p><p><strong>Results: </strong>We found 158 medical advertisements for 35 prescription drugs published in 24 issues during 2015, with a median of 7 advertisements per issue (range 0 to 11). Four drug groups and 5 single drugs were advertised 6 times or more, for a total of 10 indications, and we made 14 comparisons with older treatments. We found: (1) 'no added benefit' in 4 (29%) of 14 comparisons, 'uncertain benefits' in 7 (50%), and 'no evidence' in 3 (21%) comparisons. In no comparison did we find evidence of 'substantial added benefit' for the new drug; (2) advertised drugs were 2 to 196 times (median 6) more expensive per Defined Daily Dose; (3) 11 safety announcements for five advertised drugs were issued compared to one announcement for one comparator drug; (4) 20 post-marketing studies (7 completed, 13 ongoing) were requested for the advertised drugs versus 10 studies (4 completed, 6 ongoing) for the comparator drugs, and 7 studies (2 completed, 5 ongoing) assessed both an advertised and a comparator drug at 3-year follow-up.</p><p><strong>Conclusions and relevance: </strong>In this cross-sectional study of medical advertisements published in the Journal of the Danish Medical Association during 2015, the most advertised drugs did not have documented substantial added benefits over older treatments, whereas they were substantially more expensive. From January 2021, the Journal of the Danish Medical Association no longer publishes medical advertisements.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"8"},"PeriodicalIF":0.0,"publicationDate":"2021-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-021-00111-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38968548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Explaining variance in perceived research misbehavior: results from a survey among academic researchers in Amsterdam.","authors":"Tamarinde Haven, Joeri Tijdink, Brian Martinson, Lex Bouter, Frans Oort","doi":"10.1186/s41073-021-00110-w","DOIUrl":"https://doi.org/10.1186/s41073-021-00110-w","url":null,"abstract":"<p><strong>Background: </strong>Concerns about research misbehavior in academic science have sparked interest in the factors that may explain research misbehavior. Often three clusters of factors are distinguished: individual factors, climate factors and publication factors. Our research question was: to what extent can individual, climate and publication factors explain the variance in frequently perceived research misbehaviors?</p><p><strong>Methods: </strong>From May 2017 until July 2017, we conducted a survey study among academic researchers in Amsterdam. The survey included three measurement instruments that we previously reported individual results of and here we integrate these findings.</p><p><strong>Results: </strong>One thousand two hundred ninety-eight researchers completed the survey (response rate: 17%). Results showed that individual, climate and publication factors combined explained 34% of variance in perceived frequency of research misbehavior. Individual factors explained 7%, climate factors explained 22% and publication factors 16%.</p><p><strong>Conclusions: </strong>Our results suggest that the perceptions of the research climate play a substantial role in explaining variance in research misbehavior. This suggests that efforts to improve departmental norms might have a salutary effect on behavior.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2021-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-021-00110-w","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38944409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}