{"title":"Guidelines for open peer review implementation.","authors":"Tony Ross-Hellauer, Edit Görögh","doi":"10.1186/s41073-019-0063-9","DOIUrl":"10.1186/s41073-019-0063-9","url":null,"abstract":"<p><p>Open peer review (OPR) is moving into the mainstream, but it is often poorly understood and surveys of researcher attitudes show important barriers to implementation. As more journals move to implement and experiment with the myriad of innovations covered by this term, there is a clear need for best practice guidelines to guide implementation. This brief article aims to address this knowledge gap, reporting work based on an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR. Although the advice is aimed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable for the implementation of OPR in other areas (e.g., books, conference submissions).</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"4"},"PeriodicalIF":0.0,"publicationDate":"2019-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s41073-019-0063-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37045643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quality of reports of investigations of research integrity by academic institutions
Andrew Grey, Mark Bolland, Greg Gamble, Alison Avenell
Research Integrity and Peer Review 4:3 (2019-02-19). doi:10.1186/s41073-019-0062-x

Background: Academic institutions play important roles in protecting and preserving research integrity. Concerns have been expressed about the objectivity, adequacy and transparency of institutional investigations of potentially compromised research integrity. We assessed the reports provided to us of investigations by three academic institutions of a large body of overlapping research with potentially compromised integrity.

Methods: In 2017, we raised concerns with four academic institutions about the integrity of more than 200 publications co-authored by an overlapping set of researchers. Each institution initiated an investigation. By November 2018, three had reported the results of their investigations to us, but only one report was publicly available. Two investigators independently assessed each available report using a published 26-item checklist designed to determine the quality and adequacy of institutional investigations of research integrity. Each assessor recorded additional comments ad hoc.

Results: Concerns raised with the institutions were overlapping and wide-ranging, and included both general and publication-specific concerns. The number of potentially affected publications at individual institutions ranged from 34 to 200. The duration of investigation by the three institutions which provided reports was 8-17 months. These investigations covered 14%, 15% and 77%, respectively, of potentially affected publications. Between-assessor agreement using the quality checklist was 0.68, 0.72 and 0.65 for the three reports. Only 4/78 individual checklist items were addressed adequately; a further 14 could not be assessed. Each report was graded inadequate overall. Reports failed to address publication-specific concerns and focussed more strongly on determining research misconduct than on evaluating the integrity of publications.

Conclusions: Our analyses identify important deficiencies in the quality and reporting of institutional investigations of concerns about the integrity of a large body of research reported by an overlapping set of researchers. They reinforce disquiet about the ability of institutions to rigorously and objectively oversee the integrity of research conducted by their own employees.
Reporting in the abstracts presented at the 5th AfriNEAD (African Network for Evidence-to-Action in Disability) Conference in Ghana
Eric Badu, Paul Okyere, Diane Bell, Naomi Gyamfi, Maxwell Peprah Opoku, Peter Agyei-Baffour, Anthony Kwaku Edusei
Research Integrity and Peer Review 4:1 (2019-01-16). doi:10.1186/s41073-018-0061-3

Introduction: The abstracts of a conference are important for informing participants about the results being communicated. However, reporting in conference abstracts in disability research is poor. This paper aims to assess the reporting in the abstracts presented at the 5th African Network for Evidence-to-Action in Disability (AfriNEAD) Conference in Ghana.

Methods: This descriptive study extracted information from the abstracts presented at the 5th AfriNEAD Conference. Three reviewers independently reviewed all the included abstracts using a predefined data extraction form. Descriptive statistics were used to analyze the extracted information, using Stata version 15.

Results: Of the 76 abstracts assessed, 54 met the inclusion criteria and 22 were excluded. More than half of the included abstracts (32/54; 59.3%) were of studies conducted in Ghana. Large proportions of the included abstracts did not report the study design (37/54; 68.5%), the type of analysis performed (30/54; 55.6%), the sampling (27/54; 50.0%), or the sample size (18/54; 33.3%). Almost none of the included abstracts reported the age distribution or gender of the participants.

Conclusion: The study findings confirm that there is poor reporting of methods and findings in conference abstracts. Future conference organizers should critically examine abstracts to ensure that these issues are adequately addressed, so that findings are effectively communicated to participants.
{"title":"Replicability and replication in the humanities.","authors":"Rik Peels","doi":"10.1186/s41073-018-0060-4","DOIUrl":"10.1186/s41073-018-0060-4","url":null,"abstract":"<p><p>A large number of scientists and several news platforms have, over the last few years, been speaking of a replication crisis in various academic disciplines, especially the biomedical and social sciences. This paper answers the novel question of whether we should also pursue replication in the humanities. First, I create more conceptual clarity by defining, in addition to the term \"humanities,\" various key terms in the debate on replication, such as \"reproduction\" and \"replicability.\" In doing so, I pay attention to what is supposed to be the object of replication: certain studies, particular inferences, of specific results. After that, I spell out three reasons for thinking that replication in the humanities is not possible and argue that they are unconvincing. Subsequently, I give a more detailed case for thinking that replication in the humanities is possible. Finally, I explain why such replication in the humanities is not only possible, but also desirable.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"4 ","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2019-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6348612/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36918266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Professional medical writing support and the quality, ethics and timeliness of clinical trial reporting: a systematic review
O. Evuarherhe, W. Gattrell, Richard White, C. Winchester
Research Integrity and Peer Review 4 (2019). doi:10.1186/s41073-019-0073-7
Protocol for the development of a CONSORT extension for RCTs using cohorts and routinely collected health data
Linda Kwakkenbos, Edmund Juszczak, Lars G Hemkens, Margaret Sampson, Ole Fröbert, Clare Relton, Chris Gale, Merrick Zwarenstein, Sinéad M Langan, David Moher, Isabelle Boutron, Philippe Ravaud, Marion K Campbell, Kimberly A Mc Cord, Tjeerd P van Staa, Lehana Thabane, Rudolf Uher, Helena M Verkooijen, Eric I Benchimol, David Erlinge, Maureen Sauvé, David Torgerson, Brett D Thombs
Research Integrity and Peer Review 3:9 (2018-10-29). doi:10.1186/s41073-018-0053-3

Background: Randomized controlled trials (RCTs) are often complex and expensive to perform. Less than one third achieve planned recruitment targets, follow-up can be labor-intensive, and many have limited real-world generalizability. Designs for RCTs conducted using cohorts and routinely collected health data, including registries, electronic health records, and administrative databases, have been proposed to address these challenges and are being rapidly adopted. These designs, however, are relatively recent innovations, and published RCT reports often do not describe important aspects of their methodology in a standardized way. Our objective is to extend the Consolidated Standards of Reporting Trials (CONSORT) statement with a consensus-driven reporting guideline for RCTs using cohorts and routinely collected health data.

Methods: The development of this CONSORT extension will consist of five phases. Phase 1 (completed) consisted of the project launch, including fundraising, the establishment of a research team, and development of a conceptual framework. In phase 2, a systematic review will be performed to identify publications (1) that describe methods or reporting considerations for RCTs conducted using cohorts and routinely collected health data or (2) that are protocols or report results from such RCTs. An initial "long list" of possible modifications to CONSORT checklist items and possible new items for the reporting guideline will be generated based on the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) and The REporting of studies Conducted using Observational Routinely-collected health Data (RECORD) statements. Additional possible modifications and new items will be identified based on the results of the systematic review. Phase 3 will consist of a three-round Delphi exercise with methods and content experts to evaluate the "long list" and generate a "short list" of key items. In phase 4, these items will serve as the basis for an in-person consensus meeting to finalize a core set of items to be included in the reporting guideline and checklist. Phase 5 will involve drafting the checklist and elaboration-explanation documents, and dissemination and implementation of the guideline.

Discussion: Development of this CONSORT extension will contribute to more transparent reporting of RCTs conducted using cohorts and routinely collected health data.
Designing integrated research integrity training: authorship, publication, and peer review
Mark Hooper, Virginia Barbour, Anne Walsh, Stephanie Bradbury, Jane Jacobs
Research Integrity and Peer Review (2018-02-26). doi:10.1186/s41073-018-0046-2

Abstract: This paper describes the experience of an academic institution, the Queensland University of Technology (QUT), in developing training courses about research integrity practices in authorship, publication, and journal peer review. The importance of providing research integrity training in these areas is now widely accepted; however, it remains an open question how best to conduct this training. For this reason, it is vital for institutions, journals, and peak bodies to share learnings. We describe how we have collaborated across our institution to develop training that supports QUT's principles and is in line with insights from contemporary research on best practices in learning design, universal design, and faculty involvement. We also discuss how we have refined these courses iteratively over time and consider potential mechanisms for evaluating the effectiveness of the courses more formally.
Simple decision-tree tool to facilitate author identification of reporting guidelines during submission: a before-after study
Daniel R Shanahan, Ines Lopes de Sousa, Diana M Marshall
Research Integrity and Peer Review 2:20 (2017-12-18). doi:10.1186/s41073-017-0044-9

Background: There is evidence that direct journal endorsement of reporting guidelines can lead to important improvements in the quality and reliability of published research. However, over the last 20 years there has been a proliferation of reporting guidelines for different study designs, making it impractical for a journal to explicitly endorse them all. The objective of this study was to investigate whether a decision-tree tool made available during the submission process facilitates author identification of the relevant reporting guideline.

Methods: This was a prospective 14-week before-after study across four speciality medical research journals. During the submission process, authors were prompted to follow the relevant reporting guideline from the EQUATOR Network and asked to confirm that they had followed the guideline ('before'). After 7 weeks, this prompt was updated to include a direct link to the decision-tree tool and an additional prompt for those authors who stated that 'no guidelines were applicable' ('after'). For each article submitted, we recorded the authors' response, which guideline they followed (if any), and which reporting guideline they should have followed (including none relevant).

Results: Overall, 590 manuscripts were included in this analysis: 300 in the before cohort and 290 in the after cohort. There were relevant reporting guidelines for 75% of manuscripts in each group; STROBE was the most commonly applicable reporting guideline, relevant for 35% (n = 106) and 37% (n = 106) of manuscripts, respectively. Use of the tool was associated with an 8.4% improvement in the number of authors correctly identifying the relevant reporting guideline for their study (p < 0.0001), a 14% reduction in the number of authors incorrectly stating that there were no relevant reporting guidelines (p < 0.0001), and a 1.7% reduction in authors choosing a guideline (p = 0.10). However, the 'after' cohort also saw a significant increase in the number of authors stating that there were relevant reporting guidelines for their study, but not specifying which (34% vs 29%; p = 0.04).

Conclusion: This study suggests that use of a decision-tree tool during manuscript submission is associated with improved author identification of the relevant reporting guidelines for their study type; however, the majority of authors still failed to correctly identify the relevant guidelines.
'Are you siding with a personality or the grant proposal?': observations on how peer review panels function
John Coveney, Danielle L Herbert, Kathy Hill, Karen E Mow, Nicholas Graves, Adrian Barnett
Research Integrity and Peer Review 2:19 (2017-12-04). doi:10.1186/s41073-017-0043-x

Background: In Australia, the peer review process for competitive funding is usually conducted by a peer review group in conjunction with prior assessment from external assessors. This process is quite mysterious to those outside it. The purpose of this research was to throw light on grant review panels (sometimes called the 'black box') through an examination of the impact of panel procedures, panel composition and panel dynamics on decision-making in the grant review process. A further purpose was to compare experience of a simplified review process with the more conventional processes used in assessing grant proposals in Australia.

Methods: This project was one aspect of a larger study into the costs and benefits of a simplified peer review process. The Queensland University of Technology (QUT) simplified process was compared with the National Health and Medical Research Council's (NHMRC) more complex process. Grant review panellists involved in both processes were interviewed about their experience of the decision-making process that assesses the excellence of an application. All interviews were recorded and transcribed. Each transcription was de-identified and returned to the respondent for review. Final transcripts were read repeatedly and coded, and similar codes were amalgamated into categories that were used to build themes. Final themes were shared with the research team for feedback.

Results: Two major themes arose from the research: (1) assessing grant proposals and (2) factors influencing the fairness, integrity and objectivity of review. Issues discussed included the quality of writing in a grant proposal, comparison of the two review methods, the purpose and use of the rebuttal, assessing the financial value of funded projects, the importance of the experience of the panel membership, the role of track record, and the impact of group dynamics on the review process. The research also examined the influence of research culture on decision-making in grant review panels. One of the aims of this study was to compare a simplified review process with more conventional processes; generally, participants were supportive of the simplified process.

Conclusions: Transparency in the grant review process will result in better appreciation of the outcome. Despite the provision of clear guidelines for peer review, reviewing processes are likely to be subjective to the extent that different reviewers apply different rules. The peer review process will come under more scrutiny as funding for research becomes even more competitive. There is justification for further research on the process, especially of a kind that taps more deeply into the 'black box' of peer review.
Percentage-based Author Contribution Index: a universal measure of author contribution to scientific articles
Stéphane Boyer, Takayoshi Ikeda, Marie-Caroline Lefort, Jagoba Malumbres-Olarte, Jason M Schmidt
Research Integrity and Peer Review 2:18 (2017-11-03). doi:10.1186/s41073-017-0042-y

Background: Deciphering the amount of work provided by different co-authors of a scientific paper has been a recurrent problem in science. Despite the myriad of metrics available, the scientific community still largely relies on position in the list of authors to evaluate contributions, a metric that attributes subjective and unfounded credit to co-authors. We propose an easy-to-apply, universally comparable and fair metric to measure and report co-authors' contributions in the scientific literature.

Methods: The proposed Author Contribution Index (ACI) is based on contribution percentages provided by the authors, preferably at the time of submission. Researchers can use ACI to compare the contributions of different authors, describe the contribution profile of a particular researcher, or analyse how contribution changes through time. We provide such an analysis based on contribution percentages provided by 97 scientists from the field of ecology who voluntarily responded to an online anonymous survey.

Results: ACI is simple to understand and implement because it is based solely on percentage contributions and the number of co-authors. It provides a continuous score that reflects the contribution of one author compared to the average contribution of all other authors. For example, ACI(i) = 3 means that author i contributed three times more than what the other authors contributed on average. Our analysis comprised 836 papers published in 2014-2016 and revealed patterns of ACI values that relate to career advancement.

Conclusion: Many author contribution indices have been proposed, but none has really been adopted by scientific journals. Many of the proposed solutions are either too complicated, not accurate enough, or not comparable across articles, authors and disciplines. The Author Contribution Index presented here addresses these three major issues and has the potential to contribute to more transparency in the scientific literature. If adopted by scientific journals, it could provide job seekers, recruiters and evaluating bodies with a tool to gather information that is essential to them and cannot otherwise be easily and accurately obtained. We also suggest that scientists use the index regardless of whether it is implemented by journals.