Title: Just because you’re paranoid doesn’t mean they won’t side with the plaintiff: Examining perceptions of liability about AI in radiology
Authors: Michael H. Bernstein, Brian Sheppard, Michael A. Bruno, Parker S. Lay, Grayson L. Baird
DOI: https://doi.org/10.1101/2024.07.30.24311234
Published: 2024-08-01, medRxiv - Medical Ethics
Abstract:
Background: Artificial Intelligence (AI) will have unintended consequences for radiology. When a radiologist misses an abnormality on an image, their liability may differ according to whether or not AI also missed the abnormality.

Title: Ethics in medical research: A quantitative analysis of the observations of Ethics Committees in research protocols.
Authors: Santiago Vasco-Morales, Gabriel Alejandro Vasco-Toapanta, Cristhian Santiago Vasco-Toapanta, Paola Toapanta-Pinta
DOI: https://doi.org/10.1101/2024.06.23.24309373
Published: 2024-06-24, medRxiv - Medical Ethics
Abstract:
Objective: To determine the frequency of observations made by Research Ethics Committees (RECs) regarding non-compliance with ethical principles in research.
Methods: We searched PubMed, Scopus, and Google Scholar for articles published up to November 30, 2023. Single-proportion meta-analyses were performed with R v3.6.1. PROSPERO registration: CRD42021291893.
Results: Nine publications were reviewed, including cross-sectional, retrospective cohort, and descriptive studies. Lack of adherence to the ethical principle of justice was detected in up to 100% of the protocols evaluated; such observations accounted for 9% (95% CI: 7-12) of observations in Latin America and 15% (95% CI: 9-24) in Europe. Observations concerning autonomy were made in 26% (95% CI: 20-33) of the protocols, and in 17% (95% CI: 13-22) of experimental studies. For beneficence, lack of adherence ranged from 41.17% to 77.38% of the protocols evaluated, and observations per protocol ranged from 5.26% to 27.11%.
Discussion: The findings highlight disparities between regions and types of studies, reflecting differences in culture, interpretation, and human and institutional resources. RECs should ensure thorough and equitable assessments, promote fair participant selection, respect autonomy, and maximize benefits while minimizing risks to participants. This study provides an assessment of ethical practices in medical research, highlighting key areas for improving compliance with fundamental ethical principles.

Title: Ethics practices associated with reusing health data: An assessment of patient registries
Authors: Olmo R. van den Akker, Susanne Stark, Daniel Strech
DOI: https://doi.org/10.1101/2024.04.26.24306459
Published: 2024-04-29, medRxiv - Medical Ethics
Abstract:
Background: As routinely collected patient data have become increasingly accessible over the years, more and more attention has been directed at the ethics of using such data for research purposes. Patient data are often available to researchers through patient registries that typically collect data of patients with a specific disease. While ethical guidelines for using patient data are presented frequently in research papers and institutional documents, it is currently unknown how patient registries implement the recommendations from these guidelines in practice and how they communicate their practices. In this project, we assessed to what extent a sample of 51 patient registries provides information about a range of ethics practices.

Title: Simulated Misuse of Large Language Models and Clinical Credit Systems
Authors: James Anibal, Hannah Huth, Jasmine Gunkel, Bradford Wood
DOI: https://doi.org/10.1101/2024.04.10.24305470
Published: 2024-04-12, medRxiv - Medical Ethics
Abstract: Large language models (LLMs) have been proposed to support many healthcare tasks, including disease diagnostics and treatment personalization. While AI models may be applied to assist or enhance the delivery of healthcare, there is also a risk of misuse. LLMs could be used to allocate resources based on unfair, inaccurate, or unjust criteria. For example, a social credit system uses big data to assess “trustworthiness” in society, punishing those who score poorly based on evaluation metrics defined only by a power structure (corporate entity, governing body). Such a system may be amplified by powerful LLMs which can rate individuals based on high-dimensional multimodal data: financial transactions, internet activity, and other behavioural inputs. Healthcare data is perhaps the most sensitive information that can be collected and could potentially be used to violate civil liberty via a “clinical credit system”, which may include limiting or rationing access to standard care. This report simulates how clinical datasets might be exploited and proposes strategies to mitigate the risks inherent to the development of AI models for healthcare.

Title: Challenges in Institutional Ethical Review Process and Approval for International Multicenter Clinical Studies in Lower and Middle-Income Countries: the case of PARITY Study
Authors: Eliana Lopez Baron, Qalab Abbas, Paula Caporal, Asya Agulnik, Jonah E. Attebery, Adrian Holloway, Niranjan Kissoon, Celia Isabel Mulgado-Aguas, Kokou Amegan-Aho, Marianne Majdalani, Carmen Ocampo, Havugarurema Pascal, Erika Miller, Aimable Kanyamuhunga, Atnafu Mekonnen Tekleab, Tigist Bacha, Sebastian Gonzalez, Adnan T. Bhutta, Teresa B. Kortz, Srinivas Murthy, Kenneth E. Remy
DOI: https://doi.org/10.1101/2024.03.20.24304598
Published: 2024-03-22, medRxiv - Medical Ethics
Abstract:
Objectives: To describe the regulatory process, variability, and challenges faced by pediatric researchers in low- and middle-income countries (LMICs) during the institutional review board (IRB) process of an international multicenter observational point prevalence study (Global PARITY).
Design: A 16-question multiple-choice online survey was sent to site principal investigators (PIs) at PARITY study participating centers to explore characteristics of the IRB process, costs, and barriers to research approval. A shorter survey was employed for sites that expressed interest in participating in Global PARITY and started the approval process, but ultimately did not participate in data collection (non-participating sites), to assess IRB characteristics.
Subjects: PIs from the Global PARITY study.
Interventions: None.
Results: Ninety-one sites pursued local IRB approval, and 46 sites obtained IRB approval and completed data collection. Forty-six (100%) participating centers and 21 (47%) non-participating centers completed the survey. Despite the study receiving approval from the lead center and being categorized as a minimal-risk study, 36 (78%) of the hospitals involved in the PARITY study required their own full board review. There was a significant difference between participating and non-participating sites in IRB approval of a consent waiver and in the requirement for a legal review of the protocol. The greatest challenges to research identified by non-participating sites were a lack of research time and a lack of institutional support.
Conclusions: Global collaborative research is crucial to increase our understanding of pediatric critical care conditions in hospitals of all resource levels, and IRBs are required to ensure that this research complies with ethical standards. Critical barriers restrict research activities in some resource-limited countries. Increasing the efficiency and accessibility of local IRB review could greatly improve participation of resource-limited sites and enrollment of vulnerable populations.

Title: A Systematic Examination of Generative Artificial Intelligence (GAI) Usage Guidelines for Scholarly Publishing in Medical Journals
Authors: Shuhui Yin, Peiyi Lu, Zhuoran Xu, Zi Lian, Chenfei Ye, Chihua Li
DOI: https://doi.org/10.1101/2024.03.19.24304550
Published: 2024-03-20, medRxiv - Medical Ethics
Abstract:
Background: A thorough and in-depth examination of generative artificial intelligence (GAI) usage guidelines in medical journals will inform potential gaps and promote proper GAI usage in scholarly publishing. This study aims to examine the provision and specificity of GAI usage guidelines and their relationships with journal characteristics.
Methods: From the SCImago Journal Rank (SJR) list for medicine in 2022, we selected 98 journals as top journals to represent highly indexed journals and 144 as whole-spectrum sample journals to represent all medical journals. We examined their GAI usage guidelines for scholarly publishing between December 2023 and January 2024.
Results: Compared to whole-spectrum sample journals, the top journals were more likely to provide author guidelines (64.3% vs. 27.8%) and reviewer guidelines (11.2% vs. 0.0%), as well as to refer to external guidelines (85.7% vs. 74.3%). Probit models showed that neither SJR score nor region was associated with the provision of these guidelines among top journals. However, among whole-spectrum sample journals, SJR score was positively associated with the provision of author guidelines (0.85, 95% CI 0.49 to 1.25) and references to external guidelines (2.01, 95% CI 1.24 to 3.65). Linear models showed that SJR score was positively associated with the specificity level of author and reviewer guidelines among whole-spectrum sample journals (1.21, 95% CI 0.72 to 1.70); no such pattern was observed among top journals.
Conclusions: The provision of GAI usage guidelines is limited across medical journals, especially for reviewer guidelines. The lack of specificity and consistency in existing guidelines highlights areas deserving improvement. These findings suggest that immediate attention is needed to guide GAI usage in scholarly publishing in medical journals.

Title: A survey of experts to identify methods to detect problematic studies: Stage 1 of the INSPECT-SR Project
Authors: Jack D Wilkinson, Calvin Heal, Georgios A Antoniou, Ella Flemyng, Alison Avenell, Virginia Barbour, Esmee M Bordewijk, Nicholas JL Brown, Mike Clarke, Jo Dumville, Lyle C Gurrin, Jill A Hayden, Kylie E Hunter, Emily Lam, Toby Lasserson, Tianjing Li, Sarah Lensen, Jianping Liu, Andreas Lundh, Gideon Meyerowitz-Katz, Ben W Mol, Neil E O'Connell, Lisa Parker, Barbara Redman, Anna Lene Seidler, Kyle Sheldrick, Emma Sydenham, Madelon van Wely, Lisa Bero, Jamie J Kirkham
DOI: https://doi.org/10.1101/2024.03.18.24304479
Published: 2024-03-19, medRxiv - Medical Ethics
Abstract:
Background: Randomised controlled trials (RCTs) inform healthcare decisions. Unfortunately, some published RCTs contain false data, and some appear to have been entirely fabricated. Systematic reviews are performed to identify and synthesise all RCTs which have been conducted on a given topic. This means that any of these 'problematic studies' are likely to be included, but there are no agreed methods for identifying them. The INSPECT-SR project is developing a tool to identify problematic RCTs in systematic reviews of healthcare-related interventions. The tool will guide the user through a series of 'checks' to determine a study's authenticity. The first objective in the development process is to assemble a comprehensive list of checks to consider for inclusion.
Methods: We assembled an initial list of checks for assessing the authenticity of research studies, with no restriction to RCTs, and categorised these into five domains: inspecting results in the paper; inspecting the research team; inspecting conduct, governance, and transparency; inspecting text and publication details; and inspecting the individual participant data. We implemented this list as an online survey, and invited people with expertise and experience of assessing potentially problematic studies to participate through professional networks and online forums. Participants were invited to provide feedback on the checks on the list, and were asked to describe any additional checks they knew of which were not featured in the list.
Results: Extensive feedback on an initial list of 102 checks was provided by 71 participants based in 16 countries across five continents. Fourteen new checks were proposed across the five domains, and suggestions were made to reword checks on the initial list. An updated list of checks was constructed, comprising 116 checks. Many participants expressed a lack of familiarity with statistical checks, and emphasised the importance of the feasibility of the tool.
Conclusions: A comprehensive list of trustworthiness checks has been produced. The checks will be evaluated to determine which should be included in the INSPECT-SR tool.

{"title":"Student Cognitive Enhancement with Non-Prescribed Modafinil. Is it Cheating?","authors":"Alexia Kesta, Philip M. Newton","doi":"10.1101/2024.03.01.24303594","DOIUrl":"https://doi.org/10.1101/2024.03.01.24303594","url":null,"abstract":"Modafinil, a prescription-only drug, it is mainly used to treat narcolepsy and sleep disorders, but it is also used, without a prescription, as a cognitive enhancer by ~10% of UK University students. Previous research has focused on the prevalence of, and motivations for, these behaviours. Here we focused specifically on determining whether students view this behaviour as cheating. We used a scenario-based approach to quantify, and qualitatively understand, student views on this topic. Most students did not view this behaviour as cheating, in part due to similarities with freely available stimulants such as caffeine, and a view that cognitive enhancement does not confer new knowledge or understanding. Although a minority of students did view it as cheating, they also expressed strong views, based in part on basic questions of fairness and access. Few students did not have a view either way. These views remained largely unchanged even when presented with considerations of other moderators of the ethics of cognitive enhancement with modafinil.","PeriodicalId":501154,"journal":{"name":"medRxiv - Medical Ethics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140033001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Improving research transparency with individualized report cards: A feasibility study in clinical trials at a large university medical center
Authors: Delwen L Franzen, Maia Salholz-Hillel, Stephanie Müller-Ohlraun, Daniel Strech
DOI: https://doi.org/10.1101/2024.02.10.24302619
Published: 2024-02-11, medRxiv - Medical Ethics
Abstract: Research transparency is crucial for ensuring the relevance, integrity, and reliability of scientific findings. However, previous work indicates room for improvement across transparency practices. The primary objective of this study was to develop an extensible tool to provide individualized feedback and guidance for improved transparency across phases of a study. Our secondary objective was to assess the feasibility of implementing this tool to improve transparency in clinical trials. We developed study-level "report cards" that combine tailored feedback and guidance to investigators across several transparency practices, including prospective registration, availability of summary results, and open access publication. The report cards were generated through an automated pipeline for scalability. We also developed an infosheet that summarizes relevant laws, guidelines, and resources relating to transparency. To assess the feasibility of using these tools to improve transparency, we conducted a single-arm intervention study at Berlin's university medical center, Charité – Universitätsmedizin Berlin. Investigators (n = 92) of 155 clinical trials were sent individualized report cards and the infosheet, and were surveyed to assess the tools' perceived usefulness. We also evaluated the included trials for improvements in transparency following the intervention. Survey responses indicated general appreciation for the report cards and infosheet, with a majority of participants finding them helpful for building awareness of the transparency of their trial and of transparency requirements. However, improvement in transparency practices was minimal and largely limited to linking publications in registries. Investigators also commented on various challenges associated with implementing transparency, including a lack of clarity around best practices and institutional hurdles. This study demonstrates the potential of developing and using tools, such as report cards, to provide individualized feedback at scale to investigators on the transparency of their study. While these tools were positively received by investigators, the limited improvement in transparency practices suggests that awareness alone is likely not sufficient to drive improvement. Future research and implementation efforts may adapt the tools to additional practices or research areas, and explore integrated approaches that combine the report cards with incentives and institutional support to effectively strengthen transparency in research.

Title: Reporting of Retrospective Registration in Clinical Trial Publications: a Cross-Sectional Study of German Trials
Authors: Martin Haslberger, Stefanie Gestrich, Daniel Strech
DOI: https://doi.org/10.1101/2022.10.09.22280784
Published: 2023-03-07, medRxiv - Medical Ethics
Abstract:
Objective: Prospective registration has been widely implemented and accepted as a best practice in clinical research, but retrospective registration is still commonly found. We assessed to what extent retrospective registration is reported transparently in journal publications, and investigated factors associated with transparent reporting.
Design: We used a dataset of trials registered in ClinicalTrials.gov or Deutsches Register Klinischer Studien, with a German university medical center as the lead center, completed between 2009 and 2017, and with a corresponding peer-reviewed results publication. We extracted all registration statements from results publications of retrospectively registered trials and assessed whether they mention or justify the retrospective registration. We analyzed associations of retrospective registration, and of the reporting thereof, with registration number reporting, International Committee of Medical Journal Editors (ICMJE) membership or ICMJE-following status, and industry sponsorship, using chi-squared or Fisher's exact tests.
Results: In the dataset of 1927 trials with a corresponding results publication, 956 (53.7%) were retrospectively registered. Of those, 2.2% (21) explicitly report the retrospective registration in the abstract and 3.5% (33) in the full text. In 2.1% (20) of publications, the authors provide an explanation for the retrospective registration in the full text. Registration numbers were significantly underreported in abstracts of retrospectively registered trials compared to prospectively registered trials. Publications in ICMJE member journals did not show statistically significantly higher rates of either prospective registration or disclosure of retrospective registration, and publications in journals claiming to follow ICMJE recommendations showed statistically significantly lower rates than publications in non-ICMJE-following journals. Industry sponsorship of trials was significantly associated with higher rates of prospective registration, but not with transparent registration reporting.
Conclusions: Contrary to ICMJE guidance, retrospective registration is disclosed and explained in only a small number of retrospectively registered studies. Disclosure of the retrospective nature of the registration would require only a brief statement in the manuscript and could easily be implemented by journals.
