Toward a unified understanding of drug-drug interactions: mapping Japanese drug codes to RxNorm concepts.
Yukinobu Kawakami, Takuya Matsuda, N. Hidaka, Mamoru Tanaka, Eizen Kimura
Journal of the American Medical Informatics Association (JAMIA), published 2024-05-17. DOI: 10.1093/jamia/ocae094

OBJECTIVES: Linking information on Japanese pharmaceutical products to global knowledge bases (KBs) would enhance international collaborative research and yield valuable insights. However, public access to mappings of Japanese pharmaceutical products that use international controlled vocabularies remains limited. This study mapped YJ codes to RxNorm ingredient classes, providing new insights by comparing Japanese and international drug-drug interaction (DDI) information using a case study methodology.

MATERIALS AND METHODS: Tables linking YJ codes to RxNorm concepts were created using the application programming interfaces of the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the National Library of Medicine. A comparative analysis of Japanese and international DDI information was then performed by linking to an international DDI KB.

RESULTS: There was limited agreement between the Japanese and international DDI severity classifications. Cross-tabulation of Japanese and international DDIs by severity showed that 213 combinations classified as serious DDIs by an international KB were missing from the Japanese DDI information.

DISCUSSION: International criteria for DDIs should be standardized to ensure consistency in the classification of their severity.

CONCLUSION: The classification of DDI severity remains highly variable. It is imperative to augment the repository of critical DDI information, which would revalidate the utility of fostering collaborations with global KBs.
{"title":"Implementation of a health information technology safety classification system in the Veterans Health Administration's Informatics Patient Safety Office.","authors":"Danielle Kato, Joe Lucas, Dean F. Sittig","doi":"10.1093/jamia/ocae107","DOIUrl":"https://doi.org/10.1093/jamia/ocae107","url":null,"abstract":"OBJECTIVE\u0000Implement the 5-type health information technology (HIT) patient safety concern classification system for HIT patient safety issues reported to the Veterans Health Administration's Informatics Patient Safety Office.\u0000\u0000\u0000MATERIALS AND METHODS\u0000A team of informatics safety analysts retrospectively classified 1 year of HIT patient safety issues by type of HIT patient safety concern using consensus discussions. The processes established during retrospective classification were then applied to incoming HIT safety issues moving forward.\u0000\u0000\u0000RESULTS\u0000Of 140 issues retrospectively reviewed, 124 met the classification criteria. The majority were HIT failures (eg, software defects) (33.1%) or configuration and implementation problems (29.8%). Unmet user needs and external system interactions accounted for 20.2% and 10.5%, respectively. Absence of HIT safety features accounted for 2.4% of issues, and 4% did not have enough information to classify.\u0000\u0000\u0000CONCLUSION\u0000The 5-type HIT safety concern classification framework generated actionable categories helping organizations effectively respond to HIT patient safety risks.","PeriodicalId":236137,"journal":{"name":"Journal of the American Medical Informatics Association : JAMIA","volume":"36 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140966430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An example of leveraging AI for documentation: ChatGPT-generated nursing care plan for an older adult with lung cancer.
F. C. Dos Santos, Lisa G Johnson, Olatunde O. Madandola, Karen J B Priola, Yingwei Yao, Tamara G. R. Macieira, Gail M. Keenan
JAMIA, published 2024-05-17. DOI: 10.1093/jamia/ocae116

OBJECTIVE: Our article demonstrates the effectiveness of using a validated framework to create a ChatGPT prompt that generates valid nursing care plan suggestions for a hypothetical older patient with lung cancer.

METHOD: This study describes the methodology for creating ChatGPT prompts that generate consistent care plan suggestions and its application to a lung cancer case scenario. After entering a nursing assessment of the patient's condition into ChatGPT, we asked it to generate care plan suggestions. We then assessed the quality of the care plans produced by ChatGPT.

RESULTS: Although only 11 of the 16 suggested care plan terms used standardized nursing terminology, the ChatGPT-generated care plan closely matched the gold standard in scope and nature, correctly prioritizing oxygenation and ventilation needs.

CONCLUSION: Using a validated framework prompt to generate nursing care plan suggestions with ChatGPT demonstrates its potential value as a decision support tool for optimizing cancer care documentation.
Utilizing ChatGPT as a scientific reasoning engine to differentiate conflicting evidence and summarize challenges in controversial clinical questions.
Shiyao Xie, Wenjing Zhao, Guanghui Deng, Guohua He, Na He, Zhenhua Lu, Weihua Hu, Mingming Zhao, Jian Du
JAMIA, published 2024-05-17. DOI: 10.1093/jamia/ocae100

OBJECTIVE: Synthesizing and evaluating inconsistent medical evidence is essential in evidence-based medicine. This study aimed to employ ChatGPT as a scientific reasoning engine to identify conflicting clinical evidence and summarize unresolved questions to inform further research.

MATERIALS AND METHODS: We evaluated ChatGPT's effectiveness in identifying conflicting evidence and investigated its principles of logical reasoning. An automated framework was developed to generate a PubMed dataset focused on controversial clinical topics. ChatGPT analyzed this dataset to identify consensus and controversy and to formulate unsolved research questions. Expert evaluations were conducted (1) on the consensus and controversy summaries, for factual consistency, comprehensiveness, and potential harm, and (2) on the research questions, for relevance, innovation, clarity, and specificity.

RESULTS: The gpt-4-1106-preview model achieved a 90% recall rate in detecting inconsistent claim pairs within a ternary assertions setup. Notably, without explicit reasoning prompts, ChatGPT provided sound reasoning for the assertions between claims and hypotheses, grounded in an analysis of relevance, specificity, and certainty. ChatGPT's conclusions about consensus and controversies in the clinical literature were comprehensive and factually consistent. The research questions proposed by ChatGPT received high expert ratings.

DISCUSSION: Our experiment implies that, in evaluating the relationship between evidence and claims, ChatGPT considered more detailed information than a straightforward assessment of sentiment orientation. This ability to process intricate information and conduct scientific reasoning about sentiment is noteworthy, particularly because this pattern emerged without explicit guidance or directives in the prompts, highlighting ChatGPT's inherent logical reasoning capabilities.

CONCLUSION: This study demonstrated ChatGPT's capacity to evaluate and interpret scientific claims. Such proficiency can be generalized to the broader clinical research literature. ChatGPT effectively aids in facilitating clinical studies by proposing unresolved challenges based on analysis of existing studies. However, caution is advised, as ChatGPT's outputs are inferences drawn from the input literature and could be harmful to clinical practice.
Clinician perspectives on how situational context and augmented intelligence design features impact perceived usefulness of sepsis prediction scores embedded within a simulated electronic health record.
Velma L Payne, Usman Sattar, Melanie C. Wright, Elijah Hill, Jorie M Butler, Brekk C. Macpherson, Amanda Jeppesen, G. Del Fiol, Karl Madaras-Kelly
JAMIA, published 2024-04-25. DOI: 10.1093/jamia/ocae089

OBJECTIVE: Obtain clinicians' perspectives on early warning score (EWS) use within the context of clinical cases.

MATERIALS AND METHODS: We developed cases mimicking sepsis situations. De-identified data, synthesized physician notes, and an EWS representing deterioration risk were displayed in a simulated EHR for analysis. Twelve clinicians participated in semi-structured interviews to ascertain perspectives across four domains: (1) familiarity with and understanding of artificial intelligence (AI), prediction models, and risk scores; (2) clinical reasoning processes; (3) impression of and response to the EWS; and (4) interface design. Transcripts were coded and analyzed using content and thematic analysis.

RESULTS: Analysis revealed that clinicians have experience with, but limited understanding of, AI and prediction/risk modeling. Case assessments were based primarily on clinical data. The EWS went unmentioned during initial case analysis, although when prompted to comment on it, clinicians discussed it in subsequent cases. Clinicians were unsure how to interpret or apply the EWS and desired evidence on its derivation and validation. Design recommendations centered on displaying the EWS in multi-patient lists for triage and showing EWS trends within the patient record. Themes included a "trust but verify" approach to AI and early warning information, a perceived dichotomy that the EWS is helpful for triage yet has a disproportionately low signal-to-noise ratio, and action driven by clinical judgment rather than by the EWS.

CONCLUSIONS: Clinicians were unsure how to apply the EWS, acted on clinical data, desired information on score composition and validation, and felt the EWS was most useful when embedded in multi-patient views. Systems providing interactive visualization may increase EWS transparency and confidence in AI-generated information.
Large language models for biomedicine: foundations, opportunities, challenges, and best practices.
S. Sahoo, Joseph M. Plasek, Hua Xu, Özlem Uzuner, Trevor Cohen, Meliha Yetisgen, Hongfang Liu, Stéphane Meystre, Yanshan Wang
JAMIA, published 2024-04-24. DOI: 10.1093/jamia/ocae074

OBJECTIVES: Generative large language models (LLMs) are a subset of transformer-based neural network architecture models. LLMs have successfully leveraged a combination of an increased number of parameters, improvements in computational efficiency, and large pre-training datasets to perform a wide spectrum of natural language processing (NLP) tasks. Using a few examples (few-shot) or no examples (zero-shot) for prompt-tuning has enabled LLMs to achieve state-of-the-art performance in a broad range of NLP applications. This article by the American Medical Informatics Association (AMIA) NLP Working Group characterizes the opportunities, challenges, and best practices for our community to leverage and advance the integration of LLMs in downstream NLP applications effectively. This can be accomplished through a variety of approaches, including augmented prompting, instruction prompt tuning, and reinforcement learning from human feedback (RLHF).

TARGET AUDIENCE: Our focus is on making LLMs accessible to the broader biomedical informatics community, including clinicians and researchers who may be unfamiliar with NLP. Additionally, NLP practitioners may gain insight from the described best practices.

SCOPE: We focus on 3 broad categories of NLP tasks: natural language understanding, natural language inferencing, and natural language generation. We review the emerging trends in prompt tuning, instruction fine-tuning, and evaluation metrics used for LLMs, while drawing attention to several issues that impact biomedical NLP applications, including falsehoods in generated text (confabulation/hallucination), toxicity, and dataset contamination leading to overfitting. We also review potential approaches to address some of these current challenges in LLMs, such as chain-of-thought prompting, and the phenomenon of emergent capabilities observed in LLMs, which can be leveraged to address complex NLP challenges in biomedical applications.
The use of artificial intelligence to optimize medication alerts generated by clinical decision support systems: a scoping review.
Jetske Graafsma, Rachel M Murphy, E. M. van de Garde, Fatma Karapinar-Carkıt, H. J. Derijks, Rien H L Hoge, Joanna E Klopotowska, Patricia M. L. A. van den Bemt
JAMIA, published 2024-04-19. DOI: 10.1093/jamia/ocae076

OBJECTIVE: Current clinical decision support systems (CDSSs) generate medication alerts that are of limited clinical value, causing alert fatigue. Artificial intelligence (AI)-based methods may help in optimizing medication alerts. We therefore conducted a scoping review on the current state of the use of AI to optimize medication alerts in a hospital setting. Specifically, we aimed to identify the applied AI methods together with their performance measures and main outcome measures.

MATERIALS AND METHODS: We searched the Medline, Embase, and Cochrane Library databases on May 25, 2023, for studies of any quantitative design in which AI-based methods were investigated to optimize medication alerts generated by CDSSs in a hospital setting. The screening process was supported by ASReview software.

RESULTS: Of 5625 citations screened for eligibility, 10 studies were included. Three studies (30%) reported both statistical performance and clinical outcomes. The most often reported performance measure was positive predictive value, ranging from 9% to 100%. Regarding main outcome measures, alerts optimized using AI-based methods resulted in a decreased alert burden, increased identification of inappropriate or atypical prescriptions, and enabled prediction of user responses. In only 2 studies were the AI-based alerts implemented in hospital practice, and none of the studies conducted external validation.

DISCUSSION AND CONCLUSION: AI-based methods can be used to optimize medication alerts in a hospital setting. However, reporting on model development and validation should be improved, and external validation and implementation in hospital practice should be encouraged.
{"title":"Moving forward on the science of informatics and predictive analytics.","authors":"Suzanne Bakken","doi":"10.1093/jamia/ocae077","DOIUrl":"https://doi.org/10.1093/jamia/ocae077","url":null,"abstract":"","PeriodicalId":236137,"journal":{"name":"Journal of the American Medical Informatics Association : JAMIA","volume":" 22","pages":"1049-1050"},"PeriodicalIF":0.0,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140684028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constructing synthetic datasets with generative artificial intelligence to train large language models to classify acute renal failure from clinical notes.
Onkar Litake, Brian H Park, Jeffrey L. Tully, Rodney A Gabriel
JAMIA, published 2024-04-15. DOI: 10.1093/jamia/ocae081

OBJECTIVES: To compare the performance of a classifier that leverages language models when trained on synthetic versus authentic clinical notes.

MATERIALS AND METHODS: A classifier using language models was developed to identify acute renal failure. Four types of training data were compared: (1) notes from MIMIC-III, and (2-4) synthetic notes generated by ChatGPT with text lengths of 15 (GPT-15), 30 (GPT-30), and 45 (GPT-45) sentences, respectively. The area under the receiver operating characteristic curve (AUC) was calculated on a test set from MIMIC-III.

RESULTS: With RoBERTa, the AUCs were 0.84, 0.80, 0.84, and 0.76 for the MIMIC-III, GPT-15, GPT-30, and GPT-45 training sets, respectively.

DISCUSSION: Training language models to detect acute renal failure from clinical notes resulted in similar performance when using synthetic versus authentic training data.

CONCLUSION: Training data derived from protected health information may not be needed.
PMC-LLaMA: toward building open-source language models for medicine.
Chaoyi Wu, Weixiong Lin, Xiaoman Zhang, Ya Zhang, Weidi Xie, Yanfeng Wang
JAMIA, published 2024-04-13. DOI: 10.1093/jamia/ocae045

OBJECTIVE: Recently, large language models (LLMs) have showcased remarkable capabilities in natural language understanding. While demonstrating proficiency in everyday conversations and question-answering (QA) situations, these models frequently struggle in domains that require precision, such as medical applications, due to their lack of domain-specific knowledge. In this article, we describe the procedure for building a powerful, open-source language model specifically designed for medical applications, termed PMC-LLaMA.

MATERIALS AND METHODS: We adapt a general-purpose LLM to the medical domain through data-centric knowledge injection, integrating 4.8M biomedical academic papers and 30K medical textbooks, followed by comprehensive domain-specific instruction fine-tuning covering medical QA, rationales for reasoning, and conversational dialogues, totaling 202M tokens.

RESULTS: On various public medical QA benchmarks and in manual ratings, our lightweight PMC-LLaMA, with only 13B parameters, exhibits superior performance, even surpassing ChatGPT. All models, code, and datasets for instruction tuning will be released to the research community.

DISCUSSION: Our contributions are 3-fold: (1) we build an open-source LLM for the medical domain; we believe the proposed PMC-LLaMA model can promote further development of foundation models in medicine, serving as a trainable, generative language backbone for medical applications; (2) we conduct thorough ablation studies to demonstrate the effectiveness of each proposed component, showing how different training data and model scales affect medical LLMs; and (3) we contribute a large-scale, comprehensive dataset for instruction tuning.

CONCLUSION: In this article, we systematically investigate the process of building an open-source, medical-specific LLM, PMC-LLaMA.