{"title":"Sentiment analysis of Bangla language using a new comprehensive dataset BangDSA and the novel feature metric skipBangla-BERT","authors":"Md. Shymon Islam, Kazi Masudul Alam","doi":"10.1016/j.nlp.2024.100069","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100069","url":null,"abstract":"<div><p>In this modern, technologically advanced world, Sentiment Analysis (SA) is an important topic in every language due to its many trending applications. However, SA in the Bangla language remains underexplored. This work examines different hybrid feature extraction techniques and learning algorithms for <strong>Bang</strong>la <strong>D</strong>ocument level <strong>S</strong>entiment <strong>A</strong>nalysis using a new comprehensive dataset (BangDSA) of 203,493 comments collected from various microblogging sites. The proposed BangDSA dataset approximately follows Zipf’s law, covering 32.84% function words with a vocabulary growth rate of 0.053, and is tagged on both 15 and 3 categories. In this study, we have implemented 21 different hybrid feature extraction methods including Bag of Words (BOW), N-gram, TF-IDF, TF-IDF-ICF, Word2Vec, FastText, GloVe, Bangla-BERT, etc., with CBOW and Skipgram mechanisms. The proposed novel method (Bangla-BERT+Skipgram), skipBangla-BERT, outperforms all other feature extraction techniques in machine learning (ML), ensemble learning (EL) and deep learning (DL) approaches. Among the models built from the ML, EL and DL domains, the hybrid method CNN-BiLSTM surpasses the others. The best accuracy achieved by the CNN-BiLSTM model is 90.24% on 15 categories and 95.71% on 3 categories. The Friedman test has been performed on the obtained results to assess statistical significance. 
For both the 15-category and 3-category settings, the results of the statistical test are significant.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"7 ","pages":"Article 100069"},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000177/pdfft?md5=2a4b5d5dc62f48201e142e0cf3b9cb09&pid=1-s2.0-S2949719124000177-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140557852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
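The Skipgram mechanism named in the record above pairs each target word with the words in a window around it. As a minimal illustrative sketch (not the authors' skipBangla-BERT implementation; the function name is hypothetical), pair generation can be written as:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (target, context) pairs as in the skip-gram mechanism."""
    pairs = []
    for i, target in enumerate(tokens):
        # Context words fall within `window` positions on either side.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs
```

In a hybrid feature such as Bangla-BERT+Skipgram, pairs like these would feed a skip-gram embedding model whose vectors are combined with contextual BERT features.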
{"title":"BNVGLENET: Hypercomplex Bangla handwriting character recognition with hierarchical class expansion using Convolutional Neural Networks","authors":"Jabed Omor Bappi , Mohammad Abu Tareq Rony , Mohammad Shariful Islam","doi":"10.1016/j.nlp.2024.100068","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100068","url":null,"abstract":"<div><p>Object recognition technology has made significant strides, yet recognizing handwritten Bangla characters, including symbols, compound forms, etc., remains a challenging problem due to the prevalence of cursive writing and many ambiguous characters. The complexity and variability of the Bangla script, together with individuals’ unique handwriting styles, make it difficult to achieve satisfactory performance for practical applications, and the best existing recognizers are far less effective than those developed for English alphanumeric characters. In comparison to other major languages, there are limited options for recognizing handwritten Bangla characters. This study has the potential to improve the accuracy and effectiveness of handwriting recognition systems for the Bengali language, which is spoken by over 200 million people worldwide. This paper investigates the application of Convolutional Neural Networks (CNNs) for recognizing Bangla handwritten characters, with a particular focus on enlarging the set of recognized character classes. To achieve this, a novel challenging dataset for handwriting recognition is introduced, collected from the handwriting of numerous students at two institutions. A novel convolutional neural network-based approach called BNVGLENET is proposed in this paper to recognize Bangla handwritten characters by modifying LeNet-5 and combining it with the VGG architecture, which substantially improves identification of characters in Bengali handwriting. 
This study systematically evaluated model performance not only on the custom novel dataset but also on the publicly available Bangla handwritten character dataset called the Grapheme dataset. This research achieved a state-of-the-art recognition accuracy of 98.2% on the custom vowel-consonant test classes and 97.5% on the custom individual classes. The improvements achieved in this study bridge a notable gap between the practical needs and the actual performance of Bangla handwritten character recognition systems.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"7 ","pages":"Article 100068"},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000165/pdfft?md5=8bf76ee7108a74bfc05ded3f15c3a43e&pid=1-s2.0-S2949719124000165-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140557853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advancing NLP models with strategic text augmentation: A comprehensive study of augmentation methods and curriculum strategies","authors":"Himmet Toprak Kesgin, Mehmet Fatih Amasyali","doi":"10.1016/j.nlp.2024.100071","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100071","url":null,"abstract":"<div><p>This study conducts a thorough evaluation of text augmentation techniques across a variety of datasets and natural language processing (NLP) tasks to address the lack of reliable, generalized evidence for these methods. It examines the effectiveness of these techniques in augmenting training sets to improve performance in tasks such as topic classification, sentiment analysis, and offensive language detection. The research emphasizes not only the augmentation methods, but also the strategic order in which real and augmented instances are introduced during training. A major contribution is the development and evaluation of Modified Cyclical Curriculum Learning (MCCL) for augmented datasets, which represents a novel approach in the field. Results show that specific augmentation methods, especially when integrated with MCCL, significantly outperform traditional training approaches in NLP model performance. These results underscore the need for careful selection of augmentation techniques and sequencing strategies to optimize the balance between speed and quality improvement in various NLP tasks. 
The study concludes that the use of augmentation methods, especially in conjunction with MCCL, leads to improved results in various classification tasks, providing a foundation for future advances in text augmentation strategies in NLP.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"7 ","pages":"Article 100071"},"PeriodicalIF":0.0,"publicationDate":"2024-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000190/pdfft?md5=841354620e15317d1fd328df74581e7d&pid=1-s2.0-S2949719124000190-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140551700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
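The curriculum strategies studied in the record above control the order in which real and augmented instances are introduced during training. A minimal sketch of a cyclical ordering (a simplified stand-in under that idea, not the paper's actual MCCL schedule; the function name is hypothetical):

```python
def curriculum_order(real, augmented, cycles=2):
    """Order training instances so that real data is seen before
    augmented data within each cycle. A simplified cyclical curriculum,
    not the MCCL algorithm itself."""
    order = []
    for _ in range(cycles):
        order.extend(real)       # real instances first in each cycle
        order.extend(augmented)  # augmented instances follow
    return order
```

A training loop would then iterate over the returned sequence instead of a uniformly shuffled dataset.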
{"title":"A survey of text summarization: Techniques, evaluation and challenges","authors":"Supriyono , Aji Prasetya Wibawa , Suyono , Fachrul Kurniawan","doi":"10.1016/j.nlp.2024.100070","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100070","url":null,"abstract":"<div><p>This paper explores the complex field of text summarization in Natural Language Processing (NLP), with particular attention to the development and importance of semantic understanding. Text summarization is a crucial component of NLP, helping to translate large amounts of textual data into clear and understandable representations. The paper traces the field’s dynamic transition from simple syntactic structures to sophisticated models with semantic comprehension. Effective summarization must address syntactic, semantic, and pragmatic concerns, highlighting the necessity of capturing not only grammar but also context and underlying meaning. It examines the wide range of summarization models, from conventional extractive techniques to state-of-the-art tools such as pre-trained models. Applications are found in many different fields, demonstrating the versatility of summarization techniques. Semantic drift and domain-specific knowledge remain obstacles, despite progress. The study anticipates future developments such as artificial intelligence integration and transfer learning, motivating researchers to investigate these prospects for advancement. The approach, which is based on the PRISMA framework, emphasizes a methodical and transparent literature review. 
The work attempts to further natural language processing (NLP) and text summarization by combining various research findings and suggesting future research directions in this dynamic subject.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"7 ","pages":"Article 100070"},"PeriodicalIF":0.0,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000189/pdfft?md5=59f885a43c999d64a8b2382f368be608&pid=1-s2.0-S2949719124000189-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140542600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Utilization of generative AI for the characterization and identification of visual unknowns","authors":"Kara Combs , Trevor J. Bihl , Subhashini Ganapathy","doi":"10.1016/j.nlp.2024.100064","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100064","url":null,"abstract":"<div><p>Current state-of-the-art artificial intelligence (AI) struggles with accurate interpretation of out-of-library objects. One proposed remedy is analogical reasoning (AR), which utilizes abductive reasoning to draw inferences about an unfamiliar scenario given knowledge about a similar familiar scenario. Currently, applications of visual AR gravitate toward analogy-formatted image problems rather than real-world computer vision datasets. This paper proposes the Image Recognition Through Analogical Reasoning Algorithm (IRTARA) and its “generative AI” version, “GIRTARA”, which describe and predict out-of-library visual objects. IRTARA characterizes the out-of-library object through a list of words called the “term frequency list”. GIRTARA uses the term frequency list to predict what the out-of-library object is. To evaluate the quality of IRTARA’s results, both quantitative and qualitative assessments are used, including a baseline comparing the automated methods with human-generated results. The accuracy of GIRTARA’s predictions is calculated through a cosine similarity analysis. 
This study observed that IRTARA produced consistent term frequency lists across the three evaluation methods for the high-quality results, and GIRTARA obtained up to a 65% match in terms of cosine similarity when compared to the out-of-library objects’ true labels.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"7 ","pages":"Article 100064"},"PeriodicalIF":0.0,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000128/pdfft?md5=b907bb3498bdf74554a25eef96b3ee34&pid=1-s2.0-S2949719124000128-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140343887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
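The cosine similarity analysis used to score predictions in the record above compares two vectors by the cosine of the angle between them. A minimal self-contained sketch (how labels are vectorized is assumed to happen elsewhere):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors,
    returning 0.0 when either vector is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

A "65% match" then corresponds to a cosine similarity of 0.65 between the predicted-label vector and the true-label vector.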
{"title":"Claim detection for automated fact-checking: A survey on monolingual, multilingual and cross-lingual research","authors":"Rrubaa Panchendrarajan, Arkaitz Zubiaga","doi":"10.1016/j.nlp.2024.100066","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100066","url":null,"abstract":"<div><p>Automated fact-checking has drawn considerable attention over the past few decades due to the increased diffusion of misinformation on online platforms. This is often carried out as a sequence of tasks comprising (i) the detection of sentences circulating on online platforms that constitute claims needing verification, followed by (ii) the verification of those claims. This survey focuses on the former, discussing existing efforts towards detecting claims needing fact-checking, with a particular focus on multilingual data and methods. This is a fertile research direction where existing methods still fall far short of human performance, given the profoundly challenging nature of the problem. In particular, the dissemination of information across multiple social platforms, articulated in multiple languages and modalities, demands more generalized solutions for combating misinformation. Focusing on multilingual misinformation, we present a comprehensive survey of existing multilingual claim detection research, categorized along three key factors of the problem: verifiability, priority, and similarity. 
Further, we present a detailed overview of the existing multilingual datasets along with the challenges and suggest possible future advancements.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"7 ","pages":"Article 100066"},"PeriodicalIF":0.0,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000141/pdfft?md5=43cfb5b770cda4c03e5933e454d8f5bd&pid=1-s2.0-S2949719124000141-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140290305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble learning with soft-prompted pretrained language models for fact checking","authors":"Shaoqin Huang , Yue Wang , Eugene Y.C. Wong , Lei Yu","doi":"10.1016/j.nlp.2024.100067","DOIUrl":"10.1016/j.nlp.2024.100067","url":null,"abstract":"<div><p>Infectious disease outbreaks, such as the COVID-19 pandemic, have led to a surge of information on the internet, including misinformation, necessitating fact-checking tools. However, fact-checking claims related to infectious diseases poses challenges due to the mismatch between informal claims and formal evidence, and the presence of multiple aspects in a claim. To address these issues, we propose a soft prompt-based ensemble learning framework for COVID-19 fact checking. To understand complex assertions in informal social media texts, we explore various soft prompt structures that take advantage of the T5 language model, and ensemble these prompt structures together. Soft prompts offer flexibility and better generalization compared to hard prompts. The ensemble model captures linguistic cues and contextual information in COVID-19-related data, and thus enhances generalization to new claims. Experimental results demonstrate that prompt-based ensemble learning improves fact-checking accuracy and provides a promising approach to combating misinformation during the pandemic. 
In addition, the method also shows great zero-shot learning capability and thus can be applied to various fact checking problems.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"7 ","pages":"Article 100067"},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000153/pdfft?md5=268e2b44eb63a0ef7ca15c1fd64330b7&pid=1-s2.0-S2949719124000153-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140269139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
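One generic way to ensemble predictions from multiple prompt structures, as in the record above, is a majority vote per claim. This is only a hedged sketch of that idea (the paper's actual combination rule may differ; the function name is hypothetical):

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote across per-prompt model predictions for one claim.
    `predictions` is a list of labels, one per soft-prompt variant;
    ties resolve to the label encountered first."""
    return Counter(predictions).most_common(1)[0][0]
```

For example, three prompt variants predicting "true", "false", "true" would yield a final verdict of "true".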
{"title":"LeanContext: Cost-efficient domain-specific question answering using LLMs","authors":"Md Adnan Arefeen , Biplob Debnath , Srimat Chakradhar","doi":"10.1016/j.nlp.2024.100065","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100065","url":null,"abstract":"<div><p>Question-answering (QA) is a significant application of Large Language Models (LLMs), shaping chatbot capabilities across healthcare, education, and customer service. However, widespread LLM integration presents a challenge for small businesses due to the high expense of LLM API usage. Costs rise rapidly when domain-specific data (context) is used alongside queries to obtain accurate domain-specific LLM responses. Context is typically extracted from domain-specific data using a Retrieval Augmented Generation (RAG) approach. One option is to reduce the RAG context by summarizing it with LLMs. However, this can also filter out useful information that is necessary to answer some domain-specific queries. In this paper, we shift from human-oriented summarizers to AI model-friendly summaries. Our approach, LeanContext, efficiently extracts <em>k</em> key sentences from the context that are closely aligned with the query. The choice of <em>k</em> is neither static nor random; we introduce a reinforcement learning technique that dynamically determines <em>k</em> based on the query and context. The remaining, less important sentences are either reduced using a free open-source text reduction method or eliminated. We evaluate LeanContext against several recent query-aware and query-unaware context reduction approaches on prominent datasets (arXiv papers, BBC news articles, and NarrativeQA). Despite cost reductions of 37.29% to 67.81%, LeanContext’s ROUGE-1 score decreases only by 1.41% to 2.65% compared to a baseline that retains the entire context (no summarization). 
LeanContext stands out for its ability to provide precise responses, outperforming competitors by leveraging open-source summarization techniques. Human evaluations of the responses further confirm and validate this superiority. Additionally, if open-source pre-trained LLM-based summarizers are used to reduce context (into human consumable summaries), LeanContext can further modify the reduced context to enhance the accuracy (ROUGE-1 score) by 13.22% to 24.61%.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"7 ","pages":"Article 100065"},"PeriodicalIF":0.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294971912400013X/pdfft?md5=635c034287e104fec6128cc735fdc367&pid=1-s2.0-S294971912400013X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140180775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
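The key-sentence extraction idea in the record above can be illustrated with a crude word-overlap ranking. LeanContext itself aligns sentences to the query with embeddings and chooses k via reinforcement learning, so the following is only a simplified approximation with a hypothetical function name:

```python
def select_key_sentences(sentences, query, k=2):
    """Rank sentences by word overlap with the query and keep the top k,
    preserving their original order. A crude stand-in for query-aware
    context reduction; not LeanContext's actual method."""
    query_words = set(query.lower().split())
    # Score each sentence by how many query words it shares.
    scored = [(len(query_words & set(s.lower().split())), i)
              for i, s in enumerate(sentences)]
    # Take the k highest-scoring sentences, then restore document order.
    top = sorted(sorted(scored, reverse=True)[:k], key=lambda t: t[1])
    return [sentences[i] for _, i in top]
```

The selected sentences would form the reduced context sent to the LLM; the discarded remainder is what a text reduction method would compress or drop.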
{"title":"Understanding latent affective bias in large pre-trained neural language models","authors":"Anoop Kadan , Deepak P. , Sahely Bhadra , Manjary P. Gangan , Lajish V.L.","doi":"10.1016/j.nlp.2024.100062","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100062","url":null,"abstract":"<div><p>Groundbreaking inventions and highly significant performance improvements in deep learning based Natural Language Processing have been witnessed through the development of transformer based large Pre-trained Language Models (PLMs). The wide availability of unlabeled data within the human-generated data deluge, along with self-supervised learning strategies, has accelerated the success of large PLMs in language generation, language understanding, etc. At the same time, latent historical bias and unfairness towards particular genders, races, etc., encoded intentionally or unintentionally into the corpora, harm and call into question the utility and efficacy of large PLMs in many real-world applications, particularly for the protected groups. In this paper, we present an extensive investigation towards understanding the existence of <em>“Affective Bias”</em> in large PLMs, to unveil any biased association of emotions such as <em>anger</em>, <em>fear</em>, <em>joy</em>, etc., with a particular gender, race or religion with respect to the downstream task of textual emotion detection. We begin our exploration of affective bias at the corpus level, searching for imbalanced distributions of affective words within a domain in the large-scale corpora that are used to pre-train and fine-tune PLMs. Later, to quantify affective bias in model predictions, we perform an extensive set of class-based and intensity-based evaluations using various bias evaluation corpora. 
Our results show the existence of statistically significant affective bias in the PLM based emotion detection systems, indicating biased association of certain emotions towards a particular gender, race, and religion.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"7 ","pages":"Article 100062"},"PeriodicalIF":0.0,"publicationDate":"2024-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000104/pdfft?md5=47ed3491ca02f42caa81ecff613ee5f3&pid=1-s2.0-S2949719124000104-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140069620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recent advancements and challenges of NLP-based sentiment analysis: A state-of-the-art review","authors":"Jamin Rahman Jim , Md Apon Riaz Talukder , Partha Malakar , Md Mohsin Kabir , Kamruddin Nur , M.F. Mridha","doi":"10.1016/j.nlp.2024.100059","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100059","url":null,"abstract":"<div><p>Sentiment analysis is a method within natural language processing that evaluates and identifies the emotional tone or mood conveyed in textual data. It scrutinizes words and phrases, categorizing them into positive, negative, or neutral sentiments. The significance of sentiment analysis lies in its capacity to derive valuable insights from extensive textual data, empowering businesses to grasp customer sentiments, make informed choices, and enhance their offerings. For the further advancement of sentiment analysis, a deep understanding of its algorithms, applications, current performance, and challenges is imperative. Therefore, in this extensive survey, we begin by exploring the vast array of application domains for sentiment analysis, scrutinizing them within the context of existing research. We then delve into prevalent pre-processing techniques, datasets, and evaluation metrics to enhance comprehension. We also explore Machine Learning, Deep Learning, Large Language Model and Pre-trained model approaches in sentiment analysis, providing insights into their advantages and drawbacks. Subsequently, we carefully review the experimental results and limitations of recent state-of-the-art articles. Finally, we discuss the diverse challenges encountered in sentiment analysis and propose future research directions to mitigate these concerns. 
This extensive review provides a complete understanding of sentiment analysis, covering its models, application domains, results analysis, challenges, and research directions.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100059"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000074/pdfft?md5=f2c0dd3a1ae1a2992d955f19909d86a5&pid=1-s2.0-S2949719124000074-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139998912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}