{"title":"Improving paragraph segmentation using BERT with additional information from probability density function modeling of segmentation distances","authors":"Byunghwa Yoo , Kyung-Joong Kim","doi":"10.1016/j.nlp.2024.100061","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100061","url":null,"abstract":"<div><p>Paragraphs play a key role in writing and reading texts. Therefore, studies about dividing texts into appropriate paragraphs, or paragraph segmentation have gathered academic attention for a long time. Recent advancements in pre-trained language models have achieved state-of-the-art performances in various natural language processing fields, including paragraph segmentation. However, pre-trained language model based paragraph segmentation methods had a problem in that they could not consider statistical metadata such as how far each paragraph segmentation point should be apart from each other. Therefore we focused on combining paragraph segmentation distance and pre-trained language models so that both statistical metadata and state-of-the-art representation ability could be considered at the same time. We propose a novel model by modifying BERT, a state-of-the-art pre-trained language model, by adding segmentation distance information via probability density function modeling. Our model was trained and tested on the domain of the novel, and showed improved performance compared to baseline BERT and previous study, acquiring a mean of 0.8877 F1-score and 0.8708 MCC. Furthermore, our model showed robust performance regardless of the authors of the novels.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100061"},"PeriodicalIF":0.0,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000098/pdfft?md5=a63ebac8bf9ebdb5e4d76b386ec366f1&pid=1-s2.0-S2949719124000098-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139936892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contrastive learning for hierarchical topic modeling","authors":"Pengbo Mao , Hegang Chen , Yanghui Rao , Haoran Xie , Fu Lee Wang","doi":"10.1016/j.nlp.2024.100058","DOIUrl":"10.1016/j.nlp.2024.100058","url":null,"abstract":"<div><p>Topic models have been widely used in automatic topic discovery from text corpora, for which, the external linguistic knowledge contained in Pre-trained Word Embeddings (PWEs) is valuable. However, the existing Neural Topic Models (NTMs), particularly Variational Auto-Encoder (VAE)-based NTMs, suffer from incorporating such external linguistic knowledge, and lacking of both accurate and efficient inference methods for approximating the intractable posterior. Furthermore, most existing topic models learn topics with a flat structure or organize them into a tree with only one root node. To tackle these limitations, we propose a new framework called as Contrastive Learning for Hierarchical Topic Modeling (CLHTM), which can efficiently mine hierarchical topics based on inputs of PWEs and Bag-of-Words (BoW). Experiments show that our model can automatically mine hierarchical topic structures, and have a better performance than the baseline models in terms of topic hierarchical rationality and flexibility.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100058"},"PeriodicalIF":0.0,"publicationDate":"2024-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000062/pdfft?md5=d909815a7127e4a5c22593827037ec98&pid=1-s2.0-S2949719124000062-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139685632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ParKQ: An automated Paraphrase ranKing Quality measure that balances semantic similarity with lexical diversity","authors":"Thanh Duong , Tuan-Dung Le , Ho’omana Nathan Horton , Stephanie Link , Thanh Thieu","doi":"10.1016/j.nlp.2024.100054","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100054","url":null,"abstract":"<div><p>BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have set new state-of-the-art performance on paraphrase quality measurement. However, their main focus is on semantic similarity and lack the lexical diversity between two sentences. LexDivPara (Thieu et al., 2022) introduced a method that combines semantic similarity and lexical diversity, but the method is dependent on a human-provided semantic score to enhance its overall performance. In this work, we present <strong>ParKQ</strong> (<u>Par</u>aphrase ran<u>K</u>ing <u>Q</u>uality), a fully automatic method for measuring the holistic quality of sentential paraphrases. We create a semantic similarity ensemble model by combining the most popular adaptation of the pre-trained BERT (Devlin et al., 2019) network: BLEURT (Sellam et al., 2020), BERTSCORE (Zhang et al., 2020) and Sentence-BERT (Reimers et al., 2019). Then we build paraphrase quality learning-to-rank models with XGBoost (Chen et al., 2016) and TFranking (Pasumarthi et al., 2019) by combining the ensemble semantic score with lexical features including edit distance, BLEU, and ROUGE. To analyze and evaluate the intricate paraphrase quality measure, we create a gold-standard dataset using expert linguistic coding. The gold-standard annotation comprises four linguistic scores (semantic, lexical, grammatical, overall) and spans across three heterogeneous datasets commonly used to benchmark paraphrasing tasks: STS Benchmark,<span><sup>1</sup></span> ParaBank Evaluation<span><sup>2</sup></span> and MSR corpus.<span><sup>3</sup></span> Our <strong>ParKQ</strong> models demonstrate robust correlation with all linguistic scores, making it the first practical tool for measuring the holistic quality (semantic similarity + lexical diversity) of sentential paraphrases. In evaluation, we compare our models against contemporary methods with the ability to generate holistic quality scores for paraphrases including LexDivPara, ParaScore, and the emergent ChatGPT.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100054"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000025/pdfft?md5=2d9dd7ca1e2b2de847f402ea46f05f27&pid=1-s2.0-S2949719124000025-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139709440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the relation between K–L divergence and transfer learning performance on causality extraction tasks","authors":"Seethalakshmi Gopalakrishnan , Victor Zitian Chen , Wenwen Dou , Wlodek Zadrozny","doi":"10.1016/j.nlp.2024.100055","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100055","url":null,"abstract":"<div><p>The problem of extracting causal relations from text remains a challenging task, even in the age of Large Language Models (LLMs). A key factor that impedes the progress of this research is the availability of the annotated data and the lack of common labeling methods. We investigate the applicability of transfer learning (domain adaptation) to address these impediments in experiments with three publicly available datasets: FinCausal, SCITE, and Organizational. We perform pairwise transfer experiments between the datasets using DistilBERT, BERT, and SpanBERT (variants of BERT) and measure the performance of the resulting models. To understand the relationship between datasets and performance, we measure the differences between vocabulary distributions in the datasets using four methods: Kullback–Leibler (K–L) divergence, Wasserstein metric, Maximum Mean Discrepancy, and Kolmogorov–Smirnov test. We also estimate the predictive capability of each method using linear regression. We record the predictive values of each measure. Our results show that K–L divergence between the distribution of the vocabularies in the data predicts the performance of the transfer learning with R2 = 0.0746. Surprisingly, the Wasserstein distance predictive value is low (R2=0.52912), and the same for the Kolmogorov–Smirnov test (R2 =0.40025979). This is confirmed in a series of experiments. For example, with variants of BERT, we observe an almost a 29% to 32% increase in the macro-average F1-score, when the gap between the training and test distributions is small, according to the K–L divergence — the best-performing predictor on this task. We also discuss these results in the context of the sub-par performance of some large language models on causality extraction tasks. Finally, we report the results of transfer learning informed by K–L divergence; namely, we show that there is a 12 to 63% increase in the performance when a small portion of the test data is added to the training data. This shows that corpus expansion and n-shot learning benefit, when the process of choosing examples maximizes their information content, according to the K–L divergence.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100055"},"PeriodicalIF":0.0,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000037/pdfft?md5=b947d57bb804a1d8d27703e9d2e10448&pid=1-s2.0-S2949719124000037-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139549326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LLMs in e-commerce: A comparative analysis of GPT and LLaMA models in product review evaluation","authors":"Konstantinos I. Roumeliotis , Nikolaos D. Tselikas , Dimitrios K. Nasiopoulos","doi":"10.1016/j.nlp.2024.100056","DOIUrl":"https://doi.org/10.1016/j.nlp.2024.100056","url":null,"abstract":"<div><p>E-commerce has witnessed remarkable growth, especially following the easing of COVID-19 restrictions. Many people, who were initially hesitant about online shopping, have now embraced it, while existing online shoppers increasingly prefer the convenience of e-commerce. This surge in e-commerce has prompted the implementation of automated customer service processes, incorporating innovations such as chatbots and AI-driven sales. Despite this growth, customer satisfaction remains vital for E-commerce sustainability. Data scientists have made progress in utilizing machine learning to assess satisfaction levels but struggled to understand emotions within product reviews’ context. The recent AI revolution, marked by the release of powerful Large Language Models (LLMs) to the public, has brought us closer than ever before to understanding customer sentiment. This study aims to illustrate the effectiveness of LLMs by conducting a comparative analysis of two cutting-edge LLMs, GPT-3.5 and LLaMA-2, along with two additional Natural Language Process (NLP) models, BERT and RoBERTa. We evaluate the performance of these models before and after fine-tuning them specifically for product review sentiment analysis. The primary objective of this research is to determine if these specific LLMs, could contribute to understanding customer satisfaction within the context of an e-commerce environment. By comparing the effectiveness of these models, we aim to uncover insights into the potential impact of LLMs on customer satisfaction analysis and enhance our understanding of their capabilities in this particular context.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100056"},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000049/pdfft?md5=125ac89a35f00dce09f1b7175cc83b6e&pid=1-s2.0-S2949719124000049-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139549081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deception detection using machine learning (ML) and deep learning (DL) techniques: A systematic review","authors":"Shanjita Akter Prome , Neethiahnanthan Ari Ragavan , Md Rafiqul Islam , David Asirvatham , Anasuya Jegathevi Jegathesan","doi":"10.1016/j.nlp.2024.100057","DOIUrl":"10.1016/j.nlp.2024.100057","url":null,"abstract":"<div><p>Deception detection is a crucial concern in our daily lives, with its effect on social interactions. The human face is a rich source of data that offers trustworthy markers of deception. The deception detection systems are non-intrusive, cost-effective, and mobile by identifying face expressions. Over the last decade, numerous studies have been conducted on deception/lie detection using several advanced techniques. Researchers have given their attention to inventing more effective and efficient solutions for deception detection. However, there are still a lot of opportunities for innovative deception detection methods. Thus, in this literature review, we conduct the statistical analysis by following the PRISMA protocol and extract various articles from five e-databases. The main objectives of this paper are (i) to explain the overview of machine learning (ML) and deep learning (DL) techniques for deception detection, (ii) to outline the existing literature, and (iii) to address the current challenges and its research prospects for further study. While significant issues in deception detection methods are acknowledged, the review highlights key conclusions and offers a systematic analysis of state-of-the-art techniques, emphasizing contributions and opportunities. The findings illuminate current trends and future research prospects, fostering ongoing development in the field.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100057"},"PeriodicalIF":0.0,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000050/pdfft?md5=eef92a93b295ca392877e0d65bfe7ec7&pid=1-s2.0-S2949719124000050-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139638524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep temporal modelling of clinical depression through social media text","authors":"Nawshad Farruque , Randy Goebel , Sudhakar Sivapalan , Osmar Zaïane","doi":"10.1016/j.nlp.2023.100052","DOIUrl":"https://doi.org/10.1016/j.nlp.2023.100052","url":null,"abstract":"<div><p>We describe the development of a model to detect user-level clinical depression based on a user’s temporal social media posts. Our model uses a Depression Symptoms Detection (DSD) classifier, which is trained on the largest existing samples of clinician annotated tweets for clinical depression symptoms. We subsequently use our DSD model to extract clinically relevant features, e.g., depression scores and their consequent temporal patterns, as well as user posting activity patterns, e.g., quantifying their “no activity” or “silence.” Furthermore, to evaluate the efficacy of these extracted features, we create three kinds of datasets including a test dataset, from two existing well-known benchmark datasets for user-level depression detection. We then provide accuracy measures based on single features, baseline features and feature ablation tests, at several different levels of temporal granularity. The relevant data distributions and clinical depression detection related settings can be exploited to draw a complete picture of the impact of different features across our created datasets. Finally, we show that, in general, only semantic oriented representation models perform well. However, clinical features may enhance overall performance provided that the training and testing distribution is similar, and there is more data in a user’s timeline. The consequence is that the predictive capability of depression scores increase significantly while used in a more sensitive clinical depression detection settings.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100052"},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719123000493/pdfft?md5=0d6383093fc7867b461d44edd1c64ce4&pid=1-s2.0-S2949719123000493-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139550053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying hidden patterns of fake COVID-19 news: An in-depth sentiment analysis and topic modeling approach","authors":"Tanvir Ahammad","doi":"10.1016/j.nlp.2024.100053","DOIUrl":"10.1016/j.nlp.2024.100053","url":null,"abstract":"<div><p>Spreading misinformation and fake news about COVID-19 has become a critical concern. It contributes to a lack of trust in public health authorities, hinders actions from controlling the virus’s spread, and risks people’s lives. This study aims to gain insights into the types of misinformation spread and develop an in-depth analytical approach for analyzing COVID-19 fake news. It combines the idea of Sentiment Analysis (SA) and Topic Modeling (TM) to improve the accuracy of topic extraction from a large volume of unstructured texts by considering the sentiment of the words. A dataset containing 10,254 news headlines from various sources was collected and prepared, and rule-based SA was applied to label the dataset with three sentiment tags. Among the TM models evaluated, Latent Dirichlet Allocation (LDA) demonstrated the highest coherence score of 0.66 for 20 coherent negative sentiment-based topics and 0.573 for 18 coherent positive fake news topics, outperforming Non-negative Matrix Factorization (NMF) (coherence: 0.43) and Latent Semantic Analysis (LSA) (coherence: 0.40). The topics extracted from the experiments highlight that misinformation primarily revolves around the COVID vaccine, crime, quarantine, medicine, and political and social aspects. This research offers insight into the effects of COVID-19 fake news, provides a valuable method for detecting and analyzing misinformation, and emphasizes the importance of understanding the patterns and themes of fake news for protecting public health and promoting scientific accuracy. Moreover, it can aid in developing real-time monitoring systems to combat misinformation, extending beyond COVID-19-related fake news and enhancing the applicability of the findings.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100053"},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719124000013/pdfft?md5=8f1425dee06c23636d0b5b055c7010af&pid=1-s2.0-S2949719124000013-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139394597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review of sentiment analysis for Afaan Oromo: Current trends and future perspectives","authors":"Jemal Abate , Faizur Rashid","doi":"10.1016/j.nlp.2023.100051","DOIUrl":"10.1016/j.nlp.2023.100051","url":null,"abstract":"<div><p>Sentiment analysis, commonly referred to as opinion mining, is a fast-expanding area that seeks to ascertain the sentiment expressed in textual data. While sentiment analysis has been extensively studied for major languages such as English, research focusing on low-resource languages like Afaan Oromo is still limited. This review article surveys the existing techniques and approaches used for sentiment analysis specifically for Afaan Oromo, the widely spoken language in Ethiopia. The review highlights the effectiveness of combining neural network architectures, such as Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (Bi-LSTM) models, as well as clustering techniques like Gaussian Mixture Models (GMM) and Support Vector Machine (SVM) in sentiment analysis for Afaan Oromo. These approaches have demonstrated promising results in various domains, including social media content and SMS texts. However, the lack of a standardized corpus for Afaan Oromo NLP tasks remains a major challenge, which indicates the need for comprehensive data collection and preparation. Additionally, challenges related to domain-specific language, informal expressions, and context-specific polarity orientations pose difficulties for sentiment analysis in Afaan Oromo.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100051"},"PeriodicalIF":0.0,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949719123000481/pdfft?md5=e70b97eefccb0378b45c08e181baa491&pid=1-s2.0-S2949719123000481-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139195139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a large sized curated and annotated corpus for discriminating between human written and AI generated texts: A case study of text sourced from Wikipedia and ChatGPT","authors":"Aakash Singh, Deepawali Sharma, Abhirup Nandy, Vivek Kumar Singh","doi":"10.1016/j.nlp.2023.100050","DOIUrl":"https://doi.org/10.1016/j.nlp.2023.100050","url":null,"abstract":"<div><p>The recently launched large language models have the capability to generate text and engage in human-like conversations and question-answering. Owing to their capabilities, these models are now being widely used for a variety of purposes, ranging from question answering to writing scholarly articles. These models are producing such good outputs that it is becoming very difficult to identify what texts are written by human beings and what by these programs. This has also led to different kinds of problems such as out-of-context literature, lack of novelty in articles, and issues of plagiarism and lack of proper attribution and citations to the original texts. Therefore, there is a need for suitable computational resources for developing algorithmic approaches that can identify and discriminate between human and machine generated texts. This work contributes towards this research problem by providing a large sized curated and annotated corpus comprising of 44,162 text articles sourced from Wikipedia and ChatGPT. Some baseline models are also applied on the developed dataset and the results obtained are analyzed and discussed. The curated corpus offers a valuable resource that can be used to advance the research in this important area and thereby contribute to the responsible and ethical integration of AI language models into various fields.</p></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"6 ","pages":"Article 100050"},"PeriodicalIF":0.0,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294971912300047X/pdfft?md5=48afd2554f84aa4af2b6e1f9fb5dbc60&pid=1-s2.0-S294971912300047X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139100584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}