{"title":"A Fine-Grained Network for Joint Multimodal Entity-Relation Extraction","authors":"Li Yuan;Yi Cai;Jingyu Xu;Qing Li;Tao Wang","doi":"10.1109/TKDE.2024.3485107","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3485107","url":null,"abstract":"Joint multimodal entity-relation extraction (JMERE) is a challenging task that involves two joint subtasks, i.e., named entity recognition and relation extraction, from multimodal data such as text sentences with associated images. Previous JMERE methods have primarily employed 1) pipeline models, which apply pre-trained unimodal models separately and ignore the interaction between tasks, or 2) word-pair relation tagging methods, which neglect neighboring word pairs. To address these limitations, we propose a fine-grained network for JMERE. Specifically, we introduce a fine-grained alignment module that utilizes a phrase-patch to establish connections between text phrases and visual objects. This module can learn consistent multimodal representations from multimodal data. Furthermore, we address the task-irrelevant image information issue by proposing a gate fusion module, which mitigates the impact of image noise and ensures a balanced representation between image objects and text representations. Moreover, we design a multi-word decoder that enables ensemble prediction of tags for each word pair. This approach leverages the predicted results of neighboring word pairs, improving the ability to extract multi-word entities. 
Evaluation results from a series of experiments demonstrate the superiority of our proposed model over state-of-the-art models in JMERE.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"1-14"},"PeriodicalIF":8.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142797912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable Semi-Supervised Clustering via Structural Entropy With Different Constraints","authors":"Guangjie Zeng;Hao Peng;Angsheng Li;Jia Wu;Chunyang Liu;Philip S. Yu","doi":"10.1109/TKDE.2024.3486530","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3486530","url":null,"abstract":"Semi-supervised clustering leverages prior information in the form of constraints to achieve higher-quality clustering outcomes. However, most existing methods struggle with large-scale datasets owing to their high time and space complexity. Moreover, they encounter the challenge of seamlessly integrating various constraints, thereby limiting their applicability. In this paper, we present <u>S</u>calable <u>S</u>emi-supervised clustering via <u>S</u>tructural <u>E</u>ntropy (SSSE), a novel method that tackles large-scale datasets with different types of constraints from diverse sources to perform both semi-supervised partitioning and hierarchical clustering, which is fully explainable compared to deep learning-based methods. Specifically, we design objectives based on structural entropy, integrating constraints for semi-supervised partitioning and hierarchical clustering. To achieve scalability on data size, we develop efficient algorithms based on graph sampling to reduce the time and space complexity. To achieve generalization on constraint types, we formulate a uniform view for widely used pairwise and label constraints. Extensive experiments on real-world clustering datasets at different scales demonstrate the superiority of SSSE in clustering accuracy and scalability with different constraints. 
Additionally, cell clustering experiments on single-cell RNA-seq datasets demonstrate the functionality of SSSE for biological data analysis.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"478-492"},"PeriodicalIF":8.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Y-Graph: A Max-Ascent-Angle Graph for Detecting Clusters","authors":"Junyi Guan;Sheng Li;Xiongxiong He;Jiajia Chen;Yangyang Zhao;Yuxuan Zhang","doi":"10.1109/TKDE.2024.3486221","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3486221","url":null,"abstract":"Graph clustering techniques are highly effective in detecting complex-shaped clusters, in which graph building is a crucial step. Nevertheless, building a reasonable graph that can exhibit high connectivity within clusters and low connectivity across clusters is challenging. Herein, we design a max-ascent-angle graph called the “Y-graph”, a highly sparse graph that automatically allocates dense edges within clusters and sparse edges across clusters, regardless of their shapes or dimensionality. In the graph, every point <inline-formula><tex-math>$x$</tex-math></inline-formula> is allowed to connect its nearest higher-density neighbor <inline-formula><tex-math>$\delta$</tex-math></inline-formula>, and another higher-density neighbor <inline-formula><tex-math>$\gamma$</tex-math></inline-formula>, satisfying that the angle <inline-formula><tex-math>$\angle \delta x \gamma$</tex-math></inline-formula> is the largest, called the “max-ascent-angle”. By seeking the max-ascent-angle, points are automatically connected as the Y-graph, which is a reasonable graph that can effectively balance intra-cluster connectivity and inter-cluster non-connectivity. Besides, an edge weight function is designed to capture the similarity of the neighbor probability distribution, which effectively represents the density connectivity between points. By employing the Normalized-Cut (Ncut) technique, a Ncut-Y algorithm is proposed. Benefiting from the excellent performance of the Y-graph, Ncut-Y can quickly seek and cut the edges located in the low-density boundaries between clusters, thereby capturing clusters effectively. 
Experimental results on both synthetic and real datasets demonstrate the effectiveness of Y-graph and Ncut-Y.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"542-556"},"PeriodicalIF":8.9,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PUMA: Efficient Continual Graph Learning for Node Classification With Graph Condensation","authors":"Yilun Liu;Ruihong Qiu;Yanran Tang;Hongzhi Yin;Zi Huang","doi":"10.1109/TKDE.2024.3485691","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3485691","url":null,"abstract":"When handling streaming graphs, existing graph representation learning models encounter a catastrophic forgetting problem, where previously learned knowledge of these models is easily overwritten when learning with newly incoming graphs. In response, Continual Graph Learning (CGL) emerges as a novel paradigm enabling graph representation learning from static to streaming graphs. Our prior work, Condense and Train (CaT) (Liu et al. 2023), is a replay-based CGL framework with a balanced continual learning procedure, which designs a small yet effective memory bank for replaying data by condensing incoming graphs. Although the CaT alleviates the catastrophic forgetting problem, there exist three issues: (1) The graph condensation algorithm derived in CaT only focuses on labelled nodes while neglecting abundant information carried by unlabelled nodes; (2) The continual training scheme of the CaT overemphasises the previously learned knowledge, limiting the model capacity to learn from newly added memories; (3) Both the condensation process and replaying process of the CaT are time-consuming. In this paper, we propose a <b>P</b>s<b>U</b>do-label guided <b>M</b>emory b<b>A</b>nk (PUMA) CGL framework, extending from the CaT to enhance its efficiency and effectiveness by overcoming the above-mentioned weaknesses and limitations. To fully exploit the information in a graph, PUMA expands the coverage of nodes during graph condensation with both labelled and unlabelled nodes. Furthermore, a training-from-scratch strategy is proposed to upgrade the previous continual learning scheme for a balanced training between the historical and the new graphs. 
Besides, PUMA uses a one-time propagation and wide graph encoders to accelerate the graph condensation and the graph encoding process in the training stage to improve the efficiency of the whole framework. Extensive experiments on seven datasets for the node classification task demonstrate the state-of-the-art performance and efficiency over existing methods.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"449-461"},"PeriodicalIF":8.9,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Domain Adversarial Active Learning for Domain Generalization Classification","authors":"Jianting Chen;Ling Ding;Yunxiao Yang;Zaiyuan Di;Yang Xiang","doi":"10.1109/TKDE.2024.3486204","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3486204","url":null,"abstract":"Domain generalization (DG) tasks aim to learn cross-domain models from source domains and apply them to unknown target domains. Recent research has demonstrated that diverse and rich source domain samples can enhance domain generalization capability. This work argues that the impact of each sample on the model's generalization ability varies. Even a small-scale but high-quality dataset can achieve a notable level of generalization. Motivated by this, we propose a domain-adversarial active learning (DAAL) algorithm for classification tasks in DG. First, we analyze that the objective of DG tasks is to maximize the inter-class distance within the same domain and minimize the intra-class distance across different domains. We design a domain adversarial selection method that prioritizes challenging samples in an active learning (AL) framework. Second, we hypothesize that even in a converged model, some feature subsets lack discriminatory power within each domain. We develop a method to identify and optimize these feature subsets, thereby maximizing inter-class distance of features. Lastly, we experimentally compare our DAAL algorithm with various DG and AL algorithms across four datasets. 
The results demonstrate that the DAAL algorithm can achieve strong generalization ability with fewer data resources, thereby significantly reducing data annotation costs in DG tasks.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"226-238"},"PeriodicalIF":8.9,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142798045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hierarchy-Aware Adaptive Graph Neural Network","authors":"Dengsheng Wu;Huidong Wu;Jianping Li","doi":"10.1109/TKDE.2024.3485736","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3485736","url":null,"abstract":"Graph Neural Networks (GNNs) have gained attention for their ability to capture node interactions to generate node representations. However, their performance is frequently restricted in real-world directed networks with natural hierarchical structures. Most current GNNs incorporate information from immediate neighbors or within predefined receptive fields, potentially overlooking long-range dependencies inherent in hierarchical structures. They also tend to neglect node adaptability, which varies based on their positions. To address these limitations, we propose a new model called Hierarchy-Aware Adaptive Graph Neural Network (HAGNN) to adaptively capture hierarchical long-range dependencies. Technically, HAGNN creates a hierarchical structure based on directional pair-wise node interactions, revealing underlying hierarchical relationships among nodes. The inferred hierarchy helps to identify certain key nodes, named Source Hubs in our research, which serve as hierarchical contexts for individual nodes. Shortcuts adaptively connect these Source Hubs with distant nodes, enabling efficient message passing for informative long-range interactions. Through comprehensive experiments across multiple datasets, our proposed model outperforms several baseline methods, thus establishing a new state-of-the-art in performance. 
Further analysis demonstrates the effectiveness of our approach in capturing relevant adaptive hierarchical contexts, leading to improved and explainable node representations.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"365-378"},"PeriodicalIF":8.9,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Key Point Based MLCS Algorithm for Big Sequences Mining","authors":"Yanni Li;Bing Liu;Tihua Duan;Zhi Wang;Hui Li;Jiangtao Cui","doi":"10.1109/TKDE.2024.3485234","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3485234","url":null,"abstract":"Mining multiple longest common subsequences (<i>MLCS</i>) from a set of three or more sequences over a finite alphabet (a classical NP-hard problem) is an important task in many fields, e.g., bioinformatics, computational genomics, pattern recognition, information extraction, etc. Applications in these fields often involve generating very long sequences (length <inline-formula><tex-math>$\geqslant$</tex-math></inline-formula> 10,000), referred to as big sequences. Despite efforts in improving the time and space complexities of <i>MLCS</i> mining algorithms, both existing exact and approximate algorithms face challenges in handling big sequences due to the overwhelming size of their problem-solving graph model <i>MLCS-DAG</i> (<u>D</u>irected <u>A</u>cyclic <u>G</u>raph), leading to the issue of memory explosion or extremely high time complexity. To bridge the gap, this paper first proposes a new identification and deletion strategy for different classes of non-critical points in the mining of <i>MLCS</i>, which are the points that do not contribute to their <i>MLCS</i>s mining in the <i>MLCS-DAG</i>. It then proposes a new <i>MLCS</i> problem-solving graph model, namely <inline-formula><tex-math>$DAG_{KP}$</tex-math></inline-formula> (a new <i>MLCS-<u>DAG</u></i> containing only <u>K</u>ey <u>P</u>oints). 
A novel parallel <i>MLCS</i> algorithm, called <i>KP-MLCS</i> (<u>K</u>ey <u>P</u>oint based <i><u>MLCS</u></i>), is also presented, which can mine and compress all <i>MLCS</i>s of big sequences effectively and efficiently. Extensive experiments on both synthetic and real-world biological sequences show that the proposed algorithm <i>KP-MLCS</i> drastically outperforms the existing state-of-the-art <i>MLCS</i> algorithms in terms of both efficiency and effectiveness.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"15-28"},"PeriodicalIF":8.9,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142797995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Road Network Index Structure for Efficient Map Matching","authors":"Zhidan Liu;Yingqian Zhou;Xiaosi Liu;Haodi Zhang;Yabo Dong;Dongming Lu;Kaishun Wu","doi":"10.1109/TKDE.2024.3485195","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3485195","url":null,"abstract":"Map matching aims to align GPS trajectories to their actual travel routes on a road network, which is an essential pre-processing task for most trajectory-based applications. Many map matching approaches utilize Hidden Markov Model (HMM) as their backbones. Typically, HMM treats GPS samples of a trajectory as observations and nearby road segments as hidden states. During map matching, HMM determines candidate states for each observation with a fixed searching range, and computes the most likely travel route using the <i>Viterbi</i> algorithm. Although HMM-based approaches can derive high matching accuracy, they still suffer from high computation overheads. By inspecting the HMM process, we find that the computation bottleneck mainly comes from improper candidate sets, which contain many irrelevant candidates and incur unnecessary computations. In this paper, we present <inline-formula><tex-math>$\mathtt{LiMM}$</tex-math></inline-formula> – a learned road network index structure for efficient map matching. <inline-formula><tex-math>$\mathtt{LiMM}$</tex-math></inline-formula> improves existing HMM-based approaches from two aspects. First, we propose a novel learned index for road networks, which considers the characteristics of road data. Second, we devise an adaptive searching range mechanism to dynamically adjust the searching range for GPS samples based on their locations. As a result, <inline-formula><tex-math>$\mathtt{LiMM}$</tex-math></inline-formula> can provide refined candidate sets for GPS samples and thus accelerate the map matching process. Extensive experiments are conducted with three large real-world GPS trajectory datasets. 
The results demonstrate that <inline-formula><tex-math>$\mathtt{LiMM}$</tex-math></inline-formula> significantly reduces computation overheads, achieving an average speedup of <inline-formula><tex-math>$11.7\times$</tex-math></inline-formula> over baseline methods with only a marginal accuracy loss of 1.8%.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"423-437"},"PeriodicalIF":8.9,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Task-Aware Data Selectivity in Pervasive Edge Computing Environments","authors":"Athanasios Koukosias;Christos Anagnostopoulos;Kostas Kolomvatsos","doi":"10.1109/TKDE.2024.3485531","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3485531","url":null,"abstract":"Context-aware data selectivity in Edge Computing (EC) requires nodes to efficiently manage the data collected from Internet of Things (IoT) devices, e.g., sensors, for supporting real-time and data-driven pervasive analytics. Data selectivity at the network edge copes with the challenge of deciding which data should be kept at the edge for future analytics tasks under limited computational and storage resources. Our challenge is to efficiently learn the access patterns of data-driven tasks (analytics) and predict which data are <i>relevant</i> and should thus be stored in nodes’ local datasets. Task patterns directly indicate which data need to be accessed and processed to support end-users’ applications. We introduce a task workload-aware mechanism which adopts one-class classification to learn and predict the relevant data requested by past tasks. The inherent uncertainty in learning task patterns, identifying inliers and eliminating outliers is handled by introducing a lightweight fuzzy inference estimator that dynamically adapts nodes’ local data filters, ensuring accurate data relevance prediction. 
We analytically describe our mechanism and comprehensively evaluate and compare against baselines and approaches found in the literature, showcasing its applicability in pervasive EC.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"513-525"},"PeriodicalIF":8.9,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PURE: Personality-Coupled Multi-Task Learning Framework for Aspect-Based Multimodal Sentiment Analysis","authors":"Puning Zhang;Miao Fu;Rongjian Zhao;Hongbin Zhang;Changchun Luo","doi":"10.1109/TKDE.2024.3485108","DOIUrl":"https://doi.org/10.1109/TKDE.2024.3485108","url":null,"abstract":"Aspect-Based Multimodal Sentiment Analysis (ABMSA) aims to infer the users’ sentiment polarities over individual aspects using visual, textual, and acoustic signals. Although psychological studies have shown that personality has a direct impact on people's sentiment orientations, most existing methods disregard latent personality traits while executing ABMSA tasks. To tackle this issue, a novel psychological perspective, people's personalities, is introduced. To the best of our knowledge, this paper is the very first study in this field. Different from current pipelined multi-task sentiment analysis methods, an end-to-end ABMSA method called Personality-coupled mUlti-task leaRning framEwork (PURE) is proposed, which strongly couples personality mining and ABMSA tasks in a unified architecture to avoid error propagation and enhance the overall system robustness. Specifically, an adaptive personality feature extraction method is designed to accurately model the first impression of different people's personalities. Then, a multi-task ABMSA framework is designed to strongly couple the multimodal features of aspects extracted by the interactive attention fusion network with people's personalities. Subsequently, the proposed framework optimizes them in parallel via extended Bayesian meta-learning. Finally, compared to the current optimal model, the classification accuracy and macro F1 score of the proposed model have both shown significant improvements on public datasets. 
In addition, PURE is transferable and can effectively couple personality modeling tasks with any other sentiment analysis methods.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 1","pages":"462-477"},"PeriodicalIF":8.9,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142810358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}