Information Fusion, Vol. 126, Article 103580. Pub Date: 2025-08-10. DOI: 10.1016/j.inffus.2025.103580
Underwater image restoration via transmission and Wasserstein distance constraints
Bian Gao, Xiangchu Feng, Kun Wang, Hui Zhu

Due to the absorption, refraction, and scattering of light at different wavelengths in the underwater environment, underwater images often suffer from low contrast, blurred textures, and severe color distortions. These degradations are further exacerbated by environmental factors such as water depth, salinity, and the presence of suspended particles. However, widely used real-world underwater datasets, such as UIQS and UCCS, typically lack detailed metadata, including water depth, which limits the ability to explicitly model these variables in algorithm design.

To address the challenge of visual degradation under such unconstrained conditions, we propose a joint optimization model that combines explicit and implicit regularization strategies. Specifically, we introduce a Wasserstein distance constraint to align the histogram distribution of the restored image with that of natural scenes, thereby mitigating color cast. Additionally, transmission estimation guided by the maximum attenuation prior is incorporated to regulate the transmission map. Beyond these explicit terms, we further introduce three implicit regularization components to capture richer image priors. The entire model is optimized using an alternating iterative scheme. Extensive experiments on multiple underwater datasets lacking depth annotations demonstrate that our method significantly improves image quality and exhibits strong robustness across diverse underwater conditions.
Information Fusion, Vol. 126, Article 103630. Pub Date: 2025-08-10. DOI: 10.1016/j.inffus.2025.103630
Vector-quantized dual-branch fusion network for robust image fusion and anomaly suppression
Siyang Huang, Shaojing Su, Junyu Wei, Liushun Hu, Zhangjunjie Cheng

Multimodal image fusion (MMIF) plays a crucial role in image information processing, yet handling anomalies robustly remains a persistent challenge. This paper proposes a novel vector-quantization-based dual-branch autoencoder fusion algorithm to overcome these limitations. First, we establish two VQ codebooks for global and local features, which are learned and disentangled through two network branches. Subsequently, we employ an attention-based network to learn the global features, enhancing it with a novel hybrid rotary position embedding (HRPE) module. Within the CNN branch, the Convolutional Block Attention Module (CBAM) is employed to capture detailed features. Finally, the fused image is formed by a decoder operating on the VQ features. Extensive experiments and quantitative metrics across three benchmark datasets (OGSOD, GF-cloud, MSRS) indicate that our method outperforms state-of-the-art methods, particularly excelling in anomaly suppression and structural fidelity preservation. Overall, the proposed framework offers a robust solution for all-weather multimodal image fusion tasks, with immediate applications in agricultural image analysis, surveillance imaging, and disaster response systems requiring reliable multimodal information integration.
Information Fusion, Vol. 126, Article 103603. Pub Date: 2025-08-10. DOI: 10.1016/j.inffus.2025.103603
Deep cut-informed graph embedding and clustering
Zhiyuan Ning, Zaitian Wang, Ran Zhang, Ping Xu, Kunpeng Liu, Pengyang Wang, Wei Ju, Pengfei Wang, Yuanchun Zhou, Erik Cambria, Chong Chen

Graph clustering aims to divide a graph into different clusters. Recently emerging deep graph clustering approaches are largely built on graph neural networks (GNNs). However, GNNs are designed for general graph encoding, and existing GNN-based deep graph clustering algorithms share a common problem of representation collapse. We attribute such issues to two main causes: (i) the inductive bias of GNN models: GNNs tend to generate similar representations for proximal nodes; since graphs often contain a non-negligible number of inter-cluster links, this bias results in erroneous message passing and leads to biased clustering; (ii) the clustering-guided loss function: most traditional approaches strive to make all samples closer to pre-learned cluster centers, which causes a degenerate solution that assigns all data points to a single label, making all samples similar and less discriminative. To address these challenges, we investigate graph clustering from a graph-cut perspective and propose an innovative, non-GNN-based Deep Cut-informed Graph embedding and Clustering framework, namely DCGC. This framework includes two modules: (i) cut-informed graph encoding; (ii) self-supervised graph clustering via optimal transport. For the encoding module, we derive a cut-informed graph embedding objective to fuse graph structure and attributes by minimizing their joint normalized cut. For the clustering module, we utilize optimal transport theory to obtain the clustering assignments, which can balance the guidance of "proximity to the pre-learned cluster center". With these two tailored designs, DCGC is better suited to the graph clustering task, effectively alleviating representation collapse and achieving better performance. We conduct extensive experiments to demonstrate that our method is simple but effective compared with benchmarks.
Information Fusion, Vol. 126, Article 103604. Pub Date: 2025-08-10. DOI: 10.1016/j.inffus.2025.103604
COST: Contrastive one-stage transformer for vision-language small object tracking
Chunhui Zhang, Li Liu, Jialin Gao, Xin Sun, Hao Wen, Xi Zhou, Shiming Ge, Yanfeng Wang

Transformer architectures have recently demonstrated great potential for improving vision-language (VL) tracking algorithms. However, most existing VL trackers rely on carefully designed mechanisms to perform multi-stage multi-modal fusion. Additionally, direct multi-modal fusion without alignment ignores the distribution discrepancy between modalities in feature space, potentially leading to suboptimal representations. In this work, we propose COST, a contrastive one-stage transformer fusion framework for VL tracking, aiming to learn semantically consistent and unified VL representations. Specifically, we introduce a contrastive alignment strategy that maximizes mutual information (MI) between a video and its corresponding language description. This enables effective cross-modal alignment, yielding semantically consistent features in the representation space. By leveraging a visual-linguistic transformer, we establish an efficient multi-modal fusion and reasoning mechanism, empirically demonstrating that a simple stack of transformer encoders effectively enables unified VL representations. Moreover, we contribute a newly collected VL tracking benchmark for small object tracking, named VL-SOT500, with bounding boxes and language descriptions. Our dataset comprises two challenging subsets, VL-SOT230 and VL-SOT270, dedicated to evaluating generic and high-speed small object tracking, respectively. Small object tracking is notoriously challenging due to weak appearance and limited features, and this dataset is, to the best of our knowledge, the first to explore the use of language cues to enhance visual representations for small object tracking. Extensive experiments demonstrate that COST achieves state-of-the-art performance on five existing VL tracking datasets as well as on our proposed VL-SOT500 dataset. Source codes and the dataset will be made publicly available.
{"title":"A comprehensive survey on image fusion: Which approach fits which need","authors":"Gwendal Bernardi , Godefroy Brisebarre , Sébastien Roman , Mohsen Ardabilian , Emmanuel Dellandrea","doi":"10.1016/j.inffus.2025.103594","DOIUrl":"10.1016/j.inffus.2025.103594","url":null,"abstract":"<div><div>Image fusion is a fundamental task in computer vision that involves combining information from multiple images to produce a more informative and consistent representation. Once the relevant features are identified, they are fused to achieve specific application goals. The field of image fusion encompasses several categories, including multi-focus, multi-exposure, multi-modal, and multi-view fusion. Most state-of-the-art solutions focus on optimizing methods to address a specific fusion category (e.g., multi-view, multi-modal, multi-exposure, or multi-focus). However, some use cases require universal methods capable of handling all these challenges. While recent advancements, particularly in deep learning, have achieved remarkable results within individual categories, the growing need for general-purpose solutions across diverse fusion tasks calls for a broader perspective. This survey provides a comprehensive and unified review of image fusion techniques, systematically covering all four major categories. Special attention is given to deep learning-based methods, which have become dominant in recent years across various fusion types. A key contribution of this work is the integration of multi-view image fusion, often overlooked in prior surveys, with other fusion approaches. We introduce a novel taxonomy that distinguishes between mono-category methods, which target a single fusion domain, and multi-category methods, which are capable of addressing multiple fusion types. Particular emphasis is placed on generalist multi-category approaches, which handle cross-domain scenarios and represent a promising line of research. Additionally, this survey provides practical guidance through summaries of available datasets, evaluation metrics, and representative methods. By offering this structured overview and highlighting unexplored directions, the survey serves as both a foundational reference and a roadmap for future research on unified and adaptive deep learning-based image fusion techniques.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"126 ","pages":"Article 103594"},"PeriodicalIF":15.5,"publicationDate":"2025-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144827923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Fusion, Vol. 126, Article 103602. Pub Date: 2025-08-09. DOI: 10.1016/j.inffus.2025.103602
Explaining data changes with prototypes: A measure-driven approach
Jacek Karolczak, Jerzy Stefanowski

Prototype explanations of machine learning models have so far been considered only for static data, while their use for concept-drifting data remains underexplored. In this work, this challenge is addressed with an algorithm that explains the predictions of a Random Forest ensemble classifier using a limited number of prototypes. This also involves new measures to evaluate prototypes in static and evolving settings, enabling comparison of prototype sets before and after a data change and the construction of new drift detectors. The proposals are evaluated through extensive experiments. In the first experiments with synthetic datasets, the new measures (mean minimal distance, mean centroid displacement, and prototype reassignment impact) proved effective when evaluated on a set of diverse synthetic data generators and real-world data streams. Then, for incremental learning, the RACE-P algorithm is introduced, leveraging prototypes for interpretable drift detection. Experiments demonstrate competitive performance against established detection methods such as ADWIN and Page–Hinkley. Additionally, the use of prototypes to analyse and explain detected drifts is discussed, underscoring their potential to enhance understanding of data evolution.
Information Fusion, Vol. 126, Article 103616. Pub Date: 2025-08-08. DOI: 10.1016/j.inffus.2025.103616
An adaptive fused domain-cycling variational generative adversarial network for machine fault diagnosis under data scarcity
Xin Wang, Hongkai Jiang, Tao Zeng, Yutong Dong

Data synthesis is reshaping the way artificial intelligence tackles machine fault diagnosis under the data scarcity common in practical industry. Effectively fusing highly realistic synthetic data with multi-source, scarce real-world data to enhance model performance is a critical and pressing need. Existing data synthesis methods are limited by insufficient exploitation of rich multi-domain information and by the difficulty of achieving high-quality fusion between synthetic and real data, which ultimately constrains diagnostic performance. Therefore, an adaptive fused domain-cycling variational generative adversarial network (AFDVGAN) is proposed. First, a smoothness-regularized variational framework is constructed to stabilize the latent-space representation, enhancing the structural consistency of synthetic data and improving training stability. Second, a ratio-controlled domain-cycling mechanism is established to dynamically coordinate feature transfer across the spatial, time-frequency, and frequency domains, thereby strengthening multi-domain feature modeling and improving data synthesis quality. Finally, a multi-metric guided adaptive data fusion strategy is designed to achieve high-quality fusion of synthetic and real data based on statistical and time-frequency metrics, providing robust data support that improves the decision-making accuracy of the diagnostic model. In case studies involving electric locomotive bearings and high-speed aerospace bearings, comparisons with typical and state-of-the-art methods show that AFDVGAN generates higher-quality synthetic data. After data fusion, the diagnostic accuracies reach 99.81% for the locomotive case and 99.16% for the aerospace bearing case. These results verify the effectiveness and advantages of AFDVGAN for fault diagnosis in engineering scenarios under data scarcity.
Information Fusion, Vol. 126, Article 103584. Pub Date: 2025-08-08. DOI: 10.1016/j.inffus.2025.103584
Variable-length time series classification: Benchmarking, analysis and effective spectral pooling strategy
Shiling Wu, Siyang Song, Songhe Deng, Weicheng Xie, Linlin Shen

Real-world time series classification (TSC) is challenging because time series collected under real-world conditions usually vary in length, which makes it difficult for standard deep learning (DL) models to process them directly (i.e., as batches of variable-length time series (VTS)). Although many pre-processing and pooling-based methods exist for length normalization of VTS, there has been no comprehensive and fair comparison of these methods under a uniform benchmark (e.g., standard backbones, datasets, and evaluation strategies). To address this gap, we conduct the first comprehensive benchmark for variable-length time series classification, evaluating the effectiveness of 22 previously widely used length normalization methods across 14 publicly available VTS datasets and 8 backbones. Since these existing methods cause varying degrees of information loss and distortion of the input VTS, we also propose a novel spectral pooling (SP) layer for VTS classification, a plugin layer that can be inserted at any location within various DL models. Our SP allows DL models to process VTS or their variable-length representations end-to-end within mini-batches, without distortion or significant information loss. Experimental results demonstrate that end-to-end length normalization methods generally outperform pre-processing-based methods for VTS classification, and that our SP achieves state-of-the-art performance across eight backbones, surpassing all 22 existing methods. Our code is publicly available at https://github.com/CVI-SZU/VTS_benchmark.
{"title":"Fusion of deep feature and apparent feature for flotation grade prediction based on apparent information guidance encoder–decoder network","authors":"Yuming Wu , Yongfang Xie , Shiwen Xie , Zongze Wu , Zhaohui Tang","doi":"10.1016/j.inffus.2025.103496","DOIUrl":"10.1016/j.inffus.2025.103496","url":null,"abstract":"<div><div>Grade prediction is a critical component of the froth flotation production process. Recent grade prediction methods are typically based on deep features or apparent features. Deep feature based approaches achieve superior prediction accuracy, but the non-interpretability of these features hinders parameter tuning when process conditions change. Apparent features offer a more intuitive representation of froth grade, but the sparsity of apparent features limits the predictive capability in grade prediction. Hence, to improve the predictive performance and adaptability of the model under fluctuating conditions, it is essential to integrate apparent feature information into the deep representations. In this study, the feature fusion model for grade prediction based on apparent information guidance encoder–decoder network is proposed. Firstly, to integrate apparent features and deep features, a feature fusion module for merging apparent features and deep features is designed. Additionally, to guide the extraction rules of deep features and enhance the interpretability of deep features, an apparent information guidance module based on the variational autoencoder (VAE) is proposed. Moreover, to verify that the latent variables of the guidance module can represent the apparent characteristics of the input froth image, a froth image reconstruction module based on transposed convolutional layer is organized to generate corresponding froth images according to apparent features. Ablation experiments validated the effectiveness of the proposed method, and comparative experiments with other mainstream deep models demonstrated the superior grade prediction performance of our proposed method on the flotation dataset.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"126 ","pages":"Article 103496"},"PeriodicalIF":15.5,"publicationDate":"2025-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144851908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Fusion, Vol. 126, Article 103587. Pub Date: 2025-08-07. DOI: 10.1016/j.inffus.2025.103587
Integrating knowledge from knowledge graphs and large language models for explainable entity alignment
Linyao Yang, Hongyang Chen, Xiao Wang, Jing Yang, Fei-Yue Wang, Han Liu

Entity alignment, a critical task in integrating knowledge from multiple knowledge graphs (KGs), aims to identify equivalent entities across different KGs. Traditional approaches predominantly rely on knowledge embedding models to generate entity representations and compute similarity scores for alignment. However, these methods often lack interpretability, rendering their predictions opaque to end users. Recently, large language models (LLMs) have demonstrated strong semantic reasoning capabilities and have been applied to various KG-related tasks, including entity alignment. Despite this progress, existing methods still suffer from three key limitations: inaccurate retrieval of candidate entities, noisy prompt construction, and weak interaction between the retrieval module and the LLM. To address these challenges, we propose EARAG (entity alignment-oriented retrieval-augmented generation), a novel framework that effectively integrates structured knowledge from KGs with the semantic reasoning power of LLMs. EARAG first employs a convolutional neural network (CNN)-based retriever that jointly models multiple similarity metrics and captures relative ranking information to retrieve high-quality candidate entities. It then constructs carefully designed prompts that guide the LLM to not only determine entity equivalence but also generate human-understandable explanations. Extensive experiments on benchmark datasets demonstrate that EARAG achieves state-of-the-art alignment accuracy while offering superior interpretability. These results highlight the potential of retrieval-augmented LLMs as transparent and effective solutions for real-world entity alignment tasks. Code and datasets are publicly available at: https://github.com/linyaoyang/EARAG.