Bridging imaging and genomics: Domain knowledge guided spatial transcriptomics analysis
Wei Zhang, Xinci Liu, Tong Chen, Wenxin Xu, Collin Sakal, Ximing Nie, Long Wang, Xinyue Li
Information Fusion, Volume 127, Article 103746 | Pub Date: 2025-09-16 | DOI: 10.1016/j.inffus.2025.103746
Abstract: Spatial Transcriptomics (ST) provides spatially resolved gene expression distributions mapped onto high-resolution Whole Slide Images (WSIs), revealing the association between cellular morphology and gene expression profiles. However, the high costs and equipment constraints associated with ST data collection have led to a scarcity of ST datasets. Moreover, existing ST datasets often exhibit sparse gene expression distributions, which limit the accuracy and generalizability of gene expression prediction models derived from WSIs. To address these challenges, we propose DomainST (Domain knowledge-guided Spatial Transcriptomics analysis), a novel framework that leverages domain knowledge through Large Language Models (LLMs) to extract effective gene representations and utilizes foundation models to obtain robust image features for enhanced spatial gene expression prediction. Specifically, we utilize public gene reference databases to retrieve comprehensive gene summaries and employ LLMs to refine gene descriptions and generate informative gene embeddings. Concurrently, we apply medical visual-language foundation models to distill robust image representations at multiple scales, capturing the spatial context of WSIs. We further design a multimodal mixture-of-experts fusion module to effectively integrate multimodal data, leveraging complementary information across modalities. Extensive experiments conducted on three public ST datasets indicate that our method consistently outperforms state-of-the-art (SOTA) methods, with PCC@50 gains over the SOTA ranging from 6.7% to 13.7% across all datasets, demonstrating the effectiveness of combining foundation models and LLM-derived domain knowledge for gene expression prediction. Our code and gene features are available at https://github.com/coffeeNtv/DomainST.
FedVCPL-Diff: A federated convolutional prototype learning framework with a diffusion model for speech emotion recognition
Ruobing Li, Yifan Feng, Lin Shen, Liuxian Ma, Haojie Zhang, Kun Qian, Bin Hu, Yoshiharu Yamamoto, Björn W. Schuller
Information Fusion, Volume 127, Article 103745 | Pub Date: 2025-09-16 | DOI: 10.1016/j.inffus.2025.103745
Abstract: Speech Emotion Recognition (SER), a key emotion analysis technology, has shown significant value in various research areas. Previous SER models have achieved good emotion recognition accuracy, but typical centralised training requires pooling speech data in one place, which carries a serious risk of privacy leakage. Federated learning (FL) can avoid centralised data processing through distributed learning, providing a solution for privacy protection in SER. However, FL faces several challenges in practical applications, including imbalanced data distributions and inconsistent labelling. Furthermore, typical FL frameworks focus on client-side enhancement and ignore optimisation of the server-side aggregation strategy, which can increase the computational load on the client side. To address these problems, we propose a novel approach, FedVCPL-Diff. First, regarding information fusion, we introduce a diffusion model on the server side to generate Valence-Arousal-Dominance (VAD) emotion space features, which replaces the typical aggregation framework and effectively promotes global information fusion. In addition, in terms of information exchange, we propose a lightweight and personalised FL transmission framework based on the exchange of VAD features. FedVCPL-Diff optimises the local model by updating the data distribution anchors, which not only avoids the privacy risk but also reduces the communication cost. Experimental results show that the framework significantly improves emotion recognition performance compared to four commonly used FL frameworks. The overall performance of our framework also shows a significant advantage over locally independent models.
Embracing knowledge integration from the vision-language model for federated domain generalization on multi-source fused data
Zhenyu Liu, Heye Zhang, Yiwen Wang, Zhifan Gao
Information Fusion, Volume 127, Article 103714 | Pub Date: 2025-09-16 | DOI: 10.1016/j.inffus.2025.103714
Abstract: Federated Domain Generalization (FedDG) has attracted attention for its potential to enable privacy-preserving fusion of multi-source data. It aims to develop, in a distributed manner, a global model that generalizes to unseen clients. However, it faces the challenge of trading off inter-client and intra-client domain shifts. Knowledge distillation from a vision-language model may address this challenge by transferring its zero-shot generalization ability to client models, but it may suffer from distribution discrepancies between the vision-language model's pretraining data and the downstream data. Although pre-distillation fine-tuning may alleviate this issue in centralized settings, it may not be compatible with FedDG. In this paper, we introduce an in-distillation selective adaptation framework for FedDG. It selectively fine-tunes unreliable outputs while directly distilling reliable ones from the vision-language model, effectively using knowledge distillation to address the challenge in FedDG. Furthermore, we propose a federated energy-driven reliability appraisal (FedReap) method to support this framework by appraising the reliability of outputs from the vision-language model. It comprises hypersphere-constraint energy construction and label-guided energy partition; these two processes enable FedReap to separate reliable outputs for direct distillation from unreliable ones for adaptation. In addition, FedReap employs a dual-level distillation strategy and a dual-stage adaptation strategy. Extensive experiments on five datasets demonstrate the effectiveness of FedReap compared to twelve state-of-the-art methods.
{"title":"DeepFake detection in the AIGC era: A survey, benchmarks, and future perspectives","authors":"Shichuang Xie , Tong Qiao , Sheng Li , Xinpeng Zhang , Jiantao Zhou , Guorui Feng","doi":"10.1016/j.inffus.2025.103740","DOIUrl":"10.1016/j.inffus.2025.103740","url":null,"abstract":"<div><div>In recent years, DeepFake has further developed, driven by continuous advances in data, computing power, and deep generative models. This emerging digital media forgery technique can manipulate or generate fake face content, increasingly blurring the boundaries between real and fake media. With the growing misuse of DeepFake, the associated risks are also intensifying. Although some research on DeepFake detection has been conducted, the research on detection is obviously falling behind DeepFake generation, and there is a lack of comprehensive and up-to-date surveys on DeepFake detection. Therefore, to effectively counter the proliferation of DeepFake face and promote the evolution of DeepFake detection, we conduct comprehensive survey and analysis. Specifically, (1) we analyze the key factors driving the proliferation of DeepFake, and we review the four representative types of DeepFake face and introduce a novel cross-modal face manipulation based on foundation models; (2) we reorganize DeepFake detection methods and establish a detection evaluation benchmark, emphasizing the potential of emerging detectors; (3) we focus on the current challenges of DeepFake forensic research and the corresponding development trends, and provide future perspectives, aiming to provide new insights for DeepFake forensic research in the AIGC era.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103740"},"PeriodicalIF":15.5,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145221680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verifying energy generation via edge LLM for web3-based decentralized clean energy networks
Shan Jiang, Wenchang Chai, Mingjin Zhang, Jiannong Cao, Shichang Xuan, Jiaxing Shen
Information Fusion, Volume 127, Article 103752 | Pub Date: 2025-09-16 | DOI: 10.1016/j.inffus.2025.103752
Abstract: The global transition to clean energy is critical to achieving climate goals, yet traditional centralized systems face challenges in flexibility, grid resilience, and equitable access. While decentralized web3-based energy networks offer promising alternatives, existing solutions lack robust architectures to integrate distributed generation with real-time demand and fail to provide trustworthy energy verification mechanisms. This work introduces DeCEN, a decentralized clean energy network that synergizes collaborative edge computing and web3 technologies to address these gaps. DeCEN leverages autonomous edge devices to collect and process sensory data from renewable generators, enabling localized decision-making and verification of energy production. A layer-2 blockchain solution establishes a transparent web3 ecosystem, connecting clean energy generators and consumers through tokenized incentives for green energy activities. To combat fraud, DeCEN incorporates a novel large language model (LLM)-based energy verification protocol that analyzes sensory data to validate renewable claims, ensuring accountability and stabilizing token value. Additionally, a distributed LLM inference algorithm partitions LLMs into shards deployable on resource-constrained edge devices, enabling decentralized, low-latency processing while preserving data privacy and minimizing communication overhead. By integrating edge computing, blockchain, and AI-driven verification, DeCEN improves the reliability, trust, and efficiency of decentralized clean energy networks, offering a scalable pathway toward global renewable energy targets.
{"title":"Brain tumor segmentation via cross-modality semi-supervised transfer learning with 3D MRI diffusion model synthetic ultrasound","authors":"Yuhua Li , Shan Jiang, Zhiyong Yang, Liwen Wang, Shuangying Wang, Zeyang Zhou","doi":"10.1016/j.inffus.2025.103757","DOIUrl":"10.1016/j.inffus.2025.103757","url":null,"abstract":"<div><div>Accurate ultrasound segmentation is crucial for intraoperative brain navigation and can improve non-rigid registration between preoperative MRI and intraoperative ultrasound, compensating for brain shift. However, limited annotated ultrasound data hinder the application of deep learning methods. Given recent advances in brain MRI-based medical image processing, transferring MRI datasets and deep learning models to US image research via cross-modal translation may potentially enhance intelligent brain US image processing. In this paper, we propose a novel cross-modality semi-supervised transfer learning from MRI to US by leveraging annotated data in the MRI modality. A diffusion model, leveraging conditional texture features and guided mutual information, transforms well-annotated MRI images into synthetic US images with a distribution closer to real US images. Subsequently, we employ a segmentation framework that involves pretraining with synthetic US images derived from MRI through image translation, followed by semi-supervised fine-tuning using a hybrid dataset that integrates both labeled and unlabeled ultrasound data. Extensive assessments are reported on the utility of SL-DDPM against competing GAN and diffusion models in MRI-US translation. The experimental results demonstrate that our proposed transfer learning strategy achieves a segmentation accuracy of DSC of 93.43 ± 3.72 %. The effectiveness of our strategy is validated through ablation studies on fine-tuning strategies and semi-supervised learning, as well as comparisons with other state-of-the-art methods. Our transfer learning strategy enhances the accuracy and generalization of brain ultrasound segmentation models, even with limited hybrid training data, thereby assisting surgeons in identifying lesion.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103757"},"PeriodicalIF":15.5,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145107549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task-oriented multi-scale dynamic feature fusion for robust conveyor belt monitoring
Huaping Zhou, Tao Wu, Kelei Sun, Jin Wu, Bin Deng
Information Fusion, Volume 127, Article 103722 | Pub Date: 2025-09-15 | DOI: 10.1016/j.inffus.2025.103722
Abstract: Existing conveyor belt monitoring methods suffer from unreasonable multi-task feature allocation and limited boundary feature extraction capability. To address these issues, this study develops a novel information fusion framework integrating Mask R-CNN-based detection and segmentation for conveyor belt status monitoring. First, we propose the Multi-Scale Dynamic Feature Fusion (MS-DFF) module, which uses a multi-stage parallel multi-scale convolution network and a dynamic weight adjustment mechanism to flexibly fuse and optimize multi-scale features. Second, we propose the Task-Oriented Module (TOM), which optimizes task adaptability between the detection and segmentation branches, combining frequency-domain and spatial-domain features to meet multi-task requirements. Third, we design a Laplacian convolution fixed-weight structure to enhance target boundary information, yielding the new Boundary Enhanced (BE) segmentation head. Finally, we design the Dynamic Weighted Hybrid Loss (DWH Loss), combining Dice loss, Focal loss, and BCE loss; it dynamically adjusts weights to balance multi-task optimization, further improving segmentation boundary clarity and overall performance. We conduct extensive experiments on the conveyor belt monitoring dataset and the COCO dataset. On the conveyor belt dataset, the AP50 for the detection task reaches 98.4%, and the AP50 for the segmentation task reaches 73.5%. These results outperform most state-of-the-art methods.
Focus and learn: Boosting deep multi-view clustering via hard instance awareness
Wenlong Liu, Jiaohua Qin
Information Fusion, Volume 127, Article 103724 | Pub Date: 2025-09-15 | DOI: 10.1016/j.inffus.2025.103724
Abstract: Deep contrastive multi-view clustering aims to use contrastive mechanisms to exploit the complementary information from multiple features, and has attracted increasing attention in recent years. However, we observe that most contrastive multi-view clustering methods neglect the false sample pairs caused by hard samples during the construction of contrastive sample pairs, namely negative pairs that exhibit high similarity and positive pairs that exhibit low similarity. To address these problems, we propose a novel deep contrastive multi-view clustering network for hard sample mining, termed MVC-HSM. Specifically, we propose a strategy that incorporates both coarse-grained and fine-grained perspectives. At the coarse-grained level, we perform contrastive learning using prototypes from each view, thereby mitigating hard samples at the sample level. At the fine-grained level, we first construct a comprehensive evaluation function to measure sample similarity based on representation relationships and structures. In combination with the filtering effect of high-confidence pseudo-labels, we further design a contrastive learning loss for hard samples. Thus, the model can automatically increase the weight of hard samples while reducing the weight of easy samples. The superiority of MVC-HSM is verified by extensive experiments on public multi-view datasets, demonstrating that it outperforms other state-of-the-art multi-view clustering methods.
{"title":"CM2-STNet: Cross-modal image matching with modal-adaptive feature modulation and sparse transformer fusion","authors":"Zhizheng Zhang , Pengcheng Wei , Peilian Wu , Jindou Zhang , Boshen Chang , Zhenfeng Shao , Mingqiang Guo , Liang Wu , Jiayi Ma","doi":"10.1016/j.inffus.2025.103750","DOIUrl":"10.1016/j.inffus.2025.103750","url":null,"abstract":"<div><div>Multimodal image matching is a fundamental task in geospatial analysis, aiming to establish accurate correspondences between images captured by heterogeneous imaging devices. However, significant geometric inconsistencies and nonlinear radiometric distortions lead to large distribution gaps, posing a major challenge for cross-modal matching. Moreover, existing methods often struggle to adaptively capture intra- and inter-modal features at multiple scales and to focus on semantically relevant regions in large-scale scenes. To address these issues, we propose a novel cross-modal image matching network called CM<sup>2</sup>-STNet. Specifically, we introduce a modal-adaptive feature modulation (MAFM) module that dynamically adjusts cross-modal feature representations at multiple scales, thereby enhancing semantic consistency between modalities. In addition, a cross-modal sparse transformer fusion (CM-STF) module is developed to guide the network to concentrate on the most relevant regions, where a Top-k selection mechanism is employed to retain discriminative features while filtering out irrelevant content. Extensive experiments on multimodal remote sensing datasets demonstrate that CM<sup>2</sup>-STNet achieves accurate and robust matching performance, validating its effectiveness and generalization ability in complex real-world scenarios. Code and pre-trained model are available at https://github.com/whuzzzz/CM<sup>2</sup>-STNet.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103750"},"PeriodicalIF":15.5,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145119852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed estimation for multi-sensor networked stochastic uncertain systems with correlated noises under a general stochastic communication protocol
Han Zhou, Shuli Sun
Information Fusion, Volume 127, Article 103739 | Pub Date: 2025-09-15 | DOI: 10.1016/j.inffus.2025.103739
Abstract: The distributed state estimation problem is studied for multi-sensor networked stochastic uncertain systems with correlated noises under a stochastic communication protocol (SCP). Random parameter matrices are utilized to describe the stochastic uncertainties within the system model. Given the limited channel bandwidth among sensor nodes, a general SCP is set to randomly select multiple components from the complete state prediction estimate for transmission. A set of random variables is introduced to indicate which combination of state prediction components is selected for transmission at each time step. Since a sensor node does not know which combination of state prediction components each neighboring node has transmitted to it at a given time step, a distributed Kalman-like recursive estimator structure is developed that depends on the probability distributions of these random variables. Under this estimator structure, an optimal distributed estimation algorithm is presented based on the linear unbiased minimum variance criterion, which necessitates the computation of estimation error cross-covariance matrices between different nodes. To avoid computing the cross-covariance matrices, a suboptimal distributed estimation algorithm is also proposed, in which optimal gains are obtained by minimizing an upper bound on the estimation error covariance matrix at each node. In addition, the scalar parameters in the upper bound of the covariance matrix are optimized to obtain the minimum upper bound. The stability and steady-state properties of the two distributed estimation algorithms are analyzed. Finally, the effectiveness of the presented algorithms is validated through a simulation example.