{"title":"Image2Gene: A Minimalist and Weakly-Supervised Framework for Morphology-Aligned Gene Expression Prediction From Histology Images.","authors":"Weiqi Fu, Xiongwen Quan, Shuang Bai, Han Zhang","doi":"10.1109/JBHI.2026.3691387","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3691387","url":null,"abstract":"<p><p>Gene expression prediction from histological images offers a promising approach for spatial transcriptome analysis without expensive sequencing. We present Image2Gene, a simple yet effective weakly supervised contrastive learning framework that predicts gene expression profiles directly from tissue morphology using only an image encoder and multiple fully connected layers. Rather than relying on complex modules or gene expression-based embedding spaces, our approach encodes spatial coordinates via a learnable embedding and uses an image encoder to extract histological image features. We then propose a novel contrastive loss function that minimizes the difference between the cosine self-similarity of image embeddings and the Pearson autocorrelation of corresponding gene expression profiles to learn a structured image representation that reflects gene expression variations. Unlike previous methods for cross-heterogeneous modality matching, our approach aligns samples solely in image space, enabling more robust and biologically meaningful similarity learning. Finally, we perform gene expression inference via k-Nearest Neighbor interpolation in the learned image embedding space. 
Despite its simple architecture, extensive experiments on HER2+ and cSCC spatial transcriptome datasets demonstrate that Image2Gene achieves highly competitive performance, highlighting its potential as a scalable, annotation-free alternative for inferring transcriptome patterns directly from histological sections.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147856262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
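The contrastive objective described in the abstract above (matching the cosine self-similarity of image embeddings to the Pearson autocorrelation of gene expression profiles) can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation; the function names and the mean-squared comparison of the two similarity matrices are assumptions:

```python
import numpy as np

def cosine_self_similarity(X):
    """Pairwise cosine similarity between row vectors of X (n x d)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def pearson_autocorrelation(G):
    """Pairwise Pearson correlation between gene expression profiles (rows of G)."""
    Gc = G - G.mean(axis=1, keepdims=True)   # center each profile
    Gn = Gc / np.linalg.norm(Gc, axis=1, keepdims=True)
    return Gn @ Gn.T

def alignment_loss(img_emb, gene_expr):
    """Penalize mismatch between the image-space and gene-space similarity structures."""
    S_img = cosine_self_similarity(img_emb)
    S_gene = pearson_autocorrelation(gene_expr)
    return np.mean((S_img - S_gene) ** 2)
```

In the pipeline the abstract outlines, a loss of this shape would be minimized during training, and the expression profile of a new spot would then be interpolated from its k nearest neighbors in the learned image embedding space.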
{"title":"MSFSNet: Multi-Source Few-Shot Adaptation Network for Cross-Subject Depression Recognition from EEG Signals.","authors":"Kang Wang, Yanan Zhang, Yingwei Zhang, Fa Zhang, Jian Shen, Bin Hu","doi":"10.1109/JBHI.2026.3691159","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3691159","url":null,"abstract":"<p><p>Depression is a prevalent mental disorder with severe socio-economic implications, and its early identification and intervention are crucial for mitigating disease progression. However, existing machine learning and deep learning-based approaches for depression recognition exhibit limited generalization across individuals, making them less adaptable to new subjects and restricting their practical applications. To address this issue, we propose a cross-subject depression recognition method based on Multi-Source Few-Shot Adaptation (MSFSA) using electroencephalography (EEG). The proposed method integrates multi-source domain adaptation and ensemble learning strategies. Specifically, the multi-source domain adaptation module employs an alternating training mechanism combining unsupervised domain adaptation and few-shot adaptation, reducing the model's dependency on specific subjects. Meanwhile, ensemble learning improves model robustness and stability by aggregating multiple model predictions, reducing the impact of individual model biases and enhancing classification reliability. Experiments were conducted on the public MODMA EEG dataset, comprising 53 subjects (24 patients with major depressive disorder and 29 healthy controls). 
Under the 10-fold cross-subject validation protocol, with a theoretical chance level of 50%, the proposed method, leveraging Alpha and low-Gamma band features as the key contributing factors, achieves a significant improvement in accuracy over traditional machine learning methods, existing EEG-based depression recognition models, and advanced domain adaptation algorithms, reaching 87.12% and outperforming the state-of-the-art HEMAsNet (80.67%) and WDANet (70.94%) on the same dataset. These findings indicate that the proposed approach effectively reduces subject dependency in EEG-based depression recognition and provides a promising solution for improving cross-subject adaptability.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
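The ensemble step in the abstract above (aggregating predictions from multiple source-domain models to reduce individual model bias) is commonly implemented as soft voting over class probabilities. A minimal numpy sketch under that assumption; the paper may use a different aggregation rule:

```python
import numpy as np

def soft_vote(prob_list):
    """Average per-model class probabilities (each array is n_samples x n_classes),
    then take the most likely class per sample."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg.argmax(axis=1)
```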
{"title":"A Segmentation-Guided Feature Alignment and Fusion Network for Glioma IDH Genotyping.","authors":"Minghui Chen, Guohua Zhao, Lei Yang, Haowen Zhu, Hongwei Xu, Huiqin Jiang, Ling Ma","doi":"10.1109/JBHI.2026.3691144","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3691144","url":null,"abstract":"<p><p>Isocitrate dehydrogenase (IDH) is a pivotal molecular marker for glioma diagnosis, prognosis, and treatment planning. Multi-modal deep learning methods, which integrate features from multiple magnetic resonance imaging (MRI) sequences, have become a powerful solution for non-invasive IDH genotyping. However, existing methods still have limitations in feature extraction and fusion, which constrains their robustness. In this work, we propose a novel segmentation-guided feature alignment and fusion network (SFAF-Net) for glioma IDH genotyping, with three key innovations: 1) The Segmentation-guided Feature Alignment (SFA) module leverages tumor segmentation supervision to facilitate cross-modal feature alignment; 2) The Redundancy-Attenuated Fusion (RAF) module implements similarity-based selective fusion of modality pairs to reduce feature redundancy; 3) A randomized modality dropout mechanism within RAF enhances model robustness against input variations. Comprehensive experiments conducted on public and private datasets demonstrate that SFAF-Net outperforms state-of-the-art methods across diverse MRI sequences. 
Moreover, SFAF-Net supports an arbitrary number of input sequences, enabling flexible adaptation to diverse clinical scanning protocols in personalized diagnosis.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
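The randomized modality dropout described for the RAF module can be illustrated with a small sketch: whole modality inputs are randomly zeroed during training so the fusion stage learns to tolerate missing MRI sequences. This is an assumed, generic formulation in numpy, not SFAF-Net's exact mechanism:

```python
import numpy as np

def modality_dropout(modalities, rng, p_drop=0.3):
    """Randomly zero out whole modality feature maps during training,
    always keeping at least one, so downstream fusion becomes robust
    to missing input sequences."""
    keep = rng.random(len(modalities)) >= p_drop
    if not keep.any():                          # never drop everything
        keep[rng.integers(len(modalities))] = True
    return [m if k else np.zeros_like(m) for m, k in zip(modalities, keep)]
```

Training with such perturbations is one plausible route to the paper's claim that the network accepts an arbitrary subset of input sequences at inference time.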
{"title":"Direct PET-to-CT Generation for Attenuation Correction: A Slice-to-Slice Continual Transformer Segmentation-Aware Network.","authors":"Rongjun Ge, Hanyuan Zheng, Yuxin Liu, Liutao Yang, Li Wang, Xu Ji, Jingtao Shen, Nan Li, Shengji He, Daoqiang Zhang, Chengyu Liu, Yang Chen, Shuo Li, Yuting He","doi":"10.1109/JBHI.2026.3691253","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3691253","url":null,"abstract":"<p><p>Direct synthetic computed tomography (CT) generation from positron emission tomography (PET) plays a crucial role in PET attenuation correction, while providing detailed structural information to complement functional imaging. Compared to the widely used PET/CT and indirect PET/MR-CT, the direct PET-to-CT translation method (denoted as PET-to-CT) offers several advantages: 1) The CT required for PET-to-CT is directly obtained from PET, thereby avoiding the intermediate errors generated in the inter-step processes of multimodal scanning in PET/CT and PET/MR-CT. 2) Furthermore, direct PET-to-CT eliminates the requirement for supplementary imaging equipment, thereby reducing complexity and scan duration in contrast to PET/CT and PET/MR-CT imaging. Thus, direct PET-to-CT is highly promising for clinical applications. However, it faces challenges, including spatial resolution mismatches between PET and CT, as well as voxel-wise semantic differences arising from functional and structural imaging. To address these challenges, this paper proposes a 2D hierarchical method called S2SCT (Slice-to-Slice Continual Transformer)-SA (Segmentation-aware) Network. It uses a slice-continual network to acquire semantic transformation knowledge from each PET slice to a CT slice, facilitating the conversion between functional and structural imaging domains. Subsequently, the segmentation-aware network is designed to further capture spatial correlations both between slices and within each slice, resulting in improved CT spatial resolution. 
The experimental results demonstrate that our proposed method outperforms mainstream methods in both CT generation and attenuation correction, as evidenced by both visual results and metric values.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UltraUNet: Real-Time Ultrasound Tongue Segmentation for Diverse Linguistic and Imaging Conditions.","authors":"Alisher Myrgyyassov, Zhen Song, Yu Sun, Bruce Xiao Wang, Min Ney Wong, Yongping Zheng","doi":"10.1109/JBHI.2026.3691369","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3691369","url":null,"abstract":"<p><p>Ultrasound tongue imaging (UTI) provides a non-invasive, cost-effective modality for investigating speech articulation, speech motor control, and speech-related disorders. However, real-time tongue contour segmentation remains a significant challenge due to the inherently low signal-to-noise ratio, variability in imaging conditions, and computational demands of real-time performance. In this study, we propose UltraUNet, a lightweight and efficient encoder-decoder architecture specifically optimized for real-time segmentation of tongue contours in ultrasound images. UltraUNet introduces several domain-informed innovations, including lightweight Squeeze-and-Excitation blocks for channel-wise feature recalibration in deeper layers, Group Normalization for enhanced stability in small-batch training, and summation-based skip connections to minimize memory and computational overhead. These architectural refinements enable UltraUNet to achieve high segmentation accuracy while maintaining an exceptional processing speed of 250 frames per second, making it suitable for real-time clinical workflows. UltraUNet integrates ultrasound-specific augmentation techniques, including denoising and blur simulation using a point spread function. Additionally, we annotated UTI images from 8 different datasets with various imaging conditions. Comprehensive evaluations demonstrated the model's robustness and precision, with superior segmentation metrics on single-dataset testing (Dice = 0.855, MSD = 0.993px) compared to established architectures. 
Furthermore, cross-dataset testing on 7 unseen datasets after training on a single dataset revealed UltraUNet's generalization capability and high accuracy, achieving average Dice scores of 0.734 and 0.761 in Experiments 1 and 2, respectively. The proposed framework offers a competitive solution for time-critical applications in speech research, speech motor disorder analysis, and clinical diagnostics, with real-time performance in tongue functional analysis in diverse medical and research settings.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
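Two of UltraUNet's ingredients, Squeeze-and-Excitation channel recalibration and summation-based skip connections, follow well-known patterns that can be sketched directly. A minimal numpy sketch of the standard formulations (the weight shapes and reduction ratio are placeholders, not the paper's configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel, pass the
    vector through a small FC bottleneck, and rescale channels with the
    resulting (0, 1) gates.  feat: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    z = feat.mean(axis=(1, 2))                     # squeeze -> (C,)
    gates = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excitation: FC-ReLU-FC-sigmoid
    return feat * gates[:, None, None]             # channel-wise recalibration

def sum_skip(encoder_feat, decoder_feat):
    """Summation-based skip connection: element-wise add instead of channel
    concatenation, keeping channel count (and memory) constant."""
    return encoder_feat + decoder_feat
```

Summation skips are cheaper than concatenation because the decoder convolution that follows sees the same channel count, which is consistent with the real-time budget the abstract emphasizes.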
{"title":"LELN: A Large Language Model-Dynamically Enhanced Learning Network for Patient Similarity Calculation.","authors":"Zhichao Zhu, Bo Bai, Jianqiang Li, Han Wang, Rui Li, Lan Lan","doi":"10.1109/JBHI.2026.3691375","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3691375","url":null,"abstract":"<p><p>The rapid expansion of Electronic Medical Record (EMR) data has advanced AI-driven patient similarity computation, a key technology for intelligent healthcare. However, the handling of heterogeneous EMR formats and the integration of domain knowledge constrain existing methods. While graph-based approaches show promise, they still struggle with these issues. To address this, we propose a Large Language Model-Dynamically Enhanced Learning Network (LELN), leveraging LLMs' commonsense knowledge and reasoning to dynamically structure EMR data and enhance medical knowledge integration. LELN integrates two LLM-based modules: DS-EE (DeepSeek-Event Extraction) extracts medical events to construct structured EMR event graphs, and DS-KB (DeepSeek-Knowledge Base) infers disease-relevant knowledge to augment feature representations. The model employs a dual-stage spatial-temporal feature aggregation strategy: a Graph Attention Network captures intra- and inter-event dependencies, followed by a Bidirectional Long Short-Term Memory (BiLSTM) with attention to model temporal disease progression. Additionally, a clinical prior-guided attention mechanism emphasizes discriminative diagnostic features, improving clinical relevance. 
Extensive experiments on heterogeneous datasets (a real-world Chinese dataset and the public MIMIC-III) show that LELN outperforms baselines, achieving F1 scores of 87.66% and 85.95%, respectively, demonstrating robustness and accuracy.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harnessing Terminal Signal-Aware Deep Learning for Accurate Multi-Class Secreted Effector Identification.","authors":"Lesong Wei, Shida He, Quan Zou, Chen Lin","doi":"10.1109/JBHI.2026.3690894","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3690894","url":null,"abstract":"<p><p>Gram-negative bacterial secreted effectors are translocated through specialized secretion systems to manipulate host cellular processes, and their accurate identification is crucial for understanding bacterial pathogenesis. Recent deep learning methods have significantly advanced this field, yet current approaches primarily rely on global sequence representations, overlooking the biological significance of terminal regions where secretion signals reside. Moreover, severe class imbalance among different secreted effector types remains a critical challenge for multi-class prediction. Here, we propose TermSE, a terminal signal-aware framework for multi-class secreted effector identification. TermSE explicitly captures N-terminal and C-terminal sequence features through convolutional neural networks applied to protein language model embeddings, and integrates them with global sequence representations for multi-view sequence characterization. To address class imbalance, TermSE employs a cosine-normalized classifier combined with weighted sampling to mitigate feature magnitude bias and ensure sufficient learning from minority classes. Extensive experiments demonstrate that TermSE outperforms existing methods in both cross-validation and independent test settings, with robust generalization across varying sequence identity levels. Furthermore, interpretability analysis confirms that TermSE learns to focus on biologically meaningful terminal patterns specific to each secreted effector type. 
These results highlight the potential of TermSE as an effective and interpretable tool for secreted effector discovery.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
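The cosine-normalized classifier that TermSE uses to counter feature-magnitude bias follows a standard recipe: L2-normalize both the features and the class weight vectors, then scale the resulting cosine similarities to form logits. A minimal numpy sketch (the scale value is an assumption, not the paper's setting):

```python
import numpy as np

def cosine_logits(features, class_weights, scale=16.0):
    """Logits as scaled cosine similarities between L2-normalized features
    (n x d) and class weight vectors (k x d).  Because magnitudes cancel,
    majority classes cannot dominate purely through larger feature norms."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    return scale * (f @ w.T)
```

Combined with weighted sampling, bounding the logits this way gives minority secreted-effector classes a fairer share of the decision surface.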
{"title":"Movement Anywhere: An Open-Source Distributed 2D Video-Based Movement Analysis Platform Empowered by Active Learning.","authors":"Ming-Yang Ho, Yufeng Jane Tseng","doi":"10.1109/JBHI.2026.3690720","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3690720","url":null,"abstract":"<p><p>Movement analysis plays a pivotal role in diagnosing and monitoring neurodegenerative and musculoskeletal diseases. Traditional tools, such as 3D motion capture systems and electronic walkways, though effective, are costly and spatially demanding, limiting their accessibility. To address these challenges, we introduce Movement Anywhere, an open-source, distributed platform for 2D video-based movement analysis. This platform features advanced tracking algorithms that robustly handle scenarios where patients require assistance, and it rigorously establishes the conditions necessary for precise depth information extraction, crucial for accurate motion parameter estimation. Additionally, Movement Anywhere is adaptable to various 2D cameras and incorporates an active learning framework to streamline algorithm updates. Evaluated with datasets from multiple medical centers, our approach demonstrates substantial improvements over previous methods. Movement Anywhere provides a cost-effective, scalable, extensible, and user-friendly solution for effective disease monitoring and progression tracking. 
Movement Anywhere is accessible at https://movement-anywhere.cmdm.tw/, with source code provided at https://github.com/Kaminyou/Movement-Anywhere.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MGAPep: LLM-Augmented Multimodal Graph Attention for Protein-Peptide Binding Site Prediction and Cross-Domain Transfer.","authors":"Xiangzheng Fu, Xiaowen Li, Bosheng Song, Xiuxiu Chao, Sisi Yuan, Mingqiang Rong, Zhen Xia","doi":"10.1109/JBHI.2026.3690953","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3690953","url":null,"abstract":"<p><p>Protein-peptide interactions drive peptide therapeutics, precision design, and biomarker discovery, yet most predictors underuse complementary sequence-structure information. LLM-augmented multimodal approaches offer a promising solution to these limitations. We introduce MGAPep, which fuses pre-trained large language model embeddings with protein sequence and structural descriptors via a residual graph attention backbone and a multi-head dual-attention module to capture fine-grained interface patterns. Leveraging large-scale corpora of protein fragment-peptide interaction data, MGAPep employs self-supervised pre-training, transfer learning, and task-specific fine-tuning to obtain rich, transferable representations. Extensive benchmarking shows consistent state-of-the-art accuracy for protein-peptide binding site prediction, with robust generalization to unseen proteins and peptides. The framework also transfers effectively across modalities, yielding superior performance to most baselines on protein-nucleic acid binding site prediction without architecture changes, underscoring broad applicability. 
Together with evidence that graph-enhanced LLMs improve biomolecular binding modeling, these results establish MGAPep as a general paradigm for protein-biomolecule interaction prediction.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Medical Knowledge-Driven Contrastive Learning for Similar Patient Retrieval.","authors":"Fanqing Meng, Chong Feng, Ge Shi, Xia Liu, Bo Wang, Kaiyuan Zhang, Yan Zhuang","doi":"10.1109/JBHI.2026.3690515","DOIUrl":"https://doi.org/10.1109/JBHI.2026.3690515","url":null,"abstract":"<p><p>Similar patient retrieval is a fundamental task in medical informatics, aiming to identify patients with similar clinical characteristics to assist in diagnosis and treatment plan recommendation. While traditional methods relying on lexical features or medical ontologies often fail to capture implicit semantic relationships, recent advancements in dense retrieval methods powered by deep learning have shown promise yet face challenges in adapting to specific tasks such as similar patient retrieval. To address these limitations, we propose a medical knowledge-driven contrastive learning approach to enhance the representation capacity of general-purpose embedding models for medical text. Specifically, our approach introduces a novel negative sampling strategy leveraging International Classification of Diseases (ICD) codes to identify hard negatives. However, due to data imbalance issues, this method struggles to adequately mine negative examples. To overcome this limitation, we develop an external knowledge-based negative sampling method that incorporates both statistical and ambiguous knowledge, thereby enhancing the model's ability to differentiate between fine-grained medical conditions and complex clinical scenarios. We then integrate these methods into a contrastive learning framework to train more robust patient representations. 
Extensive experiments on real-world medical datasets show that our proposed method achieves significant improvements over existing state-of-the-art baseline models.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147837145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
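The ICD-based hard-negative sampling idea above can be sketched simply: candidate patients whose ICD codes share the anchor's category prefix, but not the exact code, are clinically similar yet distinct, which is what makes them hard negatives for contrastive training. A hypothetical illustration (the helper name and the 3-character prefix length are assumptions, not the authors' exact sampling strategy):

```python
def icd_hard_negatives(anchor_code, candidate_codes, prefix_len=3):
    """Select hard negatives: codes in the same ICD category as the anchor
    (same leading characters, e.g. 'E11' for type 2 diabetes) but with a
    different full code, forcing the encoder to separate fine-grained
    conditions rather than broad disease families."""
    prefix = anchor_code[:prefix_len]
    return [c for c in candidate_codes
            if c[:prefix_len] == prefix and c != anchor_code]
```

When such same-category candidates are scarce (the data-imbalance problem the abstract notes), the external knowledge-based sampling the authors describe would supply additional negatives from statistical and ambiguous knowledge sources.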