Displays | Volume 89, Article 103059 | Pub Date: 2025-05-08 | DOI: 10.1016/j.displa.2025.103059
OBIFormer: A fast attentive denoising framework for oracle bone inscriptions
Jinhao Li, Zijian Chen, Tingzhu Chen, Zhiji Liu, Changbo Wang
Abstract: Oracle bone inscriptions (OBI) are the earliest known form of Chinese characters and serve as a valuable resource for research in anthropology and archaeology. However, most excavated fragments are severely degraded due to thousands of years of natural weathering, corrosion, and man-made destruction, making automatic OBI recognition extremely challenging. Previous methods either focus on pixel-level information or utilize vanilla transformers for glyph-based OBI denoising, which leads to tremendous computational overhead. Therefore, this paper proposes OBIFormer, a fast attentive denoising framework for oracle bone inscriptions. It leverages channel-wise self-attention, glyph extraction, and selective kernel feature fusion to reconstruct denoised images precisely while remaining computationally efficient. OBIFormer achieves state-of-the-art denoising performance in PSNR and SSIM on synthetic and original OBI datasets. Furthermore, comprehensive experiments on a real-world OBI dataset demonstrate the great potential of OBIFormer in assisting automatic OBI recognition. The code will be made available at https://github.com/LJHolyGround/OBIFormer.
Displays | Volume 89, Article 103063 | Pub Date: 2025-05-06 | DOI: 10.1016/j.displa.2025.103063
Multiframe-to-multiframe network for underwater unpaired video enhancement
Yuanyuan Li, Zetian Mi, Xianping Fu
Abstract: Underwater video enhancement (UVE) technology plays an indispensable role in accurately perceiving underwater environments. In recent years, researchers have proposed many high-performance underwater image enhancement (UIE) techniques. However, these methods enhance each frame independently, ignoring complementary information between adjacent frames over time, which can lead to visual flickering. Additionally, it is impractical to simultaneously capture degraded underwater videos and their high-quality counterparts. Considering these factors, a multiframe-to-multiframe network for unpaired underwater video enhancement (MMUVE) is proposed for the first time. First, a generative adversarial network based on unpaired contrastive learning is designed to conduct adversarial training between key frames selected from the video frame sequence and unpaired high-quality images, resulting in an initially optimized video frame sequence. Then, the original frame sequence undergoes temporal enhancement, while the initially optimized frame sequence is subjected to secondary optimization in the spatial-channel dimension. Finally, a dual-branch feature fusion is performed to obtain multi-frame enhancement results. Extensive subjective and objective comparative experiments demonstrate that the proposed method not only maintains temporal consistency during multi-frame enhancement but also achieves better single-frame image enhancement results.
Displays | Volume 89, Article 103073 | Pub Date: 2025-05-01 | DOI: 10.1016/j.displa.2025.103073
Non-uniform sparse scanning angle selection method for limited angle industrial CT detection of laminated cells
Jianing Zhou, Chao Long, Hao Zou, Yan Han, Hui Tan, Liming Duan
Abstract: Laminated cells can be rapidly scanned using sparse angle computed tomography (CT), but the traditional uniform sparse scanning method fails to adequately capture internal structural differences, leading to missing structures in the reconstructed image. To address this issue, we introduce a non-uniform sparse scanning angle selection method for limited angle industrial CT detection of laminated cells. First, a spectrum distribution map is generated by applying the Fourier transform to the projection data, and a threshold is established by taking the average of the frequency amplitudes. Next, the number of frequency components with amplitudes exceeding the threshold is counted to select a suitable limited angle range. Then, the non-uniform sparse scanning angles within the limited angle range are determined from the singularity distribution curve in the projection domain. This scanning method ensures that more relevant data is collected while avoiding data redundancy. Finally, the effectiveness of the proposed method is verified through numerical simulations and actual scanning experiments. Compared with the latest scanning angle selection methods, our method collects more data and significantly improves image reconstruction quality while maintaining the same number of scanning angles.
Displays | Volume 89, Article 103072 | Pub Date: 2025-05-01 | DOI: 10.1016/j.displa.2025.103072
A systematic review and meta-analysis of deep learning and radiomics in predicting MGMT promoter methylation status in glioblastoma: Efficacy, reliability, and clinical implications
Yu Chen, Yuehui Liao, Panfei Li, Wei Jin, Jingwan Fang, Junwei Huang, Yaning Feng, Changxiong Xie, Ruipeng Li, Qun Jin, Xiaobo Lai
Abstract:
Background: O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation is a critical predictive biomarker for assessing temozolomide response in glioblastoma (GBM). Deep learning (DL) and radiomics offer promising non-invasive alternatives for evaluating MGMT promoter methylation status.
Objective: To evaluate the diagnostic performance and methodological rigor of published deep learning and radiomic models for predicting MGMT promoter methylation.
Methods: A comprehensive literature search was conducted across PubMed, Ovid Embase, EBSCOhost Cumulative Index to Nursing and Allied Health Literature (EBSCO CINAHL), Web of Science, IEEE Xplore, and ACM Digital Library databases through December 31, 2024. Studies using magnetic resonance imaging (MRI)-based radiomic features and DL algorithms to classify MGMT promoter methylation status in GBM patients were included. The review protocol was registered with PROSPERO (CRD42021279221), and study selection adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and PRISMA of Diagnostic Test Accuracy Studies (PRISMA-DTA) guidelines. Methodological quality was assessed using the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis + Artificial Intelligence (TRIPOD+AI) checklist and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. A meta-analysis of diagnostic performance was performed using Stata v.17.1.
Results: The pooled area under the curve (AUC) was 0.86 (95% CI: 0.83–0.89), reflecting strong diagnostic performance. However, external validation studies revealed a significantly lower mean AUC of 0.69, indicating potential overfitting. High heterogeneity (I² > 90%) was attributed to variations in imaging protocols, feature extraction techniques, and data sources.
Conclusion: While radiomics and DL-based models show potential for non-invasive MGMT promoter methylation prediction, their clinical applicability is hindered by a lack of standardized datasets and robust external validation. Future studies should focus on addressing these limitations to enhance reliability and generalizability.
Displays | Volume 89, Article 103066 | Pub Date: 2025-04-30 | DOI: 10.1016/j.displa.2025.103066
Evaluation of electrochromic film camouflage effect based on visual perception
Yongzhi Li, Chunyang Jia, Yulin Liu, Wei Duan, Xiaolong Qing, Jianguo Zhang, Xiaolong Weng
Abstract: Modern battlefields demand adaptive camouflage technologies for target survivability. Electrochromic films that adjust their color to blend with changing environments are applicable in active stealth camouflage, and assessing the camouflage capabilities of such devices is a critical component of their effective deployment. However, achieving consistency between subjective assessments and objective measurements remains a significant challenge in camouflage evaluation. Therefore, this paper introduces a novel method for assessing the camouflage effectiveness of electrochromic films based on visual perception. The approach leverages the visual attention mechanism and target saliency detection theory to analyze target saliency through brightness, color, and texture features, with the weight assigned to each feature determined by the focus of visual attention. The saliency value is then calculated using a Markov chain, thereby quantifying the camouflage effect. The method's scientific validity was demonstrated through observer experiments, whose subjective results were consistent with the objective evaluation.
Displays | Volume 89, Article 103064 | Pub Date: 2025-04-30 | DOI: 10.1016/j.displa.2025.103064
A high-precision framework for teeth instance segmentation in panoramic radiographs
Yanning Ma, Zhiyuan Qu, Xulin Liu, Jiaman Lin, Zuolin Jin
Abstract: Panoramic radiography plays a vital role in dental diagnosis and treatment; its low radiation exposure, cost-effectiveness, and high accessibility make it suitable for initial screening of oral diseases. However, inexperienced dentists may find it challenging to accurately interpret the information that panoramic images present about the teeth, jaw bone, and maxillary sinus, which can result in missed diagnoses or misdiagnoses. This study proposes a deep learning-based framework for segmenting teeth and alveolar bone in panoramic radiographs and provides examples of its application to disease diagnosis. The algorithm design incorporates relevant medical knowledge, including graphic optimization algorithms and medical optimization algorithms. The experimental results indicate that the proposed method segments teeth and alveolar bone very accurately. It also improves the accuracy of disease diagnosis in panoramic radiographs, further demonstrating the clinical value of segmenting teeth and alveolar bone.
Displays | Volume 89, Article 103057 | Pub Date: 2025-04-29 | DOI: 10.1016/j.displa.2025.103057
Burst image super-resolution based on dual branch fusion and adaptive frame selection
Lijuan Duan, Wendi Zuo, Ke Gu, Zhi Gong
Abstract: Modern handheld cameras can rapidly capture multiple images and merge them into a single image. Existing methods typically select the first frame as the reference frame, using the information from the remaining frames together with that of the reference frame to compute the high-resolution image. However, for complex scenes and unstable shooting conditions, this fixed selection is not optimal. We therefore propose an adaptive frame selection method that computes frame channel weight information and selects the best frame as the reference for the subsequent computation. Moreover, to enhance the visual quality of high-resolution images, we propose a dual-branch fusion module in the feature fusion phase that exploits the sequential nature of the input frames, allowing the network to attend to both the temporal global features of the input sequence and the spatial local detail features of each frame. The feature map is then obtained through residual computation using the adaptively selected frame and the fused image features, and the image is reconstructed through up-sampling to obtain a high-resolution result. Experimental results on the BurstSR and RealBSR datasets demonstrate that our approach not only outperforms existing techniques on evaluation metrics but also exhibits superior visual effects.
Displays | Volume 89, Article 103071 | Pub Date: 2025-04-29 | DOI: 10.1016/j.displa.2025.103071
Impact of tunnel lighting on traffic emissions based on VR experiment and car-following method
Xin Huang, Yi Rui, Shiqi Dou, Guanlin Huang, Xiaojun Li, Yuxin Zhang
Abstract: The complex lighting environments within tunnels have been established as significant factors influencing drivers' visual processing abilities, which in turn affect driving safety and comfort. However, there is a lack of research exploring whether tunnel lighting impacts drivers' eco-driving behaviors, particularly in terms of vehicle carbon emissions. To address this gap, this study designs a virtual reality (VR) driving experiment, utilizing luminance and correlated color temperature (CCT) as key lighting parameters to quantitatively assess the influence of tunnel lighting environments on traffic carbon emissions. Furthermore, the intelligent driver model (IDM) is employed to simulate and analyze traffic flow within tunnels, and carbon emissions are calculated using the MOVES methodology. The findings reveal that car platoons exhibit the lowest carbon emissions under lighting environments of (1 cd/m², 5000 K) and (3 cd/m², 2000 K), which are optimal for reducing traffic emissions. Compared to the scenario with the highest total carbon emissions, occurring under (3 cd/m², 8000 K), total carbon emissions are reduced by 26.8% when the lighting is set to (3 cd/m², 2000 K). By integrating VR experiments with traffic simulations, this study bridges the existing research gap regarding the effects of tunnel lighting on traffic emissions and provides valuable insights for the low-carbon design of tunnel lighting environments.
Displays | Volume 89, Article 103062 | Pub Date: 2025-04-29 | DOI: 10.1016/j.displa.2025.103062
Advanced defense against GAN-based facial manipulation: A multi-domain and multi-dimensional feature fusion approach
Yunqi Liu, Xue Ouyang, Xiaohui Cui
Abstract: Powerful facial image manipulation offered by encoder-based GAN inversion techniques raises concerns about potential misuse in identity fraud and misinformation. This study introduces the Multi-Domain and Multi-Dimensional Feature Fusion (MDFusion) method, a novel approach that counters encoder-based GAN inversion by generating adversarial samples. First, MDFusion transforms the luminance channel of the target image into spatial, frequency, and spatial-frequency hybrid domains. Second, a specifically adapted Feature Pyramid Network (FPN) extracts and fuses high-dimensional and low-dimensional features, enhancing the robustness of adversarial noise generation. The adversarial noise is then embedded into the spatial-frequency hybrid domain to produce effective adversarial samples, which are guided by a designed hybrid training loss to balance imperceptibility and effectiveness. Tests on five encoder-based GAN inversion models using the ASR, LPIPS, and FID metrics demonstrated the superiority of MDFusion over 13 baseline methods, highlighting its robust defense and generalization abilities. The implementation code is available at https://github.com/LuckAlex/MDFusion.
Displays | Volume 89, Article 103068 | Pub Date: 2025-04-28 | DOI: 10.1016/j.displa.2025.103068
Biomedical text-based detection of colon, lung, and thyroid cancer: A deep learning approach with novel dataset
Kubilay Muhammed Sünnetci
Abstract: Pre-trained Language Models (PLMs) are widely used and increasingly popular. These models can address Natural Language Processing (NLP) challenges, and focusing them on specific topics allows them to answer directly relevant questions. As a sub-branch of this, Biomedical Text Classification (BTC) is a fundamental task used in various applications to aid clinical decisions. This study therefore detects colon, lung, and thyroid cancer from biomedical texts. A dataset of 3070 biomedical texts generated by artificial intelligence is used: 1020 texts are labeled colon cancer, 1020 lung cancer, and 1030 thyroid cancer. 70% of the data forms the training set, with the remainder split between validation and test sets. After preprocessing, word encoding prepares the model inputs and the documents are converted into sequences of numeric indices. Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Bidirectional LSTM (BiLSTM), LSTM+LSTM, GRU+GRU, BiLSTM+BiLSTM, and LSTM+GRU+BiLSTM architectures are then trained on the training and validation sets and evaluated on the test set. The validation and test performance of every model is reported, and a Graphical User Interface (GUI) application embedding the most successful architecture is provided. The results show that LSTM is the most successful model: on the validation set it achieves accuracy, specificity, sensitivity, and F1 score of 91.32%, 95.67%, 91.33%, and 91.32%, respectively, while the corresponding test-set values are 85.87%, 92.94%, 85.88%, and 85.90%. The developed models provide comparative results and show successful performance; focusing such models on specific issues can yield more effective results for related problems, and the user-friendly GUI application allows users to apply the models effectively.