Displays · Pub Date: 2025-04-02 · DOI: 10.1016/j.displa.2025.103050
Jinzhang Li, Jue Wang, Bo Li, Hangfan Gu
{"title":"MWPRFN: Multilevel Wavelet Pyramid Recurrent Fusion Network for underwater image enhancement","authors":"Jinzhang Li, Jue Wang, Bo Li, Hangfan Gu","doi":"10.1016/j.displa.2025.103050","DOIUrl":"10.1016/j.displa.2025.103050","url":null,"abstract":"<div><div>Underwater images often suffer from color distortion, blurry details, and low contrast due to light scattering and water-type changes. Existing methods mainly focus on spatial information and ignore frequency-difference processing, which hinders the solution to the mixing degradation problem. To overcome these challenges, we propose a multi-scale wavelet pyramid recurrent fusion network (MWPRFN). This network retains low-frequency features at all levels, integrates them into a low-frequency enhancement branch, and fuses image features using a multi-scale dynamic cross-layer mechanism (DCLM) to capture the correlation between high and low frequencies. Each stage of the multi-level framework consists of a multi-frequency information interaction pyramid network (MFIPN) and an atmospheric light compensation estimation network (ALCEN). The low-frequency branch of the MFIPN enhances global details through an efficient context refinement module (ECRM). In contrast, the high-frequency branch extracts texture and edge features through a multi-scale difference expansion module (MSDC). After the inverse wavelet transform, ALCEN uses atmospheric light estimation and frequency domain compensation to compensate for color distortion. Experimental results show that MWPRFN significantly improves the quality of underwater images on five benchmark datasets. Compared with state-of-the-art methods, objective image quality metrics including PSNR, SSIM, and NIQE are improved by an average of 3.45%, 1.32%, and 4.50% respectively. Specifically, PSNR increased from 24.03 decibels to 24.86 decibels, SSIM increased from 0.9002 to 0.9121, and NIQE decreased from 3.261 to 3.115.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103050"},"PeriodicalIF":3.7,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143768235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-04-02 · DOI: 10.1016/j.displa.2025.103045
Yang Lu, Zilu Zhou, Zifan Yang, Shuangyao Han, Xiaoheng Jiang, Mingliang Xu
{"title":"Multi-Layer Cross-Modal Prompt Fusion for No-Reference Image Quality Assessment","authors":"Yang Lu , Zilu Zhou , Zifan Yang , Shuangyao Han , Xiaoheng Jiang , Mingliang Xu","doi":"10.1016/j.displa.2025.103045","DOIUrl":"10.1016/j.displa.2025.103045","url":null,"abstract":"<div><div>No-Reference Image Quality Assessment (NR-IQA) predicts image quality without reference images and exhibits high consistency with human visual perception. Multi-modal approaches based on vision-language (VL) models, like CLIP, have demonstrated remarkable generalization capabilities in NR-IQA tasks. While prompt learning has improved CLIP’s adaptation to downstream tasks, existing methods often lack synergy between textual and visual prompts, limiting their ability to capture complex cross-modal semantics. In response to this limitation, this paper proposes an innovative framework named MCPF-IQA with multi-layer cross-modal prompt fusion to further enhance the performance of CLIP model on NR-IQA tasks. Specifically, we introduce multi-layer prompt learning in both the text and visual branches of CLIP to improve the model’s comprehension of visual features and image quality. Additionally, we design a novel cross-modal prompt fusion module that deeply integrates text and visual prompts to enhance the accuracy of image quality assessment. We also develop five auxiliary quality-related category labels to describe image quality more precisely. Experimental results demonstrate MCPF-IQA model delivers exceptional performance on natural image datasets, with SRCC of 0.988 on the LIVE dataset (1.8% higher than the second-best method) and 0.913 on the LIVEC dataset (1.0% superior to the second-best method). Furthermore, it also exhibits strong performance on AI-generated image datasets. Ablation study results demonstrate the effectiveness and advantages of our method.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103045"},"PeriodicalIF":3.7,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143768236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-04-01 · DOI: 10.1016/j.displa.2025.103043
Xinzhou Fan, Jinze Xu, Feng Ye, Yizong Lai
{"title":"Misleading Supervision Removal Mechanism for self-supervised monocular depth estimation","authors":"Xinzhou Fan, Jinze Xu, Feng Ye, Yizong Lai","doi":"10.1016/j.displa.2025.103043","DOIUrl":"10.1016/j.displa.2025.103043","url":null,"abstract":"<div><div>Self-supervised monocular depth estimation leverages the photometric consistency assumption and exploits geometric relations between image frames to convert depth errors into reprojection photometric errors. This allows the model train effectively without explicit depth labels. However, due to factors such as the incomplete validity of the photometric consistency assumption, inaccurate geometric relationships between image frames, and sensor noise, there are limitations to photometric error loss, which can easily introduce inaccurate supervision information and mislead the model into local optimal solutions. To address this issue, this paper introduces a Misleading Supervision Removal Mechanism(MSRM), aimed at enhancing the accuracy of supervisory information by eliminating misleading cues. MSRM employs a composite masking strategy that incorporates both pixel-level and image-level masks, where pixel-level masks include sky masks, edge masks, and edge consistency techniques. MSRM largely eliminate misleading supervision information introduced by sky regions, edge regions, and images with low viewpoint changes. Without altering network architecture, MSRM ensures no increase in inference time, making it a plug-and-play solution. Implemented across various self-supervised monocular depth estimation algorithms, experiments on KITTI, Cityscapes, and Make3D datasets demonstrate that MSRM significantly improves the prediction accuracy and generalization performance of the original algorithms.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103043"},"PeriodicalIF":3.7,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143768234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-03-31 · DOI: 10.1016/j.displa.2025.103051
Hosung Jeon, Youngmin Kim, Joonku Hahn
{"title":"Perspective distortion correction in a compact, full-color holographic stereogram printer","authors":"Hosung Jeon , Youngmin Kim , Joonku Hahn","doi":"10.1016/j.displa.2025.103051","DOIUrl":"10.1016/j.displa.2025.103051","url":null,"abstract":"<div><div>Holography technology has advanced so much that it is now possible to record the object wavefront information on thin holographic recording media. Holographic stereogram printing techniques capture numerous ’hogels’—the smallest unit in holographic printing—storing an extensive range of optical information that surpasses the capabilities of other holographic applications. In this paper, we design a compact holographic stereogram printer that utilizes optical fibers to achieve significant system miniaturization. Specifically, the integration of polarization maintaining/single mode (PM/SM) fibers allows for the customization of the printer’s optical path. However, due to the wide field of view of our holographic stereograms, perspective distortion is hard to be avoided especially when the wavelengths or positions of the light source are not the same as designed values. The flat transverse plane is bent if the light source deviates from the optical axis. This distortion is easily understood by using a k-vector diagram, which illustrates how the direction of the outgoing light’s k-vector changes when it is diffracted by the grating vector of the volume hologram due to the incident light with undesirable direction. In this paper, the feasibility of our perspective-distortion correction algorithm is experimentally demonstrated.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103051"},"PeriodicalIF":3.7,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143792334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-03-30 · DOI: 10.1016/j.displa.2025.103030
Huilin Yin, Mina Sun, Linchuan Zhang, Gerhard Rigoll
{"title":"Online dynamic object removal for LiDAR-inertial SLAM via region-wise pseudo occupancy and two-stage scan-to-map optimization","authors":"Huilin Yin , Mina Sun , Linchuan Zhang , Gerhard Rigoll","doi":"10.1016/j.displa.2025.103030","DOIUrl":"10.1016/j.displa.2025.103030","url":null,"abstract":"<div><div>SLAM technology has become the core solution for mobile robots to achieve autonomous navigation. It provides the foundational information required for path planning. However, dynamic objects in the real world, such as moving vehicles, pedestrians, and temporarily constructed walls, affect the accuracy and stability of localization and mapping. Existing dynamic methods still face challenges, such as poor localization accuracy caused by reliance on IMU to provide initial poses before removing dynamic objects in highly dynamic environments, and decreased execution efficiency after incorporating complex additional processing modules. To improve positioning accuracy and efficiency in complex environments, this paper introduces dynamic object removal in front-end registration. Firstly, a two-stage scan-to-map optimization strategy is implemented to ensure the accuracy of poses before and after the removal of dynamic objects, where initial scan-to-map optimization is performed for precise pose estimation, followed by the identification and removal of dynamic objects, and a subsequent scan-to-map optimization to fine-tune the pose. Secondly, during the identification and filtering of dynamic objects, the method encodes the query frame and local map data that have already defined volume of interest (VOI) to generate a region-wise pseudo occupancy descriptor (R-POD), respectively. Subsequently, a scan ratio test (SRT) is conducted between query frame R-POD and the local map R-POD, identifying and filtering out dynamic objects region by region. This approach removes dynamic objects online and has demonstrated good mapping results and accuracy across multiple sequences in both the MulRan and UrbanLoco datasets, enhancing the performance of SLAM systems when dealing with dynamic environments.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103030"},"PeriodicalIF":3.7,"publicationDate":"2025-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143808297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-03-27 · DOI: 10.1016/j.displa.2025.103047
Yao Haiyang, Guo Ruige, Zhao Zhongda, Zang Yuzhang, Zhao Xiaobo, Lei Tao, Wang Haiyan
{"title":"U-TransCNN: A U-shape transformer-CNN fusion model for underwater image enhancement","authors":"Yao Haiyang , Guo Ruige , Zhao Zhongda , Zang Yuzhang , Zhao Xiaobo , Lei Tao , Wang Haiyan","doi":"10.1016/j.displa.2025.103047","DOIUrl":"10.1016/j.displa.2025.103047","url":null,"abstract":"<div><div>Underwater imaging faces significant challenges due to nonuniform optical absorption and scattering, resulting in visual quality issues like color distortion, contrast reduction, and image blurring. These factors hinder the accurate capture and clear depiction of underwater imagery. To address these complexities, we propose U-TransCNN, a U-shape Transformer- Convolutional Neural Networks (CNN) model, designed to enhance underwater images by integrating the strengths of CNNs and Transformers. The core of U-TransCNN is the Global-Detail Feature Synchronization Fusion Module. This innovative component enhances global color and contrast while meticulously preserving the intricate texture details, ensuring that both macroscopic and microscopic aspects of the image are enhanced in unison. Then we design the Multiscale Detail Fusion Block to aggregate a richer spectrum of feature information using a variety of convolution kernels. Furthermore, our optimization strategy is augmented with a joint loss function, adynamic approach allowing the model to assign varying weights to the loss associated with different pixel points, depending on their loss magnitude. Six experiments (including reference and non-reference) on three public underwater datasets confirm that U-TransCNN comprehensively surpasses other contemporary state-of-the-art deep learning algorithms, demonstrating marked improvement in visualization quality and quantization parameters of underwater images. Our code is available at <span><span>https://github.com/GuoRuige/UTransCNN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103047"},"PeriodicalIF":3.7,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-03-27 · DOI: 10.1016/j.displa.2025.103040
Kang Liu, Zhihao Xv, Zhe Yang, Lian Liu, Xinyu Li, Xiaopeng Hu
{"title":"Continuous detail enhancement framework for low-light image enhancement","authors":"Kang Liu, Zhihao Xv, Zhe Yang, Lian Liu, Xinyu Li, Xiaopeng Hu","doi":"10.1016/j.displa.2025.103040","DOIUrl":"10.1016/j.displa.2025.103040","url":null,"abstract":"<div><div>Low-light image enhancement is a crucial task for improving image quality in scenarios such as nighttime surveillance, autonomous driving at twilight, and low-light photography. Existing enhancement methods often focus on directly increasing brightness and contrast but neglect the importance of structural information, leading to information loss. In this paper, we propose a Continuous Detail Enhancement Framework for low-light image enhancement, termed as C-DEF. More specifically, we design an enhanced U-Net network that leverages dense connections to promote feature propagation to maintain consistency within the feature space and better preserve image details. Then, multi-perspective fusion enhancement module (MPFEM) is proposed to capture image features from multiple perspectives and further address the problem of feature space discontinuity. Moreover, an elaborate loss function drives the network to preserve critical information to achieve excess performance improvement. Extensive experiments on various benchmarks demonstrate the superiority of our method over state-of-the-art alternatives in both qualitative and quantitative evaluations. In addition, promising outcomes have been obtained by directly applying the trained model to the coal-rock dataset, indicating the model’s excellent generalization capability. The code is publicly available at <span><span>https://github.com/xv994/C-DEF</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103040"},"PeriodicalIF":3.7,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143725269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-03-25 · DOI: 10.1016/j.displa.2025.103006
Kerui Xi, Zijian Chen, Pengfei Wang, Feifei An, Wenqi Zhou, Xin Li, Tianyi Wu, Feng Qin, Xuhui Peng
{"title":"A halo effect measurement for transparent micro-LED display under simulation","authors":"Kerui Xi , Zijian Chen , Pengfei Wang , Feifei An , Wenqi Zhou , Xin Li , Tianyi Wu , Feng Qin , Xuhui Peng","doi":"10.1016/j.displa.2025.103006","DOIUrl":"10.1016/j.displa.2025.103006","url":null,"abstract":"<div><div>Halo effects, characterized by luminous rings or diffuse light surrounding bright objects on a screen, are influenced by factors such as display technology, ambient lighting conditions, and the human visual system. These effects can degrade image quality, reduce contrast, and impair the accuracy of visual tasks, making them a critical area of investigation in both academia and industry. Current halo measurement methods mainly focus on the mini-LED partition backlit liquid crystal displays (LCDs) and are nearly all physical-based, limiting their application and generalization ability. In this paper, we propose a simulation-based halo measurement framework, which offers a more flexible evaluation scheme with scene-fused halo generation. Specifically, we first simulate the four existing categories of halo manifestation externally through code, i.e., regular ring-shaped halo, rectangular halo, irregular content-dependent halo, and localized surrounding halo. Then, we build a 3D indoor environment to simulate the laboratory measurement environment, where a micro-LED-based display with variable parameters is taken as the evaluation object. To demonstrate the usefulness of this new and unique halo measurement resource, we conduct both subjective and objective experiments. The high subjective and objective consistency achieved by a simple deep learning-based image quality assessment (IQA) model demonstrates its utility that broadens the scene limits of display design.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103006"},"PeriodicalIF":3.7,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143705697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-03-25 · DOI: 10.1016/j.displa.2025.103034
Yucheng Jiang, Songping Mai, Peng Zhang, Junwei Hu, Jie Yu, Jian Cheng
{"title":"Enhancing real-time UHD intra-frame coding with parallel–serial hybrid neural networks","authors":"Yucheng Jiang , Songping Mai , Peng Zhang , Junwei Hu , Jie Yu , Jian Cheng","doi":"10.1016/j.displa.2025.103034","DOIUrl":"10.1016/j.displa.2025.103034","url":null,"abstract":"<div><div>The primary objective of a video encoder is to achieve both high real-time performance and a high compression ratio. Delivering these capabilities in a cost-effective hardware environment is crucial for practical applications. Numerous institutions have developed highly-optimized implementations for the mainstream video coding standards, such as x265 for HEVC, VVenC for VVC, and uAVS3e for AVS3. However, these implementations are still not capable of performing real-time encoding of 4K/8K UHD videos without significantly reducing compression complexity. This paper presents a parallel–serial hybrid neural network scheme, specifically tailored to expedite intra-frame block partitioning decisions. The parallel network is designed to extract effective features while minimizing the impact of network inference time. Simultaneously, the lightweight serial network effectively overcomes accuracy issue related to the data dependency introduced by the reconstructed pixels. The proposed enhancement scheme is integrated into the uAVS3 Real-Time encoder. The experimental results for CTC 4K UHD sequences demonstrate a significant increase in encoding speed (+30.2%) and an improvement in encoding quality, as evidenced by a 0.24% reduction in BD-BR. Compared to the previous work, we achieve the optimal trade-off in these two critical metrics. Furthermore, we integrated the enhanced encoder into the FFmpeg framework, enabling an efficient video encoding system capable of achieving 4K@50FPS and 8K@9FPS on affordable hardware configurations.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103034"},"PeriodicalIF":3.7,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143725268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays · Pub Date: 2025-03-25 · DOI: 10.1016/j.displa.2025.103021
Feng Qin, Zijian Chen, Pengfei Wang, Peixuan Chen, Feifei An, Meng Wang, Sitao Huo, Tianyi Wu, Kerui Xi, Xuhui Peng
{"title":"A dataset and model for the readability assessment of transparent micro-LED displays in smart cockpit","authors":"Feng Qin , Zijian Chen , Pengfei Wang , Peixuan Chen , Feifei An , Meng Wang , Sitao Huo , Tianyi Wu , Kerui Xi , Xuhui Peng","doi":"10.1016/j.displa.2025.103021","DOIUrl":"10.1016/j.displa.2025.103021","url":null,"abstract":"<div><div>In the increasingly advanced automotive smart cockpit, the requirements for display with high readability and pleasant viewing experiences under various viewing directions, lead to significant challenges in product manufacture and modification. The micro light-emitting diode (micro-LED) display which has outstanding features, such as low power consumption, wider color gamut, longer lifetime, and small chip size, makes it a perfect candidate to design next-generation immersion vehicle display. However, the wide range of in- and out-vehicle lighting conditions that these displays should be able to operate in, makes the design of the evaluation set-up even more challenging. In this paper, we investigate a novel simulation-based evaluation framework for transparent micro-LED displays. Specifically, we collect the first display readability dataset by conducting comprehensive subjective studies. Based on this, we propose a novel objective display readability assessment model, which is comprised of three branches that are designed to extract readability-related features including scene semantics, technical distortions, and salient screen regions. In the experiments, we evaluate various blind image quality assessment algorithms, including both handcrafted feature-based models and deep learning-based models, on the proposed display readability dataset. The results show the effectiveness of our proposed objective display readability evaluator that achieves better subjective consistency than other baselines. The ablation studies further demonstrate the effectiveness of the proposed multi-branch feature extraction strategy and the image pre-processing scheme to filter out readability irrelevant information.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"88 ","pages":"Article 103021"},"PeriodicalIF":3.7,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143705698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}