Displays | Pub Date: 2024-11-15 | DOI: 10.1016/j.displa.2024.102890
Xiaoxiao Liu, Yan Zhao, Shigang Wang, Jian Wei
{"title":"Mambav3d: A mamba-based virtual 3D module stringing semantic information between layers of medical image slices","authors":"Xiaoxiao Liu, Yan Zhao, Shigang Wang, Jian Wei","doi":"10.1016/j.displa.2024.102890","DOIUrl":"10.1016/j.displa.2024.102890","url":null,"abstract":"<div><div>High-precision medical image segmentation provides a reliable basis for clinical analysis and diagnosis. Researchers have developed various models to enhance the segmentation performance of medical images. Among these methods, two-dimensional models such as Unet exhibit a simple structure, low computational resource requirements, and strong local feature capture capabilities. However, their spatial information utilization is insufficient, limiting their segmentation accuracy. Three-dimensional models, such as 3D Unet, utilize spatial information more fully and are suitable for complex tasks, but they require high computational resources and have limited real-time performance. In this paper, we propose a virtual 3D module (Mambav3d) based on mamba, which introduces spatial information into 2D segmentation tasks to more fully integrate the 3D information of the image and further improve segmentation accuracy under conditions of low computational resource requirements. Mambav3d leverages the properties of hidden states in the state space model, combined with the shift of visual perspective, to incorporate semantic information between different anatomical planes in different slices of the same 3D sample. The voxel segmentation is converted to pixel segmentation to reduce model training data requirements and model complexity while ensuring that the model integrates 3D information and enhances segmentation accuracy. The model references the information from previous layers when labeling the current layer, thereby facilitating the transfer of semantic information between slice layers and avoiding the high computational cost associated with using structures such as Transformers between layers. We have implemented Mambav3d on Unet and evaluated its performance on the BraTs, Amos, and KiTs datasets, demonstrating superiority over other state-of-the-art methods.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102890"},"PeriodicalIF":3.7,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2024-11-14 | DOI: 10.1016/j.displa.2024.102881
Zikang Chen , Zhouyan He , Ting Luo , Chongchong Jin , Yang Song
{"title":"Luminance decomposition and Transformer based no-reference tone-mapped image quality assessment","authors":"Zikang Chen , Zhouyan He , Ting Luo , Chongchong Jin , Yang Song","doi":"10.1016/j.displa.2024.102881","DOIUrl":"10.1016/j.displa.2024.102881","url":null,"abstract":"<div><div>Tone-Mapping Operators (TMOs) play a crucial role in converting High Dynamic Range (HDR) images into Tone-Mapped Images (TMIs) with standard dynamic range for optimal display on standard monitors. Nevertheless, TMIs generated by distinct TMOs may exhibit diverse visual artifacts, highlighting the significance of TMI Quality Assessment (TMIQA) methods in predicting perceptual quality and guiding advancements in TMOs. Inspired by luminance decomposition and Transformer, a new no-reference TMIQA method based on deep learning is proposed in this paper, named LDT-TMIQA. Specifically, a TMI will change under the influence of different TMOs, potentially resulting in either over-exposure or under-exposure, leading to structure distortion and changes in texture details. Therefore, we first decompose the luminance channel of a TMI into a base layer and a detail layer that capture structure information and texture information, respectively. Then, they are employed with the TMI collectively as inputs to the Feature Extraction Module (FEM) to enhance the availability of prior information on luminance, structure, and texture. Additionally, the FEM incorporates the Cross Attention Prior Module (CAPM) to model the interdependencies among the base layer, detail layer, and TMI while employing the Iterative Attention Prior Module (IAPM) to extract multi-scale and multi-level visual features. Finally, a Feature Selection Fusion Module (FSFM) is proposed to obtain final effective features for predicting the quality scores of TMIs by reducing the weight of unnecessary features and fusing the features of different levels with equal importance. Extensive experiments on the publicly available TMI benchmark database indicate that the proposed LDT-TMIQA reaches the state-of-the-art level.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102881"},"PeriodicalIF":3.7,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2024-11-09 | DOI: 10.1016/j.displa.2024.102882
Zhichao Chen , Shuyu Xiao , Yongfang Wang , Yihan Wang , Hongming Cai
{"title":"GLDBF: Global and local dual-branch fusion network for no-reference point cloud quality assessment","authors":"Zhichao Chen , Shuyu Xiao , Yongfang Wang , Yihan Wang , Hongming Cai","doi":"10.1016/j.displa.2024.102882","DOIUrl":"10.1016/j.displa.2024.102882","url":null,"abstract":"<div><div>No-reference Point Cloud Quality Assessment (NR-PCQA) is a challenge in the field of media quality assessment, such as inability to accurately capture quality-related features due to the unique scattered structure of points and less considering global features and local features jointly in the existing no-reference PCQA metrics. To address these challenges, we propose a Global and Local Dual-Branch Fusion (GLDBF) network for no-reference point cloud quality assessment. Firstly, sparse convolution is used to extract the global quality feature of distorted Point Clouds (PCs). Secondly, graph weighted PointNet++ is proposed to extract the multi-level local features of point cloud, and the offset attention mechanism is further used to enhance local effective features. Transformer-based fusion module is also proposed to fuse multi-level local features. Finally, we joint the global and local dual branch fusion modules via multilayer perceptron to predict the quality score of distorted PCs. Experimental results show that the proposed algorithm can achieves state-of-the-art performance compared with existing methods in assessing the quality of distorted PCs.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102882"},"PeriodicalIF":3.7,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2024-11-06 | DOI: 10.1016/j.displa.2024.102874
Faruk Enes Oğuz , Ahmet Alkan
{"title":"Weighted ensemble deep learning approach for classification of gastrointestinal diseases in colonoscopy images aided by explainable AI","authors":"Faruk Enes Oğuz , Ahmet Alkan","doi":"10.1016/j.displa.2024.102874","DOIUrl":"10.1016/j.displa.2024.102874","url":null,"abstract":"<div><div>Gastrointestinal diseases are significant health issues worldwide, requiring early diagnosis due to their serious health implications. Therefore, detecting these diseases using artificial intelligence-based medical decision support systems through colonoscopy images plays a critical role in early diagnosis. In this study, a deep learning-based method is proposed for the classification of gastrointestinal diseases and colon anatomical landmarks using colonoscopy images. For this purpose, five different Convolutional Neural Network (CNN) models, namely Xception, ResNet-101, NASNet-Large, EfficientNet, and NASNet-Mobile, were trained. An ensemble model was created using class-based recall values derived from the validation performances of the top three models (Xception, ResNet-101, NASNet-Large). A user-friendly Graphical User Interface (GUI) was developed, allowing users to perform classification tasks and use Gradient-weighted Class Activation Mapping (Grad-CAM), an explainable AI tool, to visualize the regions from which the model derives information. Grad-CAM visualizations contribute to a better understanding of the model’s decision-making processes and play an important role in the application of explainable AI. In the study, eight labels, including anatomical markers such as z-line, pylorus, and cecum, as well as pathological findings like esophagitis, polyps, and ulcerative colitis, were classified using the KVASIR V2 dataset. The proposed ensemble model achieved a 94.125% accuracy on the KVASIR V2 dataset, demonstrating competitive performance compared to similar studies in the literature. Additionally, the precision and F1 score values of this model are equal to 94.168% and 94.125%, respectively. These results suggest that the proposed method provides an effective solution for the diagnosis of GI diseases and can be beneficial for medical education.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102874"},"PeriodicalIF":3.7,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2024-11-06 | DOI: 10.1016/j.displa.2024.102870
Mohammed Kadri , Fatima-Ezzahra Boubakri , Timothy Teo , Fatima-Zahra Kaghat , Ahmed Azough , Khalid Alaoui Zidani
{"title":"Virtual reality in medical education: Effectiveness of Immersive Virtual Anatomy Laboratory (IVAL) compared to traditional learning approaches","authors":"Mohammed Kadri , Fatima-Ezzahra Boubakri , Timothy Teo , Fatima-Zahra Kaghat , Ahmed Azough , Khalid Alaoui Zidani","doi":"10.1016/j.displa.2024.102870","DOIUrl":"10.1016/j.displa.2024.102870","url":null,"abstract":"<div><div>Immersive Virtual Anatomy Laboratory (IVAL) is an innovative learning tool that combines virtual reality and serious games elements to enhance anatomy education. This experimental study compares IVAL with traditional learning methods in terms of educational effectiveness and user acceptance. An experimental design was implemented with 120 undergraduate health-science students, randomly assigned to two groups: an experimental group using IVAL, and a control group following traditional learning methods. Data collection focused on quantitative measures such as pretest and posttest vocabulary assessment scores and task completion times, alongside qualitative measures obtained through a user experience questionnaire. This study utilizes the Technology Acceptance Model (TAM), incorporating variables such as Perceived Usefulness and Perceived Ease of Use. Results revealed significant improvements in the experimental group, with a 55.95% increase in vocabulary scores and an 18.75% reduction in task completion times compared to the control group. Qualitative data indicated that IVAL users reported greater Perceived Usefulness of the technology, improved Perceived Ease of Use, a more positive Attitude Towards Using IVAL, and stronger Behavioral Intention to continue using IVAL for anatomy learning. This study demonstrates that the integration of immersive virtual reality in the IVAL approach offers a promising method to enhance anatomy education. The findings provide insights into the effectiveness of immersive learning environments in improving learning outcomes and user acceptance. While further research is needed to explore long-term effects, this innovative approach not only enhances the effectiveness and enjoyment of anatomy learning but also provides valuable data on optimizing educational technology for improved learning outcomes.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102870"},"PeriodicalIF":3.7,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142650755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2024-11-02 | DOI: 10.1016/j.displa.2024.102872
Xinyu Li , Qiaohong Liu , Xuewei Li , Tiansheng Huang , Min Lin , Xiaoxiang Han , Weikun Zhang , Keyan Chen , Yuanjie Lin
{"title":"CIFTC-Net: Cross information fusion network with transformer and CNN for polyp segmentation","authors":"Xinyu Li , Qiaohong Liu , Xuewei Li , Tiansheng Huang , Min Lin , Xiaoxiang Han , Weikun Zhang , Keyan Chen , Yuanjie Lin","doi":"10.1016/j.displa.2024.102872","DOIUrl":"10.1016/j.displa.2024.102872","url":null,"abstract":"<div><div>Polyp segmentation plays a crucial role in the early diagnosis and treatment of colorectal cancer, which is the third most common cancer worldwide. Despite remarkable successes achieved by recent deep learning-related works, accurate segmentation of polyps remains challenging due to the diversity in their shapes, sizes, appearances, and other factors. To address these problems, a novel cross information fusion network with Transformer and convolutional neural network (CNN) for polyp segmentation, named CIFTC-Net, is proposed to improve the segmentation performance of colon polyps. In particular, a dual-branch encoder with Pyramid Vision Transformer (PVT) and ResNet50 is employed to take full advantage of both the global semantic information and local spatial features to enhance the feature representation ability. To effectively fuse the two types of features, a new global–local feature fusion (GLFF) module is designed. Additionally, in the PVT branch, a multi-scale feature integration (MSFI) module is introduced to fuse multi-scale features adaptively. At the bottom of the model, a multi-scale atrous pyramid bridging (MSAPB) module is proposed to achieve rich and robust multi-level features and improve the segmentation accuracy. Experimental results on four public polyp segmentation datasets demonstrate that CIFTC-Net surpasses current state-of-the-art methods across various metrics, showcasing its superiority in segmentation accuracy, generalization ability, and handling of complex images.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102872"},"PeriodicalIF":3.7,"publicationDate":"2024-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142592945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2024-10-30 | DOI: 10.1016/j.displa.2024.102869
Haibo Yin , Rui Li , Yingjie Victor Chen
{"title":"From hardware to software integration: A comparative study of usability and safety in vehicle interaction modes","authors":"Haibo Yin , Rui Li , Yingjie Victor Chen","doi":"10.1016/j.displa.2024.102869","DOIUrl":"10.1016/j.displa.2024.102869","url":null,"abstract":"<div><div>The increasing advancement of human–machine interaction (HMI) technology has brought the modes of vehicle HMI into focus, as they are closely related to driver and passenger safety and directly affect the travel experiences. This study compared the usability and safety of three vehicle HMI modes: hardware interaction (HI), hardware and software interaction (HSI), and software interaction (SI). The evaluation comprised two dimensions: usability and safety. Sixty participants’ performance on these tasks was evaluated at two driving speeds (30 km/h and 60 km/h). The results of the nonparametric tests indicated significant differences between the three interaction modes: (1) HI was the highest safety-oriented interaction mode with participants had the highest average vehicle speed and maximum acceleration measured at 60 km/h and the lowest glance frequency at both speeds; (2) HSI was the most usable interaction mode. Participants had the shortest task-completion time measured at 60 km/h and the highest score on the NASA-TLX and SUS scales taken for both speeds; (3) SI was the lowest secure and usable in-vehicle interaction mode. Participants had the longest task-completion time at 60 km/h, the highest error frequency under 30 and 60 km/h and the highest glance frequency, the longest total glance duration and the longest average glance time. In conclusion, HI and HSI were more secure and usable in-vehicle interaction modes than SI. From a theoretical exploration perspective, this paper elaborates on some exploratory thoughts and innovative ideas for practical application to the screen HMI mode selection and design in intelligent vehicle cabins.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102869"},"PeriodicalIF":3.7,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142572264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2024-10-30 | DOI: 10.1016/j.displa.2024.102862
Fangyuan Zhang , Rukai Wei , Yanzhao Xie , Yangtao Wang , Xin Tan , Lizhuang Ma , Maobin Tang , Lisheng Fan
{"title":"Cross-coupled prompt learning for few-shot image recognition","authors":"Fangyuan Zhang , Rukai Wei , Yanzhao Xie , Yangtao Wang , Xin Tan , Lizhuang Ma , Maobin Tang , Lisheng Fan","doi":"10.1016/j.displa.2024.102862","DOIUrl":"10.1016/j.displa.2024.102862","url":null,"abstract":"<div><div>Prompt learning based on large models shows great potential to reduce training time and resource costs, which has been progressively applied to visual tasks such as image recognition. Nevertheless, the existing prompt learning schemes suffer from either inadequate prompt information from a single modality or insufficient prompt interaction between multiple modalities, resulting in low efficiency and performance. To address these limitations, we propose a <u>C</u>ross-<u>C</u>oupled <u>P</u>rompt <u>L</u>earning (CCPL) architecture, which is designed with two novel components (i.e., Cross-Coupled Prompt Generator (CCPG) module and Cross-Modal Fusion (CMF) module) to achieve efficient interaction between visual and textual prompts. Specifically, the CCPG module incorporates a cross-attention mechanism to automatically generate visual and textual prompts, each of which will be adaptively updated using the self-attention mechanism in their respective image and text encoders. Furthermore, the CMF module implements a deep fusion to reinforce the cross-modal feature interaction from the output layer with the Image–Text Matching (ITM) loss function. We conduct extensive experiments on 8 image datasets. The experimental results verify that our proposed CCPL outperforms the SOTA methods on few-shot image recognition tasks. The source code of this project is released at: <span><span>https://github.com/elegantTechie/CCPL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102862"},"PeriodicalIF":3.7,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142571781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing arbitrary style transfer like an artist","authors":"Hangwei Chen, Feng Shao, Baoyang Mu, Qiuping Jiang","doi":"10.1016/j.displa.2024.102859","DOIUrl":"10.1016/j.displa.2024.102859","url":null,"abstract":"<div><div>Arbitrary style transfer (AST) is a distinctive technique for transferring artistic style into content images, with the goal of generating stylized images that approximates real artistic paintings. Thus, it is natural to develop a quantitative evaluation metric to act like an artist for accurately assessing the quality of AST images. Inspired by this, we present an artist-like network (AL-Net) which can analyze the quality of the stylized images like an artist from the fine knowledge of artistic painting (e.g., aesthetics, structure, color, texture). Specifically, the AL-Net consists of three sub-networks: an aesthetic prediction network (AP-Net), a content preservation prediction network (CPP-Net), and a style resemblance prediction network (SRP-Net), which can be regarded as specialized feature extractors, leveraging professional artistic painting knowledge through pre-training by different labels. To more effectively predict the final overall quality, we apply transfer learning to integrate the pre-trained feature vectors representing different painting elements into overall vision quality regression. The loss determined by the overall vision label fine-tunes the parameters of AL-Net, and thus our model can establish a tight connection with human perception. Extensive experiments on the AST-IQAD dataset validate that the proposed method achieves the state-of-the-art performance.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102859"},"PeriodicalIF":3.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142571779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Displays | Pub Date: 2024-10-28 | DOI: 10.1016/j.displa.2024.102868
Brandy Murovec , Julia Spaniol , Behrang Keshavarz
{"title":"The role of image realism and expectation in illusory self-motion (vection) perception in younger and older adults","authors":"Brandy Murovec , Julia Spaniol , Behrang Keshavarz","doi":"10.1016/j.displa.2024.102868","DOIUrl":"10.1016/j.displa.2024.102868","url":null,"abstract":"<div><div>Research on the illusion of self-motion (vection) has primarily focused on younger adults, with few studies including older adults. In light of documented age differences in bottom-up and top-down perception and attention, the current study examined the impact of stimulus properties (speed), cognitive factors (expectancy), and a combination of both (stimulus realism) on vection in younger (18–35 years) and older (65+ years) adults. Participants were led to believe through manipulation of the study instructions that they were either likely or unlikely to experience vection before they were exposed to a rotating visual stimulus aimed to induce circular vection. Realism was manipulated by disrupting the global consistency of the visual stimulus comprised of an intact 360° panoramic photograph, resulting in two images (intact, scrambled). The speed of the stimulus was varied (faster, slower). Vection was measured using self-ratings of onset latency, duration, and intensity. Results showed that intact images produced more vection than scrambled images, especially at faster speeds. In contrast, expectation did not significantly impact vection. Overall, these patterns were similar across both age groups, although younger adults reported earlier vection onsets than older adults at faster speeds. These findings suggest that vection results from an interplay of stimulus-driven and cognitive factors in both younger and older adults.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"85 ","pages":"Article 102868"},"PeriodicalIF":3.7,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142579051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}