Frontiers in Neurorobotics. Pub Date: 2025-07-30. eCollection Date: 2025-01-01. DOI: 10.3389/fnbot.2025.1643011
Mohammed Alshehri, Tingting Xue, Ghulam Mujtaba, Yahya AlQahtani, Nouf Abdullah Almujally, Ahmad Jalal, Hui Liu
{"title":"Integrated neural network framework for multi-object detection and recognition using UAV imagery.","authors":"Mohammed Alshehri, Tingting Xue, Ghulam Mujtaba, Yahya AlQahtani, Nouf Abdullah Almujally, Ahmad Jalal, Hui Liu","doi":"10.3389/fnbot.2025.1643011","DOIUrl":"10.3389/fnbot.2025.1643011","url":null,"abstract":"<p><strong>Introduction: </strong>Accurate vehicle analysis from aerial imagery has become increasingly vital for emerging technologies and public service applications such as intelligent traffic management, urban planning, autonomous navigation, and military surveillance. However, analyzing UAV-captured video poses several inherent challenges, such as the small size of target vehicles, occlusions, cluttered urban backgrounds, motion blur, and fluctuating lighting conditions which hinder the accuracy and consistency of conventional perception systems. To address these complexities, our research proposes a fully end-to-end deep learning-driven perception pipeline specifically optimized for UAV-based traffic monitoring. The proposed framwork integrates multiple advanced modules: RetinexNet for preprocessing, segmentation using HRNet to preserve high-resolution semantic information, and vehicle detection using the YOLOv11 framework. Deep SORT is employed for efficient vehicle tracking, while CSRNet facilitates high-density vehicle counting. LSTM networks are integrated to predict vehicle trajectories based on temporal patterns, and a combination of DenseNet and SuperPoint is utilized for robust feature extraction. Finally, classification is performed using Vision Transformers (ViTs), leveraging attention mechanisms to ensure accurate recognition across diverse categories. The modular yet unified architecture is designed to handle spatiotemporal dynamics, making it suitable for real-time deployment in diverse UAV platforms.</p><p><strong>Method: </strong>The framework suggests using today's best neural networks that are made to solve different problems in aerial vehicle analysis. RetinexNet is used in preprocessing to make the lighting of each input frame consistent. Using HRNet for semantic segmentation allows for accurate splitting between vehicles and their surroundings. YOLOv11 provides high precision and quick vehicle detection and Deep SORT allows reliable tracking without losing track of individual cars. CSRNet are used for vehicle counting that is unaffected by obstacles or traffic jams. LSTM models capture how a car moves in time to forecast future positions. Combining DenseNet and SuperPoint embeddings that were improved with an AutoEncoder is done during feature extraction. In the end, using an attention function, Vision Transformer-based models classify vehicles seen from above. Every part of the system is developed and included to give the improved performance when the UAV is being used in real life.</p><p><strong>Results: </strong>Our proposed framework significantly improves the accuracy, reliability, and efficiency of vehicle analysis from UAV imagery. Our pipeline was rigorously evaluated on two famous datasets, AU-AIR and Roundabout. On the AU-AIR dataset, the system achieved a detection accuracy of 97.8%, a tracking accuracy of 96.5%, and a classification accuracy of 98.4%. 
Similarly, on the Roundabout dataset, it reached 96.9% det","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1643011"},"PeriodicalIF":2.8,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343587/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144845568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
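The abstract gives no implementation details; as a rough illustration of the trajectory-prediction stage described above, the sketch below trains a small LSTM to predict the next (x, y) position of a tracked vehicle from a short history of past positions. The layer sizes, sequence length, and synthetic tracks are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: LSTM-based next-position prediction for tracked vehicles.
# Layer sizes, sequence length, and the synthetic trajectories are illustrative assumptions.
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # predict next (x, y)

    def forward(self, xy_history):                # (batch, T, 2)
        out, _ = self.lstm(xy_history)
        return self.head(out[:, -1])              # (batch, 2)

def synthetic_tracks(n=256, T=8):
    """Noisy straight-line tracks, standing in for positions produced by the tracker."""
    start = torch.rand(n, 1, 2)
    vel = 0.05 * (torch.rand(n, 1, 2) - 0.5)
    t = torch.arange(T + 1).float().view(1, T + 1, 1)
    tracks = start + vel * t + 0.002 * torch.randn(n, T + 1, 2)
    return tracks[:, :T], tracks[:, T]            # history, next position

model = TrajectoryLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    hist, target = synthetic_tracks()
    loss = nn.functional.mse_loss(model(hist), target)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final MSE: {loss.item():.5f}")
```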
{"title":"NeuroVI-based wave compensation system control for offshore wind turbines.","authors":"Fengshuang Ma, Xiangyong Liu, Zhiqiang Xu, Tianhong Ding","doi":"10.3389/fnbot.2025.1648713","DOIUrl":"10.3389/fnbot.2025.1648713","url":null,"abstract":"<p><p>In deep-sea areas, the hoisting operation of offshore wind turbines is seriously affected by waves, and the secondary impact is prone to occur between the turbine and the pile foundation. To address this issue, this study proposes an integrated wave compensation system for offshore wind turbines based on a neuromorphic vision (NeuroVI) camera. The system employs a NeuroVI camera to achieve non-contact, high-precision, and low-latency displacement detection of hydraulic cylinders, overcoming the limitations of traditional magnetostrictive displacement sensors, which exhibit slow response and susceptibility to interference in harsh marine conditions. A dynamic simulation model was developed using AMESim-Simulink co-simulation to analyze the compensation performance of the NeuroVI-based system under step and sinusoidal wave disturbances. Comparative results demonstrate that the NeuroVI feedback system achieves faster response times and superior stability over conventional sensors. Laboratory-scale model tests and real-world application in the installation of a 5.2 MW offshore wind turbine validated the system's feasibility and robustness, enabling real-time collaborative control of turbine and cylinder displacement to effectively mitigate multi-impact risks. This research provides an innovative approach for deploying neural perception technology in complex marine scenarios and advances the development of neuro-robotic systems in ocean engineering.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1648713"},"PeriodicalIF":2.8,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12343490/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144845569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics. Pub Date: 2025-07-28. eCollection Date: 2025-01-01. DOI: 10.3389/fnbot.2025.1604453
Chenfei Ma, Xinyu Jiang, Kianoush Nazarpour
{"title":"Pre-training, personalization, and self-calibration: all a neural network-based myoelectric decoder needs.","authors":"Chenfei Ma, Xinyu Jiang, Kianoush Nazarpour","doi":"10.3389/fnbot.2025.1604453","DOIUrl":"10.3389/fnbot.2025.1604453","url":null,"abstract":"<p><p>Myoelectric control systems translate electromyographic signals (EMG) from muscles into movement intentions, allowing control over various interfaces, such as prosthetics, wearable devices, and robotics. However, a major challenge lies in enhancing the system's ability to generalize, personalize, and adapt to the high variability of EMG signals. Artificial intelligence, particularly neural networks, has shown promising decoding performance when applied to large datasets. However, highly parameterized deep neural networks usually require extensive user-specific data with ground truth labels to learn individual unique EMG patterns. However, the characteristics of the EMG signal can change significantly over time, even for the same user, leading to performance degradation during extended use. In this work, we propose an innovative three-stage neural network training scheme designed to progressively develop an adaptive workflow, improving and maintaining the network performance on 28 subjects over 2 days. Experiments demonstrate the importance and necessity of each stage in the proposed framework.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1604453"},"PeriodicalIF":2.8,"publicationDate":"2025-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12336220/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144821257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics. Pub Date: 2025-07-11. eCollection Date: 2025-01-01. DOI: 10.3389/fnbot.2025.1576473
Zhisheng Ma, Shaobin Huang
{"title":"Design and analysis of combined discrete-time zeroing neural network for solving time-varying nonlinear equation with robot application.","authors":"Zhisheng Ma, Shaobin Huang","doi":"10.3389/fnbot.2025.1576473","DOIUrl":"10.3389/fnbot.2025.1576473","url":null,"abstract":"<p><p>Zeroing neural network (ZNN) is viewed as an effective solution to time-varying nonlinear equation (TVNE). In this paper, a further study is shown by proposing a novel combined discrete-time ZNN (CDTZNN) model for solving TVNE. Specifically, a new difference formula, which is called the Taylor difference formula, is constructed for first-order derivative approximation by following Taylor series expansion. The Taylor difference formula is then used to discretize the continuous-time ZNN model in the previous study. The corresponding DTZNN model is obtained, where the direct Jacobian matrix inversion is required (being time consuming). Another DTZNN model for computing the inverse of Jacobian matrix is established to solve the aforementioned limitation. The novel CDTZNN model for solving the TVNE is thus developed by combining the two models. Theoretical analysis and numerical results demonstrate the efficacy of the proposed CDTZNN model. The CDTZNN applicability is further indicated by applying the proposed model to the motion planning of robot manipulators.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1576473"},"PeriodicalIF":2.8,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12289663/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144729707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics. Pub Date: 2025-06-27. eCollection Date: 2025-01-01. DOI: 10.3389/fnbot.2025.1630728
Xueqin Ji, Shuting Zhao, Di Liu, Feng Wang, Xinrong Chen
{"title":"A robust and effective framework for 3D scene reconstruction and high-quality rendering in nasal endoscopy surgery.","authors":"Xueqin Ji, Shuting Zhao, Di Liu, Feng Wang, Xinrong Chen","doi":"10.3389/fnbot.2025.1630728","DOIUrl":"10.3389/fnbot.2025.1630728","url":null,"abstract":"<p><p>In nasal endoscopic surgery, the narrow nasal cavity restricts the surgical field of view and the manipulation of surgical instruments. Therefore, precise real-time intraoperative navigation, which can provide precise 3D information, plays a crucial role in avoiding critical areas with dense blood vessels and nerves. Although significant progress has been made in endoscopic 3D reconstruction methods, their application in nasal scenarios still faces numerous challenges. On the one hand, there is a lack of high-quality, annotated nasal endoscopy datasets. On the other hand, issues such as motion blur and soft tissue deformations complicate the nasal endoscopy reconstruction process. To tackle these challenges, a series of nasal endoscopy examination videos are collected, and the pose information for each frame is recorded. Additionally, a novel model named Mip-EndoGS is proposed, which integrates 3D Gaussian Splatting for reconstruction and rendering and a diffusion module to reduce image blurring in endoscopic data. Meanwhile, by incorporating an adaptive low-pass filter into the rendering pipeline, the aliasing artifacts (jagged edges) are mitigated, which occur during the rendering process. Extensive quantitative and visual experiments show that the proposed model is capable of reconstructing 3D scenes within the nasal cavity in real-time, thereby offering surgeons more detailed and precise information about the surgical scene. Moreover, the proposed approach holds great potential for integration with AR-based surgical navigation systems to enhance intraoperative guidance.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1630728"},"PeriodicalIF":2.6,"publicationDate":"2025-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12245865/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144626010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics. Pub Date: 2025-06-19. eCollection Date: 2025-01-01. DOI: 10.3389/fnbot.2025.1480399
Kody Shaw, John L Salmon, Marc D Killpack
{"title":"Understanding human co-manipulation via motion and haptic information to enable future physical human-robotic collaborations.","authors":"Kody Shaw, John L Salmon, Marc D Killpack","doi":"10.3389/fnbot.2025.1480399","DOIUrl":"10.3389/fnbot.2025.1480399","url":null,"abstract":"<p><p>Human teams intuitively and effectively collaborate to move large, heavy, or unwieldy objects. However, understanding of this interaction in literature is limited. This is especially problematic given our goal to enable human-robot teams to work together. Therefore, to better understand how human teams work together to eventually enable intuitive human-robot interaction, in this paper we examine four sub-components of collaborative manipulation (co-manipulation), using motion and haptics. We define co-manipulation as a group of two or more agents collaboratively moving an object. We present a study that uses a large object for co-manipulation as we vary the number of participants (two or three) and the roles of the participants (leaders or followers), and the degrees of freedom necessary to complete the defined motion for the object. In analyzing the results, we focus on four key components related to motion and haptics. Specifically, we first define and examine a static or rest state to demonstrate a method of detecting transitions between the static state and an active state, where one or more agents are moving toward an intended goal. Secondly, we analyze a variety of signals (e.g. force, acceleration, etc.) during movements in each of the six rigid-body degrees of freedom of the co-manipulated object. This data allows us to identify the best signals that correlate with the desired motion of the team. Third, we examine the completion percentage of each task. The completion percentage for each task can be used to determine which motion objectives can be communicated via haptic feedback. Finally, we define a metric to determine if participants divide two degree-of-freedom tasks into separate degrees of freedom or if they take the most direct path. These four components contribute to the necessary groundwork for advancing intuitive human-robot interaction.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1480399"},"PeriodicalIF":2.6,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12222233/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144559877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics. Pub Date: 2025-06-19. eCollection Date: 2025-01-01. DOI: 10.3389/fnbot.2025.1616919
Xiaorong Qiu, Yingzhong Shi
{"title":"Multimodal fusion image enhancement technique and CFEC-YOLOv7 for underwater target detection algorithm research.","authors":"Xiaorong Qiu, Yingzhong Shi","doi":"10.3389/fnbot.2025.1616919","DOIUrl":"10.3389/fnbot.2025.1616919","url":null,"abstract":"<p><p>The underwater environment is more complex than that on land, resulting in severe static and dynamic blurring in underwater images, reducing the recognition accuracy of underwater targets and failing to meet the needs of underwater environment detection. Firstly, for the static blurring problem, we propose an adaptive color compensation algorithm and an improved MSR algorithm. Secondly, for the problem of dynamic blur, we adopt the Restormer network to eliminate the dynamic blur caused by the combined effects of camera shake, camera out-of-focus and relative motion displacement, etc. then, through qualitative analysis, quantitative analysis and underwater target detection on the enhanced dataset, the feasibility of our underwater enhancement method is verified. Finally, we propose a target recognition network suitable for the complex underwater environment. The local and global information is fused through the CCBC module and the ECLOU loss function to improve the positioning accuracy. The FasterNet module is introduced to reduce redundant computations and parameter counting. The experimental results show that the CFEC-YOLOv7 model and the underwater image enhancement method proposed by us exhibit excellent performance, can better adapt to the underwater target recognition task, and have a good application prospect.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1616919"},"PeriodicalIF":2.6,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12222134/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144559876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics. Pub Date: 2025-06-18. eCollection Date: 2025-01-01. DOI: 10.3389/fnbot.2025.1587973
Xiaofei Han, Xin Dou
{"title":"User recommendation method integrating hierarchical graph attention network with multimodal knowledge graph.","authors":"Xiaofei Han, Xin Dou","doi":"10.3389/fnbot.2025.1587973","DOIUrl":"10.3389/fnbot.2025.1587973","url":null,"abstract":"<p><p>In common graph neural network (GNN), although incorporating social network information effectively utilizes interactions between users, it often overlooks the deeper semantic relationships between items and fails to integrate visual and textual feature information. This limitation can restrict the diversity and accuracy of recommendation results. To address this, the present study combines knowledge graph, GNN, and multimodal information to enhance feature representations of both users and items. The inclusion of knowledge graph not only provides a better understanding of the underlying logic behind user interests and preferences but also aids in addressing the cold-start problem for new users and items. Moreover, in improving recommendation accuracy, visual and textual features of items are incorporated as supplementary information. Therefore, a user recommendation model is proposed that integrates hierarchical graph attention network with multimodal knowledge graph. The model consists of four key components: a collaborative knowledge graph neural layer, an image feature extraction layer, a text feature extraction layer, and a prediction layer. The first three layers extract user and item features, and the recommendation is completed in the prediction layer. Experimental results based on two public datasets demonstrate that the proposed model significantly outperforms existing recommendation methods in terms of recommendation performance.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1587973"},"PeriodicalIF":2.6,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12213718/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144553235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Context-Aware Enhanced Feature Refinement for small object detection with Deformable DETR.","authors":"Donghao Shi, Cunbin Zhao, Jianwen Shao, Minjie Feng, Lei Luo, Bing Ouyang, Jiamin Huang","doi":"10.3389/fnbot.2025.1588565","DOIUrl":"10.3389/fnbot.2025.1588565","url":null,"abstract":"<p><p>Small object detection is a critical task in applications like autonomous driving and ship black smoke detection. While Deformable DETR has advanced small object detection, it faces limitations due to its reliance on CNNs for feature extraction, which restricts global context understanding and results in suboptimal feature representation. Additionally, it struggles with detecting small objects that occupy only a few pixels due to significant size disparities. To overcome these challenges, we propose the Context-Aware Enhanced Feature Refinement Deformable DETR, an improved Deformable DETR network. Our approach introduces Mask Attention in the backbone to improve feature extraction while effectively suppressing irrelevant background information. Furthermore, we propose a Context-Aware Enhanced Feature Refinement Encoder to address the issue of small objects with limited pixel representation. Experimental results demonstrate that our method outperforms the baseline, achieving a 2.1% improvement in mAP.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1588565"},"PeriodicalIF":2.6,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12185399/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144484070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth-aware unpaired image-to-image translation for autonomous driving test scenario generation using a dual-branch GAN.","authors":"Donghao Shi, Chenxin Zhao, Cunbin Zhao, Zhou Fang, Chonghao Yu, Jian Li, Minjie Feng","doi":"10.3389/fnbot.2025.1603964","DOIUrl":"10.3389/fnbot.2025.1603964","url":null,"abstract":"<p><p>Reliable visual perception is essential for autonomous driving test scenario generation, yet adverse weather and lighting variations pose significant challenges to simulation robustness and generalization. Traditional unpaired image-to-image translation methods primarily rely on RGB-based transformations, often resulting in geometric distortions and loss of structural consistency, which can negatively impact the realism and accuracy of generated test scenarios. To address these limitations, we propose a Depth-Aware Dual-Branch Generative Adversarial Network (DAB-GAN) that explicitly incorporates depth information to preserve spatial structures during scenario generation. The dual-branch generator processes both RGB and depth inputs, ensuring geometric fidelity, while a self-attention mechanism enhances spatial dependencies and local detail refinement. This enables the creation of realistic and structure-preserving test environments that are crucial for evaluating autonomous driving perception systems, especially under adverse weather conditions. Experimental results demonstrate that DAB-GAN outperforms existing unpaired image-to-image translation methods, achieving superior visual fidelity and maintaining depth-aware structural integrity. This approach provides a robust framework for generating diverse and challenging test scenarios, enhancing the development and validation of autonomous driving systems under various real-world conditions.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1603964"},"PeriodicalIF":2.6,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12162506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144301898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}