{"title":"A greedy assist-as-needed controller for end-effect upper limb rehabilitation robot based on 3-DOF potential field constraints.","authors":"Yue Lu, Zixuan Lin, Yahui Li, Jinwang Lv, Jiaji Zhang, Cong Xiao, Ye Liang, Xujiao Chen, Tao Song, Guohong Chai, Guokun Zuo","doi":"10.3389/frobt.2024.1404814","DOIUrl":"10.3389/frobt.2024.1404814","url":null,"abstract":"<p><p>It has been proven that robot-assisted rehabilitation training can effectively promote the recovery of upper-limb motor function in post-stroke patients. Increasing patients' active participation by providing assist-as-needed (AAN) control strategies is key to the effectiveness of robot-assisted rehabilitation training. In this paper, a greedy assist-as-needed (GAAN) controller based on radial basis function (RBF) network combined with 3 degrees of freedom (3-DOF) potential constraints was proposed to provide AAN interactive forces of an end-effect upper limb rehabilitation robot. The proposed 3-DOF potential fields were adopted to constrain the tangential motions of three kinds of typical target trajectories (one-dimensional (1D) lines, two-dimensional (2D) curves and three-dimensional (3D) spirals) while the GAAN controller was designed to estimate the motor capability of a subject and provide appropriate robot-assisted forces. The co-simulation (Adams-Matlab/Simulink) experiments and behavioral experiments on 10 healthy volunteers were conducted to validate the utility of the GAAN controller. The experimental results demonstrated that the GAAN controller combined with 3-DOF potential field constraints enabled the subjects to actively participate in kinds of tracking tasks while keeping acceptable tracking accuracies. 3D spirals could be better in stimulating subjects' active participation when compared to 1D and 2D target trajectories. 
The current GAAN controller has the potential to be applied to existing commercial upper limb rehabilitation robots.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1404814"},"PeriodicalIF":2.9,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11522331/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142548287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
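The control idea summarized above (a potential field that pulls the hand back toward the target trajectory, with assistance scaled down as the subject's estimated motor capability grows) can be sketched minimally. The gain, saturation limit, and scalar capability value below are illustrative assumptions; the paper estimates capability with an RBF network rather than taking it as a given scalar.

```python
import math

def aan_force(pos, target, capability, k_field=40.0, f_max=10.0):
    """Illustrative assist-as-needed force toward a target trajectory.

    A spring-like potential field pulls the hand back toward the target
    point, and the assistive magnitude shrinks as the estimated motor
    capability (0..1) grows -- the "as needed" part.
    """
    # Error vector from current position to the target trajectory point.
    error = [t - p for p, t in zip(pos, target)]
    dist = math.sqrt(sum(e * e for e in error))
    if dist == 0.0:
        return [0.0, 0.0, 0.0]
    # Potential-field restoring force, saturated at f_max.
    magnitude = min(k_field * dist, f_max)
    # Scale assistance down for capable subjects (the AAN idea).
    magnitude *= max(0.0, 1.0 - capability)
    return [magnitude * e / dist for e in error]
```

For a weak subject (capability 0.2) the restoring force toward the path comes out several times larger than for a capable one (capability 0.9), which is exactly the assist-as-needed behavior the abstract describes.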
{"title":"Evaluation of fabric-based pneumatic actuator enclosure and anchoring configurations in a pediatric soft robotic exosuit.","authors":"Ipsita Sahin, Mehrnoosh Ayazi, Caio Mucchiani, Jared Dube, Konstantinos Karydis, Elena Kokkoni","doi":"10.3389/frobt.2024.1302862","DOIUrl":"10.3389/frobt.2024.1302862","url":null,"abstract":"<p><strong>Introduction: </strong>Soft robotics play an increasing role in the development of exosuits that assist, and in some cases enhance human motion. While most existing efforts have focused on the adult population, devices targeting infants are on the rise. This work investigated how different configurations pertaining to fabric-based pneumatic shoulder and elbow actuator embedding on the passive substrate of an exosuit for pediatric upper extremity motion assistance can affect key performance metrics.</p><p><strong>Methods: </strong>The configurations varied based on actuator anchoring points onto the substrate and the type of fabric used to fabricate the enclosures housing the actuators. Shoulder adduction/abduction and elbow flexion/extension were treated separately. Two different variants (for each case) of similar but distinct actuators were considered. The employed metrics were grouped into two categories; reachable workspace, which includes joint range of motion and end-effector path length; and motion smoothness, which includes end-effector path straightness index and jerk. The former category aimed to capture first-order terms (i.e., rotations and displacements) that capture overall gross motion, while the latter category aimed to shed light on differential terms that correlate with the quality of the attained motion. 
Extensive experimentation was conducted for each considered configuration, and statistical analyses were used to establish distinctive strengths, weaknesses, and trade-offs among those configurations.</p><p><strong>Results: </strong>The main findings from experiments confirm that the performance of the actuators can be significantly impacted by variations in the anchoring and fabric properties of the enclosures while establishing interesting trade-offs. Specifically, the most appropriate anchoring point was not necessarily the same for all actuator variants. In addition, highly stretchable fabrics not only maintained but even enhanced actuator capabilities, in comparison to the less stretchable materials, which turned out to hinder actuator performance.</p><p><strong>Conclusion: </strong>The established trade-offs can serve as guiding principles for other researchers and practitioners developing upper extremity exosuits.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1302862"},"PeriodicalIF":2.9,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11502928/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Incremental learning of humanoid robot behavior from natural interaction and large language models.","authors":"Leonard Bärmann, Rainer Kartmann, Fabian Peller-Konrad, Jan Niehues, Alex Waibel, Tamim Asfour","doi":"10.3389/frobt.2024.1455375","DOIUrl":"https://doi.org/10.3389/frobt.2024.1455375","url":null,"abstract":"<p><p>Natural-language dialog is key for an intuitive human-robot interaction. It can be used not only to express humans' intents but also to communicate instructions for improvement if a robot does not understand a command correctly. It is of great importance to let robots learn from such interaction experiences in an incremental way to allow them to improve their behaviors or avoid mistakes in the future. In this paper, we propose a system to achieve such incremental learning of complex high-level behavior from natural interaction and demonstrate its implementation on a humanoid robot. Our system deploys large language models (LLMs) for high-level orchestration of the robot's behavior based on the idea of enabling the LLM to generate Python statements in an interactive console to invoke both robot perception and action. Human instructions, environment observations, and execution results are fed back to the LLM, thus informing the generation of the next statement. Since an LLM can misunderstand (potentially ambiguous) user instructions, we introduce incremental learning from the interaction, which enables the system to learn from its mistakes. For that purpose, the LLM can call another LLM responsible for code-level improvements in the current interaction based on human feedback. Subsequently, we store the improved interaction in the robot's memory so that it can later be retrieved on semantically similar requests. 
We integrate the system in the robot cognitive architecture of the humanoid robot ARMAR-6 and evaluate our methods both quantitatively (in simulation) and qualitatively (in simulation and real-world) by demonstrating generalized incrementally learned knowledge.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1455375"},"PeriodicalIF":2.9,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11499633/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
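The interactive-console mechanism this abstract describes (the LLM emits Python code that invokes robot perception and action, with results fed back to condition the next statement) can be sketched as below. The loop, the FakeRobot API, and the scripted stand-in for the LLM are all illustrative assumptions; the paper's system runs within the ARMAR-6 cognitive architecture with a real model in the loop.

```python
def run_interactive_console(llm, api, max_steps=10):
    """Sketch of LLM orchestration: the model proposes one Python
    expression per turn, it is evaluated against the robot API, and the
    (statement, result) pair is fed back as context for the next turn."""
    history = []
    env = {"robot": api}
    for _ in range(max_steps):
        code = llm(history)
        if code is None:               # model signals completion
            break
        try:
            result = eval(code, env)   # execute the generated call
        except Exception as exc:       # failures become feedback, too
            result = f"error: {exc}"
        history.append((code, result))
    return history

class FakeRobot:
    """Stand-in perception/action API (hypothetical method names)."""
    def see(self):
        return "cup on table"
    def grasp(self, obj):
        return f"grasped {obj}"

# A scripted "LLM" that looks, then acts, then stops.
script = iter(['robot.see()', 'robot.grasp("cup")', None])
log = run_interactive_console(lambda history: next(script), FakeRobot())
```

The returned log plays the role of the interaction memory: storing it keyed by the request makes later retrieval on semantically similar requests possible.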
{"title":"HoLLiECares - Development of a multi-functional robot for professional care.","authors":"Julian Schneider, Matthias Brünett, Anne Gebert, Kevin Gisa, Andreas Hermann, Christian Lengenfelder, Arne Roennau, Svea Schuh, Lea Steffen","doi":"10.3389/frobt.2024.1325143","DOIUrl":"https://doi.org/10.3389/frobt.2024.1325143","url":null,"abstract":"<p><p>Germany's healthcare sector suffers from a shortage of nursing staff, and robotic solutions are being explored as a means to provide quality care. While many robotic systems have already been established in various medical fields (e.g., surgical robots, logistics robots), there are only a few very specialized robotic applications in the care sector. In this work, a multi-functional robot is applied in a hospital, capable of performing activities in the areas of transport and logistics, interactive assistance, and documentation. The service robot platform HoLLiE was further developed, with a focus on implementing innovative solutions for handling non-rigid objects, motion planning for non-holonomic motions with a wheelchair, accompanying and providing haptic support to patients, optical recognition and control of movement exercises, and automated speech recognition. Furthermore, the potential of a robot platform in a nursing context was evaluated by field tests in two hospitals. The results show that a robot can take over or support certain tasks. However, it was noted that robotic tasks should be carefully selected, as robots are not able to provide empathy and affection that are often required in nursing. 
Challenges remain in the implementation and interaction of multi-functional capabilities, in ensuring ease of use for a complex robotic system, in grasping highly heterogeneous objects, and in fulfilling formal and infrastructural requirements in healthcare (e.g., safety, security, and data protection).</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1325143"},"PeriodicalIF":2.9,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11496034/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced outdoor visual localization using Py-Net voting segmentation approach.","authors":"Jing Wang, Cheng Guo, Shaoyi Hu, Yibo Wang, Xuhui Fan","doi":"10.3389/frobt.2024.1469588","DOIUrl":"https://doi.org/10.3389/frobt.2024.1469588","url":null,"abstract":"<p><p>Camera relocalization determines the position and orientation of a camera in a 3D space. Althouh methods based on scene coordinate regression yield highly accurate results in indoor scenes, they exhibit poor performance in outdoor scenarios due to their large scale and increased complexity. A visual localization method, Py-Net, is therefore proposed herein. Py-Net is based on voting segmentation and comprises a main encoder containing Py-layer and two branch decoders. The Py-layer comprises pyramid convolution and 1 × 1 convolution kernels for feature extraction across multiple levels, with fewer parameters to enhance the model's ability to extract scene information. Coordinate attention was added at the end of the encoder for feature correction, which improved the model robustness to interference. To prevent the feature loss caused by repetitive structures and low-texture images in the scene, deep over-parameterized convolution modules were incorporated into the seg and vote decoders. Landmark segmentation and voting maps were used to establish the relation between images and landmarks in 3D space, reducing anomalies and achieving high precision with a small number of landmarks. The experimental results show that, in multiple outdoor scenes, Py-Net achieves lower distance and angle errors compared to existing methods. 
Additionally, compared to VS-Net, which also uses a voting segmentation structure, Py-Net reduces the number of parameters by 31.85% and decreases the model size from 236 MB to 170 MB.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1469588"},"PeriodicalIF":2.9,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11497456/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
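The parameter savings attributed to the Py-layer come from the pyramid-convolution idea: split the output channels across several kernel sizes and give the larger kernels more groups, so the cost of wide receptive fields stays bounded. A small counter makes the effect concrete; the kernel sizes, group counts, and channel widths below are illustrative, not Py-Net's actual configuration.

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a 2D convolution layer (bias ignored)."""
    return (c_in // groups) * c_out * k * k

def pyramid_conv_params(c_in, c_out, kernels, groups):
    """Pyramid convolution: output channels are split evenly across
    kernel sizes, and larger kernels use more groups, capping the
    parameter cost of the wide receptive fields."""
    share = c_out // len(kernels)
    return sum(conv_params(c_in, share, k, g)
               for k, g in zip(kernels, groups))

# Illustrative comparison at 64 channels in and out.
standard = conv_params(64, 64, 9)                            # one 9x9 conv
pyramid = pyramid_conv_params(64, 64, [3, 5, 7, 9], [1, 4, 8, 16])
```

With these assumed settings the pyramid layer needs roughly a twelfth of the weights of a single 9 × 9 convolution while still covering the same range of receptive fields.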
{"title":"Satisfaction analysis of 5G remote ultrasound robot for diagnostics based on a structural equation model.","authors":"Zhi-Li Han, Yu-Meng Lei, Jing Yu, Bing-Song Lei, Hua-Rong Ye, Ge Zhang","doi":"10.3389/frobt.2024.1413065","DOIUrl":"https://doi.org/10.3389/frobt.2024.1413065","url":null,"abstract":"<p><strong>Objectives: </strong>With the increasing application of 5G remote ultrasound robots in healthcare, robust methods are in critical demand to assess participant satisfaction and identify its influencing factors. At present, there is limited empirical research on multi-parametric and multidimensional satisfaction evaluation of participants with 5G remote ultrasound robot examination. Previous studies have demonstrated that structural equation modeling (SEM) effectively integrates various statistical techniques to examine the relationships among multiple variables. Therefore, this study aimed to evaluate the satisfaction of participants with 5G remote ultrasound robot examination and its influencing factors using SEM.</p><p><strong>Methods: </strong>Between April and June 2022, 213 participants from Wuhan Automobile Manufacturing Company underwent remote ultrasound examinations using the MGIUS-R3 remote ultrasound robot system. After these examinations, the participants evaluated the performance of the 5G remote ultrasound robot based on their personal experiences and emotional responses. They completed a satisfaction survey using a self-developed questionnaire, which included 19 items across five dimensions: examination efficiency, examination perception, communication perception, value perception, and examination willingness. A SEM was established to assess the satisfaction of participants with the 5G remote ultrasound robot examinations and the influencing factors.</p><p><strong>Results: </strong>A total of 201 valid questionnaires were collected. 
The overall satisfaction of participants with the 5G remote ultrasound robot examination was 45.43 ± 11.60, with 169 participants (84%) expressing satisfaction. In the path hypothesis relationship test, the dimensions of examination efficiency, examination perception, communication perception, and value perception had positive effects on satisfaction, with standardized path coefficients of 0.168, 0.170, 0.175, and 0.191. Satisfaction had a direct positive effect on examination willingness, with a standardized path coefficient of 0.260. Significant differences were observed across different educational levels in the dimensions of examination perception, communication perception, value perception, and examination willingness. Participants with different body mass indices also showed significant differences in examination perception; all <i>p</i>-values were less than 0.05.</p><p><strong>Conclusion: </strong>In this study, value perception was identified as the most significant factor influencing satisfaction. It could be improved by enhancing participants' understanding of the accuracy and safety of 5G remote ultrasound robot examinations. This enhances satisfaction and the willingness to undergo examinations. 
Such improvements not only facilitate the widespread adoption of this technology but also promote the development of telemedicine services.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1413065"},"PeriodicalIF":2.9,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11496036/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
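Given the standardized coefficients reported above, the SEM's mediated (indirect) effect of each dimension on examination willingness is simply the product of the two path coefficients along the chain through satisfaction. A quick check with the abstract's reported values shows why value perception dominates:

```python
# Standardized path coefficients reported in the abstract above.
paths_to_satisfaction = {
    "examination efficiency": 0.168,
    "examination perception": 0.170,
    "communication perception": 0.175,
    "value perception": 0.191,
}
satisfaction_to_willingness = 0.260

# Indirect effect on willingness = path-to-mediator * path-from-mediator.
indirect_effects = {
    dim: coeff * satisfaction_to_willingness
    for dim, coeff in paths_to_satisfaction.items()
}
strongest = max(indirect_effects, key=indirect_effects.get)
```

Value perception yields the largest indirect effect (0.191 × 0.260 ≈ 0.050), consistent with the study's conclusion that it is the most influential dimension.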
{"title":"A comprehensive survey of space robotic manipulators for on-orbit servicing.","authors":"Mohammad Alizadeh, Zheng H Zhu","doi":"10.3389/frobt.2024.1470950","DOIUrl":"https://doi.org/10.3389/frobt.2024.1470950","url":null,"abstract":"<p><p>On-Orbit Servicing (OOS) robots are transforming space exploration by enabling vital maintenance and repair of spacecraft directly in space. However, achieving precise and safe manipulation in microgravity necessitates overcoming significant challenges. This survey delves into four crucial areas essential for successful OOS manipulation: object state estimation, motion planning, and feedback control. Techniques from traditional vision to advanced X-ray and neural network methods are explored for object state estimation. Strategies for fuel-optimized trajectories, docking maneuvers, and collision avoidance are examined in motion planning. The survey also explores control methods for various scenarios, including cooperative manipulation and handling uncertainties, in feedback control. Additionally, this survey examines how Machine learning techniques can further propel OOS robots towards more complex and delicate tasks in space.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1470950"},"PeriodicalIF":2.9,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11496037/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Talking body: the effect of body and voice anthropomorphism on perception of social agents.","authors":"Kashyap Haresamudram, Ilaria Torre, Magnus Behling, Christoph Wagner, Stefan Larsson","doi":"10.3389/frobt.2024.1456613","DOIUrl":"https://doi.org/10.3389/frobt.2024.1456613","url":null,"abstract":"<p><strong>Introduction: </strong>In human-agent interaction, trust is often measured using human-trust constructs such as competence, benevolence, and integrity, however, it is unclear whether technology-trust constructs such as functionality, helpfulness, and reliability are more suitable. There is also evidence that perception of \"humanness\" measured through anthropomorphism varies based on the characteristics of the agent, but dimensions of anthropomorphism are not highlighted in empirical studies.</p><p><strong>Methods: </strong>In order to study how different embodiments and qualities of speech of agents influence type of trust and dimensions of anthropomorphism in perception of the agent, we conducted an experiment using two agent \"bodies\", a speaker and robot, employing four levels of \"humanness of voice\", and measured perception of the agent using human-trust, technology-trust, and Godspeed series questionnaires.</p><p><strong>Results: </strong>We found that the agents elicit both human and technology conceptions of trust with no significant difference, that differences in body and voice of an agent have no significant impact on trust, even though body and voice are both independently significant in anthropomorphism perception.</p><p><strong>Discussion: </strong>Interestingly, the results indicate that voice may be a stronger characteristic in influencing the perception of agents (not relating to trust) than physical appearance or body. 
We discuss the implications of our findings for research on human-agent interaction and highlight future research areas.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1456613"},"PeriodicalIF":2.9,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11496039/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Humanoid patient robot for diagnostic training in medical and psychiatric education.","authors":"Patricia Schwarz, Sandra Hellmers, Sebastian Spanknebel, Rene Hurlemann, Andreas Hein","doi":"10.3389/frobt.2024.1424845","DOIUrl":"https://doi.org/10.3389/frobt.2024.1424845","url":null,"abstract":"<p><p>Simulation-based learning is an integral part of hands-on learning and is often done through role-playing games or patients simulated by professional actors. In this article, we present the use of a humanoid robot as a simulation patient for the presentation of disease symptoms in the setting of medical education. In a study, 12 participants watched both the patient simulation by the robotic patient and the video with the actor patient. We asked participants about their subjective impressions of the robotic patient simulation compared to the video with the human actor patient using a self-developed questionnaire. In addition, we used the Affinity for Technology Interaction Scale. The evaluation of the questionnaire provided insights into whether the robot was able to realistically represent the patient which features still need to be improved, and whether the robot patient simulation was accepted by the participants as a learning method. Sixty-seven percent of the participants indicated that they would use the robot as a training opportunity in addition to the videos with acting patients. 
The majority of participants indicated that they found it very beneficial to have the robot repeat the case studies at their own pace.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1424845"},"PeriodicalIF":2.9,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11496789/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep neural network-based robotic visual servoing for satellite target tracking.","authors":"Shayan Ghiasvand, Wen-Fang Xie, Abolfazl Mohebbi","doi":"10.3389/frobt.2024.1469315","DOIUrl":"https://doi.org/10.3389/frobt.2024.1469315","url":null,"abstract":"<p><p>In response to the costly and error-prone manual satellite tracking on the International Space Station (ISS), this paper presents a deep neural network (DNN)-based robotic visual servoing solution to the automated tracking operation. This innovative approach directly addresses the critical issue of motion decoupling, which poses a significant challenge in current image moment-based visual servoing. The proposed method uses DNNs to estimate the manipulator's pose, resulting in a significant reduction of coupling effects, which enhances control performance and increases tracking precision. Real-time experimental tests are carried out using a 6-DOF Denso manipulator equipped with an RGB camera and an object, mimicking the targeting pin. The test results demonstrate a 32.04% reduction in pose error and a 21.67% improvement in velocity precision compared to conventional methods. These findings demonstrate that the method has the potential to improve efficiency and accuracy significantly in satellite target tracking and capturing.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1469315"},"PeriodicalIF":2.9,"publicationDate":"2024-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11494149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142510380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}