Frontiers in Neurorobotics | Pub Date: 2025-05-15 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1567291
Yongqi Zhu, Juan Li, Jianbin Huang, Weida Li, Gai Liu, Lining Sun
{"title":"Analysis and experiment of a positioning and pointing mechanism based on the stick-slip driving principle.","authors":"Yongqi Zhu, Juan Li, Jianbin Huang, Weida Li, Gai Liu, Lining Sun","doi":"10.3389/fnbot.2025.1567291","DOIUrl":"https://doi.org/10.3389/fnbot.2025.1567291","url":null,"abstract":"<p><strong>Introduction: </strong>Traditional positioning and pointing mechanisms often face limitations in simultaneously achieving high speed and high resolution, and their travel range is typically constrained. To overcome these challenges, we propose a novel positioning and pointing mechanism driven by piezoelectric ceramics in this study. This mechanism is capable of achieving both high speed and high resolution by using two driving principles: resonance and stick-slip. This paper will focus on analyzing the stick-slip driving principle.</p><p><strong>Methods: </strong>We propose a configuration of the drive module within the positioning and pointing mechanism. By applying a low-frequency sawtooth wave excitation to the piezoelectric ceramics, the mechanism achieves high resolution based on the stick-slip driving principle. First, a simplified dynamic model of the drive module is established. The motion process of the drive module in stick-slip driving is divided into the stick phase and slip phase. With static and transient dynamic analyses conducted for each phase, the relationship between the output shaft angle, resolution, and driving voltage is derived. It is observed that during the stick phase, the output shaft angle and the driving voltage exhibit an approximately linear relationship, while in the slip phase, the output shaft angle and the driving voltage display nonlinearity due to impact forces and vibrations. Finally, a prototype of the positioning and pointing mechanism is designed, and an experimental platform is constructed to test the resolution of the prototype.</p><p><strong>Results: </strong>We construct a prototype of a dual-axis positioning and pointing mechanism composed of multiple drive modules and conduct resolution tests using two control methods: synchronous control and independent control. When synchronous control is used, the output shaft achieves a resolution of 0.38<i>μrad</i>, while with independent control, the resolution of the output shaft reaches 0.0276<i>μrad</i>.</p><p><strong>Discussion: </strong>The research results show that the positioning and pointing mechanism proposed in this study achieves high resolution through stick-slip driving principle, offering a novel approach for the advancement of such mechanisms.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1567291"},"PeriodicalIF":2.6,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12119557/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144181186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2025-05-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1568811
Yilin Zhang
{"title":"TS-Resformer: a model based on multimodal fusion for the classification of music signals.","authors":"Yilin Zhang","doi":"10.3389/fnbot.2025.1568811","DOIUrl":"10.3389/fnbot.2025.1568811","url":null,"abstract":"<p><p>The number of music of different genres is increasing year by year, and manual classification is costly and requires professionals in the field of music to manually design features, some of which lack the generality of music genre classification. Deep learning has had a large number of scientific research results in the field of music classification, but the existing deep learning methods still have the problems of insufficient extraction of music feature information, low accuracy rate of music genres, loss of time series information, and slow training. To address the problem that different music durations affect the accuracy of music genre classification, we form a Log Mel spectrum with music audio data of different cut durations. After discarding incomplete audio, we design data enhancement with different slicing durations and verify its effect on accuracy and training time through comparison experiments. Based on this, the audio signal is divided into frames, windowed and short-time Fourier transformed, and then the Log Mel spectrum is obtained by using the Mel filter and logarithmic compression. Aiming at the problems of loss of time information, insufficient feature extraction, and low classification accuracy in music genre classification, firstly, we propose a Res-Transformer model that fuses the residual network with the Transformer coding layer. The model consists of two branches, the left branch is an improved residual network, which enhances the spectral feature extraction ability and network expression ability and realizes the dimensionality reduction; the right branch uses four Transformer coding layers to extract the time-series information of the Log Mel spectrum. The output vectors of the two branches are spliced and input into the classifier to realize music genre classification. Then, to further improve the classification accuracy of the model, we propose the TS-Resformer model based on the Res-Transformer model, combined with different attention mechanisms, and design the time-frequency attention mechanism, which employs different scales of filters to fully extract the low-level music features from the two dimensions of time and frequency as the input to the time-frequency attention mechanism, respectively. Finally, experiments show that the accuracy of this method is 90.23% on the FMA-small dataset, which is an improvement in classification accuracy compared with the classical model.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1568811"},"PeriodicalIF":2.6,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12106318/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144157987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2025-05-09 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1562519
Yanchun Xie, Anna Wang, Xue Zhao, Yang Jiang, Yao Wu, Hailong Yu
{"title":"Motion control and singular perturbation algorithms for lower limb rehabilitation robots.","authors":"Yanchun Xie, Anna Wang, Xue Zhao, Yang Jiang, Yao Wu, Hailong Yu","doi":"10.3389/fnbot.2025.1562519","DOIUrl":"10.3389/fnbot.2025.1562519","url":null,"abstract":"<p><p>To better assist patients with lower limb injuries in their rehabilitation training, this paper focuses on motion control and singular perturbation algorithms and their practical applications. First, the paper conducts an in-depth analysis of the mechanical structure of such robots and establishes detailed kinematics and dynamics models. An optimal S-type planning algorithm is proposed, transforming the S-type planning into an iterative solution problem for efficient and accelerated trajectory planning using dynamic equations. This algorithm comprehensively considers joint range of motion, speed constraints, and dynamic conditions, ensuring the smoothness and continuity of motion trajectories. Second, a zero-force control method is introduced, incorporating friction terms into the traditional dynamic equations and utilizing the LuGre friction model for friction analysis to achieve zero-force control. Furthermore, to address the multi-scale dynamic system characteristics present in rehabilitation training, a control method based on singular perturbation theory is proposed. This method enhances the system's robustness and adaptability by simplifying the system model and optimizing controller design, enabling it to better accommodate complex motion requirements during rehabilitation. Finally, experiments verify the correctness of the kinematics and optimal S-type trajectory planning. In lower limb rehabilitation robots, zero-force control can better assist patients in rehabilitation training for lower limb injuries, while the singular perturbation method improves the accuracy, response speed, and robustness of the control system, allowing it to adapt to individual rehabilitation needs and complex motion patterns. The novelty of this paper lies in the integration of the singular perturbation method with the LuGre friction model, significantly enhancing the precision of joint dynamic control, and improving controller design through the introduction of a torque deviation feedback mechanism, thereby increasing system stability and response speed. Experimental results demonstrate significant improvements in tracking error and system response compared to traditional methods, providing patients with a more comfortable and safer rehabilitation experience.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1562519"},"PeriodicalIF":2.6,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12098330/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144142345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2025-04-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1585386
Stefano Ferraro, Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt
{"title":"FOCUS: object-centric world models for robotic manipulation.","authors":"Stefano Ferraro, Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt","doi":"10.3389/fnbot.2025.1585386","DOIUrl":"https://doi.org/10.3389/fnbot.2025.1585386","url":null,"abstract":"<p><p>Understanding the world in terms of objects and the possible interactions with them is an important cognitive ability. However, current world models adopted in reinforcement learning typically lack this structure and represent the world state in a global latent vector. To address this, we propose FOCUS, a model-based agent that learns an object-centric world model. This novel representation also enables the design of an object-centric exploration mechanism, which encourages the agent to interact with objects and discover useful interactions. We benchmark FOCUS in several robotic manipulation settings, where we found that our method can be used to improve manipulation skills. The object-centric world model leads to more accurate predictions of the objects in the scene and it enables more efficient learning. The object-centric exploration strategy fosters interactions with the objects in the environment, such as reaching, moving, and rotating them, and it allows fast adaptation of the agent to sparse reward reinforcement learning tasks. Using a Franka Emika robot arm, we also showcase how FOCUS proves useful in real-world applications. Website: focus-manipulation.github.io.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1585386"},"PeriodicalIF":2.6,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12075287/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144077540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2025-04-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1531815
Eisa Aghchehli, Milad Jabbari, Chenfei Ma, Matthew Dyson, Kianoush Nazarpour
{"title":"Medium density EMG armband for gesture recognition.","authors":"Eisa Aghchehli, Milad Jabbari, Chenfei Ma, Matthew Dyson, Kianoush Nazarpour","doi":"10.3389/fnbot.2025.1531815","DOIUrl":"https://doi.org/10.3389/fnbot.2025.1531815","url":null,"abstract":"<p><p>Electromyography (EMG) systems are essential for the advancement of neuroprosthetics and human-machine interfaces. However, the gap between low-density and high-density systems poses challenges to researchers in experiment design and knowledge transfer. Medium-density surface EMG systems offer a balanced alternative, providing greater spatial resolution than low-density systems while avoiding the complexity and cost of high-density arrays. In this study, we developed a research-friendly medium-density EMG system and evaluated its performance with eleven volunteers performing grasping tasks. To enhance decoding accuracy, we introduced a novel spatio-temporal convolutional neural network that integrates spatial information from additional EMG sensors with temporal dynamics. The results show that medium-density EMG sensors significantly improve classification accuracy compared to low-density systems while maintaining the same footprint. Furthermore, the proposed neural network outperforms traditional gesture decoding approaches. This work highlights the potential of medium-density EMG systems as a practical and effective solution, bridging the gap between low- and high-density systems. These findings pave the way for broader adoption in research and potential clinical applications.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1531815"},"PeriodicalIF":2.6,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12075175/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144077543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2025-04-28 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1576438
Yanchun Xie, Binbin Zhu, Yang Jiang, Bin Zhao, Hailong Yu
{"title":"Diagnosis of pneumonia from chest X-ray images using YOLO deep learning.","authors":"Yanchun Xie, Binbin Zhu, Yang Jiang, Bin Zhao, Hailong Yu","doi":"10.3389/fnbot.2025.1576438","DOIUrl":"https://doi.org/10.3389/fnbot.2025.1576438","url":null,"abstract":"<p><p>Early and accurate diagnosis of pneumonia is crucial to improve cure rates and reduce mortality. Traditional chest X-ray analysis relies on physician experience, which can lead to subjectivity and misdiagnosis. To address this, we propose a novel pneumonia diagnosis method using the Fast-YOLO deep learning network that we introduced. First, we constructed a pneumonia dataset containing five categories and applied image enhancement techniques to increase data diversity and improve the model's generalization ability. Next, the YOLOv11 network structure was redesigned to accommodate the complex features of pneumonia X-ray images. By integrating the C3k2 module, DCNv2, and DynamicConv, the Fast-YOLO network effectively enhanced feature representation and reduced computational complexity (FPS increased from 53 to 120). Experimental results subsequently show that our method outperforms other commonly used detection models in terms of accuracy, recall, and mAP, offering better real-time detection capability and clinical application potential.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1576438"},"PeriodicalIF":2.6,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12077197/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144077440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2025-04-28 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1550939
Yongfei Guo, Bo Li, Wenyue Zhang, Weilong Dong
{"title":"Multi-scale image edge detection based on spatial-frequency domain interactive attention.","authors":"Yongfei Guo, Bo Li, Wenyue Zhang, Weilong Dong","doi":"10.3389/fnbot.2025.1550939","DOIUrl":"https://doi.org/10.3389/fnbot.2025.1550939","url":null,"abstract":"<p><p>Due to the many difficulties in accurately locating edges or boundaries in images of animals, plants, buildings, and the like with complex backgrounds, edge detection has become one of the most challenging tasks in the field of computer vision and is also a key step in many computer vision applications. Although existing deep learning-based methods can detect the edges of images relatively well, when the image background is rather complex and the key target is small, accurately detecting the edge of the main body and removing background interference remains a daunting task. Therefore, this paper proposes a multi-scale edge detection network based on spatial-frequency domain interactive attention, aiming to achieve accurate detection of the edge of the main target on multiple scales. The use of the spatial-frequency domain interactive attention module can not only perform significant edge extraction by filtering out some interference in the frequency domain. Moreover, by utilizing the interaction between the frequency domain and the spatial domain, edge features at different scales can be extracted and analyzed more accurately. The obtained results are superior to the current edge detection networks in terms of performance indicators and output image quality.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1550939"},"PeriodicalIF":2.6,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12066664/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144015855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2025-04-25 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1570870
Hassene Seddik, Hassen Fourati, Chiraz Ben Jabeur, Nahla Khraief
{"title":"Editorial: Biomedical signals and artificial intelligence towards smart robots control strategies.","authors":"Hassene Seddik, Hassen Fourati, Chiraz Ben Jabeur, Nahla Khraief","doi":"10.3389/fnbot.2025.1570870","DOIUrl":"https://doi.org/10.3389/fnbot.2025.1570870","url":null,"abstract":"","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1570870"},"PeriodicalIF":2.6,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063127/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143996823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2025-04-17 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1582995
Mohammed Alshehri, Laiba Zahoor, Yahya AlQahtani, Abdulmonem Alshahrani, Dina Abdulaziz AlHammadi, Ahmad Jalal, Hui Liu
{"title":"Unmanned aerial vehicle based multi-person detection via deep neural network models.","authors":"Mohammed Alshehri, Laiba Zahoor, Yahya AlQahtani, Abdulmonem Alshahrani, Dina Abdulaziz AlHammadi, Ahmad Jalal, Hui Liu","doi":"10.3389/fnbot.2025.1582995","DOIUrl":"https://doi.org/10.3389/fnbot.2025.1582995","url":null,"abstract":"<p><strong>Introduction: </strong>Understanding human actions in complex environments is crucial for advancing applications in areas such as surveillance, robotics, and autonomous systems. Identifying actions from UAV-recorded videos becomes more challenging as the task presents unique challenges, including motion blur, dynamic background, lighting variations, and varying viewpoints. The presented work develops a deep learning system that recognizes multi-person behaviors from data gathered by UAVs. The proposed system provides higher recognition accuracy while maintaining robustness along with dynamic environmental adaptability through the integration of different features and neural network models. The study supports the wider development of neural network systems utilized in complicated contexts while creating intelligent UAV applications utilizing neural networks.</p><p><strong>Method: </strong>The proposed study uses deep learning and feature extraction approaches to create a novel method to recognize various actions in UAV-recorded video. The proposed model improves identification capacities and system robustness by addressing motion dynamic problems and intricate environmental constraints, encouraging advancements in UAV-based neural network systems.</p><p><strong>Results: </strong>We proposed a deep learning-based framework with feature extraction approaches that may effectively increase the accuracy and robustness of multi-person action recognition in the challenging scenarios. Compared to the existing approaches, our system achieved 91.50% on MOD20 dataset and 89.71% on Okutama-Action. These results do, in fact, show how useful neural network-based methods are for managing the limitations of UAV-based application.</p><p><strong>Discussion: </strong>Results how that the proposed framework is indeed effective at multi-person action recognition under difficult UAV conditions.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1582995"},"PeriodicalIF":2.6,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12043872/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143997194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2025-04-16 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1558948
Shufan Dai, Shanqin Wang
{"title":"HR-NeRF: advancing realism and accuracy in highlight scene representation.","authors":"Shufan Dai, Shanqin Wang","doi":"10.3389/fnbot.2025.1558948","DOIUrl":"https://doi.org/10.3389/fnbot.2025.1558948","url":null,"abstract":"<p><p>NeRF and its variants excel in novel view synthesis but struggle with scenes featuring specular highlights. To address this limitation, we introduce the Highlight Recovery Network (HRNet), a new architecture that enhances NeRF's ability to capture specular scenes. HRNet incorporates Swish activation functions, affine transformations, multilayer perceptrons (MLPs), and residual blocks, which collectively enable smooth non-linear transformations, adaptive feature scaling, and hierarchical feature extraction. The residual connections help mitigate the vanishing gradient problem, ensuring stable training. Despite the simplicity of HRNet's components, it achieves impressive results in recovering specular highlights. Additionally, a density voxel grid enhances model efficiency. Evaluations on four inward-facing benchmarks demonstrate that our approach outperforms NeRF and its variants, achieving a 3-5 dB PSNR improvement on each dataset while accurately capturing scene details. Furthermore, our method effectively preserves image details without requiring positional encoding, rendering a single scene in ~18 min on an NVIDIA RTX 3090 Ti GPU.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1558948"},"PeriodicalIF":2.6,"publicationDate":"2025-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12041011/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143988032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}