Frontiers in Neurorobotics | Pub Date: 2024-12-04 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1481297
Xiaoxia Xie, Yuan Jia, Tiande Ma
{"title":"An improved graph factorization machine based on solving unbalanced game perception.","authors":"Xiaoxia Xie, Yuan Jia, Tiande Ma","doi":"10.3389/fnbot.2024.1481297","DOIUrl":"https://doi.org/10.3389/fnbot.2024.1481297","url":null,"abstract":"<p><p>The user perception of mobile game is crucial for improving user experience and thus enhancing game profitability. The sparse data captured in the game can lead to sporadic performance of the model. This paper proposes a new method, the balanced graph factorization machine (BGFM), based on existing algorithms, considering the data imbalance and important high-dimensional features. The data categories are first balanced by Borderline-SMOTE oversampling, and then features are represented naturally in a graph-structured way. The highlight is that the BGFM contains interaction mechanisms for aggregating beneficial features. The results are represented as edges in the graph. Next, BGFM combines factorization machine (FM) and graph neural network strategies to concatenate any sequential feature interactions of features in the graph with an attention mechanism that assigns inter-feature weights. Experiments were conducted on the collected game perception dataset. The performance of proposed BGFM was compared with eight state-of-the-art models, significantly surpassing all of them by AUC, precision, recall, and F-measure indices.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1481297"},"PeriodicalIF":2.6,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652536/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142853907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2024-12-04 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1443678
Yawar Abbas, Naif Al Mudawi, Bayan Alabdullah, Touseef Sadiq, Asaad Algarni, Hameedur Rahman, Ahmad Jalal
{"title":"Unmanned aerial vehicles for human detection and recognition using neural-network model.","authors":"Yawar Abbas, Naif Al Mudawi, Bayan Alabdullah, Touseef Sadiq, Asaad Algarni, Hameedur Rahman, Ahmad Jalal","doi":"10.3389/fnbot.2024.1443678","DOIUrl":"https://doi.org/10.3389/fnbot.2024.1443678","url":null,"abstract":"<p><strong>Introduction: </strong>Recognizing human actions is crucial for allowing machines to understand and recognize human behavior, with applications spanning video based surveillance systems, human-robot collaboration, sports analysis systems, and entertainment. The immense diversity in human movement and appearance poses a significant challenge in this field, especially when dealing with drone-recorded (RGB) videos. Factors such as dynamic backgrounds, motion blur, occlusions, varying video capture angles, and exposure issues greatly complicate recognition tasks.</p><p><strong>Methods: </strong>In this study, we suggest a method that addresses these challenges in RGB videos captured by drones. Our approach begins by segmenting the video into individual frames, followed by preprocessing steps applied to these RGB frames. The preprocessing aims to reduce computational costs, optimize image quality, and enhance foreground objects while removing the background.</p><p><strong>Result: </strong>This results in improved visibility of foreground objects while eliminating background noise. Next, we employ the YOLOv9 detection algorithm to identify human bodies within the images. From the grayscale silhouette, we extract the human skeleton and identify 15 important locations, such as the head, neck, shoulders (left and right), elbows, wrists, hips, knees, ankles, and hips (left and right), and belly button. By using all these points, we extract specific positions, angular and distance relationships between them, as well as 3D point clouds and fiducial points. Subsequently, we optimize this data using the kernel discriminant analysis (KDA) optimizer, followed by classification using a deep neural network (CNN). To validate our system, we conducted experiments on three benchmark datasets: UAV-Human, UCF, and Drone-Action.</p><p><strong>Discussion: </strong>On these datasets, our suggested model produced corresponding action recognition accuracies of 0.68, 0.75, and 0.83.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1443678"},"PeriodicalIF":2.6,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652500/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142853909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2024-12-04 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1462023
Xinyu Jiang, Chenfei Ma, Kianoush Nazarpour
{"title":"Posture-invariant myoelectric control with self-calibrating random forests.","authors":"Xinyu Jiang, Chenfei Ma, Kianoush Nazarpour","doi":"10.3389/fnbot.2024.1462023","DOIUrl":"https://doi.org/10.3389/fnbot.2024.1462023","url":null,"abstract":"<p><strong>Introduction: </strong>Myoelectric control systems translate different patterns of electromyographic (EMG) signals into the control commands of diverse human-machine interfaces via hand gesture recognition, enabling intuitive control of prosthesis and immersive interactions in the metaverse. The effect of arm position is a confounding factor leading to the variability of EMG characteristics. Developing a model with its characteristics and performance invariant across postures, could largely promote the translation of myoelectric control into real world practice.</p><p><strong>Methods: </strong>Here we propose a self-calibrating random forest (RF) model which can (1) be pre-trained on data from many users, then one-shot calibrated on a new user and (2) self-calibrate in an unsupervised and autonomous way to adapt to varying arm positions.</p><p><strong>Results: </strong>Analyses on data from 86 participants (66 for pre-training and 20 in real-time evaluation experiments) demonstrate the high generalisability of the proposed RF architecture to varying arm positions.</p><p><strong>Discussion: </strong>Our work promotes the use of simple, explainable, efficient and parallelisable model for posture-invariant myoelectric control.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1462023"},"PeriodicalIF":2.6,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652494/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142853908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EEG-based action anticipation in human-robot interaction: a comparative pilot study.","authors":"Rodrigo Vieira, Plinio Moreno, Athanasios Vourvopoulos","doi":"10.3389/fnbot.2024.1491721","DOIUrl":"https://doi.org/10.3389/fnbot.2024.1491721","url":null,"abstract":"<p><p>As robots become integral to various sectors, improving human-robot collaboration is crucial, particularly in anticipating human actions to enhance safety and efficiency. Electroencephalographic (EEG) signals offer a promising solution, as they can detect brain activity preceding movement by over a second, enabling predictive capabilities in robots. This study explores how EEG can be used for action anticipation in human-robot interaction (HRI), leveraging its high temporal resolution and modern deep learning techniques. We evaluated multiple Deep Learning classification models on a motor imagery (MI) dataset, achieving up to 80.90% accuracy. These results were further validated in a pilot experiment, where actions were accurately predicted several hundred milliseconds before execution. This research demonstrates the potential of combining EEG with deep learning to enhance real-time collaborative tasks, paving the way for safer and more efficient human-robot interactions.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1491721"},"PeriodicalIF":2.6,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11649676/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142845975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2024-11-27 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1362444
Naïg Chenais, Arno Görgen
{"title":"Immersive interfaces for clinical applications: current status and future perspective.","authors":"Naïg Chenais, Arno Görgen","doi":"10.3389/fnbot.2024.1362444","DOIUrl":"10.3389/fnbot.2024.1362444","url":null,"abstract":"<p><p>Digital immersive technologies have become increasingly prominent in clinical research and practice, including medical communication and technical education, serious games for health, psychotherapy, and interfaces for neurorehabilitation. The worldwide enthusiasm for digital health and digital therapeutics has prompted the development and testing of numerous applications and interaction methods. Nevertheless, the lack of consistency in the approaches and the peculiarity of the constructed environments contribute to an increasing disparity between the eagerness for new immersive designs and the long-term clinical adoption of these technologies. Several challenges emerge in aligning the different priorities of virtual environment designers and clinicians. This article seeks to examine the utilization and mechanics of medical immersive interfaces based on extended reality and highlight specific design challenges. The transfer of skills from virtual to clinical environments is often confounded by perceptual and attractiveness factors. We argue that a multidisciplinary approach to development and testing, along with a comprehensive acknowledgement of the shared mechanisms that underlie immersive training, are essential for the sustainable integration of extended reality into clinical settings. The present review discusses the application of a multilevel sensory framework to extended reality design, with the aim of developing brain-centered immersive interfaces tailored for therapeutic and educational purposes. Such a framework must include broader design questions, such as the integration of digital technologies into psychosocial care models, clinical validation, and related ethical concerns. We propose that efforts to bridge the virtual gap should include mixed methodologies and neurodesign approaches, integrating user behavioral and physiological feedback into iterative design phases.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1362444"},"PeriodicalIF":2.6,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11631914/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142812874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2024-11-26 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1439195
Zhang Juan, Jing Zhang, Ming Gao
{"title":"A multimodal travel route recommendation system leveraging visual Transformers and self-attention mechanisms.","authors":"Zhang Juan, Jing Zhang, Ming Gao","doi":"10.3389/fnbot.2024.1439195","DOIUrl":"10.3389/fnbot.2024.1439195","url":null,"abstract":"<p><strong>Introduction: </strong>With the rapid development of the tourism industry, the demand for accurate and personalized travel route recommendations has significantly increased. However, traditional methods often fail to effectively integrate visual and sequential information, leading to recommendations that are both less accurate and less personalized.</p><p><strong>Methods: </strong>This paper introduces SelfAM-Vtrans, a novel algorithm that leverages multimodal data-combining visual Transformers, LSTMs, and self-attention mechanisms-to enhance the accuracy and personalization of travel route recommendations. SelfAM-Vtrans integrates visual and sequential information by employing a visual Transformer to extract features from travel images, thereby capturing spatial relationships within them. Concurrently, a Long Short-Term Memory (LSTM) network encodes sequential data to capture the temporal dependencies within travel sequences. To effectively merge these two modalities, a self-attention mechanism fuses the visual features and sequential encodings, thoroughly accounting for their interdependencies. Based on this fused representation, a classification or regression model is trained using real travel datasets to recommend optimal travel routes.</p><p><strong>Results and discussion: </strong>The algorithm was rigorously evaluated through experiments conducted on real-world travel datasets, and its performance was benchmarked against other route recommendation methods. The results demonstrate that SelfAM-Vtrans significantly outperforms traditional approaches in terms of both recommendation accuracy and personalization. By comprehensively incorporating both visual and sequential data, this method offers travelers more tailored and precise route suggestions, thereby enriching the overall travel experience.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1439195"},"PeriodicalIF":2.6,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11628496/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142806763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2024-11-21 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1479694
Jie Chang, Zhenmeng Wang, Chao Yan
{"title":"MusicARLtrans Net: a multimodal agent interactive music education system driven via reinforcement learning.","authors":"Jie Chang, Zhenmeng Wang, Chao Yan","doi":"10.3389/fnbot.2024.1479694","DOIUrl":"10.3389/fnbot.2024.1479694","url":null,"abstract":"<p><strong>Introduction: </strong>In recent years, with the rapid development of artificial intelligence technology, the field of music education has begun to explore new teaching models. Traditional music education research methods have primarily focused on single-modal studies such as note recognition and instrument performance techniques, often overlooking the importance of multimodal data integration and interactive teaching. Existing methods often struggle with handling multimodal data effectively, unable to fully utilize visual, auditory, and textual information for comprehensive analysis, which limits the effectiveness of teaching.</p><p><strong>Methods: </strong>To address these challenges, this project introduces MusicARLtrans Net, a multimodal interactive music education agent system driven by reinforcement learning. The system integrates Speech-to-Text (STT) technology to achieve accurate transcription of user voice commands, utilizes the ALBEF (Align Before Fuse) model for aligning and integrating multimodal data, and applies reinforcement learning to optimize teaching strategies.</p><p><strong>Results and discussion: </strong>This approach provides a personalized and real-time feedback interactive learning experience by effectively combining auditory, visual, and textual information. The system collects and annotates multimodal data related to music education, trains and integrates various modules, and ultimately delivers an efficient and intelligent music education agent. Experimental results demonstrate that MusicARLtrans Net significantly outperforms traditional methods, achieving an accuracy of <b>96.77%</b> on the LibriSpeech dataset and <b>97.55%</b> on the MS COCO dataset, with marked improvements in recall, F1 score, and AUC metrics. These results highlight the system's superiority in speech recognition accuracy, multimodal data understanding, and teaching strategy optimization, which together lead to enhanced learning outcomes and user satisfaction. The findings hold substantial academic and practical significance, demonstrating the potential of advanced AI-driven systems in revolutionizing music education.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1479694"},"PeriodicalIF":2.6,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11617572/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142785067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2024-11-20 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1483131
Ni Wang
{"title":"Multimodal robot-assisted English writing guidance and error correction with reinforcement learning.","authors":"Ni Wang","doi":"10.3389/fnbot.2024.1483131","DOIUrl":"10.3389/fnbot.2024.1483131","url":null,"abstract":"<p><strong>Introduction: </strong>With the development of globalization and the increasing importance of English in international communication, effectively improving English writing skills has become a key focus in language learning. Traditional methods for English writing guidance and error correction have predominantly relied on rule-based approaches or statistical models, such as conventional language models and basic machine learning algorithms. While these methods can aid learners in improving their writing quality to some extent, they often suffer from limitations such as inflexibility, insufficient contextual understanding, and an inability to handle multimodal information. These shortcomings restrict their effectiveness in more complex linguistic environments.</p><p><strong>Methods: </strong>To address these challenges, this study introduces ETG-ALtrans, a multimodal robot-assisted English writing guidance and error correction technology based on an improved ALBEF model and VGG19 architecture, enhanced by reinforcement learning. The approach leverages VGG19 to extract visual features and integrates them with the ALBEF model, achieving precise alignment and fusion of images and text. This enhances the model's ability to comprehend context. Furthermore, by incorporating reinforcement learning, the model can adaptively refine its correction strategies, thereby optimizing the effectiveness of writing guidance.</p><p><strong>Results and discussion: </strong>Experimental results demonstrate that the proposed ETG-ALtrans method significantly improves the accuracy of English writing error correction and the intelligence level of writing guidance in multimodal data scenarios. Compared to traditional methods, this approach not only enhances the precision of writing suggestions but also better caters to the personalized needs of learners, thereby effectively improving their writing skills. This research is of significant importance in the field of language learning technology and offers new perspectives and methodologies for the development of future English writing assistance tools.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1483131"},"PeriodicalIF":2.6,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11614782/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142779207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2024-11-20 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1473937
Chao Li, Yang Hu, Jianqiang Liu, Jianhai Jin, Jun Sun
{"title":"ISFM-SLAM: dynamic visual SLAM with instance segmentation and feature matching.","authors":"Chao Li, Yang Hu, Jianqiang Liu, Jianhai Jin, Jun Sun","doi":"10.3389/fnbot.2024.1473937","DOIUrl":"10.3389/fnbot.2024.1473937","url":null,"abstract":"<p><strong>Introduction: </strong>Simultaneous Localization and Mapping (SLAM) is a technology used in intelligent systems such as robots and autonomous vehicles. Visual SLAM has become a more popular type of SLAM due to its acceptable cost and good scalability when applied in robot positioning, navigation and other functions. However, most of the visual SLAM algorithms assume a static environment, so when they are implemented in highly dynamic scenes, problems such as tracking failure and overlapped mapping are prone to occur.</p><p><strong>Methods: </strong>To deal with this issue, we propose ISFM-SLAM, a dynamic visual SLAM built upon the classic ORB-SLAM2, incorporating an improved instance segmentation network and enhanced feature matching. Based on YOLACT, the improved instance segmentation network applies the multi-scale residual network Res2Net as its backbone, and utilizes CIoU_Loss in the bounding box loss function, to enhance the detection accuracy of the segmentation network. To improve the matching rate and calculation efficiency of the internal feature points, we fuse ORB key points with an efficient image descriptor to replace traditional ORB feature matching of ORB-SLAM2. Moreover, the motion consistency detection algorithm based on external variance values is proposed and integrated into ISFM-SLAM, to assist the proposed SLAM systems in culling dynamic feature points more effectively.</p><p><strong>Results and discussion: </strong>Simulation results on the TUM dataset show that the overall pose estimation accuracy of the ISFM-SLAM is 97% better than the ORB-SLAM2, and is superior to other mainstream and state-of-the-art dynamic SLAM systems. Further real-world experiments validate the feasibility of the proposed SLAM system in practical applications.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1473937"},"PeriodicalIF":2.6,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11615477/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142779015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frontiers in Neurorobotics | Pub Date: 2024-11-19 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1466630
Kyo Kutsuzawa, Minami Matsumoto, Dai Owaki, Mitsuhiro Hayashibe
{"title":"Learning-based object's stiffness and shape estimation with confidence level in multi-fingered hand grasping.","authors":"Kyo Kutsuzawa, Minami Matsumoto, Dai Owaki, Mitsuhiro Hayashibe","doi":"10.3389/fnbot.2024.1466630","DOIUrl":"10.3389/fnbot.2024.1466630","url":null,"abstract":"<p><strong>Introduction: </strong>When humans grasp an object, they are capable of recognizing its characteristics, such as its stiffness and shape, through the sensation of their hands. They can also determine their level of confidence in the estimated object properties. In this study, we developed a method for multi-fingered hands to estimate both physical and geometric properties, such as the stiffness and shape of an object. Their confidence levels were measured using proprioceptive signals, such as joint angles and velocity.</p><p><strong>Method: </strong>We have developed a learning framework based on probabilistic inference that does not necessitate hyperparameters to maintain equilibrium between the estimation of diverse types of properties. Using this framework, we have implemented recurrent neural networks that estimate the stiffness and shape of grasped objects with their uncertainty in real time.</p><p><strong>Results: </strong>We demonstrated that the trained neural networks are capable of representing the confidence level of estimation that includes the degree of uncertainty and task difficulty in the form of variance and entropy.</p><p><strong>Discussion: </strong>We believe that this approach will contribute to reliable state estimation. Our approach would also be able to combine with flexible object manipulation and probabilistic inference-based decision making.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1466630"},"PeriodicalIF":2.6,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11611863/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142768248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}