Frontiers in Neurorobotics: Latest Articles

Transformer-based short-term traffic forecasting model considering traffic spatiotemporal correlation.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-23 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1527908
Ande Chang, Yuting Ji, Yiming Bie
Abstract: Traffic forecasting is crucial for a variety of applications, including route optimization, signal management, and travel time estimation. However, many existing prediction models struggle to accurately capture the spatiotemporal patterns in traffic data due to its inherent nonlinearity, high dimensionality, and complex dependencies. To address these challenges, a short-term traffic forecasting model, Trafficformer, is proposed based on the Transformer framework. The model first uses a multilayer perceptron to extract features from historical traffic data, then enhances spatial interactions through Transformer-based encoding. By incorporating road network topology, a spatial mask filters out noise and irrelevant interactions, improving prediction accuracy. Finally, traffic speed is predicted using another multilayer perceptron. In the experiments, Trafficformer is evaluated on the Seattle Loop Detector dataset and compared with six baseline methods, using Mean Absolute Error, Mean Absolute Percentage Error, and Root Mean Square Error as metrics. The results show that Trafficformer not only achieves higher prediction accuracy but also effectively identifies key road sections, showing great potential for intelligent traffic control optimization and refined traffic resource allocation.
Frontiers in Neurorobotics 19:1527908. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11799296/pdf/
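The spatial-mask idea above — letting each traffic sensor attend only to sensors connected to it in the road network — can be sketched in NumPy. This is a minimal single-head sketch; the mask construction and the absence of learned projections are simplifications, not the paper's exact design:

```python
import numpy as np

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention with a spatial mask.

    mask[i, j] = True where sensor j is reachable from sensor i in the
    road network; unreachable pairs are suppressed before the softmax.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)              # (N, N) pairwise interactions
    scores = np.where(mask, scores, -1e9)      # mask out irrelevant sensors
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                         # mask-filtered aggregation
```

With an all-True mask this reduces to standard attention; a sparse road-topology mask zeroes out the noise interactions the paper describes.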
Citations: 0
AMEEGNet: attention-based multiscale EEGNet for effective motor imagery EEG decoding.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-22 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1540033
Xuejian Wu, Yaqi Chu, Qing Li, Yang Luo, Yiwen Zhao, Xingang Zhao
Abstract: Recently, electroencephalogram (EEG) signals based on motor imagery (MI) have gained significant traction in brain-computer interface (BCI) technology, particularly for the rehabilitation of paralyzed patients. However, the low signal-to-noise ratio of MI EEG makes it difficult to decode effectively and hinders the development of BCI. In this paper, an attention-based multiscale EEGNet (AMEEGNet) is proposed to improve the decoding performance of MI-EEG. First, three parallel EEGNets with a fusion transmission method are employed to extract high-quality temporal-spatial features of EEG data from multiple scales. Then, an efficient channel attention (ECA) module enhances the acquisition of more discriminative spatial features through a lightweight approach that weights critical channels. The experimental results demonstrate that the proposed model achieves decoding accuracies of 81.17%, 89.83%, and 95.49% on the BCI-2a, 2b, and HGD datasets. The results show that the proposed AMEEGNet effectively decodes temporal-spatial features, providing a novel perspective on MI-EEG decoding and advancing future BCI applications.
Frontiers in Neurorobotics 19:1540033. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11794809/pdf/
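The ECA module mentioned above can be sketched as follows. This is a NumPy sketch with a uniform stand-in for the learned 1-D convolution weights; the adaptive kernel-size rule follows the standard ECA formulation, which the paper may adapt:

```python
import numpy as np

def eca(x, gamma=2, b=1):
    """Efficient Channel Attention on x of shape (C, T) — a sketch.

    Channel descriptors come from global average pooling; a 1-D conv
    across channels (kernel size adapted to C) yields per-channel
    gates in (0, 1) that rescale the input.
    """
    C, _ = x.shape
    t = int(abs((np.log2(C) + b) / gamma))
    k = t if t % 2 else t + 1                  # odd, adaptive kernel size
    pooled = x.mean(axis=1)                    # squeeze: (C,)
    padded = np.pad(pooled, k // 2)
    kernel = np.ones(k) / k                    # stand-in for learned weights
    conv = np.array([padded[i:i + k] @ kernel for i in range(C)])
    gates = 1.0 / (1.0 + np.exp(-conv))        # sigmoid channel gates
    return x * gates[:, None]
```

Because the conv runs across the channel axis only, the gate adds O(C·k) parameters-worth of computation — the "lightweight" property the abstract refers to.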
Citations: 0
Graph Convolutional Networks for multi-modal robotic martial arts leg pose recognition.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-20 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1520983
Shun Yao, Yihan Ping, Xiaoyu Yue, He Chen
Abstract:
Introduction: Accurate recognition of martial arts leg poses is essential for applications in sports analytics, rehabilitation, and human-computer interaction. Traditional pose recognition models, relying on sequential or convolutional approaches, often struggle to capture the complex spatial-temporal dependencies inherent in martial arts movements. These methods lack the ability to effectively model the nuanced dynamics of joint interactions and temporal progression, leading to limited generalization in recognizing complex actions.
Methods: To address these challenges, we propose PoseGCN, a Graph Convolutional Network (GCN)-based model that integrates spatial, temporal, and contextual features through a novel framework. PoseGCN leverages spatial-temporal graph encoding to capture joint motion dynamics, an action-specific attention mechanism to assign importance to relevant joints depending on the action context, and a self-supervised pretext task to enhance temporal robustness and continuity. Experimental results on four benchmark datasets — Kinetics-700, Human3.6M, NTU RGB+D, and UTD-MHAD — demonstrate that PoseGCN outperforms existing models, achieving state-of-the-art accuracy and F1 scores.
Results and discussion: These findings highlight the model's capacity to generalize across diverse datasets and capture fine-grained pose details, showcasing its potential in advancing complex pose recognition tasks. The proposed framework offers a robust solution for precise action recognition and paves the way for future developments in multi-modal pose analysis.
Frontiers in Neurorobotics 18:1520983. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11792168/pdf/
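A single graph-convolution layer of the kind PoseGCN builds on can be sketched as below, with joints as nodes and skeleton bones as edges. This uses the standard symmetric normalization; the full model's temporal edges, action-specific attention, and pretext task are not shown:

```python
import numpy as np

def gcn_layer(x, adj, w):
    """One graph-convolution layer over a skeleton graph.

    x:   (N, F)  per-joint features
    adj: (N, N)  bone adjacency (0/1, symmetric)
    w:   (F, F') learned projection
    """
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
    return np.maximum(a_norm @ x @ w, 0.0)      # neighbor aggregation + ReLU
```

Each joint's output mixes its own features with its skeletal neighbors', which is what lets the network model joint-interaction dynamics directly.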
Citations: 0
Improved object detection method for autonomous driving based on DETR.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-20 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1484276
Huaqi Zhao, Songnan Zhang, Xiang Peng, Zhengguang Lu, Guojing Li
Abstract: Object detection is a critical component in the development of autonomous driving technology and has demonstrated significant growth potential. To address the limitations of current techniques, this paper presents an improved object detection method for autonomous driving based on a detection transformer (DETR). First, we introduce a multi-scale feature and location information extraction method, which solves the inadequacy of the model for multi-scale object localization and detection. In addition, we developed a transformer encoder based on the group axial attention mechanism. This allows for efficient attention range control in the horizontal and vertical directions while reducing computation, ultimately enhancing the inference speed. Furthermore, we propose a novel dynamic hyperparameter tuning training method based on Pareto efficiency, which coordinates the training state of the loss functions through dynamic weights, overcoming issues associated with manually setting fixed weights and enhancing model convergence speed and accuracy. Experimental results demonstrate that the proposed method surpasses others, with improvements of 3.3%, 4.5%, and 3% in average precision on the COCO, PASCAL VOC, and KITTI datasets, respectively, and an 84% increase in FPS.
Frontiers in Neurorobotics 18:1484276. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11788285/pdf/
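The dynamic loss-weighting idea — replacing manually fixed weights with weights that track the training state — can be illustrated with a deliberately simplified stand-in that rescales weights so each loss term contributes equally. The paper's Pareto-efficiency method is more involved; treat this only as a sketch of the fixed-weight problem being removed:

```python
import numpy as np

def dynamic_weights(losses, eps=1e-8):
    """Per-task weights inversely proportional to current loss values,
    normalized so the weights sum to the number of tasks.

    A simplified stand-in for the paper's Pareto-based tuning: every
    weighted loss term ends up contributing equally this step, so no
    single loss dominates training.
    """
    losses = np.asarray(losses, dtype=float)
    inv = 1.0 / (losses + eps)                 # large loss -> small weight
    return inv / inv.sum() * len(losses)
```

Calling this each step (e.g. on classification, box, and GIoU losses) yields weights that adapt as the individual losses shrink at different rates.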
Citations: 0
Cross-modality fusion with EEG and text for enhanced emotion detection in English writing.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-17 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1529880
Jing Wang, Ci Zhang
Abstract:
Introduction: Emotion detection in written text is critical for applications in human-computer interaction, affective computing, and personalized content recommendation. Traditional approaches to emotion detection primarily leverage textual features, using natural language processing techniques such as sentiment analysis, which, while effective, may miss subtle nuances of emotions. These methods often fall short in recognizing the complex, multimodal nature of human emotions, as they ignore physiological cues that could provide richer emotional insights.
Methods: To address these limitations, this paper proposes Emotion Fusion-Transformer, a cross-modality fusion model that integrates EEG signals and textual data to enhance emotion detection in English writing. By utilizing the Transformer architecture, our model effectively captures contextual relationships within the text while concurrently processing EEG signals to extract underlying emotional states. Specifically, the Emotion Fusion-Transformer first preprocesses EEG data through signal transformation and filtering, followed by feature extraction that complements the textual embeddings. These modalities are fused within a unified Transformer framework, allowing for a holistic view of both the cognitive and physiological dimensions of emotion.
Results and discussion: Experimental results demonstrate that the proposed model significantly outperforms text-only and EEG-only approaches, with improvements in both accuracy and F1-score across diverse emotional categories. This model shows promise for enhancing affective computing applications by bridging the gap between physiological and textual emotion detection, enabling more nuanced and accurate emotion analysis in English writing.
Frontiers in Neurorobotics 18:1529880. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11782560/pdf/
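One common way to fuse two modalities inside a shared Transformer-style framework is cross-modal attention, sketched below. The abstract does not specify this exact mechanism, so this is an illustrative stand-in; both modalities are assumed already projected to the same feature dimension:

```python
import numpy as np

def cross_modal_attention(text, eeg):
    """Text tokens attend over EEG feature frames (single head, no
    learned projections — a sketch of one possible fusion step).

    text: (Nt, d) token embeddings; eeg: (Ne, d) EEG frame features.
    Returns (Nt, 2d): each token concatenated with its EEG summary.
    """
    d = text.shape[-1]
    scores = text @ eeg.T / np.sqrt(d)          # (Nt, Ne) affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)          # softmax over EEG frames
    return np.concatenate([text, w @ eeg], axis=-1)
```

The concatenated output carries both the textual context and a token-specific physiological summary, which is the "holistic view" the abstract describes.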
Citations: 0
Editorial: Brain-inspired autonomous driving.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-16 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1543115
Elishai Ezra Tsur, Gianluca Di Flumeri, Hadar Cohen Duwek
Frontiers in Neurorobotics 19:1543115 (editorial, no abstract). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11779717/pdf/
Citations: 0
Deep reinforcement learning and robust SLAM based robotic control algorithm for self-driving path optimization.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-15 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1428358
Samiullah Khan, Ashfaq Niaz, Dou Yinke, Muhammad Usman Shoukat, Saqib Ali Nawaz
Abstract: A reward-shaping deep deterministic policy gradient (RS-DDPG) and simultaneous localization and mapping (SLAM) path-tracking algorithm is proposed to address the low accuracy and poor robustness of target path tracking in robotic control during maneuvers. The RS-DDPG algorithm is based on deep reinforcement learning (DRL) and designs a reward function to optimize the parameters of DDPG so as to achieve the required tracking accuracy and stability. A visual SLAM algorithm based on semantic segmentation and geometric information is proposed to address the poor robustness and susceptibility to interference from dynamic objects that visual-sensor-based SLAM suffers in dynamic scenes. Using the Apollo autonomous driving simulation platform, simulation experiments were conducted on the original DDPG algorithm and the improved RS-DDPG path-tracking control algorithm. The results indicate that the proposed RS-DDPG algorithm outperforms DDPG in path-tracking accuracy and robustness, and that the improved visual SLAM algorithm effectively improves performance in dynamic scenarios.
Frontiers in Neurorobotics 18:1428358. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11775903/pdf/
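Reward shaping, the core of RS-DDPG, augments the environment's base reward with task-specific terms so the agent is steered toward accurate tracking. A hypothetical tracking-oriented example — the error terms and weights below are illustrative, not the paper's reward design:

```python
def shaped_reward(base_reward, cross_track_err, heading_err,
                  w_ct=1.0, w_h=0.5):
    """Shaped reward for path tracking (illustrative sketch).

    Penalizes lateral deviation from the target path and heading
    misalignment on top of the environment's base reward; the weights
    w_ct and w_h are hypothetical tuning knobs, not the paper's values.
    """
    return base_reward - w_ct * abs(cross_track_err) - w_h * abs(heading_err)
```

During training, the shaped reward replaces the sparse base reward in the DDPG critic's target, giving the policy a dense gradient toward the path.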
Citations: 0
A portable EEG signal acquisition system and a limited-electrode channel classification network for SSVEP.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-15 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1502560
Yunxiao Ma, Jinming Huang, Chuan Liu, Meiyu Shi
Abstract: Brain-computer interfaces (BCIs) have garnered significant research attention, yet their complexity has hindered widespread adoption in daily life. Most current electroencephalography (EEG) systems rely on wet electrodes and numerous electrodes to enhance signal quality, making them impractical for everyday use. Portable and wearable devices offer a promising solution, but the limited number of electrodes in specific regions can lead to missing channels and reduced BCI performance. To overcome these challenges and enable better integration of BCI systems with external devices, this study developed an EEG signal acquisition platform (Gaitech BCI) based on the Robot Operating System (ROS) using a 10-channel dry electrode EEG device. Additionally, a multi-scale channel attention selection network based on the Squeeze-and-Excitation (SE) module (SEMSCS) is proposed to improve the classification performance of portable BCI devices with limited channels. Steady-state visual evoked potential (SSVEP) data were collected using the developed BCI system to evaluate both the system and network performance. Offline data from ten subjects were analyzed using within-subject and cross-subject experiments, along with ablation studies. The results demonstrated that the SEMSCS model achieved better classification performance than the comparative reference model, even with a limited number of channels. Additionally, the implementation of online experiments offers a rational solution for controlling external devices via BCI.
Frontiers in Neurorobotics 18:1502560. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11774901/pdf/
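The SE module at the heart of SEMSCS can be sketched in NumPy: squeeze each channel to a scalar by global average pooling, then excite through two small dense layers whose sigmoid output rescales the channels. The multiscale and channel-selection parts of the full model are omitted here:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation over EEG channels — a sketch.

    x:  (C, T) channel-by-time EEG features
    w1: (C, r) reduction weights, w2: (r, C) expansion weights
    """
    z = x.mean(axis=1)                          # squeeze: (C,) descriptors
    h = np.maximum(z @ w1, 0.0)                 # bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(h @ w2)))         # channel gates in (0, 1)
    return x * s[:, None]                       # reweight channels
```

Because the gates are learned from the data, informative electrodes are amplified and noisy ones suppressed — useful when only 10 dry-electrode channels are available.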
Citations: 0
Integrating attention mechanism and boundary detection for building segmentation from remote sensing images.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-14 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1482051
Ping Liu, Yu Gao, Xiangtian Zheng, Hesong Wang, Yimeng Zhao, Xinru Wu, Zehao Lu, Zhichuan Yue, Yuting Xie, Shufeng Hao
Abstract: Accurate building segmentation has become critical in various fields such as urban management, urban planning, mapping, and navigation. With the increasing diversity in the number, size, and shape of buildings, convolutional neural networks have been used to segment and extract buildings from remote sensing images, increasing efficiency and the utilization of image features. We propose a building semantic segmentation method that improves the traditional U-Net convolutional neural network by integrating an attention mechanism and boundary detection. The attention mechanism module combines attention in the channel and spatial dimensions: it captures image feature information in the channel dimension using a one-dimensional cross-channel convolution whose kernel size is adjusted adaptively. Additionally, a weighted boundary loss function is designed to replace the traditional semantic segmentation cross-entropy loss in order to detect building boundaries. The loss function optimizes the extraction of building boundaries during backpropagation, ensuring the integrity of boundary extraction in shadowed regions. Experimental results show that the proposed model, AMBDNet, achieves high performance on high-resolution remote sensing images, including a recall of 0.9046, an IoU of 0.7797, and a pixel accuracy of 0.9140, demonstrating its robustness and effectiveness in precise building segmentation. The results further indicate that AMBDNet improves single-class building recall by 0.0322 and single-class pixel accuracy by 0.0169 on the high-resolution remote sensing image recognition task.
Frontiers in Neurorobotics 18:1482051. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11772425/pdf/
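The weighted boundary loss can be sketched as a pixel-weighted cross-entropy in which boundary pixels cost more than interior pixels. The boundary weight below is an illustrative choice, not the paper's value:

```python
import numpy as np

def weighted_boundary_bce(pred, target, boundary, w_boundary=5.0, eps=1e-7):
    """Binary cross-entropy with extra weight on boundary pixels.

    pred, target: (H, W) probabilities and 0/1 labels
    boundary:     (H, W) 0/1 mask of building-boundary pixels
    w_boundary:   illustrative up-weighting factor for boundary errors
    """
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weights = 1.0 + (w_boundary - 1.0) * boundary   # interior=1, boundary=w
    return (weights * bce).mean()
```

During backpropagation, misclassified boundary pixels dominate the gradient, which is what pushes the network toward intact building outlines even in shadow.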
Citations: 0
FusionU10: enhancing pedestrian detection in low-light complex tourist scenes through multimodal fusion.
IF 2.6 · CAS Q4 · Computer Science
Frontiers in Neurorobotics Pub Date: 2025-01-10 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1504070
Xuefan Zhou, Jiapeng Li, Yingzheng Li
Abstract: With the rapid development of tourism, the concentration of visitor flows poses significant challenges for public safety management, especially in low-light and highly occluded environments, where existing pedestrian detection technologies often struggle to achieve satisfactory accuracy. Although infrared images perform well under low-light conditions, they lack color and detail, making them susceptible to background noise interference, particularly in complex outdoor environments where the similarity between heat sources and pedestrian features further reduces detection accuracy. To address these issues, this paper proposes the FusionU10 model, which combines information from both infrared and visible light images. The model first incorporates an Attention Gate mechanism (AGUNet) into an improved UNet architecture to focus on key features and generate pseudo-color images, followed by pedestrian detection using YOLOv10. During the prediction phase, the model optimizes the loss function with Complete Intersection over Union (CIoU), objectness loss (obj loss), and classification loss (cls loss), thereby enhancing the performance of the detection network and improving the quality and feature extraction capabilities of the pseudo-color images through a feedback mechanism. Experimental results demonstrate that FusionU10 significantly improves detection accuracy and robustness in complex scenes on the FLIR, M3FD, and LLVIP datasets, showing great potential for application in challenging environments.
Frontiers in Neurorobotics 18:1504070. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11757253/pdf/
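The Attention Gate in AGUNet follows the additive-attention pattern popularized by Attention U-Net: a gating signal from the decoder highlights relevant skip-connection features. Below is a minimal dense-layer sketch; real implementations apply the same arithmetic with 1×1 convolutions over feature maps:

```python
import numpy as np

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention gate over skip-connection features — a sketch.

    x:   (N, Fx) skip-connection features (e.g. flattened pixels)
    g:   (N, Fg) gating signal from the coarser decoder level
    w_x: (Fx, Fi), w_g: (Fg, Fi), psi: (Fi, 1) learned projections
    """
    a = np.maximum(x @ w_x + g @ w_g, 0.0)      # additive attention + ReLU
    alpha = 1.0 / (1.0 + np.exp(-(a @ psi)))    # (N, 1) coefficients in (0,1)
    return x * alpha                            # suppress irrelevant features
```

Gated skip features let the fusion network pass pedestrian-relevant infrared detail into the pseudo-color image while damping background heat sources.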
Citations: 0