Frontiers in Neurorobotics: Latest Articles

NAN-DETR: noising multi-anchor makes DETR better for object detection.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-14 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1484088
Zixin Huang, Xuesong Tao, Xinyuan Liu
{"title":"NAN-DETR: noising multi-anchor makes DETR better for object detection.","authors":"Zixin Huang, Xuesong Tao, Xinyuan Liu","doi":"10.3389/fnbot.2024.1484088","DOIUrl":"10.3389/fnbot.2024.1484088","url":null,"abstract":"<p><p>Object detection plays a crucial role in robotic vision, focusing on accurately identifying and localizing objects within images. However, many existing methods encounter limitations, particularly when it comes to effectively implementing a one-to-many matching strategy. To address these challenges, we propose NAN-DETR (Noising Multi-Anchor Detection Transformer), an innovative framework based on DETR (Detection Transformer). NAN-DETR introduces three key improvements to transformer-based object detection: a decoder-based multi-anchor strategy, a centralization noising mechanism, and the integration of Complete Intersection over Union (CIoU) loss. The multi-anchor strategy leverages multiple anchors per object, significantly enhancing detection accuracy by improving the one-to-many matching process. The centralization noising mechanism mitigates conflicts among anchors by injecting controlled noise into the detection boxes, thereby increasing the robustness of the model. Additionally, CIoU loss, which incorporates both aspect ratio and spatial distance in its calculations, results in more precise bounding box predictions compared to the conventional IoU loss. Although NAN-DETR may not drastically improve real-time processing capabilities, its exceptional performance positions it as a highly reliable solution for diverse object detection scenarios.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1484088"},"PeriodicalIF":2.6,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11513373/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142521681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
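The CIoU loss named in the abstract above augments plain IoU with a center-distance term and an aspect-ratio consistency term. The following is a minimal NumPy sketch of that standard CIoU formulation for two axis-aligned boxes; it illustrates the published CIoU definition, not the authors' code, and the (x1, y1, x2, y2) box format is an assumption.

```python
import numpy as np

def ciou_loss(box_a, box_b, eps=1e-7):
    """Complete IoU (CIoU) loss for two boxes in (x1, y1, x2, y2) format (illustrative sketch)."""
    # Intersection area
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union area and plain IoU
    wa, ha = box_a[2] - box_a[0], box_a[3] - box_a[1]
    wb, hb = box_b[2] - box_b[0], box_b[3] - box_b[1]
    union = wa * ha + wb * hb - inter
    iou = inter / (union + eps)

    # Normalized squared distance between box centers (rho^2 / c^2)
    ca_x, ca_y = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cb_x, cb_y = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    rho2 = (ca_x - cb_x) ** 2 + (ca_y - cb_y) ** 2
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps   # diagonal of the enclosing box

    # Aspect-ratio consistency term
    v = (4 / np.pi ** 2) * (np.arctan(wb / (hb + eps)) - np.arctan(wa / (ha + eps))) ** 2
    alpha = v / (1 - iou + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss(np.array([10, 10, 50, 50]), np.array([12, 14, 48, 52])))
```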
Noisy Dueling Double Deep Q-Network algorithm for autonomous underwater vehicle path planning.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-14 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1466571
Xu Liao, Le Li, Chuangxia Huang, Xian Zhao, Shumin Tan
{"title":"Noisy Dueling Double Deep Q-Network algorithm for autonomous underwater vehicle path planning.","authors":"Xu Liao, Le Li, Chuangxia Huang, Xian Zhao, Shumin Tan","doi":"10.3389/fnbot.2024.1466571","DOIUrl":"10.3389/fnbot.2024.1466571","url":null,"abstract":"<p><p>How to improve the success rate of autonomous underwater vehicle (AUV) path planning and reduce travel time as much as possible is a very challenging and crucial problem in the practical applications of AUV in the complex ocean current environment. Traditional reinforcement learning algorithms lack exploration of the environment, and the strategies learned by the agent may not generalize well to other different environments. To address these challenges, we propose a novel AUV path planning algorithm named the Noisy Dueling Double Deep Q-Network (ND3QN) algorithm by modifying the reward function and introducing a noisy network, which generalizes the traditional D3QN algorithm. Compared with the classical algorithm [e.g., Rapidly-exploring Random Trees Star (RRT*), DQN, and D3QN], with simulation experiments conducted in realistic terrain and ocean currents, the proposed ND3QN algorithm demonstrates the outstanding characteristics of a higher success rate of AUV path planning, shorter travel time, and smoother paths.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1466571"},"PeriodicalIF":2.6,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11513341/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142521682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
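ND3QN builds on the standard dueling and double Q-learning ideas. Below is a minimal NumPy sketch of two of those generic ingredients: the dueling value/advantage aggregation and the double-DQN bootstrap target. It is written generically rather than taken from the paper; shapes and variable names are assumptions, and the "noisy" component (learned parameter noise for exploration) is omitted.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double-DQN target: the action is chosen by the online net but evaluated by the target net."""
    best_action = np.argmax(q_online_next, axis=-1)
    bootstrap = q_target_next[np.arange(len(best_action)), best_action]
    return reward + gamma * (1.0 - done) * bootstrap

# Toy example: a batch of 2 transitions with 4 discrete actions (e.g., heading changes).
value = np.array([[0.5], [0.2]])
adv = np.random.randn(2, 4)
print(dueling_q(value, adv))
print(double_dqn_target(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                        np.random.randn(2, 4), np.random.randn(2, 4)))
```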
CAM-Vtrans: real-time sports training utilizing multi-modal robot data.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-11 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1453571
Hong LinLin, Lee Sangheang, Song GuanTing
{"title":"CAM-Vtrans: real-time sports training utilizing multi-modal robot data.","authors":"Hong LinLin, Lee Sangheang, Song GuanTing","doi":"10.3389/fnbot.2024.1453571","DOIUrl":"10.3389/fnbot.2024.1453571","url":null,"abstract":"<p><strong>Introduction: </strong>Assistive robots and human-robot interaction have become integral parts of sports training. However, existing methods often fail to provide real-time and accurate feedback, and they often lack integration of comprehensive multi-modal data.</p><p><strong>Methods: </strong>To address these issues, we propose a groundbreaking and innovative approach: CAM-Vtrans-Cross-Attention Multi-modal Visual Transformer. By leveraging the strengths of state-of-the-art techniques such as Visual Transformers (ViT) and models like CLIP, along with cross-attention mechanisms, CAM-Vtrans harnesses the power of visual and textual information to provide athletes with highly accurate and timely feedback. Through the utilization of multi-modal robot data, CAM-Vtrans offers valuable assistance, enabling athletes to optimize their performance while minimizing potential injury risks. This novel approach represents a significant advancement in the field, offering an innovative solution to overcome the limitations of existing methods and enhance the precision and efficiency of sports training programs.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1453571"},"PeriodicalIF":2.6,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11502466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142516399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
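The cross-attention mechanism named in the abstract lets tokens from one modality attend to tokens from another. The snippet below is a generic scaled dot-product cross-attention in NumPy, given only as an illustration of that mechanism; the dimensions are arbitrary and learned query/key/value projections are omitted as a simplification.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: queries from one modality attend to another.

    queries: (n_q, d); keys/values: (n_kv, d). Learned projection matrices are omitted.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)        # (n_q, n_kv) affinities
    weights = softmax(scores, axis=-1)            # attention distribution per query
    return weights @ values                       # (n_q, d) fused features

# Toy example: 3 visual tokens attending to 5 text tokens, feature size 8.
vis = np.random.randn(3, 8)
txt = np.random.randn(5, 8)
print(cross_attention(vis, txt, txt).shape)       # (3, 8)
```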
Sports-ACtrans Net: research on multimodal robotic sports action recognition driven via ST-GCN.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-11 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1443432
Qi Lu
{"title":"Sports-ACtrans Net: research on multimodal robotic sports action recognition driven via ST-GCN.","authors":"Qi Lu","doi":"10.3389/fnbot.2024.1443432","DOIUrl":"10.3389/fnbot.2024.1443432","url":null,"abstract":"<p><strong>Introduction: </strong>Accurately recognizing and understanding human motion actions presents a key challenge in the development of intelligent sports robots. Traditional methods often encounter significant drawbacks, such as high computational resource requirements and suboptimal real-time performance. To address these limitations, this study proposes a novel approach called Sports-ACtrans Net.</p><p><strong>Methods: </strong>In this approach, the Swin Transformer processes visual data to extract spatial features, while the Spatio-Temporal Graph Convolutional Network (ST-GCN) models human motion as graphs to handle skeleton data. By combining these outputs, a comprehensive representation of motion actions is created. Reinforcement learning is employed to optimize the action recognition process, framing it as a sequential decision-making problem. Deep Q-learning is utilized to learn the optimal policy, thereby enhancing the robot's ability to accurately recognize and engage in motion.</p><p><strong>Results and discussion: </strong>Experiments demonstrate significant improvements over state-of-the-art methods. This research advances the fields of neural computation, computer vision, and neuroscience, aiding in the development of intelligent robotic systems capable of understanding and participating in sports activities.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1443432"},"PeriodicalIF":2.6,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11502397/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142498770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
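The abstract combines per-modality features (Swin for appearance, ST-GCN for skeletons) and frames recognition as a sequential decision problem solved with Q-learning. The sketch below shows two generic building blocks that description implies, naive feature concatenation and a one-step Q-learning update; it is a hypothetical illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_features(visual_feat, skeleton_feat):
    """Naive fusion: concatenate visual (e.g., Swin) and skeleton (e.g., ST-GCN) feature vectors."""
    return np.concatenate([visual_feat, skeleton_feat], axis=-1)

def q_learning_update(q_row, action, reward, q_next_row, alpha=0.1, gamma=0.99):
    """One-step Q-learning update for a single transition (per-state action-value vector)."""
    td_target = reward + gamma * np.max(q_next_row)
    q_row = q_row.copy()
    q_row[action] += alpha * (td_target - q_row[action])
    return q_row

fused = fuse_features(np.random.randn(256), np.random.randn(128))
print(fused.shape)                                                   # (384,)
print(q_learning_update(np.zeros(5), action=2, reward=1.0, q_next_row=np.zeros(5)))
```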
The SocialAI school: a framework leveraging developmental psychology toward artificial socio-cultural agents.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-10-09 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1396359
Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer
{"title":"The SocialAI school: a framework leveraging developmental psychology toward artificial socio-cultural agents.","authors":"Grgur Kovač, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer","doi":"10.3389/fnbot.2024.1396359","DOIUrl":"https://doi.org/10.3389/fnbot.2024.1396359","url":null,"abstract":"<p><p>Developmental psychologists have long-established socio-cognitive abilities as fundamental to human intelligence and development. These abilities enable individuals to enter, learn from, and contribute to a surrounding culture. This drives the process of cumulative cultural evolution, which is responsible for humanity's most remarkable achievements. AI research on social interactive agents mostly concerns the <i>emergence</i> of culture in a multi-agent setting (often without a strong grounding in developmental psychology). We argue that AI research should be informed by psychology and study socio-cognitive abilities enabling to <i>enter</i> a culture as well. We draw inspiration from the work of Michael Tomasello and Jerome Bruner, who studied socio-cognitive development and emphasized the influence of a cultural environment on intelligence. We outline a broader set of concepts than those currently studied in AI to provide a foundation for research in artificial social intelligence. Those concepts include social cognition (joint attention, perspective taking), communication, social learning, formats, and scaffolding. To facilitate research in this domain, we present The SocialAI school-a tool that offers a customizable parameterized suite of procedurally generated environments. This tool simplifies experimentation with the introduced concepts. Additionally, these environments can be used both with multimodal RL agents, or with pure-text Large Language Models (LLMs) as interactive agents. Through a series of case studies, we demonstrate the versatility of the SocialAI school for studying both RL and LLM-based agents. Our motivation is to engage the AI community around social intelligence informed by developmental psychology, and to provide a user-friendly resource and tool for initial investigations in this direction. Refer to the project website for code and additional resources: https://sites.google.com/view/socialai-school.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1396359"},"PeriodicalIF":2.6,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11496287/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142498771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
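The SocialAI school exposes a parameterized suite of procedurally generated environments; the actual API is documented on the project website linked above. The sketch below only illustrates, with entirely hypothetical class, concept, and action names, what sweeping such a concept parameter and running an agent loop typically looks like. It is not the SocialAI interface.

```python
import random

class ToySocialEnv:
    """Hypothetical stand-in for a procedurally generated social-interaction environment."""
    def __init__(self, concept, max_steps=5, seed=0):
        self.concept = concept          # e.g., "joint_attention" or "scaffolding" (assumed names)
        self.max_steps = max_steps
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return f"[{self.concept}] caretaker points at an object"

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == "follow_gaze" else 0.0
        done = self.t >= self.max_steps
        return f"[{self.concept}] step {self.t}", reward, done

# Sweep over the concept parameter, mirroring the idea of a parameterized environment suite.
for concept in ["joint_attention", "perspective_taking", "scaffolding"]:
    env = ToySocialEnv(concept)
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = random.choice(["follow_gaze", "look_away"])   # placeholder random agent
        obs, reward, done = env.step(action)
        total += reward
    print(concept, total)
```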
Fast reconstruction of milling temperature field based on CNN-GRU machine learning models.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-09-27 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1448482
Fengyuan Ma, Haoyu Wang, Mingfeng E, Zhongjin Sha, Xingshu Wang, Yunxian Cui, Junwei Yin
{"title":"Fast reconstruction of milling temperature field based on CNN-GRU machine learning models.","authors":"Fengyuan Ma, Haoyu Wang, Mingfeng E, Zhongjin Sha, Xingshu Wang, Yunxian Cui, Junwei Yin","doi":"10.3389/fnbot.2024.1448482","DOIUrl":"https://doi.org/10.3389/fnbot.2024.1448482","url":null,"abstract":"<p><p>With the development of intelligent manufacturing technology, robots have become more widespread in the field of milling processing. When milling difficult-to-machine alloy materials, the localized high temperature and large temperature gradient at the front face of the tool lead to shortened tool life and poor machining quality. The existing temperature field reconstruction methods have many assumptions, large arithmetic volume and long solution time. In this paper, an inverse heat conduction problem solution model based on Gated Convolutional Recurrent Neural Network (CNN-GRU) is proposed for reconstructing the temperature field of the tool during milling. In order to ensure the speed and accuracy of the reconstruction, we propose to utilize the inverse heat conduction problem solution model constructed by knowledge distillation (KD) and compression acceleration, which achieves a significant reduction of the training time with a small loss of optimality and ensures the accuracy and efficiency of the prediction model. With different levels of random noise added to the model input data, CNN-GRU + KD is noise-resistant and still shows good robustness and stability under noisy data. The temperature field reconstruction of the milling tool is carried out for three different working conditions, and the curve fitting excellence under the three conditions is 0.97 at the highest, and the root mean square error is 1.43°C at the minimum, respectively, and the experimental results show that the model is feasible and effective in carrying out the temperature field reconstruction of the milling tool and is of great significance in improving the accuracy of the milling machining robot.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1448482"},"PeriodicalIF":2.6,"publicationDate":"2024-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466942/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142462936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
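The abstract compresses the CNN-GRU model via knowledge distillation. A common way to set that up for a regression target such as a temperature field is to blend the error against the measured data with an error against the larger teacher model's predictions. The sketch below is that generic formulation in NumPy; the blending weight, field shape, and loss form are assumptions, not the authors' exact objective.

```python
import numpy as np

def distillation_regression_loss(student_pred, teacher_pred, target, alpha=0.5):
    """Blend supervised MSE with teacher-matching MSE (generic KD objective for regression)."""
    supervised = np.mean((student_pred - target) ** 2)       # fit the measured temperature field
    distill = np.mean((student_pred - teacher_pred) ** 2)    # mimic the larger teacher model
    return alpha * supervised + (1.0 - alpha) * distill

# Toy example: a 4 x 4 temperature field in degrees Celsius.
target = np.full((4, 4), 120.0)
teacher = target + np.random.normal(0, 1.0, size=(4, 4))
student = target + np.random.normal(0, 3.0, size=(4, 4))
print(distillation_regression_loss(student, teacher, target))
```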
ACA-Net: adaptive context-aware network for basketball action recognition.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-09-25 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1471327
Yaolei Zhang, Fei Zhang, Yuanli Zhou, Xiao Xu
{"title":"ACA-Net: adaptive context-aware network for basketball action recognition.","authors":"Yaolei Zhang, Fei Zhang, Yuanli Zhou, Xiao Xu","doi":"10.3389/fnbot.2024.1471327","DOIUrl":"10.3389/fnbot.2024.1471327","url":null,"abstract":"<p><p>The advancements in intelligent action recognition can be instrumental in developing autonomous robotic systems capable of analyzing complex human activities in real-time, contributing to the growing field of robotics that operates in dynamic environments. The precise recognition of basketball players' actions using artificial intelligence technology can provide valuable assistance and guidance to athletes, coaches, and analysts, and can help referees make fairer decisions during games. However, unlike action recognition in simpler scenarios, the background in basketball is similar and complex, the differences between various actions are subtle, and lighting conditions are inconsistent, making action recognition in basketball a challenging task. To address this problem, an Adaptive Context-Aware Network (ACA-Net) for basketball player action recognition is proposed in this paper. It contains a Long Short-term Adaptive (LSTA) module and a Triplet Spatial-Channel Interaction (TSCI) module to extract effective features at the temporal, spatial, and channel levels. The LSTA module adaptively learns global and local temporal features of the video. The TSCI module enhances the feature representation by learning the interaction features between space and channels. We conducted extensive experiments on the popular basketball action recognition datasets SpaceJam and Basketball-51. The results show that ACA-Net outperforms the current mainstream methods, achieving 89.26% and 92.05% in terms of classification accuracy on the two datasets, respectively. ACA-Net's adaptable architecture also holds potential for real-world applications in autonomous robotics, where accurate recognition of complex human actions in unstructured environments is crucial for tasks such as automated game analysis, player performance evaluation, and enhanced interactive broadcasting experiences.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1471327"},"PeriodicalIF":2.6,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11461453/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142389755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
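The LSTA module described above adaptively weights temporal information in the video. The snippet below is a generic temporal-attention pooling over per-frame features in NumPy, meant only to illustrate that kind of adaptive temporal weighting; it is not the LSTA module itself, and the scoring vector stands in for what would normally be a learned parameter.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention_pool(frame_feats, score_vector):
    """Weight per-frame features by a relevance score and pool over time.

    frame_feats: (T, d) features for T frames; score_vector: (d,) scoring weights.
    """
    scores = frame_feats @ score_vector          # (T,) one relevance score per frame
    weights = softmax(scores)                    # normalized temporal attention weights
    return weights @ frame_feats                 # (d,) clip-level feature

frames = np.random.randn(16, 32)                 # 16 frames, 32-dim features per frame
w = np.random.randn(32)                          # placeholder for a learned scoring vector
print(temporal_attention_pool(frames, w).shape)  # (32,)
```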
Swimtrans Net: a multimodal robotic system for swimming action recognition driven via Swin-Transformer.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-09-24 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1452019
He Chen, Xiaoyu Yue
{"title":"Swimtrans Net: a multimodal robotic system for swimming action recognition driven via Swin-Transformer.","authors":"He Chen, Xiaoyu Yue","doi":"10.3389/fnbot.2024.1452019","DOIUrl":"10.3389/fnbot.2024.1452019","url":null,"abstract":"<p><strong>Introduction: </strong>Currently, using machine learning methods for precise analysis and improvement of swimming techniques holds significant research value and application prospects. The existing machine learning methods have improved the accuracy of action recognition to some extent. However, they still face several challenges such as insufficient data feature extraction, limited model generalization ability, and poor real-time performance.</p><p><strong>Methods: </strong>To address these issues, this paper proposes an innovative approach called Swimtrans Net: A multimodal robotic system for swimming action recognition driven via Swin-Transformer. By leveraging the powerful visual data feature extraction capabilities of Swin-Transformer, Swimtrans Net effectively extracts swimming image information. Additionally, to meet the requirements of multimodal tasks, we integrate the CLIP model into the system. Swin-Transformer serves as the image encoder for CLIP, and through fine-tuning the CLIP model, it becomes capable of understanding and interpreting swimming action data, learning relevant features and patterns associated with swimming. Finally, we introduce transfer learning for pre-training to reduce training time and lower computational resources, thereby providing real-time feedback to swimmers.</p><p><strong>Results and discussion: </strong>Experimental results show that Swimtrans Net has achieved a 2.94% improvement over the current state-of-the-art methods in swimming motion analysis and prediction, making significant progress. This study introduces an innovative machine learning method that can help coaches and swimmers better understand and improve swimming techniques, ultimately improving swimming performance.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1452019"},"PeriodicalIF":2.6,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11458561/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142389758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
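Swimtrans Net uses a Swin-Transformer as the image encoder of a CLIP model. In CLIP-style matching, image and text embeddings are L2-normalized and compared with a temperature-scaled cosine similarity. The sketch below shows that standard similarity computation in NumPy as an illustration of the CLIP mechanism the abstract references; the embedding dimensions and action descriptions are assumptions, and it is not the authors' code.

```python
import numpy as np

def clip_logits(image_emb, text_emb, temperature=0.07):
    """Temperature-scaled cosine similarity between image and text embeddings (CLIP-style)."""
    img = image_emb / np.linalg.norm(image_emb, axis=-1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    return (img @ txt.T) / temperature           # (n_images, n_texts) similarity logits

# Toy example: 2 swimming frames scored against 3 textual action descriptions.
images = np.random.randn(2, 64)
texts = np.random.randn(3, 64)
logits = clip_logits(images, texts)
print(logits.shape, logits.argmax(axis=1))       # best-matching description per frame
```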
Human in the collaborative loop: a strategy for integrating human activity recognition and non-invasive brain-machine interfaces to control collaborative robots.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-09-24 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1383089
Artur Pilacinski, Lukas Christ, Marius Boshoff, Ioannis Iossifidis, Patrick Adler, Michael Miro, Bernd Kuhlenkötter, Christian Klaes
{"title":"Human in the collaborative loop: a strategy for integrating human activity recognition and non-invasive brain-machine interfaces to control collaborative robots.","authors":"Artur Pilacinski, Lukas Christ, Marius Boshoff, Ioannis Iossifidis, Patrick Adler, Michael Miro, Bernd Kuhlenkötter, Christian Klaes","doi":"10.3389/fnbot.2024.1383089","DOIUrl":"https://doi.org/10.3389/fnbot.2024.1383089","url":null,"abstract":"<p><p>Human activity recognition (HAR) and brain-machine interface (BMI) are two emerging technologies that can enhance human-robot collaboration (HRC) in domains such as industry or healthcare. HAR uses sensors or cameras to capture and analyze the movements and actions of humans, while BMI uses human brain signals to decode action intentions. Both technologies face challenges impacting accuracy, reliability, and usability. In this article, we review the state-of-the-art techniques and methods for HAR and BMI and highlight their strengths and limitations. We then propose a hybrid framework that fuses HAR and BMI data, which can integrate the complementary information from the brain and body motion signals and improve the performance of human state decoding. We also discuss our hybrid method's potential benefits and implications for HRC.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1383089"},"PeriodicalIF":2.6,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11458527/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142389756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
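One simple way to realize the kind of HAR/BMI fusion the article proposes is a late fusion that combines the class probabilities each decoder produces. The snippet below is that generic late-fusion sketch in NumPy, given as an assumption about what fusing the two streams can look like rather than the authors' method; the intention labels and fusion weight are hypothetical.

```python
import numpy as np

def late_fusion(har_probs, bmi_probs, w_har=0.6):
    """Weighted late fusion of class probabilities from two decoders, then renormalized."""
    fused = w_har * har_probs + (1.0 - w_har) * bmi_probs
    return fused / fused.sum(axis=-1, keepdims=True)

# Toy example: 3 candidate intentions (e.g., reach, grasp, release).
har = np.array([0.7, 0.2, 0.1])      # body-motion-based (HAR) decoder output
bmi = np.array([0.4, 0.5, 0.1])      # brain-signal-based (BMI) decoder output
fused = late_fusion(har, bmi)
print(fused, fused.argmax())
```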
MLFGCN: short-term residential load forecasting via graph attention temporal convolution network.
IF 2.6 | CAS Zone 4 | Computer Science
Frontiers in Neurorobotics Pub Date : 2024-09-23 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1461403
Ding Feng, Dengao Li, Yu Zhou, Wei Wang
{"title":"MLFGCN: short-term residential load forecasting via graph attention temporal convolution network.","authors":"Ding Feng, Dengao Li, Yu Zhou, Wei Wang","doi":"10.3389/fnbot.2024.1461403","DOIUrl":"https://doi.org/10.3389/fnbot.2024.1461403","url":null,"abstract":"<p><strong>Introduction: </strong>Residential load forecasting is a challenging task due to the random fluctuations caused by complex correlations and individual differences. The existing short-term load forecasting models usually introduce external influencing factors such as climate and date. However, these additional information not only bring computational burden to the model, but also have uncertainty. To address these issues, we propose a novel multi-level feature fusion model based on graph attention temporal convolutional network (MLFGCN) for short-term residential load forecasting.</p><p><strong>Methods: </strong>The proposed MLFGCN model fully considers the potential long-term dependencies in a single load series and the correlations between multiple load series, and does not require any additional information to be added. Temporal convolutional network (TCN) with gating mechanism is introduced to learn potential long-term dependencies in the original load series. In addition, we design two graph attentive convolutional modules to capture potential multi-level dependencies in load data. Finally, the outputs of each module are fused through an information fusion layer to obtain the highly accurate forecasting results.</p><p><strong>Results: </strong>We conduct validation experiments on two real-world datasets. The results show that the proposed MLFGCN model achieves 0.25, 7.58% and 0.50 for MAE, MAPE and RMSE, respectively. These values are significantly better than those of baseline models.</p><p><strong>Discussion: </strong>The MLFGCN algorithm proposed in this paper can significantly improve the accuracy of short-term residential load forecasting. This is achieved through high-quality feature reconstruction, comprehensive information graph construction and spatiotemporal features capture.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"18 ","pages":"1461403"},"PeriodicalIF":2.6,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11457015/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142389757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
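MLFGCN introduces a temporal convolutional network with a gating mechanism. A standard gated temporal convolution multiplies a tanh filter branch by a sigmoid gate branch over a causal window of past values. The sketch below implements that generic gated causal convolution in NumPy as an illustration of the mechanism, not the paper's module; kernel length, weights, and the toy load series are assumptions.

```python
import numpy as np

def gated_causal_conv1d(series, filter_w, gate_w):
    """Gated causal 1-D convolution: tanh(filter * x) * sigmoid(gate * x).

    series: (T,) load values; filter_w, gate_w: (k,) kernels applied over past values only.
    """
    k = len(filter_w)
    padded = np.concatenate([np.zeros(k - 1), series])   # left padding preserves causality
    out = np.empty_like(series, dtype=float)
    for t in range(len(series)):
        window = padded[t:t + k]                          # values x[t-k+1 .. t]
        out[t] = np.tanh(window @ filter_w) * (1.0 / (1.0 + np.exp(-(window @ gate_w))))
    return out

# Toy example: 24 hourly load readings, kernel size 3.
load = np.sin(np.linspace(0, 3 * np.pi, 24)) + 1.5
print(gated_causal_conv1d(load, np.array([0.2, 0.3, 0.5]), np.array([0.1, 0.1, 0.1])).shape)
```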