Frontiers in Neurorobotics — Latest Articles

RL-QPSO net: deep reinforcement learning-enhanced QPSO for efficient mobile robot path planning.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2025-01-08 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1464572
Yang Jing, Li Weiya
Introduction: Path planning in complex and dynamic environments poses a significant challenge in mobile robotics. Traditional path planning methods such as genetic algorithms, Dijkstra's algorithm, and Floyd's algorithm typically rely on deterministic search strategies, which can lead to local optima and lack global search capability in dynamic settings. These methods also have high computational costs and are inefficient for real-time applications.

Methods: To address these issues, this paper presents a Quantum-behaved Particle Swarm Optimization model enhanced by deep reinforcement learning (RL-QPSO Net), aimed at improving global optimality and adaptability in path planning. RL-QPSO Net combines quantum-behaved particle swarm optimization (QPSO) and deep reinforcement learning (DRL) modules through a dual control mechanism to achieve path optimization and environmental adaptation. The QPSO module is responsible for global path optimization, using quantum-behaved dynamics to avoid local optima, while the DRL module adjusts strategies in real time based on environmental feedback, enhancing decision-making in complex high-dimensional scenarios.

Results and discussion: Experiments were conducted on multiple datasets, including Cityscapes, NYU Depth V2, Mapillary Vistas, and ApolloScape. The results show that RL-QPSO Net outperforms traditional methods in accuracy, computational efficiency, and model complexity, providing an effective path-planning solution for real-time applications on mobile robots in complex environments. In the future, this method could be extended to resource-limited environments to achieve broader practical application.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11750848/pdf/
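The abstract does not spell out the QPSO update itself. As background, here is a minimal sketch of the standard quantum-behaved PSO iteration (a local attractor between personal and global bests, with a jump scaled by the distance to the swarm's mean-best position), run on a toy sphere objective. The fixed contraction coefficient `beta`, swarm size, and objective are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def qpso_minimize(f, dim=2, n_particles=20, iters=200, beta=0.75, seed=0):
    """Minimal quantum-behaved PSO: each particle is resampled around an
    attractor between its personal best and the global best, with a jump
    scaled by the distance to the swarm's mean-best position (mbest)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        mbest = pbest.mean(axis=0)
        phi = rng.random((n_particles, dim))
        attractor = phi * pbest + (1 - phi) * g            # local attractor
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, best_val = qpso_minimize(lambda p: float(np.sum(p**2)))
```

In RL-QPSO Net the DRL module would presumably adapt parameters such as `beta` from environmental feedback rather than keeping them fixed as here.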
Citations: 0
Directional Spatial and Spectral Attention Network (DSSA Net) for EEG-based emotion recognition.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1481746
Jiyao Liu, Lang He, Haifeng Chen, Dongmei Jiang
Significant strides have been made in emotion recognition from electroencephalography (EEG) signals. However, effectively modeling the diverse spatial, spectral, and temporal features of multi-channel brain signals remains a challenge. This paper proposes a novel framework, the Directional Spatial and Spectral Attention Network (DSSA Net), which enhances emotion recognition accuracy by capturing critical spatial-spectral-temporal features from EEG signals. The framework consists of three modules: Positional Attention (PA), Spectral Attention (SA), and Temporal Attention (TA). The PA module includes Vertical Attention (VA) and Horizontal Attention (HA) branches, designed to detect active brain regions from different orientations. Experimental results on three benchmark EEG datasets demonstrate that DSSA Net outperforms most competitive methods. On the SEED and SEED-IV datasets, it achieves accuracies of 96.61% and 85.07% for subject-dependent emotion recognition, respectively, and 87.03% and 75.86% for subject-independent recognition. On the DEAP dataset, it attains 94.97% for valence and 94.73% for arousal. These results showcase the framework's ability to leverage spatial and spectral differences across brain hemispheres and regions, enhancing classification accuracy for emotion recognition.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11747697/pdf/
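The exact design of the VA/HA branches is not given in the abstract. As one plausible reading, orientation-specific attention over an electrode-grid feature map could weight rows (vertical) and columns (horizontal) separately; the shapes and the multiplicative combination below are assumptions for illustration only:

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def directional_attention(fmap):
    """fmap: (C, H, W) features on an electrode grid. Vertical attention
    (VA) weights rows, horizontal attention (HA) weights columns; both
    are applied multiplicatively, mimicking two orientation branches."""
    va = softmax(fmap.mean(axis=2, keepdims=True), axis=1)  # (C, H, 1)
    ha = softmax(fmap.mean(axis=1, keepdims=True), axis=2)  # (C, 1, W)
    return fmap * va * ha

feat = np.random.default_rng(0).normal(size=(4, 9, 9))
out = directional_attention(feat)
```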
Citations: 0
KalmanFormer: using transformer to model the Kalman Gain in Kalman Filters.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1460255
Siyuan Shen, Jichen Chen, Guanfeng Yu, Zhengjun Zhai, Pujie Han
Introduction: Tracking the hidden states of dynamic systems is a fundamental task in signal processing. Recursive Kalman Filters (KF) are widely regarded as an efficient solution for linear Gaussian systems, offering low computational complexity. However, real-world applications often involve non-linear dynamics, making accurate state estimation challenging for traditional Kalman Filters, and accurately modeling system dynamics and noise is often difficult in practice. To address these limitations, we propose the KalmanFormer, a hybrid model-driven and data-driven state estimator. By leveraging data, the KalmanFormer improves state estimation under non-linear conditions and partial-information scenarios.

Methods: The proposed KalmanFormer integrates a classical Kalman Filter with a Transformer framework. Specifically, it uses the Transformer to learn the Kalman Gain directly from data, without requiring prior knowledge of noise parameters. The learned gain is then incorporated into the standard Kalman Filter workflow, enabling the system to better handle non-linearities and model mismatch. The hybrid approach combines the strengths of data-driven learning and model-driven methodology to achieve robust state estimation.

Results and discussion: To evaluate the effectiveness of the KalmanFormer, we conducted numerical experiments on both synthetic and real-world datasets. The results show that the KalmanFormer outperforms the classical Extended Kalman Filter (EKF) in the same settings, achieving superior accuracy in tracking hidden states and demonstrating resilience to non-linearities and imprecise system models.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11747084/pdf/
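The core idea — keep the Kalman predict/update structure but let a learned module supply the gain — can be sketched as follows. Here a constant-gain lambda stands in for the Transformer, and the 1-D random-walk model is a toy assumption, not the paper's setup:

```python
import numpy as np

def kf_with_learned_gain(zs, F, H, x0, gain_fn):
    """Kalman-style filter where the gain K is produced by an external
    module (`gain_fn`, standing in for the Transformer) instead of being
    derived from noise covariances."""
    x = np.array(x0, dtype=float)
    est = []
    for z in zs:
        x_pred = F @ x                      # model-driven predict step
        innov = z - H @ x_pred              # measurement innovation
        K = gain_fn(x_pred, innov)          # data-driven gain
        x = x_pred + K @ innov              # standard KF update form
        est.append(x.copy())
    return np.array(est)

# Toy 1-D random walk observed directly; a constant gain of 0.5 stands
# in for the learned Transformer output.
F = np.eye(1); H = np.eye(1)
zs = [np.array([v]) for v in [1.0, 1.2, 0.9, 1.1]]
est = kf_with_learned_gain(zs, F, H, np.zeros(1),
                           lambda xp, y: 0.5 * np.eye(1))
# est[-1] → [0.9875]
```

A trained gain module would take the innovation (and possibly a history window) as input; the interface above only fixes where its output plugs into the filter.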
Citations: 0
MSGU-Net: a lightweight multi-scale ghost U-Net for image segmentation.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2025-01-06 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1480055
Hua Cheng, Yang Zhang, Huangxin Xu, Dingliang Li, Zejian Zhong, Yinchuan Zhao, Zhuo Yan
U-Net and its variants have been widely used in image segmentation. This paper proposes MSGU-Net, a lightweight multi-scale Ghost U-Net architecture that processes segmentation tasks efficiently while generating high-quality object masks. A pyramid-structure (SPP-Inception) module and a ghost module are seamlessly integrated in a lightweight manner. Equipped with an efficient local attention (ELA) mechanism and an attention-gate mechanism, the network accurately identifies regions of interest (ROI). The SPP-Inception and ghost modules work in tandem to merge multi-scale information from low-level features, high-level features, and decoder masks at each stage. Comparative experiments between MSGU-Net and state-of-the-art networks on the ISIC2017 and ISIC2018 datasets show that, compared to the baseline U-Net, our model achieves superior segmentation performance while reducing parameter and computation costs by 96.08% and 92.59%, respectively. Moreover, MSGU-Net can serve as a lightweight deep neural network suitable for deployment across a range of intelligent devices and mobile platforms, offering considerable potential for widespread adoption.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11743674/pdf/
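The "ghost" idea (from GhostNet) is to produce a few intrinsic feature maps with a full projection and derive the rest with cheap linear ops. A toy sketch, where a fixed scaling stands in for the cheap depthwise transforms; `w_primary` and the scaling factors are illustrative assumptions:

```python
import numpy as np

def ghost_module(x, w_primary, ratio=2):
    """Ghost-style feature generation: a small 'primary' projection
    produces a few intrinsic maps, then cheap per-map linear ops (here a
    fixed scaling) generate the remaining 'ghost' maps.
    Output channels = primary channels * ratio."""
    # x: (H, W, C_in); w_primary: (C_in, C_prim), acting like a 1x1 conv
    primary = x @ w_primary                     # (H, W, C_prim)
    ghosts = [primary]
    for k in range(1, ratio):
        ghosts.append(0.5 ** k * primary)       # cheap linear transform
    return np.concatenate(ghosts, axis=-1)      # (H, W, C_prim * ratio)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 3))
w = rng.normal(size=(3, 4))
y = ghost_module(x, w, ratio=2)                 # (8, 8, 8)
```

The parameter saving comes from the ghost half costing only an elementwise op instead of another full projection, which is consistent with the large cost reductions the abstract reports.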
Citations: 0
Architectural planning robot driven by unsupervised learning for space optimization.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2025-01-03 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1517960
Zhe Zhang, Yuchun Zheng
Introduction: Space optimization in architectural planning is crucial for maximizing functionality and improving user experience in built environments. Traditional approaches often rely on manual planning or supervised learning techniques, which are limited by the availability of labeled data and may not adapt well to complex spatial requirements.

Methods: To address these limitations, this paper presents a novel architectural planning robot driven by unsupervised learning for automatic space optimization. The proposed framework integrates spatial attention, clustering, and state-refinement mechanisms to autonomously learn and optimize spatial configurations without labeled training data. The spatial attention mechanism focuses the model on key areas within the architectural space, clustering identifies functional zones, and state refinement iteratively improves the spatial layout based on learned patterns. Experiments on multiple 3D datasets demonstrate the effectiveness of the approach in achieving optimized space layouts with reduced computational requirements.

Results and discussion: The results show significant improvements in layout efficiency and processing time compared to traditional methods, indicating potential for real-world applications in automated architectural planning and dynamic space management. This work contributes a scalable solution for architectural space optimization that adapts to diverse spatial requirements through unsupervised learning.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11739300/pdf/
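The paper's clustering component is not detailed in the abstract. As an illustration of the "clustering identifies functional zones" step, plain k-means on synthetic 2-D sample points (both the algorithm choice and the data are assumptions) looks like:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means: sample points of a space are grouped into
    k zones by alternating nearest-center assignment and center update."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic "zones"
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)),
                 rng.normal(5, 0.3, (30, 2))])
labels, centers = kmeans(pts, 2)
```

Being unsupervised, this needs no labeled layouts, which matches the framework's stated motivation.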
Citations: 0
EEG-powered cerebral transformer for athletic performance.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2024-12-20 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1499734
Qikai Sun
Introduction: In recent years, with advancements in wearable devices and biosignal analysis technologies, sports performance analysis has become an increasingly popular research field, driven by growing demand for real-time monitoring of athletes' conditions in training and competition. Traditional methods typically rely on video or sensor data for motion recognition. However, unimodal data often fails to fully capture athletes' neural state, limiting accuracy and real-time performance on complex movement patterns. These methods also struggle with multimodal data fusion, making it difficult to fully leverage the deep information in electroencephalogram (EEG) signals.

Methods: To address these challenges, this paper proposes a "Cerebral Transformer" model based on EEG signals and video data. By employing an adaptive attention mechanism and cross-modal fusion, the model effectively combines EEG signals and video streams to achieve precise recognition and analysis of athletes' movements. The model's effectiveness was validated on four datasets: SEED, DEAP, eSports Sensors, and MODA. The results show that the proposed model outperforms existing mainstream methods in accuracy, recall, and F1 score while maintaining high computational efficiency.

Results and discussion: The significance of this study lies in providing a more comprehensive and efficient solution for sports performance analysis. Through cross-modal data fusion, it improves the accuracy of complex movement recognition and provides technical support for monitoring athletes' neural states, with important applications in sports training and medical rehabilitation.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11695414/pdf/
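The abstract does not expose the fusion module's internals. A common building block for this kind of cross-modal fusion is scaled dot-product cross-attention, in which tokens of one modality (say, EEG) query tokens of the other (video); the dimensions and random projections below are purely illustrative:

```python
import numpy as np

def cross_attention(q_feats, kv_feats, d, seed=0):
    """Scaled dot-product cross-attention: `q_feats` tokens attend over
    `kv_feats` tokens. Projection matrices are random stand-ins for
    learned weights."""
    rng = np.random.default_rng(seed)
    Wq = rng.normal(size=(q_feats.shape[1], d))
    Wk = rng.normal(size=(kv_feats.shape[1], d))
    Wv = rng.normal(size=(kv_feats.shape[1], d))
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)            # row-wise softmax
    return w @ V                                 # queries enriched with context

eeg = np.random.default_rng(1).normal(size=(5, 16))   # 5 EEG tokens
vid = np.random.default_rng(2).normal(size=(7, 32))   # 7 video tokens
fused = cross_attention(eeg, vid, d=8)                # (5, 8)
```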
Citations: 0
Edge-guided feature fusion network for RGB-T salient object detection.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2024-12-17 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1489658
Yuanlin Chen, Zengbao Sun, Cheng Yan, Ming Zhao
Introduction: RGB-T Salient Object Detection (SOD) aims to accurately segment salient regions in paired visible-light and thermal infrared images. However, many existing methods overlook the critical complementarity between these modalities, which can enhance detection accuracy.

Methods: We propose the Edge-Guided Feature Fusion Network (EGFF-Net), which consists of cross-modal feature extraction, edge-guided feature fusion, and saliency map prediction. First, the cross-modal feature extraction module captures and aggregates united and intersecting information in each local region of the RGB and thermal images. Then, the edge-guided feature fusion module enhances the edge features of salient regions, since edge information is very helpful in refining significant-area details. Finally, a layer-by-layer decoding structure integrates multi-level features and generates the predicted saliency maps.

Results: We conduct extensive experiments on three benchmark datasets and compare EGFF-Net with state-of-the-art methods. Our approach achieves superior performance, demonstrating the effectiveness of the proposed modules in improving both detection accuracy and boundary refinement.

Discussion: The results highlight the importance of integrating cross-modal information and edge-guided fusion in RGB-T SOD. Our method outperforms existing techniques and provides a robust framework for future developments in multi-modal saliency detection.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11685216/pdf/
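The fusion module's internals are not given in the abstract. One plausible reading — average the two modality features, then amplify responses near edges so object boundaries are emphasized — can be sketched with a Sobel gradient magnitude; the averaging and the re-weighting formula are assumptions for illustration:

```python
import numpy as np

def sobel_mag(img):
    """Gradient magnitude via 3x3 Sobel filters (borders left at zero)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_guided_fuse(rgb_feat, th_feat):
    """Average the modality features, then re-weight by a normalized edge
    map so salient-object boundaries are emphasized (a stand-in for the
    paper's edge-guided fusion module)."""
    fused = 0.5 * (rgb_feat + th_feat)
    edges = sobel_mag(fused)
    return fused * (1.0 + edges / (edges.max() + 1e-8))

a = np.zeros((8, 8)); a[:, 4:] = 1.0          # step edge in "RGB" features
b = np.zeros((8, 8)); b[:, 4:] = 0.8          # same edge in "thermal"
out = edge_guided_fuse(a, b)                  # boundary columns amplified
```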
Citations: 0
Cross-attention swin-transformer for detailed segmentation of ancient architectural color patterns.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2024-12-13 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1513488
Lv Yongyin, Yu Caixia
Introduction: Segmentation tasks in computer vision play a crucial role in applications ranging from object detection to medical imaging and cultural-heritage preservation. Traditional approaches, including convolutional neural networks (CNNs) and standard transformer-based models, have achieved significant success; however, they often struggle to capture fine-grained details and maintain efficiency across diverse datasets, and to balance precision with computational cost on complex patterns and high-resolution images.

Methods: To address these limitations, we propose a novel segmentation model that integrates a hierarchical vision-transformer backbone with multi-scale self-attention, cascaded attention decoding, and diffusion-based robustness enhancement. The approach captures both local details and global context effectively while keeping computational overhead low.

Results and discussion: Experiments on four diverse datasets (Ancient Architecture, MS COCO, Cityscapes, and ScanNet) demonstrate that our model outperforms state-of-the-art methods in accuracy, recall, and computational efficiency. The results highlight the model's ability to generalize across tasks and to provide robust segmentation even in challenging scenarios. Our work paves the way for more efficient and precise segmentation techniques, valuable for applications where both detail and speed are critical.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11707421/pdf/
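Swin-style hierarchical backbones keep attention cheap by computing it inside non-overlapping local windows. The partitioning step that produces those windows — the model's actual window size and depth are not given in the abstract, so the numbers here are illustrative — is a pure reshape:

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping ws x ws
    windows: the token layout on which Swin-style local self-attention
    operates. Returns (num_windows, ws*ws, C)."""
    H, W, C = x.shape
    assert H % ws == 0 and W % ws == 0
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

x = np.arange(8 * 8 * 2, dtype=float).reshape(8, 8, 2)
wins = window_partition(x, 4)   # 4 windows of 16 tokens each
```

Attention cost then scales with window area rather than full image area, which is what makes such backbones viable on high-resolution inputs like the color-pattern imagery described above.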
Citations: 0
3D convolutional neural network based on spatial-spectral feature pictures learning for decoding motor imagery EEG signal.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2024-12-10 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1485640
Xiaoguang Li, Yaqi Chu, Xuejian Wu
Non-invasive brain-computer interfaces (BCI) hold great promise for neurorehabilitation: they are easy to use and require no surgery, particularly in the area of motor imagery electroencephalography (EEG). However, motor imagery EEG signals often have a low signal-to-noise ratio and limited spatial and temporal resolution, and traditional deep neural networks that focus only on the spatial and temporal features of EEG achieve relatively low decoding accuracy on motor imagery tasks. To address these challenges, this paper proposes a 3D Convolutional Neural Network (P-3DCNN) decoding method that jointly learns spatial-frequency feature maps from the frequency and spatial domains of the EEG signals. First, the Welch method is used to calculate the frequency-band power spectrum of the EEG, and a 2D matrix representing the spatial topology of the electrodes is constructed; the spatial-frequency representations are then generated through cubic interpolation of the temporal EEG data. Next, a 3DCNN network with 1D and 2D convolutional layers in series is designed to optimize the convolutional kernel parameters and effectively learn the spatial-frequency features of the EEG. Batch normalization and dropout are applied to improve training speed and classification performance. Finally, the proposed method is compared experimentally to various classic machine learning and deep learning techniques. The results show an average decoding accuracy of 86.69%, surpassing other advanced networks. This demonstrates the effectiveness of the approach for decoding motor imagery EEG and offers valuable insights for the development of BCI.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11667157/pdf/
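The first step of the pipeline — Welch band power per EEG channel — is standard and can be sketched with SciPy. The sampling rate, segment length, and band edges below are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.signal import welch

def band_power(sig, fs, band):
    """Band power of one EEG channel from Welch's PSD estimate: the
    per-channel number that would fill one cell of the spatial-frequency
    map described above."""
    f, psd = welch(sig, fs=fs, nperseg=fs)    # 1-s segments -> 1 Hz bins
    lo, hi = band
    mask = (f >= lo) & (f <= hi)
    return psd[mask].sum() * (f[1] - f[0])    # integrate PSD over the band

fs = 250
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)              # pure 10 Hz "alpha" rhythm
alpha = band_power(sig, fs, (8, 13))
beta = band_power(sig, fs, (14, 30))          # ~0 for a pure alpha tone
```

Repeating this per electrode and per band, then placing values into the 2D electrode-topology matrix, yields the spatial-frequency pictures the P-3DCNN consumes.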
Citations: 0
An improved graph factorization machine based on solving unbalanced game perception.
IF 2.6, Q4, Computer Science
Frontiers in Neurorobotics Pub Date : 2024-12-04 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1481297
Xiaoxia Xie, Yuan Jia, Tiande Ma
User perception of mobile games is crucial for improving user experience and thus enhancing game profitability, but the sparse data captured in games can lead to erratic model performance. This paper proposes a new method, the balanced graph factorization machine (BGFM), built on existing algorithms and designed to handle data imbalance and important high-dimensional features. The data categories are first balanced by Borderline-SMOTE oversampling, and the features are then represented naturally in a graph-structured way. Notably, the BGFM contains interaction mechanisms for aggregating beneficial features, with the results represented as edges in the graph. BGFM then combines factorization machine (FM) and graph neural network strategies to concatenate sequential feature interactions in the graph, with an attention mechanism assigning inter-feature weights. Experiments on the collected game-perception dataset compared the proposed BGFM with eight state-of-the-art models; it significantly surpassed all of them on AUC, precision, recall, and F-measure.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652536/pdf/
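BGFM builds on the factorization machine's pairwise-interaction term. That second-order term is well defined and worth sketching on its own (the graph and attention parts of BGFM are beyond a few lines); the toy feature vector and factor matrix below are illustrative:

```python
import numpy as np

def fm_interaction(x, V):
    """Second-order FM term sum_{i<j} <v_i, v_j> x_i x_j, computed with
    the standard O(k*n) identity 0.5 * sum_f ((Vx)_f^2 - (V^2 x^2)_f)
    instead of the naive O(n^2) double loop."""
    xv = x @ V                                          # (k,)
    return 0.5 * float((xv ** 2).sum() - ((x ** 2) @ (V ** 2)).sum())

x = np.array([1.0, 2.0, 0.0])          # only features 0 and 1 active
V = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [2.0, 3.0]])             # one 2-d latent factor per feature
# Single active pair: <v0, v1> * x0 * x1 = (1*0.5 + 0*1) * 1 * 2 = 1.0
val = fm_interaction(x, V)             # -> 1.0
```

The linear-time identity is what keeps FMs practical on the sparse, high-dimensional feature sets the abstract describes.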
Citations: 0