Neurocomputing: Latest Publications

Fast fixed-/preassigned-time synchronization of Clifford-valued neural networks for medical image encryption
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-12, DOI: 10.1016/j.neucom.2025.130984
Yanlin Zhang, Kit Ian Kou, Yanhui Zhang, Lizhi Liu
{"title":"Fast fixed-/preassigned-time synchronization of Clifford-valued neural networks for medical image encryption","authors":"Yanlin Zhang ,&nbsp;Kit Ian Kou ,&nbsp;Yanhui Zhang ,&nbsp;Lizhi Liu","doi":"10.1016/j.neucom.2025.130984","DOIUrl":"10.1016/j.neucom.2025.130984","url":null,"abstract":"<div><div>This paper aims to investigate the fixed-time (FXT) and preassigned-time (PAT) synchronization for Clifford-valued neural networks (CFVNNs) with mixed delays by improving a novel FXT stability theorem and using non-decomposing two-step method. First of all, a novel FXT stability theorem has been derived. Its time estimation formula and settling time are simpler and accurate compared to existing stability theorem. Then, based on this novel FXT stability theorem, the FXT synchronization of the CFVNNs is obtained by designing sample nonlinear controller and Lyapunov function and seeking the settling time. As a special case, the PAT synchronization of CFVNNs is investigated, in which the estimation of settling time is independent of any initial conditions of neural networks and any parameters of the designed controllers. Lastly, numerical examples demonstrate the effectiveness and superiority of the derived theoretical results. The research also extends to the practical domain, evaluating the impact of CFVNNs and the designed controllers on medical image encryption.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130984"},"PeriodicalIF":5.5,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144665753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
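For context on the settling-time claim, results of this kind typically refine bounds like the classical Polyakov-type fixed-time estimate; the sketch below shows that standard estimate only, not the paper's improved theorem, whose exact form is not given in the abstract.

```latex
% Classical fixed-time settling-time bound, shown for context only.
% If a Lyapunov function V satisfies
%   \dot{V}(t) \le -\alpha V(t)^{p} - \beta V(t)^{q},
% with \alpha, \beta > 0 and 0 < p < 1 < q, then V reaches zero within a time
% bounded independently of the initial condition V(0):
\[
  T \;\le\; T_{\max} \;=\; \frac{1}{\alpha\,(1-p)} \;+\; \frac{1}{\beta\,(q-1)} .
\]
```

Preassigned-time results go one step further, as the abstract describes: the bound is fixed in advance and does not depend on initial conditions or controller parameters.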
Sparse and robust alternating direction method of multipliers for large-scale classification learning
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-12, DOI: 10.1016/j.neucom.2025.130893
Huajun Wang, Wenqian Li, Yuanhai Shao, Hongwei Zhang
{"title":"Sparse and robust alternating direction method of multipliers for large-scale classification learning","authors":"Huajun Wang ,&nbsp;Wenqian Li ,&nbsp;Yuanhai Shao ,&nbsp;Hongwei Zhang","doi":"10.1016/j.neucom.2025.130893","DOIUrl":"10.1016/j.neucom.2025.130893","url":null,"abstract":"<div><div>Support vector machine (SVM) is a highly effective method in terms of classification learning. Nonetheless, when faced with large-scale classification problems, the high computational complexity involved can pose a significant obstacle. To tackle this problem, we establish a new trimmed squared loss SVM model known as TSVM. This model can be designed for achieving both sparsity and robustness at the same time. A novel optimality theory has been developed for the nonsmooth and nonconvex TSVM. Utilizing this new theory, the innovative fast alternating direction method of multipliers with low computational complexity and working set has been proposed to solve TSVM. Numerical tests show the effectiveness of the new method regarding the computational speed, number of support vector and classification accuracy, outperforming eight alternative top solvers. As an illustration, when tackling the real dataset with more than <span><math><mrow><mn>1</mn><msup><mrow><mn>0</mn></mrow><mrow><mn>7</mn></mrow></msup></mrow></math></span> instances, compared to seven other algorithms, our algorithm exhibited a 34 times enhancement in computation time, alongside achieving a 6.5% enhancement in accuracy and a 25 times decrease in support vector rates.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"652 ","pages":"Article 130893"},"PeriodicalIF":5.5,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144713275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
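As background for readers unfamiliar with the solver family, a minimal sketch of the generic two-block ADMM template that methods like this build on is shown below; it is not the paper's TSVM algorithm, and the toy proximal operators in the usage example are assumptions for illustration.

```python
import numpy as np

def admm_consensus(prox_f, prox_g, dim, rho=1.0, iters=200):
    """Generic two-block ADMM for  min_w f(w) + g(z)  subject to  w = z.

    prox_f(v, rho) / prox_g(v, rho) return the minimizer of f (resp. g)
    plus the quadratic penalty (rho/2)*||. - v||^2.  Schematic skeleton only.
    """
    w = np.zeros(dim)
    z = np.zeros(dim)
    u = np.zeros(dim)               # scaled dual variable
    for _ in range(iters):
        w = prox_f(z - u, rho)      # w-update
        z = prox_g(w + u, rho)      # z-update
        u = u + w - z               # dual update on the consensus constraint
    return z

# Toy usage: minimise 0.5*||w - c||^2 + lam*||w||_1 via consensus splitting.
c, lam = np.array([3.0, -0.2, 0.5]), 1.0
prox_f = lambda v, rho: (c + rho * v) / (1.0 + rho)                        # quadratic prox
prox_g = lambda v, rho: np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # soft-threshold
print(admm_consensus(prox_f, prox_g, dim=3))   # approx. [2.0, 0.0, 0.0]
```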
Persistent spectral based machine learning method for autism spectrum disorder classification
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-12, DOI: 10.1016/j.neucom.2025.130780
Xudong Zhang, Liyuan Ma, Yaru Gao, Yunge Zhang, Fengling Li, Fengchun Lei
{"title":"Persistent spectral based machine learning method for autism spectrum disorder classification","authors":"Xudong Zhang ,&nbsp;Liyuan Ma ,&nbsp;Yaru Gao ,&nbsp;Yunge Zhang ,&nbsp;Fengling Li ,&nbsp;Fengchun Lei","doi":"10.1016/j.neucom.2025.130780","DOIUrl":"10.1016/j.neucom.2025.130780","url":null,"abstract":"<div><h3>Background:</h3><div>Autism spectrum disorder (ASD) is a widespread and intricate neurodevelopmental condition. The increasing prevalence of ASD creates a very significant burden on both society and families. Functional magnetic resonance imaging (fMRI) contributes to a deeper understanding of ASD while also facilitating the development of early diagnosis and effective treatment strategies. This study aims to provide new and more reliable tools for early diagnosis of ASD and gain deeper insights into its neural mechanisms through the combination of topology and persistent spectral theory with functional connectivity.</div></div><div><h3>Methods:</h3><div>We proposed a persistent spectral machine learning model based on the simplicial complex for characterizing the functional connectivity in the Autism Brain Imaging Data Exchange I dataset. Simplicial complexes were used to characterize the functional connectivity with coefficients no less than 0.3. We arranged a filtration value for each simplex and persistent Laplacian matrices were calculated through a filtration process. The corresponding persistent attributes, after removing covariates, were used as inputs of classifiers.</div></div><div><h3>Results:</h3><div>Achieving an accuracy of 87.5%, our model outperformed other models that applied functional connectivity, similar sample sizes and the same preprocessing pipelines. We found that the numbers and distribution of connected components and loops of the global functional connectivity are important for classification.</div></div><div><h3>Conclusions:</h3><div>This study provided a feature extraction method based on persistent spectral theory for ASD research. Our model offers a different perspective on the research of related conditions and has great and notable potential in diagnosis.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130780"},"PeriodicalIF":5.5,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144632464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
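To make the filtration idea concrete, here is a minimal sketch that computes graph-Laplacian spectra of a connectivity matrix across a sequence of thresholds. It illustrates only the 1-skeleton (graph) case; the paper's persistent Laplacians are defined on simplicial complexes with higher-order simplices, and the threshold grid and random matrix below are illustrative assumptions.

```python
import numpy as np

def laplacian_spectra_over_filtration(fc, thresholds):
    """Graph-Laplacian spectra of a connectivity matrix across a threshold
    filtration (illustration of the filtration idea only, not the paper's
    full simplicial persistent Laplacian).

    fc         : (n, n) symmetric functional-connectivity matrix
    thresholds : decreasing edge-weight cutoffs acting as filtration values
    """
    spectra = []
    for t in thresholds:
        adj = (np.abs(fc) >= t).astype(float)    # keep edges at or above the cutoff
        np.fill_diagonal(adj, 0.0)
        lap = np.diag(adj.sum(axis=1)) - adj     # combinatorial graph Laplacian
        spectra.append(np.linalg.eigvalsh(lap))  # zero eigenvalues count components
    return spectra

# Illustrative call on a random symmetric matrix, filtering down to 0.3.
rng = np.random.default_rng(0)
m = rng.uniform(-1, 1, size=(10, 10))
fc = (m + m.T) / 2
print([s.round(2) for s in laplacian_spectra_over_filtration(fc, [0.9, 0.6, 0.3])])
```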
ECF-DETR: Enhanced Cross-layer Fusion Transformer for Pollen Detection with IoU and Classification Guided Evaluation
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-12, DOI: 10.1016/j.neucom.2025.130892
Baokai Zu, Xu Li, Yafang Li, Hongyuan Wang, Jianqiang Li
{"title":"ECF-DETR: Enhanced Cross-layer Fusion Transformer for Pollen Detection with IoU and Classification Guided Evaluation","authors":"Baokai Zu,&nbsp;Xu Li,&nbsp;Yafang Li,&nbsp;Hongyuan Wang,&nbsp;Jianqiang Li","doi":"10.1016/j.neucom.2025.130892","DOIUrl":"10.1016/j.neucom.2025.130892","url":null,"abstract":"<div><div>Pollen allergy is one of the most common seasonal diseases, often triggering a variety of symptoms that severely affect both the physical and mental health of individuals. Therefore, rapid and accurate pollen detection is of great importance for preventing allergic reactions and protecting public health. However, because of the complexity of the pollen sampling process, the captured images often contain various impurities such as plant debris and dust. In addition, pollen grains are typically small, irregular in shape, and exhibit significant individual differences, making it difficult for existing models to effectively extract both global and local features, which limits detection performance. The Transformer architecture, with its powerful long-range dependency modeling capabilities, offers a promising solution to these challenges. To address these issues, this paper introduces a Transformer-based pollen detection framework named ECF-DETR. This method tackles key challenges such as limited training data, high annotation costs, and the mismatch between classification confidence and bounding box precision by introducing two core components: the Enhanced Cross-layer Location Information Fusion (E-CLIF) mechanism and the IoU and Classification Guided Evaluation (IoCE) strategy. E-CLIF adopts a hybrid matching strategy to increase the number of positive samples and fuses multi-layer spatial features to alleviate data scarcity. Meanwhile, IoCE jointly considers classification scores and IoU values to effectively mitigate the inconsistency between classification and localization. Extensive experiments conducted on a self-constructed pollen dataset in Beijing demonstrate that the proposed ECF-DETR achieves an Average Precision (<span><math><mrow><mi>A</mi><mi>P</mi></mrow></math></span>) of 78.8%, outperforming the baseline DETR with the Improved deNoising anchOr box (DINO) by 1.0%, and achieving a 0.3% gain over the advanced Align-DETR framework, respectively. These findings confirm the feasibility and effectiveness of Transformer-based methods for practical pollen detection applications.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"650 ","pages":"Article 130892"},"PeriodicalIF":5.5,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144655024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
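The abstract does not spell out the IoCE formula, but the general idea of jointly weighting classification confidence and localization quality can be sketched as below; the geometric weighting and the alpha parameter are assumptions for illustration, not the paper's definition.

```python
def joint_cls_iou_score(cls_score, iou, alpha=0.5):
    """Rank a detection by a single quality score that rewards agreement
    between classification confidence and box overlap (IoU).  Hypothetical
    weighting; ECF-DETR's actual IoCE criterion is not given in the abstract.
    """
    return (cls_score ** alpha) * (iou ** (1.0 - alpha))

# A confidently classified but poorly localized box ranks below a box
# that is reasonable on both criteria:
print(joint_cls_iou_score(0.95, 0.40))   # ~0.62
print(joint_cls_iou_score(0.80, 0.85))   # ~0.82
```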
Multi-Layer federated learning for bit-width and data heterogeneity in cloud-edge systems
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-12, DOI: 10.1016/j.neucom.2025.130890
Baoxue Li, Zoujing Yao, Chunhui Zhao
{"title":"Multi-Layer federated learning for bit-width and data heterogeneity in cloud-edge systems","authors":"Baoxue Li,&nbsp;Zoujing Yao,&nbsp;Chunhui Zhao","doi":"10.1016/j.neucom.2025.130890","DOIUrl":"10.1016/j.neucom.2025.130890","url":null,"abstract":"<div><div>In edge intelligence scenarios, quantized models often vary in bit-width due to hardware diversity. Inconsistent model quantization and non-IID local data create dual heterogeneity, which causes significant challenges for federated training across edges. To overcome these challenges, we introduce a novel method (multi-layer Federated Learning with Bit-width Adaptivity) for dual heterogeneity, which facilitates collaborative optimization across differently quantized models. Our method establishes multiple model layers at edges, enabling alternate execution of collaborative and local updates. During collaborative updates, edges synchronize with other edges by selecting suitable models with the same bit-width from their candidate sets. To reduce communication, a Contact Map is designed at the cloud, tracking model selection and guiding efficient collaboration. We theoretically analyze the reduction of communication burden by the Contact Map. Experimental evaluations show that our method outperforms existing approaches in terms of model accuracy in dual heterogeneous settings.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130890"},"PeriodicalIF":5.5,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144632416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
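A minimal sketch of the bit-width matching step implied by the abstract follows: group edge devices by model bit-width and record each collaboration in a cloud-side contact map. The random peer selection and dictionary-based contact map are assumptions; the paper's actual selection rule and update schedule are not specified in the abstract.

```python
import random
from collections import defaultdict

def pick_collaborators(edge_bitwidths, contact_map=None, seed=0):
    """Pair each edge with a peer holding a model of the same bit-width and
    log the choice in a contact map kept at the cloud (illustrative sketch)."""
    random.seed(seed)
    contact_map = contact_map if contact_map is not None else defaultdict(list)
    by_bits = defaultdict(list)
    for edge, bits in edge_bitwidths.items():
        by_bits[bits].append(edge)
    pairs = {}
    for edge, bits in edge_bitwidths.items():
        peers = [e for e in by_bits[bits] if e != edge]
        if peers:                                   # skip edges with no same-bit-width peer
            pairs[edge] = random.choice(peers)
            contact_map[edge].append(pairs[edge])   # cloud tracks who synced with whom
    return pairs, contact_map

pairs, cmap = pick_collaborators({"edge0": 8, "edge1": 8, "edge2": 4, "edge3": 4})
print(pairs)   # e.g. {'edge0': 'edge1', 'edge1': 'edge0', 'edge2': 'edge3', 'edge3': 'edge2'}
```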
A new dataset, model, and benchmark for lightweight and real-time underwater object detection
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-11, DOI: 10.1016/j.neucom.2025.130891
Huilin Ge, Pan Sun, Yu Lu
{"title":"A new dataset, model, and benchmark for lightweight and real-time underwater object detection","authors":"Huilin Ge ,&nbsp;Pan Sun ,&nbsp;Yu Lu","doi":"10.1016/j.neucom.2025.130891","DOIUrl":"10.1016/j.neucom.2025.130891","url":null,"abstract":"<div><div>Underwater object detection (UOD) is crucial for monitoring marine ecosystems, underwater robotics, environmental protection, and autonomous underwater vehicles (AUVs). Despite progress, many models struggle under real-world conditions due to poor visibility, dynamic lighting, and domain shifts. Traditional methods like Faster R-CNN are computationally expensive, while YOLO-based models suffer in challenging underwater scenarios. The scarcity of large-scale annotated datasets further limits model generalization. To address these challenges, we introduce UOD-SZTU-2025, a new dataset of 3,133 high-quality underwater images, sourced primarily from video platforms. The dataset is used in EFCWM (Enhanced Feature Correction and Weighting Module) to extract and refine a feature material library for detection targets. We present <strong>EFCWM-Mamba-YOLO</strong>, a novel lightweight and real-time underwater object detector that integrates enhanced feature correction with state-space modeling to improve detection accuracy and robustness in complex underwater environments. The EFCWM module incorporates domain adaptation for improved robustness. Additionally, a two-stage training strategy first trains on a source domain and fine-tunes with limited target domain samples to enhance generalization. Experiments show our approach surpasses existing lightweight UOD models in accuracy, real-time performance, and robustness. Our dataset, model, and benchmark establish a strong foundation for future UOD research.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130891"},"PeriodicalIF":5.5,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144632417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
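The two-stage training strategy mentioned in the abstract can be sketched as a plain pretrain-then-fine-tune loop; the epoch counts and the train_one_epoch helper below are placeholders, not the paper's actual schedule.

```python
def two_stage_train(model, source_loader, target_loader, train_one_epoch,
                    src_epochs=50, tgt_epochs=5):
    """Stage 1: train on the source domain.  Stage 2: fine-tune on a small
    set of target-domain samples to improve generalization (sketch only)."""
    for _ in range(src_epochs):
        train_one_epoch(model, source_loader)   # source-domain training
    for _ in range(tgt_epochs):
        train_one_epoch(model, target_loader)   # limited target-domain fine-tuning
    return model
```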
A review of double compression detection for digital multimedia
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-11, DOI: 10.1016/j.neucom.2025.130983
Tanfeng Sun, Xiao Han, Qiang Xu, Xing Yan, Yueneng Wang
{"title":"A review of double compression detection for digital multimedia","authors":"Tanfeng Sun,&nbsp;Xiao Han,&nbsp;Qiang Xu,&nbsp;Xing Yan,&nbsp;Yueneng Wang","doi":"10.1016/j.neucom.2025.130983","DOIUrl":"10.1016/j.neucom.2025.130983","url":null,"abstract":"<div><div>The rapid advancement of AI-driven multimedia manipulation has created an urgent need for more sophisticated digital forensics solutions. Current detection methods, while effective against specific tampering types, suffer from limited generalizability across diverse manipulation techniques. To address this challenge, researchers have developed Double Compression Detection (DCD) as a universal approach through compression-domain analysis. This review presents the comprehensive analysis of DCD techniques, systematically evaluating cutting-edge techniques for audio, image, and video content forensics. The pros and cons of existing DCD schemes are summarized for the first time from the perspective of generalization and effectiveness in this review. The emerging trends and fundamental limitations of existing researches are critically examined to guide future research directions in DCD.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130983"},"PeriodicalIF":5.5,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144634489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ML-GAN: Multi-level Text-driven Fine-grained Image Generation using Generative Adversarial Network
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-11, DOI: 10.1016/j.neucom.2025.130851
Hong Zhao, He Wang, Yongjuan Yang
{"title":"ML-GAN: Multi-level Text-driven Fine-grained Image Generation using Generative Adversarial Network","authors":"Hong Zhao,&nbsp;He Wang,&nbsp;Yongjuan Yang","doi":"10.1016/j.neucom.2025.130851","DOIUrl":"10.1016/j.neucom.2025.130851","url":null,"abstract":"<div><div>Text-to-image generation aims to convert input text into semantically accurate and visually realistic images. Existing methods typically generate a general outline of an image using sentence-level text and then refine it with word-level text. However, this approach of jumping directly from coarse-grained to fine-grained text features makes it challenging for the model to refine images accurately. To address this issue, we propose a Multi-level Text-driven Fine-grained Image Generation using Generative Adversarial Networks (ML-GAN). The model leverages different hierarchical levels of text information to construct and optimize generated images progressively. We design a Dual-level Text Parallel Fusion Module (DPFM) and a Triple-level Text Parallel Fusion Module (TPFM) to precisely adjust and optimize image details by utilizing text information at different levels. Additionally, to enhance semantic consistency between text and generated images, we introduce a Cross-modal Attention Fusion Module in the discriminator to improve its ability to recognize text-image matching, thereby guiding the generator to produce images that better match text content. Compared to baseline models, our proposed model achieves improvements of 19.10% and 12.99% in FID scores on the CUB and COCO datasets, respectively. This validates the effectiveness of the multi-level text-to-image transformation approach in enhancing the quality of generated images.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130851"},"PeriodicalIF":5.5,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144634487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
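For reference, the generic scaled dot-product cross-attention that cross-modal fusion modules of this kind are usually built on can be sketched as follows; the abstract does not describe the module's internals, so the projection setup here is an assumption rather than ML-GAN's actual design.

```python
import numpy as np

def cross_attention(img_feats, txt_feats, w_q, w_k, w_v):
    """Scaled dot-product cross-attention from image features (queries) to
    text features (keys/values); a generic building block, not ML-GAN's
    specific Cross-modal Attention Fusion Module.

    img_feats: (n_img, d), txt_feats: (n_txt, d), w_q / w_k / w_v: (d, d_k).
    """
    q, k, v = img_feats @ w_q, txt_feats @ w_k, txt_feats @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])          # (n_img, n_txt) attention logits
    scores -= scores.max(axis=1, keepdims=True)     # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v                                 # text-conditioned image features
```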
Latent structure-oriented asymmetric hashing for cross-modal retrieval
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-11, DOI: 10.1016/j.neucom.2025.130938
Jiajun Ma
{"title":"Latent structure-oriented asymmetric hashing for cross-modal retrieval","authors":"Jiajun Ma","doi":"10.1016/j.neucom.2025.130938","DOIUrl":"10.1016/j.neucom.2025.130938","url":null,"abstract":"<div><div>Cross-modal hashing has attracted considerable attention in cross-modal retrieval due to its excellent computational efficiency and retrieval performance. Most existing methods aim to map multimodal data into a common representation space where either semantic similarity or instance similarity is preserved. However, these methods do not consider the potential clustering structure of instances that characterizes sample separability, resulting in degraded retrieval performance. Furthermore, capturing the consistent instance similarity by effectively fusing similarities of different modalities remains an essential problem to be addressed. To tackle these issues, this paper proposes a novel latent structure-oriented asymmetric cross-modal Hashing method (LSOAH) for cross-modal retrieval. Specifically, LSOAH formulates the common representation learning with orthogonal decomposition, where each modality-specific instance is projected and decomposed into a modality-specific base matrix and a common cluster indicator matrix, and where the indicator matrix is concatenated with the hash code via an asymmetric mechanism. Additionally, we utilize Hadamard product on graphs from different modalities to explore the consistent instance similarity, and embed it in the common representation. Finally, a unified objective function is presented to enable the simultaneous exploration of the cluster structure, instance similarity and semantic similarity, as well as the hash code learning, upon which an alternating optimization algorithm is developed with theoretically proven convergence. Experimental results on three benchmark datasets confirm the superiority of the proposed LSOAH for cross-modal retrieval.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130938"},"PeriodicalIF":5.5,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144634493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
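The Hadamard-product fusion of modality graphs described in the abstract amounts to an elementwise product of the two similarity matrices, so an instance pair counts as similar only when both modalities agree; a minimal sketch is below, with the final rescaling being an assumption rather than part of the paper's formulation.

```python
import numpy as np

def fused_instance_similarity(sim_img, sim_txt):
    """Fuse image- and text-modality similarity graphs with an elementwise
    (Hadamard) product; the [0, 1] rescaling is illustrative only."""
    fused = sim_img * sim_txt               # agreement required across modalities
    return fused / (fused.max() + 1e-12)
```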
PiMAN: A Physics-informed Motion Prediction Network using sEMG signal features for human movement parameters
IF 5.5, CAS Zone 2, Computer Science
Neurocomputing, Pub Date: 2025-07-11, DOI: 10.1016/j.neucom.2025.130884
Rajnish Kumar, Anand Gupta, Suriya Prakash Muthukrishnan, Lalan Kumar, Sitikantha Roy
{"title":"PiMAN: A Physics-informed Motion Prediction Network using sEMG signal features for human movement parameters","authors":"Rajnish Kumar ,&nbsp;Anand Gupta ,&nbsp;Suriya Prakash Muthukrishnan ,&nbsp;Lalan Kumar ,&nbsp;Sitikantha Roy","doi":"10.1016/j.neucom.2025.130884","DOIUrl":"10.1016/j.neucom.2025.130884","url":null,"abstract":"<div><div>Early and accurate prediction of human movement parameters is critical for assistive robotics to synchronize effectively with a user’s intent. Surface electromyography (sEMG) signals offer a unique advantage by capturing neuromuscular activity prior to visible motion; however, existing model-based and model-free approaches often suffer from limited generalizability, delayed response, or poor biomechanical interpretability. To address these limitations, we propose PiMAN (Physics-informed Motion Anticipation Network), a deep learning framework that combines an attention-based bidirectional gated recurrent unit (BiGRU) architecture with physics constraints derived from the inverse dynamics. The model incorporates subject-specific anthropometric hyperparameters into the inverse dynamics formulation, enabling biomechanically consistent torque estimation across individuals. PiMAN predicts a comprehensive set of joint parameters, including angles, velocities, accelerations, external payloads, and torques, 48–96 ms before visible movement onset, from sEMG windows aligned with electromechanical delay range. This supports real-time control in assistive and neuroprosthetic systems. The model was trained and evaluated on five test subjects under three external load conditions (0 kg, 2 kg, and 4 kg), using both intra- and inter-subject scenarios. It achieved low RMSE (<span><math><mo>≤</mo></math></span>1.3) and high correlation (up to 0.93) across all outputs. Compared to purely data-driven baselines and physics-informed variants lacking attention, PiMAN consistently outperforms in joint torque and load estimation, particularly under higher-load conditions. In addition, PiMAN generalizes to temporally varying load transitions without retraining, and treats external mass as a continuous variable to facilitate seamless integration into inverse dynamics. These findings position PiMAN as a scalable, generalizable, and real-time-ready framework for anticipatory motion prediction in wearable assistive technologies.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130884"},"PeriodicalIF":5.5,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144672050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
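A minimal sketch of how a physics-informed loss of the kind described can combine supervised torque error with an inverse-dynamics consistency term is given below; the weighting lam and the exact residual PiMAN uses are assumptions, and inverse_dynamics stands in for the subject-specific model mentioned in the abstract.

```python
import numpy as np

def physics_informed_loss(tau_pred, tau_true, q, qd, qdd, inverse_dynamics, lam=0.1):
    """Supervised torque error plus a penalty keeping predictions consistent
    with tau = ID(q, qd, qdd) from an inverse-dynamics model (sketch only)."""
    data_loss = np.mean((tau_pred - tau_true) ** 2)
    tau_physics = inverse_dynamics(q, qd, qdd)        # subject-specific inverse dynamics
    physics_loss = np.mean((tau_pred - tau_physics) ** 2)
    return data_loss + lam * physics_loss
```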