{"title":"A DLM watermarking method based on a spatiotemporal chaos with DNA computing","authors":"Dehui Wang , Yingqian Zhang , Qiang Wei , Yumei Xue , Shuang Zhou","doi":"10.1016/j.neucom.2025.130981","DOIUrl":"10.1016/j.neucom.2025.130981","url":null,"abstract":"<div><div>Intellectual property (IP) protection for deep learning models (DLMs) remains a research hotspot, and the main solution is to give each model a universal and useful identity, analogous to the identification systems in human society. Recently, black-box watermarking techniques have emerged as the primary option for IP protection; however, small key spaces and fraudulent ownership claim attacks remain unresolved. In this paper, we propose a black-box watermarking method based on a spatiotemporal chaotic system, Arnold Coupled Logistic Map Lattices (ACLML), with DNA permutation. Firstly, the ACLML provides favorable chaotic properties to the trigger set, making it unpredictable under machine learning attacks and statistical inference. Secondly, the motion of the ACLML is controlled by particular parameters, which provide a large key space and assign each model a unique identifier, meeting the commercialization needs of DLMs. Thirdly, the trigger samples and the chaotic values that build the trigger set are mutually independent, guaranteeing the security of the watermark. Theoretical analysis indicates that our scheme is secure and practical. We also compared it with a previous method; the experimental results demonstrate that our method shows better robustness against fine-tuning attacks and overwriting attacks. Moreover, it also effectively suppresses fraudulent ownership claim attacks.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"652 ","pages":"Article 130981"},"PeriodicalIF":5.5,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144713223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continually learn to map visual concepts to language models in resource-constrained environments","authors":"Clea Rebillard , Julio Hurtado , Andrii Krutsylo , Lucia Passaro , Vincenzo Lomonaco","doi":"10.1016/j.neucom.2025.131013","DOIUrl":"10.1016/j.neucom.2025.131013","url":null,"abstract":"<div><div>Continually learning from non-independent and identically distributed (non-i.i.d.) data poses a significant challenge in deep learning, particularly in resource-constrained environments. Visual models trained via supervised learning often suffer from overfitting, catastrophic forgetting, and biased representations when faced with sequential tasks. In contrast, pre-trained language models demonstrate greater robustness in managing task sequences due to their generalized knowledge representations, albeit at the cost of high computational resources. Leveraging this advantage, we propose a novel learning strategy, Continual Visual Mapping (CVM), which continuously maps visual representations into a fixed knowledge space derived from a language model. By anchoring learning to this fixed space, CVM enables the training of small, efficient visual models, making it particularly suited for scenarios where adapting large pre-trained visual models is prohibitive in terms of computation or data. Empirical evaluations across five benchmarks demonstrate that CVM consistently outperforms state-of-the-art continual learning methods, showcasing its potential to enhance generalization and mitigate challenges in resource-constrained continual learning settings.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"652 ","pages":"Article 131013"},"PeriodicalIF":5.5,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144713276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Further results on global stability of Clifford-valued neural networks subject to time-varying delays","authors":"N. Manoj , R. Sriraman , R. Gurusamy , Yilun Shang","doi":"10.1016/j.neucom.2025.130886","DOIUrl":"10.1016/j.neucom.2025.130886","url":null,"abstract":"<div><div>This paper investigates the global exponential and asymptotic stability of Clifford-valued neural networks (CLVNNs) with multiple time-varying delays. Due to the non-commutative nature of Clifford algebra, analyzing the stability and other dynamical properties of CLVNNs becomes challenging. To address this issue, we separate the CLVNNs into equivalent real-valued neural networks (RVNNs). This separation simplifies the study of CLVNNs through their RVNN components. By constructing suitable Lyapunov–Krasovskii functionals (LKFs) and applying inequality techniques, we establish several sufficient conditions that guarantee the existence and uniqueness of the equilibrium point (EP), as well as the global exponential and asymptotic stability of the considered neural networks (NNs). These conditions are expressed as linear matrix inequalities (LMIs), which can be efficiently verified using the MATLAB LMI toolbox. To validate the analytical results, we present three numerical examples. Additionally, we propose a novel color image encryption algorithm and demonstrate its effectiveness through simulation results and detailed performance analysis.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 130886"},"PeriodicalIF":5.5,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144678905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Epistemic graph: A plug-and-play module for hybrid representation learning","authors":"Jin Yuan , Shikai Chen , Yicheng Jiang , Yang Zhang , Zhongchao Shi , Jianping Fan , Yong Rui","doi":"10.1016/j.neucom.2025.131021","DOIUrl":"10.1016/j.neucom.2025.131021","url":null,"abstract":"<div><div>In recent years, deep models have achieved remarkable success in various vision tasks. However, their performance heavily relies on large training datasets. In contrast, humans exhibit hybrid learning, seamlessly integrating structured knowledge for cross-domain recognition or relying on a small number of data samples for few-shot learning. Motivated by this human-like epistemic process, we aim to extend hybrid learning to computer vision tasks by integrating structured knowledge with data samples for more effective representation learning. Nevertheless, this extension faces significant challenges due to the substantial gap between structured knowledge and the deep features learned from data samples, encompassing both dimensionality and knowledge granularity. In this paper, a novel Epistemic Graph Layer (EGLayer) is introduced to enable hybrid learning, enhancing the exchange of information between deep features and a structured knowledge graph. Our EGLayer is composed of three major parts: a local graph module, a query aggregation model, and a novel correlation alignment loss function designed to emulate human epistemic ability. Serving as a plug-and-play module that can replace the standard linear classifier, EGLayer significantly improves the performance of deep models. Extensive experiments demonstrate that EGLayer can greatly enhance representation learning for the tasks of cross-domain recognition and few-shot learning, and the visualization of knowledge graphs can aid in model interpretation.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"652 ","pages":"Article 131021"},"PeriodicalIF":5.5,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review of advancements in low-light image enhancement using deep learning","authors":"Fangxue Liu , Lei Fan","doi":"10.1016/j.neucom.2025.131052","DOIUrl":"10.1016/j.neucom.2025.131052","url":null,"abstract":"<div><div>In low-light environments, the performance of computer vision algorithms often deteriorates significantly, adversely affecting key vision tasks such as segmentation, detection, and classification. With the rapid advancement of deep learning, its application to low-light image processing has attracted widespread attention and seen significant progress in recent years. However, there remains a lack of comprehensive surveys that systematically examine how recent deep-learning-based low-light image enhancement methods function and evaluate their effectiveness in enhancing downstream vision tasks. To address this gap, this review provides a detailed elaboration on how various recent approaches (from 2020 onward) operate and their enhancement mechanisms, supplemented with clear illustrations. It also investigates the impact of different enhancement techniques on subsequent vision tasks, critically analyzing their strengths and limitations. Our review found that image enhancement improved the performance of downstream vision tasks to varying degrees. Although supervised methods often produced images with high perceptual quality, they typically yielded only modest improvements in vision tasks. In contrast, zero-shot learning, despite achieving lower scores on image quality metrics, consistently boosted performance across various vision tasks. These findings suggest a disconnect between image quality metrics and those evaluating vision task performance. Additionally, unsupervised domain adaptation techniques demonstrated significant gains in segmentation tasks, highlighting their potential in practical low-light scenarios where labelled data is scarce. Observed limitations of existing studies are analysed, and directions for future research are proposed. This review serves as a useful reference for selecting low-light image enhancement techniques and optimizing vision task performance in low-light conditions.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"652 ","pages":"Article 131052"},"PeriodicalIF":5.5,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UAV formation control based on ensemble reinforcement learning","authors":"Kaifeng Wu , Lei Liu , Chengqing Liang , Lei Li","doi":"10.1016/j.neucom.2025.131056","DOIUrl":"10.1016/j.neucom.2025.131056","url":null,"abstract":"<div><div>Based on the frameworks of the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) and Deep Deterministic Policy Gradient (DDPG) algorithms, this paper investigates the UAV formation control problem. To address the convergence difficulties inherent in multi-agent algorithms, curriculum reinforcement learning is applied during the training phase to decompose the task into incremental stages. A progressively hierarchical reward function tailored to each stage is designed, significantly reducing the training complexity of MADDPG. In the inference phase, an ensemble reinforcement learning strategy is adopted to enhance the accuracy of UAV formation control. When the UAVs approach their target positions, the control strategy switches from MADDPG to the DDPG algorithm, thus achieving more efficient and precise control. Through ablation and comparative experiments in a self-developed Software-in-the-Loop (SITL) simulation environment, the effectiveness and stability of the ensemble reinforcement learning algorithm in multi-agent scenarios are validated. Finally, real-world experiments further verify the practical applicability of the proposed algorithm (<span><span>https://b23.tv/7ceLpLe</span><svg><path></path></svg></span>).</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 131056"},"PeriodicalIF":5.5,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144686692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clip4Vis: Parameter-free fusion for multimodal video recognition","authors":"Qishi Zheng , Mengnan He , Jiuqin Duan , Gai Luo , Pengcheng Wu , Yimin Han , Qingyue Min , Peng Chen , Ping Zhang","doi":"10.1016/j.neucom.2025.131046","DOIUrl":"10.1016/j.neucom.2025.131046","url":null,"abstract":"<div><div>Multimodal video recognition has emerged as a central focus due to its ability to effectively integrate information from diverse modalities, such as video and text. However, traditional fusion methods typically rely on trainable parameters, increasing model computational costs. To address these challenges, this paper presents <strong>Clip4Vis</strong>, a zero-parameter progressive fusion framework that combines video and text features using a shallow-to-deep approach. The shallow and deep fusion steps are implemented through two key modules: (i) <strong>Cross-Model Attention</strong>, a module that enhances video embeddings with textual information, enabling adaptive focus on keyframes to improve action representation in the video; and (ii) <strong>Joint Temporal-Textual Aggregation</strong>, a module that integrates video embeddings and word embeddings by jointly utilizing temporal and textual information, enabling global information aggregation. Extensive evaluations on five widely used video datasets demonstrate that our method achieves competitive performance in general, zero-shot, and few-shot video recognition. Our best model, using the released CLIP model, achieves a state-of-the-art accuracy of 87.4% for general recognition on Kinetics-400 and 75.3% for zero-shot recognition on Kinetics-600. The code will be released later.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"652 ","pages":"Article 131046"},"PeriodicalIF":5.5,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144713277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing syntactic and semantic features via TextGINConv and Kolmogorov–Arnold networks for aspect-based sentiment analysis","authors":"Xiaoru Li, Yuxia Lei","doi":"10.1016/j.neucom.2025.131037","DOIUrl":"10.1016/j.neucom.2025.131037","url":null,"abstract":"<div><div>Aspect-based sentiment analysis identifies the sentiment polarity of a given aspect in a sentence. Recent advances have demonstrated the effectiveness of combining syntactic dependency structures with graph convolutional networks. However, traditional graph convolutional networks typically use a simple message-passing mechanism that describes relationships between nodes only through the adjacency matrix, often ignoring messages carried by edge features and performing sentiment analysis without considering the specific semantic information of different aspects. In this paper, we propose a novel syntactic and semantic enhancement network model for aspect-based sentiment analysis (ESSGKA). Specifically, we combine a self-attention mechanism with an aspect-oriented attention mechanism, enabling the simultaneous learning of aspect-related semantics and the overall semantics of the sentence. TextGINConv focuses more on edge features, which are leveraged to facilitate dense message passing. To enhance the fusion of these two distinct feature messages, we propose a novel gated fusion framework based on Kolmogorov–Arnold networks. Extensive experiments on four publicly available datasets show that our model is more competitive than state-of-the-art models.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 131037"},"PeriodicalIF":5.5,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144685489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal representation learning with hierarchical knowledge decomposition for cancer survival analysis","authors":"Xulin Yang , Hang Qiu","doi":"10.1016/j.neucom.2025.131053","DOIUrl":"10.1016/j.neucom.2025.131053","url":null,"abstract":"<div><div>Accurate survival analysis of cancer patients provides an important basis for formulating personalized treatment plans. Recent studies illustrate that integrating multimodal information, including clinical diagnostic data, genomic features, and whole-slide images (WSIs), can significantly enhance the performance of prognostic assessment models. However, most existing multimodal survival analysis methods focus excessively on extracting shared inter-modal features while failing to effectively mine modality-specific biological information, resulting in inadequate acquisition of comprehensive patient multimodal representations. Furthermore, how to extract prognosis-related tissue microenvironment features from ultra-high-resolution WSIs remains an open challenge. To address these issues, a multimodal representation learning framework with hierarchical knowledge decomposition (MRL-HKD) is proposed for cancer survival analysis. MRL-HKD transforms multimodal representation learning into a set partitioning problem within multimodal knowledge spaces, and employs a hierarchical multimodal knowledge decomposition module to decouple complex inter-modal relationships. Meanwhile, to address the challenge of high-dimensional pathological image feature extraction, a gated attention mechanism-based multimodal patch attention network is designed. Performance comparison experiments on four cancer datasets demonstrate that MRL-HKD significantly outperforms state-of-the-art methods. Our study demonstrates the potential of gated attention mechanisms and hierarchical knowledge decomposition in multimodal survival analysis, and provides an effective tool for cancer prognosis prediction. The source code will be open-sourced at <span><span>https://github.com/yangxulin/MRL-HKD</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"652 ","pages":"Article 131053"},"PeriodicalIF":5.5,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144704797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fault-tolerant model predictive control approach based on deep operator network","authors":"Yun-He Zhang , Xiao-Jian Li","doi":"10.1016/j.neucom.2025.131048","DOIUrl":"10.1016/j.neucom.2025.131048","url":null,"abstract":"<div><div>This paper addresses the fault-tolerant control problem for unknown nonlinear systems within the model predictive control (MPC) framework. A modified deep operator network is designed to learn system dynamics from input-output data. However, in the presence of actuator faults, the input data containing fault information cannot be acquired directly. To overcome this difficulty, a mode simulation method is presented, based on adequate and uniform sampling of virtual fault information in a hyperspace. In this way, the system responses in different faulty modes are simulated to ensure excellent prediction accuracy of the modified network. Moreover, an improved fault estimation method is designed using the historical input-output data of the modified network. Then, based on the fault estimates, the design problem of the fault-tolerant MPC controller is converted into a constrained optimization problem, which is further solved using an adaptive gradient descent method. Finally, two simulation experiments are provided to illustrate the validity of the proposed approach.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"651 ","pages":"Article 131048"},"PeriodicalIF":5.5,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144686697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}