Neurocomputing, Volume 617, Article 128872 | Pub Date: 2024-11-26 | DOI: 10.1016/j.neucom.2024.128872
"Dynamic event triggering output feedback synchronization for Markov jump neural networks with mode detection information"
Authors: Cheng Fan, Ling Jin, Lei Su, Xihong Fei
Abstract: This article investigates the synchronization control problem of discrete-time Markov jump neural networks. Because of the possible mismatch of controller mode information and the difficulty in obtaining neuron information in practical environments, a hidden Markov model is introduced, which contains a partially unknown detection probability matrix and a partially unknown transition probability matrix. To overcome the unpredictability of the system state and enhance the effective utilization of communication resources, a static output feedback controller based on a dynamic event triggering strategy is designed. Moreover, the conservatism of the theoretical derivation is further reduced through the division of the activation function. Finally, numerical examples are used to verify the reliability of the above results, which are then applied to image encryption.
Neurocomputing, Volume 617, Article 128940 | Pub Date: 2024-11-24 | DOI: 10.1016/j.neucom.2024.128940
"Multimodal sentiment analysis based on disentangled representation learning and cross-modal-context association mining"
Authors: Zuhe Li, Panbo Liu, Yushan Pan, Weiping Ding, Jun Yu, Haoran Chen, Weihua Liu, Yiming Luo, Hao Wang
Abstract: Multimodal sentiment analysis aims to extract sentiment information expressed by users from multimodal data, including linguistic, acoustic, and visual cues. However, the heterogeneity of multimodal data leads to disparities in modal distribution, thereby impacting the model’s ability to effectively integrate complementarity and redundancy across modalities. Additionally, existing approaches often merge modalities directly after obtaining their representations, overlooking potential emotional correlations between them. To tackle these challenges, we propose a Multiview Collaborative Perception (MVCP) framework for multimodal sentiment analysis. This framework consists primarily of two modules: Multimodal Disentangled Representation Learning (MDRL) and Cross-Modal Context Association Mining (CMCAM). The MDRL module employs a joint learning layer comprising a common encoder and an exclusive encoder. This layer maps multimodal data to a hypersphere, learning common and exclusive representations for each modality, thus mitigating the semantic gap arising from modal heterogeneity. To further bridge semantic gaps and capture complex inter-modal correlations, the CMCAM module utilizes multiple attention mechanisms to mine cross-modal and contextual sentiment associations, yielding joint representations with rich multimodal semantic interactions. In this stage, the CMCAM module only discovers the correlation information among the common representations in order to maintain the exclusive representations of different modalities. Finally, a multitask learning framework is adopted to achieve parameter sharing between single-modal tasks and improve sentiment prediction performance. Experimental results on the MOSI and MOSEI datasets demonstrate the effectiveness of the proposed method.
Neurocomputing, Volume 617, Article 128914 | Pub Date: 2024-11-23 | DOI: 10.1016/j.neucom.2024.128914
"SAM-Assisted Temporal-Location Enhanced Transformer Segmentation for Object Tracking with Online Motion Inference"
Authors: Huanlong Zhang, Xiangbo Yang, Xin Wang, Weiqiang Fu, Bineng Zhong, Jianwei Zhang
Abstract: Current transformer-based trackers typically represent targets using bounding boxes. However, bounding boxes do not accurately describe the target and uncontrollably contain many background pixels. This paper proposes a Segment Anything Model (SAM)-assisted, temporal-location enhanced transformer segmentation method for object tracking with online motion inference. First, a novel transformer-based temporal-location enhanced segmentation method is proposed. The target temporal features are clustered into foreground-background tokens using a mask to capture the distribution of discriminative information. Then, suitable positional prompts are learned in the proposed mask prediction head to establish the mapping between target features and localization, which enhances the specific foreground weights for precise mask generation. Second, a temporal-based motion inference module is proposed. It fully utilizes the target's temporal state to construct an online displacement model that infers the motion relationships of the target between frames and provides robust position prompts for the segmentation process. We also introduce SAM for initial mask generation. Precise pixel-level object tracking is achieved by combining segmentation and localization within a unified process. Experimental results demonstrate that the proposed method yields competitive performance compared to existing approaches.
Neurocomputing, Volume 617, Article 128809 | Pub Date: 2024-11-23 | DOI: 10.1016/j.neucom.2024.128809
"DCM_MCCKF: A non-Gaussian state estimator with adaptive kernel size based on CS divergence"
Authors: Xuefei Bai, Quanbo Ge, Pingliang (Peter) Zeng
Abstract: In practical industrial system models, noise is better characterized by heavy-tailed non-Gaussian distributions. For state estimation in systems with heavy-tailed non-Gaussian noise, the maximum correntropy criterion (MCC) based on information theoretic learning (ITL) is widely adopted and achieves good filtering performance. The performance of MCC-based filtering depends on the selection of the kernel function and its parameters. To overcome the sensitivity of the Gaussian kernel to its parameters and the limitation of a single kernel function in comprehensively reflecting the characteristics of complex heterogeneous data, a double-Cauchy mixture-based MCC Kalman filtering (DCM_MCCKF) algorithm is proposed. The proposed kernel is a mixture of two Cauchy kernel functions, whose heavy-tailed properties better handle large errors and reduce sensitivity to kernel size variations. As a result, it improves the robustness and flexibility of MCC-based filtering. The kernel size should adapt to changes in the signal distribution. To address the limitation of a fixed kernel size, an adaptive kernel size update rule is designed by comprehensively considering the system model, accessible measurements, the CS divergence between noise distributions, and covariance propagation. Simulation examples of target tracking validate that the proposed DCM_MCCKF algorithm, under the adaptive kernel size updating rule, effectively handles complex data and achieves superior filtering performance in heavy-tailed non-Gaussian noise scenarios. The algorithm outperforms traditional Kalman filters (KF) based on the mean square error (MSE) criterion, Gaussian sum filtering (GSF), particle filtering (PF), and maximum correntropy criterion Kalman filters (MCCKF) with a single Gaussian kernel (G_MCCKF), a double-Gaussian mixture kernel (DGM_MCCKF), and a Gaussian-Cauchy mixture kernel (GCM_MCCKF). Consequently, the DCM_MCCKF algorithm significantly enhances the applicability and robustness of MCC-based filtering methods.
Neurocomputing, Volume 617, Article 128718 | Pub Date: 2024-11-23 | DOI: 10.1016/j.neucom.2024.128718
"A collaborative filtering recommender systems: Survey"
Authors: Mohammed Fadhel Aljunid, Manjaiah D.H., Mohammad Kazim Hooshmand, Wasim A. Ali, Amrithkala M. Shetty, Sadiq Qaid Alzoubah
Abstract: In the current digital landscape, both information consumers and producers encounter numerous challenges, underscoring the importance of recommender systems (RS) as a vital tool. Among various RS techniques, collaborative filtering (CF) has emerged as a highly effective method for suggesting products and services. However, traditional CF methods face significant obstacles in the era of big data, including issues related to data sparsity, accuracy, cold start problems, and high dimensionality. This paper offers a comprehensive survey of CF-based RS enhanced by machine learning (ML) and deep learning (DL) algorithms. It aims to serve as a valuable resource for both novice and experienced researchers in the field of RS. The survey is structured into two main sections: the first elucidates the fundamental concepts of RS, while the second delves into solutions for CF-based RS challenges, examining the specific tasks addressed by various studies, as well as the metrics and datasets employed.
Neurocomputing, Volume 617, Article 128975 | Pub Date: 2024-11-23 | DOI: 10.1016/j.neucom.2024.128975
"Scalable kernel logistic regression with Nyström approximation: Theoretical analysis and application to discrete choice modelling"
Authors: José Ángel Martín-Baos, Ricardo García-Ródenas, Luis Rodriguez-Benitez, Michel Bierlaire
Abstract: The application of kernel-based Machine Learning (ML) techniques to discrete choice modelling using large datasets often faces challenges due to memory requirements and the considerable number of parameters involved in these models. This complexity hampers the efficient training of large-scale models. This paper addresses these problems of scalability by introducing the Nyström approximation for Kernel Logistic Regression (KLR) on large datasets. The study begins by presenting a theoretical analysis in which: (i) the set of KLR solutions is characterised, (ii) an upper bound to the solution of KLR with Nyström approximation is provided, and finally (iii) a specialisation of the optimisation algorithms to Nyström KLR is described. After this, the Nyström KLR is computationally validated. Four landmark selection methods are tested, including basic uniform sampling, a k-means sampling strategy, and two non-uniform methods grounded in leverage scores. The performance of these strategies is evaluated using large-scale transport mode choice datasets and is compared with traditional methods such as Multinomial Logit (MNL) and contemporary ML techniques. The study also assesses the efficiency of various optimisation techniques for the proposed Nyström KLR model. The performance of gradient descent, Momentum, Adam, and L-BFGS-B optimisation methods is examined on these datasets. Among these strategies, the k-means Nyström KLR approach emerges as a successful solution for applying KLR to large datasets, particularly when combined with the L-BFGS-B and Adam optimisation methods. The results highlight the ability of this strategy to handle datasets exceeding 200,000 observations while maintaining robust performance.
Neurocomputing, Volume 617, Article 128864 | Pub Date: 2024-11-23 | DOI: 10.1016/j.neucom.2024.128864
"Distantly supervised relation extraction with a Meta-Relation enhanced Contrastive learning framework"
Authors: Chuanshu Chen, Shuang Hao, Jian Liu
Abstract: Distantly supervised relation extraction employs the alignment of unstructured corpora with knowledge bases to automatically generate labeled data. This method, however, often introduces significant label noise. To address this, multi-instance learning has been widely utilized over the past decade, aiming to extract reliable features from a bag of sentences. Yet, multi-instance learning struggles to effectively distinguish between clean and noisy instances within a bag, thereby hindering the full utilization of informative instances and the reduction of the impact of incorrectly labeled instances. In this paper, we propose a new Meta-Relation enhanced Contrastive learning based method for distantly supervised Relation Extraction named MRConRE. Specifically, we generate a "meta relation pattern" (MRP) for each bag, based on its semantic content, to differentiate between clean and noisy instances. Noisy instances are then transformed into beneficial bag-level instances through relabeling. Subsequently, contrastive learning is employed to develop precise sentence representations, forming the overall representation of the bag. Finally, we utilize a mixup strategy to integrate bag-level information for model training. Our method's effectiveness is validated through experiments on various benchmarks.
Neurocomputing, Volume 617, Article 129026 | Pub Date: 2024-11-23 | DOI: 10.1016/j.neucom.2024.129026
"Closed-loop seizure modulation via extreme learning machine supervisor based sliding mode disturbance rejection control"
Authors: Wei Wei, Zijin Wang
Abstract: Neuromodulation is a low-risk and highly efficient therapy for treating epilepsy. In clinical practice, there is an urgent need for a regulation strategy that adapts to unknown nonlinearities and is strongly robust to various disturbances and uncertainties. Linear active disturbance rejection control (LADRC) can adapt to complex epileptic dynamics and improve epilepsy modulation even when little model information is available and various uncertainties and external disturbances exist. However, the proportional plus derivative controller in LADRC is weak at rejecting external disturbances that are not addressed by the extended state observer. In addition, the phase delay between the input and output lowers the speed of modulation. An extreme learning machine (ELM) based supervisor can obtain an inversion of the plant in a more timely and accurate manner, so an ELM supervisor based sliding mode disturbance rejection control (ESSMDRC) is proposed to improve both the speed and robustness of the modulation. Closed-loop stability and the phase-leading property are analysed. Numerical results show that the proposed ESSMDRC achieves a more satisfactory closed-loop neuromodulation.
Neurocomputing, Volume 617, Article 128939 | Pub Date: 2024-11-22 | DOI: 10.1016/j.neucom.2024.128939
"Deep echo state network with projection-encoding for multi-step time series prediction"
Authors: Tao Li, Zhijun Guo, Qian Li
Abstract: To fully exploit the advantages of reservoir computing in deep network modeling, a deep echo state network with projection-encoding (DEESN) is proposed for multi-step time series prediction in this paper. DEESN integrates multiple echo state network (ESN) modules and extreme learning machine (ELM) encoders in a series array. First, the k-th ESN in the DEESN learner is responsible for the k-th-step-ahead prediction. The forecast output and encoded reservoir states of the previous ESN module are concatenated with the input variable to form the new input signals for the next adjacent module. The temporal dependency among future time steps can therefore be learned, which contributes to the performance improvement. Second, the ELM encoder is used to encode the reservoir states, reducing time consumption. Finally, the effectiveness of DEESN is evaluated on artificial chaos benchmarks and in real-world applications. Experimental results on six different datasets and comparative models demonstrate that the proposed DEESN has excellent accuracy and robust generalization for multi-step time series prediction.
Neurocomputing, Volume 616, Article 128941 | Pub Date: 2024-11-22 | DOI: 10.1016/j.neucom.2024.128941
"A short text topic modeling method based on integrating Gaussian and Logistic coding networks with pre-trained word embeddings"
Authors: Si Zhang, Jiali Xu, Ning Hui, Peiyun Zhai
Abstract: The development of neural networks has provided a flexible learning framework for topic modeling, and topic modeling based on neural networks has garnered wide attention. Despite its widespread application, neural topic modeling still needs improvement because of the complexity of short texts. Short texts usually contain only a few words and little feature information, lacking sufficient word co-occurrence and shared context. This results in challenges such as sparse features and poor interpretability in topic modeling. To alleviate this issue, an innovative model called Topic Modeling of Enhanced Neural Network with word Embedding (ENNETM) is proposed. First, an enhanced network is introduced into the inference network, integrating the Gaussian and Logistic coding networks to improve the performance and interpretability of topic extraction. Second, pre-trained word embeddings are introduced into the Gaussian decoding network of the model to enrich the contextual semantic information. Comprehensive experiments were carried out on three public datasets, 20Newsgroups, AG_news and TagMyNews, and the results show that the proposed method outperforms several state-of-the-art models in topic extraction and text classification.