Neural Networks, Volume 183, 106976. Pub Date: 2024-12-03. DOI: 10.1016/j.neunet.2024.106976
Kunlun Wu, Shunzhuo E, Ning Yang, Anguo Zhang, Xiaorong Yan, Chaoxu Mu, Yongduan Song
A novel approach to enhancing biomedical signal recognition via hybrid high-order information bottleneck driven spiking neural networks.
Biomedical signals, encapsulating vital physiological information, are pivotal in elucidating human traits and conditions, and serve as a cornerstone for advancing human-machine interfaces. Nonetheless, the fidelity of biomedical signal interpretation is frequently compromised by pervasive noise sources such as skin, motion, and equipment interference, posing formidable challenges to precision recognition tasks. Concurrently, the growing adoption of intelligent wearable devices reflects a societal shift towards enhancing life and work through technological integration; this popularity underscores the need for efficient, noise-resilient biomedical signal recognition methods, a challenging and impactful goal. This study proposes a novel approach to enhancing biomedical signal recognition. The proposed approach employs a hierarchical information bottleneck mechanism within spiking neural networks (SNNs), quantifying mutual information at different orders according to the depth of information flow in the network. These mutual information terms, together with the network's output and the category labels, are then combined according to information-theoretic principles to form the training loss. Theoretical analyses and extensive experimental results show that the method effectively compresses noise in the data and, at low computational cost, significantly outperforms its vanilla counterpart in classification performance.
Neural Networks, Volume 183, 106999. Pub Date: 2024-12-02. DOI: 10.1016/j.neunet.2024.106999
DongCai Zhao
Cognitive process and information processing model based on deep learning algorithms.
Following the developmental process of infants, cognitive abilities are divided into four stages: the Exploration Stage (ES), the Mapping Stage (MS), the Phenomena-causality Stage (PCS), and the Essence-causality Stage (ECS). The MS trains on the consecutive characteristics of events, similar to a deep learning model; the PCS symbolizes the inputs and outputs of this mapping training and feeds the symbols back in as inputs or outputs of further mapping training. After training, the next possible symbol can be predicted, which is equivalent to recognizing the essence. Expressing the essence itself with a function, in the ECS, marks entry into the scope of science. To illustrate this process, the paper follows the evolutionary journey of an insectoid with only visual and compositional detection capabilities. Without additional learning-algorithm programming, the insectoid evolves according to the Cognitive Process and Information Processing Model and develops its own independent symbol system. The ability to develop a unique symbolic system of its own marks the birth of an agent.
Neural Networks, Volume 183, 106941. Pub Date: 2024-12-02. DOI: 10.1016/j.neunet.2024.106941
Jin Fan, Wenchao Weng, Qikai Chen, Huifeng Wu, Jia Wu
PDG2Seq: Periodic Dynamic Graph to Sequence Model for Traffic Flow Prediction.
Traffic flow prediction is the foundation of intelligent traffic management systems. Current methods prioritize the development of intricate models to capture spatio-temporal correlations, yet they often neglect latent features within the traffic flow. First, the correlation among different road nodes is dynamic rather than static. Second, traffic data exhibits evident periodicity, yet current research rarely explores or exploits these periodic features. Last, current models typically rely solely on historical data, which limits their ability to capture future trend changes in traffic flow. To address these issues, this paper proposes a Periodic Dynamic Graph to Sequence model (PDG2Seq) for traffic flow prediction. PDG2Seq consists of the Periodic Feature Selection Module (PFSM) and the Periodic Dynamic Graph Convolutional Gated Recurrent Unit (PDCGRU), which together extract the spatio-temporal features of dynamic real-time traffic. The PFSM extracts learned periodic features using time points as indices, while the PDCGRU combines the periodic features from the PFSM with dynamic features from the traffic flow to generate a Periodic Dynamic Graph for spatio-temporal feature extraction. In the decoding phase, PDG2Seq uses the periodic features corresponding to the prediction target to capture future trend changes, leading to more accurate predictions. Comprehensive experiments on four large-scale datasets substantiate the superiority of PDG2Seq over existing state-of-the-art baselines. Code is available at https://github.com/wengwenchao123/PDG2Seq.
{"title":"Intra- and inter-channel deep convolutional neural network with dynamic label smoothing for multichannel biosignal analysis.","authors":"Peiji Chen, Wenyang Li, Yifan Tang, Shunta Togo, Hiroshi Yokoi, Yinlai Jiang","doi":"10.1016/j.neunet.2024.106960","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106960","url":null,"abstract":"<p><p>Efficient processing of multichannel biosignals has significant application values in the fields of healthcare and human-machine interaction. Although previous research has achieved high recognition performance with deep convolutional neural networks, several key challenges still remain: (1) Effective extraction of spatial and temporal features from the multichannel biosignals. (2) Appropriate trade-off between performance and complexity for improving applicability in real-life situations given that traditional machine learning and 2D-based CNN approaches often involve excessive preprocessing steps or model parameters; and (3) Generalization ability of neural networks to compensate for domain difference and to reduce overfitting during training process. To address challenges 1 and 2, we propose a 1D-based deep intra and inter channel (I2C) convolution neural network. The I2C convolutional block is introduced to replace the standard convolutional layer, further extending it to several state-of-the-art modules, with the intent of extracting more effective features from multichannel biosignals with fewer parameters. To address challenge 3, we integrate a branch model into the main model to perform dynamic label smoothing, enabling the model to learn domain difference and improve its generalization ability. Experiments were conducted on three public multichannel biosignals databases, namely ISRUC-S3, HEF and Ninapro-DB1. The results suggest that the proposed method exhibits significant competitive advantages in accuracy, complexity, and generalization ability.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"106960"},"PeriodicalIF":6.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142792276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks, Volume 183, 106969. Pub Date: 2024-12-02. DOI: 10.1016/j.neunet.2024.106969
Jaeill Kim, Wonseok Lee, Moonjung Eo, Wonjong Rhee
Improving forward compatibility in class incremental learning by increasing representation rank and feature richness.
Class Incremental Learning (CIL) constitutes a pivotal subfield within continual learning, aimed at enabling models to progressively learn new classification tasks while retaining knowledge obtained from prior tasks. Although previous studies have predominantly focused on backward compatible approaches to mitigate catastrophic forgetting, recent investigations have introduced forward compatible methods to enhance performance on novel tasks and complement existing backward compatible methods. In this study, we introduce the effective-Rank based Feature Richness enhancement (RFR) method, designed to improve forward compatibility. Specifically, the method increases the effective rank of representations during the base session, thereby facilitating the incorporation of more informative features pertinent to unseen novel tasks. Consequently, RFR achieves dual objectives in backward and forward compatibility: minimizing feature extractor modifications and enhancing novel task performance, respectively. To validate the efficacy of our approach, we establish a theoretical connection between effective rank and the Shannon entropy of representations. We then conduct comprehensive experiments by integrating RFR into eleven well-known CIL methods. Our results demonstrate the effectiveness of the approach in enhancing novel-task performance while mitigating catastrophic forgetting, and show that it notably improves the average incremental accuracy across all eleven cases examined.
{"title":"MNet: A multi-scale network for visible watermark removal.","authors":"Wenhong Huang, Yunshu Dai, Jianwei Fei, Fangjun Huang","doi":"10.1016/j.neunet.2024.106961","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106961","url":null,"abstract":"<p><p>Superimposing visible watermarks on images is an efficient way to indicate ownership and prevent potential unauthorized use. Visible watermark removal technology is receiving increasing attention from researchers due to its ability to enhance the robustness of visible watermarks. In this paper, we propose MNet, a novel multi-scale network for visible watermark removal. In MNet, a variable number of simple U-Nets are stacked in each scale. There are two branches in MNet, i.e., the background restoration branch and the mask prediction branch. In the background restoration branch, we propose a different approach from current methods. Instead of directly reconstructing the background image, we pay great attention to predicting the anti-watermark image. In the watermark mask prediction branch, we adopt dice loss. This further supervises the predicted mask for better prediction accuracy. To make information flow more effective, we employ cross-layer feature fusion and intra-layer feature fusion among U-Nets. Moreover, a scale reduction module is employed to capture multi-scale information effectively. Our approach is evaluated on three different datasets, and the experimental results show that our approach achieves better performance than other state-of-the-art methods. Code will be available at https://github.com/Aitchson-Hwang/MNet.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"106961"},"PeriodicalIF":6.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142796521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks, Volume 183, 106970. Pub Date: 2024-12-01. DOI: 10.1016/j.neunet.2024.106970
Xiaojun Zhou, Chunna Zhao, Yaqun Huang, Chengli Zhou, Junjie Ye
Improved fractional-order gradient descent method based on multilayer perceptron.
The fractional-order gradient descent (FOGD) method has been employed by numerous scholars in artificial neural networks (ANNs), with its superior performance validated both theoretically and experimentally. However, current FOGD methods apply fractional-order differentiation only to the loss function. Applying Autograd-based FOGD to the hidden layers leverages the characteristics of fractional-order differentiation and significantly enhances its flexibility; it also provides a necessary foundation for a family of fractional-order deep learning optimizers, facilitating the widespread application of FOGD in deep learning. This paper proposes an improved fractional-order gradient descent (IFOGD) method based on the multilayer perceptron (MLP). First, a fractional matrix differentiation algorithm and its solver are proposed based on the MLP, ensuring that IFOGD can be applied within the hidden layers. Second, we overcome the incorrect backpropagation direction caused by the absolute value symbol, ensuring that the IFOGD method does not cause the loss value to diverge. Third, fractional-order Autograd (FOAutograd) is implemented in PyTorch by reconstructing the Linear layer and the Mean Squared Error Loss module. By combining FOAutograd with first-order adaptive deep learning optimizers, the parameter matrices in each layer of an ANN can be updated using fractional-order gradients. Finally, we compare and analyze the performance of IFOGD against other methods in simulation experiments and time series prediction tasks. The experimental results demonstrate the competitive performance of the IFOGD method.
{"title":"Multi-source Selective Graph Domain Adaptation Network for cross-subject EEG emotion recognition.","authors":"Jing Wang, Xiaojun Ning, Wei Xu, Yunze Li, Ziyu Jia, Youfang Lin","doi":"10.1016/j.neunet.2024.106742","DOIUrl":"10.1016/j.neunet.2024.106742","url":null,"abstract":"<p><p>Affective brain-computer interface is an important part of realizing emotional human-computer interaction. However, existing objective individual differences among subjects significantly hinder the application of electroencephalography (EEG) emotion recognition. Existing methods still lack the complete extraction of subject-invariant representations for EEG and the ability to fuse valuable information from multiple subjects to facilitate the emotion recognition of the target subject. To address the above challenges, we propose a Multi-source Selective Graph Domain Adaptation Network (MSGDAN), which can better utilize data from different source subjects and perform more robust emotion recognition on the target subject. The proposed network extracts and selects the individual information specific to each subject, where public information refers to subject-invariant components from multi-source subjects. Moreover, the graph domain adaptation network captures both functional connectivity and regional states of the brain via a dynamic graph network and then integrates graph domain adaptation to ensure the invariance of both functional connectivity and regional states. To evaluate our method, we conduct cross-subject emotion recognition experiments on the SEED, SEED-IV, and DEAP datasets. The results demonstrate that the MSGDAN has superior classification performance.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"180 ","pages":"106742"},"PeriodicalIF":6.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142331111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural Networks, Volume 180, 106756. Pub Date: 2024-12-01. Epub Date: 2024-09-22. DOI: 10.1016/j.neunet.2024.106756
Lin Qiu, Fajie Wang, Wenzhen Qu, Yan Gu, Qing-Hua Qin
Spectral integrated neural networks (SINNs) for solving forward and inverse dynamic problems.
This study introduces an innovative neural network framework named spectral integrated neural networks (SINNs) to address both forward and inverse dynamic problems in three-dimensional space. In the SINNs, the spectral integration technique is utilized for temporal discretization, followed by the application of a fully connected neural network to solve the resulting partial differential equations in the spatial domain. Furthermore, polynomial basis functions are employed to expand the unknown function, with the goal of improving the performance of SINNs on inverse problems. The developed framework is evaluated on several dynamic benchmark examples encompassing linear and nonlinear heat conduction problems, linear and nonlinear wave propagation problems, an inverse heat conduction problem, and a long-time heat conduction problem. The numerical results demonstrate that the SINNs can effectively and accurately solve forward and inverse problems involving heat conduction and wave propagation, and that they provide precise and stable solutions for dynamic problems with extended time durations. Compared to commonly used physics-informed neural networks, the SINNs exhibit superior performance with enhanced convergence speed, computational accuracy, and efficiency.
{"title":"DFA-mode-dependent stability of impulsive switched memristive neural networks under channel-covert aperiodic asynchronous attacks.","authors":"Xinyi Han, Yongbin Yu, Xiangxiang Wang, Xiao Feng, Jingya Wang, Jingye Cai, Kaibo Shi, Shouming Zhong","doi":"10.1016/j.neunet.2024.106962","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106962","url":null,"abstract":"<p><p>This article is concerned with the deterministic finite automaton-mode-dependent (DFAMD) exponential stability problem of impulsive switched memristive neural networks (SMNNs) with aperiodic asynchronous attacks and the network covert channel. First, unlike the existing literature on SMNNs, this article focuses on DFA to drive mode switching, which facilitates precise system behavior modeling based on deterministic rules and input characters. To eliminate the periodicity and consistency constraints of traditional attacks, this article presents the multichannel aperiodic asynchronous denial-of-service (DoS) attacks, allowing for the diversity of attack sequences. Meanwhile, the network covert channel with a security layer is exploited and its dynamic adjustment is realized jointly through the dynamic weighted try-once-discard (DWTOD) protocol and selector, which can reduce network congestion, improve data security, and enhance system defense capability. In addition, this article proposes a novel mode-dependent hybrid controller composed of output feedback control and mode-dependent impulsive control, with the goal of increasing system flexibility and efficiency. Inspired by the semi-tensor product (STP) technique, Lyapunov-Krasovskii functions, and inequality technology, the novel exponential stability conditions are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the developed approach.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"183 ","pages":"106962"},"PeriodicalIF":6.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142808454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}