{"title":"Matrix Representation of Virus Machines and an Application to the Discrete Logarithm Problem.","authors":"Antonio Ramírez-de-Arellano, David Orellana-Martín, Mario J Pérez-Jiménez, Francis George C Cabarle, Henry N Adorna","doi":"10.1142/S0129065725500492","DOIUrl":"10.1142/S0129065725500492","url":null,"abstract":"<p><p>Virus machines, which develop models of computation inspired by biological processes and the spread of viruses among hosts, deviate from the traditional methods. These virus machines are recognized for their computational power (functioning as algorithms) and their ability to tackle computationally difficult problems. In this paper, we introduce a new extension of the matrix-based representation of virus machines. In this way, hosts, the number of viruses and the instructions to control virus transmission are represented as vectors and matrices, describing the computations of virus machines by linear algebra operations. We also use our matrix representation to show invariants, useful in the proofs, of such machines. In addition, an explicit example is shown to clarify the computation and invariants using the representation. That is, a virus machine that computes the discrete logarithm, which relies on the presumed intractability of cryptosystems such the digital signature algorithm.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550049"},"PeriodicalIF":6.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144877617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward a Biologically Plausible SNN-Based Associative Memory with Context-Dependent Hebbian Connectivity.","authors":"S Yu Makovkin, S Yu Gordleeva, I A Kastalskiy","doi":"10.1142/S0129065725500273","DOIUrl":"10.1142/S0129065725500273","url":null,"abstract":"<p><p>In this paper, we propose a spiking neural network model with Hebbian connectivity for implementing energy-efficient associative memory, whose activity is determined by input stimuli. The model consists of three interacting layers of Hodgkin-Huxley-Mainen spiking neurons with excitatory and inhibitory synaptic connections. Information patterns are stored in memory using a symmetric Hebbian matrix and can be retrieved in response to a specific stimulus pattern. Binary images are encoded using in-phase and anti-phase oscillations relative to a global clock signal. Utilizing the phase-locking effect allows for cluster synchronization of neurons (both on the input and output layers). Interneurons in the intermediate layer filter signal propagation pathways depending on the context of the input layer, effectively engaging only a portion of the synaptic connections within the Hebbian matrix for recognition. The stability of the oscillation phase is investigated for both in-phase and anti-phase synchronization modes when recognizing direct and inverse images. This context-dependent effect opens promising avenues for the development of analog hardware circuits for energy-efficient neurocomputing applications, potentially leading to breakthroughs in artificial intelligence and cognitive computing.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550027"},"PeriodicalIF":6.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144000789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Computational Complexity of Spiking Neural Membrane Systems with Colored Spikes.","authors":"Antonio Grillo, Claudio Zandron","doi":"10.1142/S0129065725500352","DOIUrl":"10.1142/S0129065725500352","url":null,"abstract":"<p><p>Spiking Neural P Systems are parallel and distributed computational models inspired by biological neurons, emerging from membrane computing and applied to solving computationally difficult problems. This paper focuses on the computational complexity of such systems using neuron division rules and colored spikes for the SAT problem. We prove a conjecture stated in a recent paper, showing that enhancing the model with an input module reduces computing time. Additionally, we prove that the inclusion of budding rules extends the model's capability to solve all problems in the complexity class <b>PSPACE</b>. These findings advance research on Spiking Neural P Systems and their application to complex problems; however, whether both budding rules and division rules are required to extend these methods to problem domains beyond the NP class remains an open question.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550035"},"PeriodicalIF":6.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144061212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Salient Object Detection Network Enhanced by Nonlinear Spiking Neural Systems and Transformer.","authors":"Wang Li, Meichen Xia, Hong Peng, Zhicai Liu, Jun Guo","doi":"10.1142/S0129065725500455","DOIUrl":"10.1142/S0129065725500455","url":null,"abstract":"<p><p>Although a variety of deep learning-based methods have been introduced for Salient Object Detection (SOD) to RGB and Depth (RGB-D) images, existing approaches still encounter challenges, including inadequate cross-modal feature fusion, significant errors in saliency estimation due to noise in depth information, and limited model generalization capabilities. To tackle these challenges, this paper introduces an innovative method for RGB-D SOD, TranSNP-Net, which integrates Nonlinear Spiking Neural P (NSNP) systems with Transformer networks. TranSNP-Net effectively fuses RGB and depth features by introducing an enhanced feature fusion module (SNPFusion) and an attention mechanism. Unlike traditional methods, TranSNP-Net leverages fine-tuned Swin (shifted window transformer) as its backbone network, significantly improving the model's generalization performance. Furthermore, the proposed hierarchical feature decoder (SNP-D) notably enhances accuracy in complex scenes where depth noise is prevalent. According to the experimental findings, the mean scores for the four metrics S-measure, F-measure, E-measure and MEA on the six RGB-D benchmark datasets are 0.9328, 0.9356, 0.9558 and 0.0288. TranSNP-Net achieves superior performance compared to 14 leading methods in six RGB-D benchmark datasets.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550045"},"PeriodicalIF":6.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear Spiking Neural Systems for Thermal Image Semantic Segmentation Networks.","authors":"Peng Wang, Minglong He, Hong Peng, Zhicai Liu","doi":"10.1142/S0129065725500388","DOIUrl":"10.1142/S0129065725500388","url":null,"abstract":"<p><p>Thermal and RGB images exhibit significant differences in information representation, especially in low-light or nighttime environments. Thermal images provide temperature information, complementing the RGB images by restoring details and contextual information. However, the spatial discrepancy between different modalities in RGB-Thermal (RGB-T) semantic segmentation tasks complicates the process of multimodal feature fusion, leading to a loss of spatial contextual information and limited model performance. This paper proposes a channel-space fusion nonlinear spiking neural P system model network (CSPM-SNPNet) to address these challenges. This paper designs a novel color-thermal image fusion module to effectively integrate features from both modalities. During decoding, a nonlinear spiking neural P system is introduced to enhance multi-channel information extraction through the convolution of spiking neural P systems (ConvSNP) operations, fully restoring features learned in the encoder. Experimental results on public datasets MFNet and PST900 demonstrate that CSPM-SNPNet significantly improves segmentation performance. Compared with the existing methods, CSPM-SNPNet achieves a 0.5% improvement in mIOU on MFNet and 1.8% on PST900, showcasing its effectiveness in complex scenes.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550038"},"PeriodicalIF":6.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144096588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Enhanced Random Convolutional Kernel Transform for Diverse and Robust Feature Extraction from High-Density Surface Electromyograms for Cross-day Gesture Recognition.","authors":"Yonglin Wu, Xinyu Jiang, Jionghui Liu, Yao Guo, Chenyun Dai","doi":"10.1142/S0129065725500625","DOIUrl":"https://doi.org/10.1142/S0129065725500625","url":null,"abstract":"<p><p>High-density surface electromyogram (HD-sEMG) has become a powerful signal source for hand gesture recognition. However, existing approaches suffer from limited feature diversity in hand-crafted methods and high data dependency in deep learning models, necessitating individual model calibration for each user due to neuromuscular differences. We propose EMG-ROCKET, an enhanced version of the RandOm Convolutional KErnel Transform (ROCKET), designed to extract diverse and robust HD-sEMG features without prior knowledge or extensive training. EMG-ROCKET integrates random channel fusion and enhanced aggregation functions to enhance robustness against cross-day signal variability in HD-sEMG applications. In cross-day evaluations of hand gesture recognition, a Ridge classifier using EMG-ROCKET features achieved 84.3% and 77.8% accuracy on two HD-sEMG datasets, outperforming all baseline methods. Furthermore, feature contribution analysis demonstrates the capability of EMG-ROCKET to capture spatial muscle activation patterns, offering insights into motion mechanisms. These results establish EMG-ROCKET as a promising, training-free solution for robust HD-sEMG feature extraction, facilitating practical human-machine interaction applications.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550062"},"PeriodicalIF":6.4,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visually-Inspired Multimodal Iterative Attentional Network for High-Precision EEG-Eye-Movement Emotion Recognition.","authors":"Wei Meng, Fazheng Hou, Kun Chen, Li Ma, Quan Liu","doi":"10.1142/S0129065725500728","DOIUrl":"https://doi.org/10.1142/S0129065725500728","url":null,"abstract":"<p><p>Advancements in artificial intelligence have propelled affective computing toward unprecedented accuracy and real-world impact. By leveraging the unique strengths of brain signals and ocular dynamics, we introduce a novel multimodal framework that integrates EEG and eye-movement (EM) features synergistically to achieve more reliable emotion recognition. First, our EEG Feature Encoder (EFE) uses a convolutional architecture inspired by the human visual cortex's eccentricity-receptive-field mapping, enabling the extraction of highly discriminative neural patterns. Second, our EM Feature Encoder (EMFE) employs a Kolmogorov-Arnold Network (KAN) to overcome the sparse sampling and dimensional mismatch inherent in EM data; through a tailored multilayer design and interpolation alignment, it generates rich, modality-compatible representations. Finally, the core Multimodal Iterative Attentional Feature Fusion (MIAFF) module unites these streams: alternating global and local attention via a Hierarchical Channel Attention Module (HCAM) to iteratively refine and integrate features. Comprehensive evaluations on SEED (3-class) and SEED-IV (4-class) benchmarks show that our method reaches leading-edge accuracy. However, our experiments are limited by small homogeneous datasets, untested cross-cultural robustness, and potential degradation in noisy or edge-deployment settings. Nevertheless, this work not only underscores the power of biomimetic encoding and iterative attention but also paves the way for next-generation brain-computer interface applications in affective health, adaptive gaming, and beyond.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550072"},"PeriodicalIF":6.4,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Unified Hypergraph-Mamba Framework for Adaptive Electroencephalogram Modeling in Multi-view Seizure Prediction.","authors":"Dengdi Sun, Yanqing Liu, Changxu Dong, Zongyun Gu","doi":"10.1142/S012906572550056X","DOIUrl":"https://doi.org/10.1142/S012906572550056X","url":null,"abstract":"<p><p>Seizure prediction from Electroencephalogram (EEG) signals is a critical task for proactive intervention in epilepsy management. Existing models often struggle to capture high-order inter-channel dependencies dynamically and adapt to the spectral variations preceding seizure onset, especially in cross-patient scenarios. To address these issues, a novel Unified Hypergraph-Mamba (UHM) framework, which for the first time integrates hypergraph-based spatial modeling with Mamba-based adaptive spectral modeling. Specifically, a hypergraph attention mechanism is designed to capture high-order spatial interactions among EEG channels, enabling dynamic representation of inter-channel dependencies. Concurrently, an adaptive spectral modeling module based on the Mamba architecture selectively emphasizes frequency components most indicative of preictal states. Together, these components form a unified architecture capable of jointly modeling spatiotemporal EEG dynamics. Extensive experiments conducted on both patient-specific and cross-patient settings demonstrate that our model consistently outperforms state-of-the-art baselines, achieving superior sensitivity and AUC.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550056"},"PeriodicalIF":6.4,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145240538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Contrastive Learning-Enhanced Residual Network for Predicting Epileptic Seizures Using EEG Signals.","authors":"Longfei Qi, Shasha Yuan, Feng Li, Junliang Shang, Juan Wang, Shihan Wang","doi":"10.1142/S0129065725500509","DOIUrl":"10.1142/S0129065725500509","url":null,"abstract":"<p><p>The models used to predict epileptic seizures based on electroencephalogram (EEG) signals often encounter substantial challenges due to the requirement for large, labeled datasets and the inherent complexity of EEG data, which hinders their robustness and generalization capability. This study proposes CLResNet, a framework for predicting epileptic seizures, which combines contrastive self-supervised learning with a modified deep residual neural network to address the above challenges. In contrast to traditional models, CLResNet uses unlabeled EEG data for pre-training to extract robust feature representations. It is then fine-tuned on a smaller labeled dataset to significantly reduce its reliance on labeled data while improving its efficiency and predictive accuracy. The contrastive learning (CL) framework enhances the ability of the model to distinguish between preictal and interictal states, thus improving its robustness and generalizability. The architecture of CLResNet contains residual connections that enable it to learn deep features of the data and ensure an efficient gradient flow. The results of the evaluation of the model on the CHB-MIT dataset showed that it outperformed prevalent methods in the field, with an accuracy of 92.97%, sensitivity of 94.18%, and false-positive rate of 0.043/h. On the Siena dataset, the model also achieved competitive performance, with an accuracy of 92.79%, a sensitivity of 91.47%, and a false-positive rate of 0.041/h. These results confirm the effectiveness of CLResNet in addressing variations in EEG data, and show that contrastive self-supervised learning is a robust and accurate approach for predicting seizures.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550050"},"PeriodicalIF":6.4,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144651662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dominant Classifier-assisted Hybrid Evolutionary Multi-objective Neural Architecture Search.","authors":"Yu Xue, Keyu Liu, Ferrante Neri","doi":"10.1142/S0129065725500510","DOIUrl":"10.1142/S0129065725500510","url":null,"abstract":"<p><p>Neural Architecture Search (NAS) automates the design of deep neural networks but remains computationally expensive, particularly in multi-objective settings. Existing predictor-assisted evolutionary NAS methods suffer from slow convergence and rank disorder, which undermines prediction accuracy. To overcome these limitations, we propose CHENAS: a Classifier-assisted multi-objective Hybrid Evolutionary NAS framework. CHENAS combines the global exploration of evolutionary algorithms with the local refinement of gradient-based optimization to accelerate convergence and enhance solution quality. A novel dominance classifier predicts Pareto dominance relationships among candidate architectures, reframing multi-objective optimization as a classification task and mitigating rank disorder. To further improve efficiency, we employ a contrastive learning-based autoencoder that maps architectures into a continuous, structured latent space tailored for dominance prediction. Experiments on several benchmark datasets demonstrate that CHENAS outperforms state-of-the-art NAS approaches in identifying high-performing architectures across multiple objectives. Future work will focus on improving the computational efficiency of the framework and extending it to other application domains.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550051"},"PeriodicalIF":6.4,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144755506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}