{"title":"Architecture Knowledge Distillation for Evolutionary Generative Adversarial Network.","authors":"Yu Xue, Yan Lin, Ferrante Neri","doi":"10.1142/S0129065725500133","DOIUrl":"10.1142/S0129065725500133","url":null,"abstract":"<p><p>Generative Adversarial Networks (GANs) are effective for image generation, but their unstable training limits broader applications. Additionally, neural architecture search (NAS) for GANs with one-shot models often leads to insufficient subnet training, where subnets inherit weights from a supernet without proper optimization, further degrading performance. To address both issues, we propose Architecture Knowledge Distillation for Evolutionary GAN (AKD-EGAN). AKD-EGAN operates in two stages. First, architecture knowledge distillation (AKD) is used during supernet training to efficiently optimize subnetworks and accelerate learning. Second, a multi-objective evolutionary algorithm (MOEA) searches for optimal subnet architectures, ensuring efficiency by considering multiple performance metrics. This approach, combined with a strategy for architecture inheritance, enhances GAN stability and image quality. Experiments show that AKD-EGAN surpasses state-of-the-art methods, achieving a Fréchet Inception Distance (FID) of 7.91 and an Inception Score (IS) of 8.97 on CIFAR-10, along with competitive results on STL-10 (FID: 20.32, IS: 10.06). 
Code and models will be available at https://github.com/njit-ly/AKD-EGAN.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550013"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143451188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"End-User Confidence in Artificial Intelligence-Based Predictions Applied to Biomedical Data.","authors":"Zvi Kam, Lorenzo Peracchio, Giovanna Nicora","doi":"10.1142/S0129065725500170","DOIUrl":"10.1142/S0129065725500170","url":null,"abstract":"<p><p>Applications of Artificial Intelligence (AI) are revolutionizing biomedical research and healthcare by offering data-driven predictions that assist in diagnoses. Supervised learning systems are trained on large datasets to predict outcomes for new test cases. However, they typically do not provide an indication of the reliability of these predictions, even though error estimates are integral to model development. Here, we introduce a novel method to identify regions in the feature space that diverge from training data, where an AI model may perform poorly. We utilize a compact precompiled structure that allows for fast and direct access to confidence scores in real time at the point of use without requiring access to the training data or model algorithms. As a result, users can determine when to trust the AI model's outputs, while developers can identify where the model's applicability is limited. We validate our approach using simulated data and several biomedical case studies, demonstrating that our approach provides fast confidence estimates ([Formula: see text] milliseconds per case), with high concordance to previously developed methods (<i>f</i>-[Formula: see text]). These estimates can be easily added to real-world AI applications. 
We argue that providing confidence estimates should be a standard practice for all AI applications in public use.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"35 4","pages":"2550017"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143568989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frequency-Assisted Local Attention in Lower Layers of Visual Transformers.","authors":"Xin Zhou, Zeyu Jiang, Shihua Zhou, Zhaohui Ren, Yongchao Zhang, Tianzhuang Yu, Yulin Liu","doi":"10.1142/S0129065725500157","DOIUrl":"10.1142/S0129065725500157","url":null,"abstract":"<p><p>Since vision transformers excel at establishing global relationships between features, they play an important role in current vision tasks. However, the global attention mechanism restricts the capture of local features, making convolutional assistance necessary. This paper shows that, with a special initialization method, transformer-based models can attend to local information much like convolutional kernels, without using convolutional blocks. Therefore, this paper proposes a novel hybrid multi-scale model called the Frequency-Assisted Local Attention Transformer (FALAT). FALAT introduces a Frequency-Assisted Window-based Positional Self-Attention (FWPSA) module that limits the attention distance of query tokens, enabling the capture of local content in the early stage. Information from value tokens in the frequency domain enhances information diversity during self-attention computation. Additionally, the traditional convolutional method is replaced with a depth-wise separable convolution for downsampling in the spatial-reduction attention module, which handles long-distance content in the later stages. Experimental results demonstrate that FALAT-S achieves 83.0% accuracy on IN-1k with an input size of [Formula: see text] using 29.9[Formula: see text]M parameters and 5.6[Formula: see text]G FLOPs. 
This model outperforms the Next-ViT-S by 0.9[Formula: see text]AP<sup><i>b</i></sup>/0.8[Formula: see text]AP<sup><i>m</i></sup> with Mask-R-CNN [Formula: see text] on COCO and surpasses the recent FastViT-SA36 by 3.1% mIoU with FPN on ADE20k.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550015"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143525637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autism Spectrum Disorder Detection Using Prominent Connectivity Features from Electroencephalography.","authors":"Zahrul Jannat Peya, Mahfuza Akter Maria, Sk Imran Hossain, M A H Akhand, Nazmul Siddique","doi":"10.1142/S012906572550011X","DOIUrl":"10.1142/S012906572550011X","url":null,"abstract":"<p><p>Autism Spectrum Disorder (ASD) is a disorder of brain growth with great variability whose clinical presentation first appears in early childhood or youth and, in most cases, follows repetitive patterns of behavior. Accurate diagnosis of ASD has been difficult in clinical practice, as there is currently no validated indicator of ASD. Since ASD is regarded as a neurodevelopmental disorder, brain signals, especially electroencephalography (EEG), are an effective means of detecting it. Therefore, this research aims at developing a method of extracting features from EEG signals to discriminate between ASD and control subjects. This study applies six prominent connectivity features, namely Cross Correlation (XCOR), Phase Locking Value (PLV), Pearson's Correlation Coefficient (PCC), Mutual Information (MI), Normalized Mutual Information (NMI) and Transfer Entropy (TE), for feature extraction. Connectivity Feature Maps (CFMs) are constructed and used for classification through a Convolutional Neural Network (CNN). Because CFMs contain spatial information, they distinguish ASD and control subjects better than other features. Rigorous experimentation has been performed on EEG datasets collected from Italy and Saudi Arabia according to different criteria. 
The MI feature shows the best results for categorizing ASD and control participants with increased sample size and segmentation.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"35 3","pages":"2550011"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143442046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Versatility of Spiking Neural Networks: Applications Across Diverse Scenarios.","authors":"Matteo Cavaleri, Claudio Zandron","doi":"10.1142/S0129065725500078","DOIUrl":"10.1142/S0129065725500078","url":null,"abstract":"<p><p>In the last few decades, Artificial Neural Networks have become more and more important, evolving into a powerful tool for implementing learning algorithms. Spiking Neural Networks represent the third generation of Artificial Neural Networks; they have earned growing significance due to their remarkable achievements in pattern recognition, finding extensive utility across diverse domains such as diagnostic medicine. Spiking Neural Networks are usually slightly less accurate than other Artificial Neural Networks, but they require less energy to perform calculations; this energy cost drops further, very significantly, when they are implemented on hardware specifically designed for them, such as neuromorphic hardware. In this work, we explore the versatility of Spiking Neural Networks and their potential applications across a range of scenarios by exploiting their adaptability and dynamic processing capabilities, which make them suitable for various tasks. A first rough network is designed based on the dataset's general attributes; the network is then refined through an extensive grid search algorithm to identify the optimal values for hyperparameters. This two-step process ensures that the Spiking Neural Network can be tailored to diverse and potentially very different situations in a direct and intuitive manner. We test this by considering three different scenarios: epileptic seizure detection, both as binary and multi-class classification tasks, as well as wine classification. 
The proposed methodology turned out to be highly effective in binary-class scenarios: the Spiking Neural Network models achieved significantly lower energy consumption than Artificial Neural Networks while approaching nearly 100% accuracy. In the multi-class classification case, the model achieved an accuracy of approximately 90%, indicating that it can still be further improved.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550007"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142879122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unraveling the Differential Efficiency of Dorsal and Ventral Pathways in Visual Semantic Decoding.","authors":"Wei Huang, Ying Tang, Sizhuo Wang, Jingpeng Li, Kaiwen Cheng, Hongmei Yan","doi":"10.1142/S0129065725500091","DOIUrl":"10.1142/S0129065725500091","url":null,"abstract":"<p><p>Visual semantic decoding aims to extract perceived semantic information from the visual responses of the human brain and convert it into interpretable semantic labels. Although significant progress has been made in semantic decoding across individual visual cortices, studies on the semantic decoding of the ventral and dorsal cortical visual pathways remain limited. This study proposed a graph neural network (GNN)-based semantic decoding model on a natural scene dataset (NSD) to investigate the decoding differences between the dorsal and ventral pathways in processing various parts of speech, including verbs, nouns, and adjectives. Our results indicate that decoding accuracies for verbs and nouns with motion attributes were significantly higher for the dorsal pathway than for the ventral pathway, with evidence showing that this superiority largely stemmed from higher-level visual cortices rather than lower-level ones. Furthermore, the two pathways appear to converge in their heightened sensitivity toward semantic content related to actions. 
These findings reveal unique visual neural mechanisms through which the dorsal and ventral cortical pathways segregate and converge when processing stimuli with different semantic categories.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550009"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142960753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Context-Dependent CNN-Based Framework for Multiple Sclerosis Segmentation in MRI.","authors":"Giuseppe Placidi, Luigi Cinque, Gian Luca Foresti, Francesca Galassi, Filippo Mignosi, Michele Nappi, Matteo Polsinelli","doi":"10.1142/S0129065725500066","DOIUrl":"10.1142/S0129065725500066","url":null,"abstract":"<p><p>Although several automated strategies for the identification/segmentation of Multiple Sclerosis (MS) lesions in Magnetic Resonance Imaging (MRI) have been developed, they consistently fall short of the performance of human experts. This emphasizes the unique skills and expertise of human professionals in dealing with the uncertainty resulting from the vagueness and variability of MS, the lack of specificity of MRI concerning MS, and the inherent instabilities of MRI. Physicians manage this uncertainty in part by relying on their radiological, clinical, and anatomical experience. We have developed an automated framework for identifying and segmenting MS lesions in MRI scans that introduces a novel approach to replicating human diagnosis, a significant advancement in the field. The framework is based on three main concepts: (1) modeling the uncertainty; (2) using separately trained Convolutional Neural Networks (CNNs) optimized to detect lesions while considering their context in the brain and ensuring spatial continuity; (3) implementing an ensemble classifier to combine information from these CNNs. The proposed framework has been trained, validated, and tested on a single MRI modality, the FLuid-Attenuated Inversion Recovery (FLAIR) sequence of the MSSEG benchmark public dataset, which contains annotated data from seven expert radiologists and one ground truth. Comparison with the ground truth and with each of the seven human raters demonstrates that it operates similarly to human raters. 
At the same time, the proposed model demonstrates greater stability, effectiveness, and robustness to biases than other state-of-the-art models, despite using only the FLAIR modality.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"35 3","pages":"2550006"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel State Space Model with Dynamic Graphic Neural Network for EEG Event Detection.","authors":"Xinying Li, Shengjie Yan, Yonglin Wu, Chenyun Dai, Yao Guo","doi":"10.1142/S012906572550008X","DOIUrl":"10.1142/S012906572550008X","url":null,"abstract":"<p><p>Electroencephalography (EEG) is a widely used physiological signal for obtaining information about brain activity, and its automatic event detection holds significant research importance: it saves doctors' time and improves detection efficiency and accuracy. However, current automatic detection studies face several challenges: large EEG data volumes require substantial time and space for data reading and model training; EEG's long-term dependencies test the temporal feature-extraction capabilities of models; and the dynamic changes in brain activity and the non-Euclidean spatial structure between electrodes complicate the acquisition of spatial information. The proposed method uses range-EEG (rEEG) to extract time-frequency features from EEG, reducing data volume and resource consumption. Additionally, the next-generation state-space model Mamba is utilized as a temporal feature extractor to effectively capture the temporal information in EEG data. To address the limitations of state-space models (SSMs) in spatial feature extraction, Mamba is combined with Dynamic Graph Neural Networks, creating an efficient model called DG-Mamba for EEG event detection. Testing on seizure detection and sleep-stage classification tasks showed that the proposed method improved training speed by 10 times and reduced memory usage to less than one-seventh of the original while maintaining superior performance. 
On the TUSZ dataset, DG-Mamba achieved an AUROC of 0.931 for seizure detection, and in the sleep-stage classification task the proposed model surpassed all baselines.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"35 3","pages":"2550008"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Label Zero-Shot Learning Via Contrastive Label-Based Attention.","authors":"Shixuan Meng, Rongxin Jiang, Xiang Tian, Fan Zhou, Yaowu Chen, Junjie Liu, Chen Shen","doi":"10.1142/S0129065725500108","DOIUrl":"10.1142/S0129065725500108","url":null,"abstract":"<p><p>Multi-label zero-shot learning (ML-ZSL) strives to recognize all objects in an image, regardless of whether they are present in the training data. Recent methods incorporate an attention mechanism to locate labels in the image and generate class-specific semantic information. However, the attention mechanism built on visual features treats label embeddings equally in the prediction score, leading to severe semantic ambiguity. This study focuses on efficiently utilizing semantic information in the attention mechanism. We propose a contrastive label-based attention method (CLA) to associate each label with the most relevant image regions. Specifically, our label-based attention, guided by the latent label embedding, captures discriminative image details. To distinguish region-wise correlations, we implement a region-level contrastive loss. In addition, we utilize a global feature alignment module to identify labels with general information. Extensive experiments on two benchmarks, NUS-WIDE and Open Images, demonstrate that our CLA outperforms the state-of-the-art methods. 
Especially under the ZSL setting, our method achieves 2.0% improvements in mean Average Precision (mAP) for NUS-WIDE and 4.0% for Open Images compared with recent methods.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550010"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial - A journal that promotes excellence through uncompromising review process: Reflection of freedom of speech and scientific publication.","authors":"Zvi Kam, Giovanna Nicora","doi":"10.1142/S0129065725020010","DOIUrl":"https://doi.org/10.1142/S0129065725020010","url":null,"abstract":"","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2502001"},"PeriodicalIF":0.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}