Neural Networks | Pub Date: 2025-01-27 | DOI: 10.1016/j.neunet.2025.107195 | Volume 185, Article 107195
Wenjun Bai, Okito Yamashita, Junichiro Yoshimoto

Functionally specialized spectral organization of the resting human cortex

Abstract: Ample studies across various neuroimaging modalities have suggested that the human cortex at rest is hierarchically organized along spectral and functional axes. However, the relationship between the spectral and functional organizations of the human cortex remains largely unexplored. Here, we reveal the confluence of functional and spectral cortical organizations by examining the functional specialization in spectral gradients of the cortex. These spectral gradients, derived from resting-state functional magnetic resonance imaging data using our temporal de-correlation method to enhance spectral resolution, demonstrate regional frequency biases. The grading of spectral gradients across the cortex, which aligns with many existing brain maps, is found to be highly functionally specialized, as evidenced by the discovered frequency-specific resting-state functional networks, functionally distinctive spectral profiles, and an intrinsic coordinate system that is functionally specialized. By demonstrating the functionally specialized spectral gradients of the cortex, we shed light on the close relation between the functional and spectral organizations of the resting human cortex.
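For readers who want to experiment with the general idea, the sketch below computes per-region spectral profiles from parcellated resting-state BOLD signals with Welch's method and extracts a principal spectral gradient via PCA. It is only an illustrative stand-in: the paper's temporal de-correlation method for enhancing spectral resolution is not reproduced, and the sampling rate, frequency band, and parcel count are assumed values.

```python
# Illustrative sketch (not the paper's temporal de-correlation method): compute
# per-region spectral profiles from parcellated resting-state BOLD time series
# with Welch's method, then extract principal "spectral gradients" across
# regions via PCA of the log relative-power profiles.
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA

def spectral_gradient(ts, fs=1.0 / 0.72, n_components=2):
    """ts: array of shape (n_regions, n_timepoints) of parcellated BOLD signals."""
    freqs, psd = welch(ts, fs=fs, nperseg=256, axis=-1)    # (n_regions, n_freqs)
    keep = (freqs >= 0.01) & (freqs <= 0.25)               # a typical rs-fMRI band
    profiles = psd[:, keep]
    profiles /= profiles.sum(axis=1, keepdims=True)        # relative power per region
    grads = PCA(n_components=n_components).fit_transform(np.log(profiles + 1e-12))
    return freqs[keep], profiles, grads                    # grads: (n_regions, n_components)

# Example with synthetic data: 360 regions, 1200 TRs (HCP-like dimensions).
rng = np.random.default_rng(0)
ts = rng.standard_normal((360, 1200))
_, _, grads = spectral_gradient(ts)
print(grads.shape)  # (360, 2): each column orders regions along a spectral axis
```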
Neural Networks | Pub Date: 2025-01-27 | DOI: 10.1016/j.neunet.2025.107189 | Volume 185, Article 107189
Pablo Hernández-Cámara, Jorge Vila-Tomás, Valero Laparra, Jesús Malo

Dissecting the effectiveness of deep features as metric of perceptual image quality

Abstract: There is an open debate on the role of artificial networks in understanding the visual brain. Internal representations of images in artificial networks develop human-like properties. In particular, evaluating distortions using differences between internal features correlates with human perception of distortion. However, the origins of this correlation are not well understood.

Here, we dissect the different factors involved in the emergence of human-like behavior: function, architecture, and environment. To do so, we evaluate the aforementioned human-network correlation at different depths of 46 pre-trained model configurations that include no psycho-visual information. The results show that most of the models correlate better with human opinion than SSIM (a de facto standard in subjective image quality). Moreover, some models are better than state-of-the-art networks specifically tuned for the application (LPIPS, DISTS). Regarding function, supervised classification leads to networks that correlate better with humans than the explored self- and non-supervised models. However, we found that better performance in the task does not imply more human-like behavior. Regarding architecture, simpler models correlate better with humans than very deep networks, and the highest correlation is generally not achieved in the last layer. Finally, regarding environment, training on large natural datasets leads to higher correlations than training on smaller databases with restricted content, as expected. We also found that the best classification models are not the best at predicting human distances.

In the general debate about understanding human vision, our empirical findings imply that explanations should not focus on a single abstraction level: function, architecture, and environment are all relevant.
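A minimal sketch of the evaluation protocol described in the abstract: distortions are scored as distances between internal features of an ImageNet-pretrained network (trained with no psycho-visual information) and then correlated with human opinion scores. The choice of VGG-16, the probed depth, and the placeholder variables ref_imgs, dist_imgs, and mos are assumptions; loading an actual image-quality dataset is left to the reader.

```python
# Sketch of the evaluation protocol: measure distortions as distances between
# internal features of an ImageNet-pretrained network and correlate them with
# human mean opinion scores (MOS). Loading an actual IQA dataset (e.g., TID2013)
# is omitted; `ref_imgs`, `dist_imgs`, and `mos` are assumed to be provided.
import torch
import torchvision.models as models
from scipy.stats import spearmanr

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def features(x, depth=16):
    """Run x (N,3,H,W, ImageNet-normalized) through the first `depth` VGG-16 layers."""
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i == depth:
                break
    return x

def feature_distance(ref, dist, depth=16):
    fr, fd = features(ref, depth), features(dist, depth)
    return torch.mean((fr - fd) ** 2, dim=(1, 2, 3))   # one distance per image pair

# ref_imgs, dist_imgs: tensors of reference / distorted images; mos: human scores.
# d = feature_distance(ref_imgs, dist_imgs)
# rho, _ = spearmanr(d.numpy(), mos)   # correlation with human perception at this depth
# print(f"Spearman correlation: {rho:.3f}")
```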
Neural Networks | Pub Date: 2025-01-27 | DOI: 10.1016/j.neunet.2025.107201 | Volume 185, Article 107201
Jia Liu, Wenhua Zhang, Fang Liu, Jingxiang Yang, Liang Xiao

Deep one-class probability learning for end-to-end image classification

Abstract: One-class learning has many potential applications in novelty, anomaly, and outlier detection systems. It aims to distinguish both positive and negative samples using a model trained only on positive (one-class annotated) samples. Because training an end-to-end classification network is difficult in this setting, existing methods usually make decisions indirectly. To fully exploit the learning capability of a deep network, in this paper we design a deep end-to-end binary image classifier based on a convolutional neural network that takes an image as input and outputs a classification result. Without negative training samples, we establish an energy-driven probabilistic model to learn the distribution of positive samples. The energy is defined on the output of the network, which models the deep discriminative features as statistics. During optimization, to overcome the difficulty of distribution estimation, we propose a novel sampling method based on particle swarm optimization. Compared with existing methods, the proposed method directly outputs classification results without additional thresholding or estimation operations. Moreover, the deep network is directly optimized via the probabilistic model, which results in better adaptation to the positive-sample distribution and the classification task. Experiments demonstrate the effectiveness and state-of-the-art performance of the proposed method.
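The abstract mentions a sampling method built on particle swarm optimization. The sketch below is a generic, minimal PSO loop for maximizing an objective such as an unnormalized log-probability (negative energy); it is not the authors' specific sampler, and the swarm hyperparameters are illustrative.

```python
# Generic particle swarm optimization for maximizing an objective f, e.g. an
# unnormalized log-probability / negative energy. Not the authors' sampler.
import numpy as np

def pso_maximize(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n_particles, dim))     # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x]) # personal bests
    gbest = pbest[np.argmax(pbest_val)].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest   # best point and the particle population (usable as samples)

# Example: maximize a toy negative energy -||x||^2 in 5 dimensions.
best, particles = pso_maximize(lambda p: -np.sum(p ** 2), dim=5)
print(best)
```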
Neural Networks | Pub Date: 2025-01-27 | DOI: 10.1016/j.neunet.2025.107199 | Volume 185, Article 107199
Jue Xiao, Hewang Nie, Zepu Yi, Xueming Tang, Songfeng Lu

Federated learning with bilateral defense via blockchain

Abstract: Federated Learning (FL) offers benefits in protecting client data privacy but also faces multiple security challenges, such as privacy breaches from unencrypted data transmission and poisoning attacks that compromise model performance; however, most existing solutions address only one of these issues. In this paper, we consider a more challenging threat model, the non-fully-trusted model, in which malicious clients and honest-but-curious servers coexist. To this end, we propose a Federated Learning with Bilateral Defense via Blockchain (FedBASS) scheme that tackles both threats by implementing a dual-server architecture (Analyzer and Verifier), using CKKS encryption to secure client-uploaded gradients, and employing cosine similarity to detect malicious clients. Additionally, we address the problem of non-IID data by proposing a gradient compensation strategy based on dynamic clustering. To further enhance privacy during clustering, we propose a weakened differential privacy scheme augmented with shuffling. Moreover, in FedBASS, the communication process between servers is recorded on the blockchain to ensure robustness and transparency and to prevent selfish behavior by clients and servers. Finally, extensive experiments on three datasets show that FedBASS effectively balances model fidelity, robustness, efficiency, privacy, and practicality.
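As a rough illustration of the cosine-similarity defense, the sketch below scores client updates against a robust reference direction and aggregates only the accepted ones, all in the clear. The dual-server split, CKKS encryption, blockchain logging, and the paper's actual detection rule are not reproduced; the median reference and the threshold are assumptions.

```python
# Minimal sketch of cosine-similarity-based filtering of client updates, in the
# clear (the paper performs detection over CKKS-encrypted gradients with a
# dual-server design and blockchain logging, none of which is reproduced here).
import numpy as np

def filter_clients(updates, threshold=0.0):
    """updates: (n_clients, n_params) flattened gradient updates."""
    reference = np.median(updates, axis=0)                   # robust reference direction
    ref_norm = np.linalg.norm(reference) + 1e-12
    sims = updates @ reference / (np.linalg.norm(updates, axis=1) * ref_norm + 1e-12)
    accepted = sims > threshold                               # drop dissimilar (suspicious) clients
    aggregate = updates[accepted].mean(axis=0)
    return aggregate, accepted, sims

rng = np.random.default_rng(1)
honest = rng.normal(0.1, 0.02, size=(8, 1000))
poisoned = -honest[:2] * 5.0                                  # sign-flipping attackers
agg, accepted, sims = filter_clients(np.vstack([honest, poisoned]))
print(accepted)   # the two attackers at the end should be rejected
```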
Neural Networks | Pub Date: 2025-01-27 | DOI: 10.1016/j.neunet.2025.107198 | Volume 185, Article 107198
Kaizheng Wang, Keivan Shariatmadar, Shireen Kudukkil Manchingal, Fabio Cuzzolin, David Moens, Hans Hallez

CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation in Classification Tasks

Abstract: Effective uncertainty estimation is becoming increasingly attractive for enhancing the reliability of neural networks. This work presents a novel approach, termed Credal-Set Interval Neural Networks (CreINNs), for classification. CreINNs retain the fundamental structure of traditional Interval Neural Networks, capturing weight uncertainty through deterministic intervals. CreINNs are designed to predict an upper and a lower probability bound for each class, rather than a single probability value. These probability intervals define a credal set, facilitating the estimation of different types of uncertainty associated with predictions. Experiments on standard multiclass and binary classification tasks demonstrate that the proposed CreINNs achieve superior or comparable quality of uncertainty estimation compared with variational Bayesian Neural Networks (BNNs) and Deep Ensembles. Furthermore, CreINNs significantly reduce the computational complexity of variational BNNs during inference. The effective uncertainty quantification of CreINNs is also verified when the input data are intervals.
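To make the idea of per-class probability intervals concrete, the sketch below propagates deterministic weight intervals through a single linear layer and bounds the softmax, producing a lower and an upper probability for each class. It is a simplified illustration under interval arithmetic, not the CreINN architecture or training procedure.

```python
# Propagate weight/bias intervals through one linear layer (point input) and
# bound the softmax, yielding a [lower, upper] probability per class, i.e. a
# credal set over the classes. Single-layer illustration only.
import numpy as np

def interval_linear(x, w_lo, w_hi, b_lo, b_hi):
    """Point input x, interval weights/biases -> interval pre-activations."""
    w_mid, w_rad = (w_lo + w_hi) / 2, (w_hi - w_lo) / 2
    mid = w_mid @ x + (b_lo + b_hi) / 2
    rad = w_rad @ np.abs(x) + (b_hi - b_lo) / 2
    return mid - rad, mid + rad

def softmax_bounds(z_lo, z_hi):
    """Lower/upper class probabilities given interval logits."""
    e_lo, e_hi = np.exp(z_lo), np.exp(z_hi)
    p_lo = e_lo / (e_lo + (e_hi.sum() - e_hi))   # own logit at its minimum, others at their maximum
    p_hi = e_hi / (e_hi + (e_lo.sum() - e_lo))   # own logit at its maximum, others at their minimum
    return p_lo, p_hi

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w_c = rng.standard_normal((3, 8))
eps = 0.05                                       # half-width of the weight intervals
z_lo, z_hi = interval_linear(x, w_c - eps, w_c + eps, np.zeros(3) - eps, np.zeros(3) + eps)
p_lo, p_hi = softmax_bounds(z_lo, z_hi)
print(np.c_[p_lo, p_hi])                         # each row: [lower, upper] probability for a class
```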
Neural Networks | Pub Date: 2025-01-26 | DOI: 10.1016/j.neunet.2025.107209 | Volume 185, Article 107209
Shuqi Yang, Qing Lan, Lijuan Zhang, Kuangling Zhang, Guangmin Tang, Huan Huang, Ping Liang, Jiaqing Miao, Boxun Zhang, Rui Tan, Dezhong Yao, Cheng Luo, Ying Tan

Multimodal cross-scale context clusters for classification of mental disorders using functional and structural MRI

Abstract: The brain is a complex system with multiple scales and hierarchies, making it challenging to identify abnormalities in individuals with mental disorders. The dynamic segregation and integration of activities across brain regions enable flexible switching between local and global information processing modes. Modeling these scale dynamics within and between brain regions can uncover hidden correlates of brain structure and function in mental disorders. Consequently, we propose a multimodal cross-scale context clusters (MCCocs) model. First, the complementary information in the multimodal image voxels of the brain is integrated and mapped to the original target space to establish a novel voxel-level brain representation. Within each region of interest (ROI), the Voxel Reducer uses a convolution operator to extract local associations among neighboring features and achieves quantitative dimensionality reduction. Among multiple ROIs, the ROI Context Cluster Block performs unsupervised clustering of whole-brain features, capturing nonlinear relationships between ROIs through bidirectional feature aggregation to simulate the effective integration of information across regions. By alternately executing the Voxel Reducer and the ROI Context Cluster Block multiple times, our model simulates dynamic scale switching within and between ROIs. Experimental results show that MCCocs can recognize potential discriminative biomarkers and achieves state-of-the-art performance on multiple mental disorder classification tasks. The code is available at https://github.com/yangshuqigit/MCCocs.
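A loose, generic sketch of the ROI-level clustering idea: ROI features are softly assigned to cluster centers by cosine similarity, the centers aggregate the assigned features, and the ROI features are updated from their centers (bidirectional aggregation). The shapes, the fixed centers, and the residual update are illustrative assumptions rather than the MCCocs implementation.

```python
# Generic similarity-based clustering and bidirectional aggregation of ROI
# features; shapes and the (here fixed) centers are illustrative assumptions.
import torch
import torch.nn.functional as F

def roi_context_cluster(roi_feats, centers, temperature=0.1):
    """roi_feats: (n_roi, d); centers: (n_clusters, d)."""
    sim = F.normalize(roi_feats, dim=1) @ F.normalize(centers, dim=1).T   # (n_roi, n_clusters)
    assign = torch.softmax(sim / temperature, dim=1)                       # ROI -> cluster assignment
    agg_centers = assign.T @ roi_feats / (assign.sum(dim=0, keepdim=True).T + 1e-6)
    updated = roi_feats + assign @ agg_centers                             # cluster -> ROI update
    return updated, assign

roi_feats = torch.randn(116, 64)     # e.g., 116 ROIs with 64-dim multimodal features
centers = torch.randn(8, 64)         # 8 cluster centers (would be learnable in practice)
updated, assign = roi_context_cluster(roi_feats, centers)
print(updated.shape, assign.shape)   # torch.Size([116, 64]) torch.Size([116, 8])
```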
Neural Networks | Pub Date: 2025-01-25 | DOI: 10.1016/j.neunet.2025.107196 | Volume 185, Article 107196
Wenyi Feng, Zhe Wang, Ting Xiao

Low-Rank Representation with Empirical Kernel Space Embedding of Manifolds

Abstract: Low-Rank Representation (LRR) methods integrate low-rank constraints and projection operators to model the mapping from the sample space to low-dimensional manifolds. Nonetheless, existing approaches typically apply Euclidean algorithms directly to manifold data in the original input space, leading to suboptimal classification accuracy. To mitigate this limitation, we introduce an unsupervised low-rank projection learning method named Low-Rank Representation with Empirical Kernel Space Embedding of Manifolds (LRR-EKM). LRR-EKM leverages an empirical kernel mapping to project samples into a Reproducing Kernel Hilbert Space (RKHS), enabling the linear separability of non-linearly structured samples and facilitating improved low-dimensional manifold representations through Euclidean distance metrics. By incorporating a row-sparsity constraint on the projection matrix, LRR-EKM not only identifies discriminative features and removes redundancies but also enhances the interpretability of the learned subspace. Additionally, we introduce a manifold-structure-preserving constraint to retain the original representation and distance information of the samples during projection. Comprehensive experimental evaluations on various real-world datasets validate the superior performance of the proposed method compared with state-of-the-art methods. The code is publicly available at https://github.com/ff-raw-war/LRR-EKM.
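The empirical kernel mapping that the method builds on can be sketched as follows: the kernel matrix is eigendecomposed and samples are embedded into a finite-dimensional Euclidean space whose inner products reproduce the kernel. The RBF kernel and its bandwidth are assumed choices, and the low-rank representation and row-sparsity terms of LRR-EKM are not included.

```python
# Standard empirical kernel map: embed samples so that Euclidean inner products
# in the embedding reproduce the kernel matrix. The LRR and sparsity terms of
# LRR-EKM are not reproduced here; the RBF kernel and gamma are assumptions.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def empirical_kernel_map(X, gamma=0.1, tol=1e-10):
    """Return Phi with rows phi(x_i) such that Phi @ Phi.T approximates rbf_kernel(X, X)."""
    K = rbf_kernel(X, X, gamma=gamma)            # (n, n) Gram matrix
    evals, evecs = np.linalg.eigh(K)             # eigenvalues in ascending order
    keep = evals > tol                           # discard numerically zero directions
    Phi = evecs[:, keep] * np.sqrt(evals[keep])  # (n, r) embedded samples
    return Phi

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
Phi = empirical_kernel_map(X)
K = rbf_kernel(X, X, gamma=0.1)
print(np.allclose(Phi @ Phi.T, K, atol=1e-6))    # True: Euclidean geometry matches the RKHS
```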
Neural Networks | Pub Date: 2025-01-25 | DOI: 10.1016/j.neunet.2025.107197 | Volume 185, Article 107197
Ziyang Chen, Ke Fan

An online trajectory guidance framework via imitation learning and interactive feedback in robot-assisted surgery

Abstract: Improving the manipulation performance of surgical instruments is important for novice surgeons, as it directly affects the safety and outcome of robot-assisted surgery. To reduce the gap between expert and novice surgeons, learning the instrument movement trajectories generated by experts is an effective way for novices to build muscle memory and improve manipulation skills. In this work, we propose an online trajectory guidance framework that generates expert-like movement trajectories so that novice surgeons can receive intra-operative trajectory guidance and achieve manipulation performance similar to that of experts. First, Dynamic Movement Primitives (DMP)-based Imitation Learning (IL) is implemented to model the 3D trajectories demonstrated by experts for adaptive trajectory generation at different start and end points. To introduce obstacle avoidance into IL, we propose a vision-based strategy involving stereo reconstruction, object detection, and segmentation to recover the 3D information of obstacles so that they can be coupled into the DMP as an obstacle-avoidance term. Furthermore, we introduce Augmented Reality (AR) and Interactive Feedback (IF), including visual and force feedback, to enhance the trajectory reproduction accuracy of novice surgeons during operation. The experiments were conducted on a 3D peg-transfer task in two different scenes (with changed start and end points, and with an obstacle present) using a standard da Vinci Research Kit robot. Ten non-expert human subjects were invited to evaluate the online trajectory guidance framework by reproducing the expert-like manipulation trajectories, and the results showed that the novices assisted by AR and IF achieved promising trajectory reproduction performance (the mean distance error E_mean was reduced by 76.47% and 65.15% in the two intra-operative scenes, respectively), narrowing the manipulation gap with experts.
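A minimal one-dimensional Dynamic Movement Primitive, learning the forcing term from a demonstrated trajectory and reproducing it with a new start and goal, illustrates the imitation-learning core referred to above. The obstacle-avoidance coupling term, stereo reconstruction, AR, and interactive feedback are omitted, and the gains and basis settings are conventional textbook choices rather than the paper's.

```python
# Minimal discrete DMP for a single coordinate (apply per axis for 3D), sketching
# the adaptive trajectory generation described above. Gains and basis settings
# are conventional illustrative choices.
import numpy as np

class DMP1D:
    def __init__(self, n_basis=30, K=150.0, alpha_s=4.0):
        self.K, self.D, self.alpha_s = K, 2.0 * np.sqrt(K), alpha_s
        self.c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))  # basis centers in phase space
        self.h = n_basis / self.c ** 2                              # heuristic basis widths
        self.w = np.zeros(n_basis)

    def _psi(self, s):
        return np.exp(-self.h * (s - self.c) ** 2)

    def fit(self, x_demo, dt):
        """Learn forcing-term weights from a demonstrated 1D trajectory."""
        self.x0, self.g = x_demo[0], x_demo[-1]
        self.tau = dt * (len(x_demo) - 1)
        xd = np.gradient(x_demo, dt)
        xdd = np.gradient(xd, dt)
        t = np.arange(len(x_demo)) * dt
        s = np.exp(-self.alpha_s * t / self.tau)
        f_target = (self.tau ** 2 * xdd - self.K * (self.g - x_demo)
                    + self.D * self.tau * xd) / (self.g - self.x0)
        psi = np.stack([self._psi(si) for si in s])                 # (T, n_basis)
        self.w = (psi * (s * f_target)[:, None]).sum(0) / ((psi * (s ** 2)[:, None]).sum(0) + 1e-10)

    def rollout(self, x0=None, g=None, dt=0.01):
        """Reproduce the motion for possibly new start and goal points."""
        x = self.x0 if x0 is None else x0
        g = self.g if g is None else g
        scale, v, s, traj = g - x, 0.0, 1.0, []
        for _ in range(int(self.tau / dt)):
            psi = self._psi(s)
            f = (psi @ self.w) * s / (psi.sum() + 1e-10)
            vdot = (self.K * (g - x) - self.D * v + scale * f) / self.tau
            v += vdot * dt
            x += (v / self.tau) * dt
            s += (-self.alpha_s * s / self.tau) * dt
            traj.append(x)
        return np.array(traj)

# Demonstration: a minimum-jerk-like reach from 0 to 1, then reproduction to a new goal.
t = np.linspace(0.0, 1.0, 200)
demo = 10 * t**3 - 15 * t**4 + 6 * t**5
dmp = DMP1D()
dmp.fit(demo, dt=t[1] - t[0])
new_traj = dmp.rollout(x0=0.0, g=2.0)   # expert-like shape, rescaled to the new goal
```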
Neural Networks | Pub Date: 2025-01-24 | DOI: 10.1016/j.neunet.2025.107211 | Volume 185, Article 107211
Zhen Jiang, Na Tang, Jianlong Sun, Yongzhao Zhan

Combining various training and adaptation algorithms for ensemble few-shot classification

Abstract: To mitigate the shortage of labeled data, Few-Shot Classification (FSC) methods train deep neural networks (DNNs) on a base dataset with sufficient labeled data and then adapt them to target tasks using only a few labeled samples. Despite notable progress, a single FSC model remains prone to high variance and low confidence. As a result, ensemble FSC has garnered increasing attention. However, the limited labeled data and the high computational cost of DNNs present significant challenges for ensemble FSC methods. This paper presents a novel ensemble method that generates multiple FSC models by combining various training and adaptation algorithms. Because training phases are reused, the proposed method significantly reduces the learning cost while generating base models with greater diversity. To further minimize reliance on labeled data, we provide each model with pseudo-labeled data selected by the majority vote of the other models. Compared with self-training-style methods, this "one-vs-others" learning strategy effectively reduces pseudo-label noise and confirmation bias. Finally, we conduct extensive experiments on the miniImageNet, tieredImageNet, and CUB datasets. The results demonstrate that our method outperforms other state-of-the-art FSC methods and, in particular, achieves the greatest improvement in the performance of the base models. The source code and related models are available at https://github.com/tn1999tn/Ensemble-FSC/tree/master.
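The "one-vs-others" pseudo-labeling strategy can be sketched as below: each base model receives pseudo-labels only for unlabeled samples on which the other models reach a confident majority vote. The base models are abstracted as callables returning integer labels, and the agreement threshold is an illustrative assumption.

```python
# "One-vs-others" pseudo-labeling: for each model, keep only unlabeled samples
# where the *other* models agree by a confident majority vote.
import numpy as np

def one_vs_others_pseudo_labels(models, unlabeled, min_agree=0.8):
    """models: list of callables mapping an array of samples to integer class labels."""
    preds = np.stack([m(unlabeled) for m in models])          # (n_models, n_samples)
    n_classes = preds.max() + 1
    selections = []
    for i in range(len(models)):
        others = np.delete(preds, i, axis=0)                  # predictions of all other models
        votes = np.apply_along_axis(np.bincount, 0, others, minlength=n_classes)  # (n_classes, n_samples)
        majority = votes.argmax(axis=0)
        agreement = votes.max(axis=0) / others.shape[0]
        keep = agreement >= min_agree                         # confident consensus only
        selections.append((np.where(keep)[0], majority[keep]))  # pseudo-labeled set for model i
    return selections

# Toy example with three disagreeing "models".
rng = np.random.default_rng(0)
fake_models = [lambda x, k=k: (x.sum(axis=1) > k).astype(int) for k in (0.0, 0.1, -0.1)]
sel = one_vs_others_pseudo_labels(fake_models, rng.standard_normal((50, 8)))
print([len(idx) for idx, _ in sel])   # number of pseudo-labeled samples handed to each model
```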
Neural Networks | Pub Date: 2025-01-24 | DOI: 10.1016/j.neunet.2025.107140 | Volume 185, Article 107140
Cerui Dong, Qinying Liu, Zilei Wang, Yixin Zhang, Feng Zhao

Context Sensitive Network for weakly-supervised fine-grained temporal action localization

Abstract: Weakly-supervised fine-grained temporal action localization seeks to identify fine-grained action instances in untrimmed videos using only video-level labels. The primary challenge in this task arises from the subtle distinctions among various fine-grained action categories, which complicate the accurate localization of specific action instances. In this paper, we note that the context information embedded within the videos plays a crucial role in overcoming this challenge. However, we also find that effectively integrating context information across different scales is non-trivial, as not all scales provide equally valuable information for distinguishing fine-grained actions. Based on these observations, we propose a weakly-supervised fine-grained temporal action localization approach termed the Context Sensitive Network, which aims to fully leverage context information. Specifically, we first introduce a multi-scale context extraction module designed to efficiently capture multi-scale temporal contexts. Subsequently, we develop a scale-sensitive context gating module that facilitates interaction among multi-scale contexts and adaptively selects informative contexts based on varying video content. Extensive experiments on two benchmark datasets, FineGym and FineAction, demonstrate that our approach achieves state-of-the-art performance.
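A generic sketch of the two modules described in the abstract: temporal context is extracted at several scales with dilated 1D convolutions, and the scales are fused with content-dependent gating weights. The module name, channel sizes, dilation set, and residual fusion are assumptions, not the paper's implementation.

```python
# Multi-scale temporal context extraction (dilated 1D convolutions) followed by
# content-dependent gating over the scales; shapes and design are illustrative.
import torch
import torch.nn as nn

class MultiScaleContextGating(nn.Module):
    def __init__(self, dim=256, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=3, padding=d, dilation=d) for d in dilations
        )
        # Predict one weight per scale from globally pooled snippet features.
        self.gate = nn.Sequential(nn.Linear(dim, len(dilations)), nn.Softmax(dim=-1))

    def forward(self, x):                       # x: (batch, dim, n_snippets)
        contexts = torch.stack([b(x) for b in self.branches], dim=1)   # (B, S, dim, T)
        weights = self.gate(x.mean(dim=-1))                            # (B, S), content-dependent
        fused = (weights[:, :, None, None] * contexts).sum(dim=1)      # weighted sum over scales
        return x + fused                                               # residual context injection

feat = torch.randn(2, 256, 100)                 # snippet-level video features
out = MultiScaleContextGating()(feat)
print(out.shape)                                # torch.Size([2, 256, 100])
```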