{"title":"Multilevel Reliable Guidance for Unpaired Multiview Clustering.","authors":"Like Xin, Wanqi Yang, Lei Wang, Ming Yang","doi":"10.1109/TNNLS.2025.3586306","DOIUrl":"10.1109/TNNLS.2025.3586306","url":null,"abstract":"<p><p>In this article, we address the challenging problem of unpaired multiview clustering (UMC), which aims to achieve effective joint clustering using unpaired samples observed across multiple views. Traditional incomplete multiview clustering (IMC) methods typically rely on paired samples to capture complementary information between views. However, such strategies become impractical in UMC due to the absence of paired samples. Although some researchers have attempted to address this issue by preserving consistent cluster structures across views, effectively mining such consistency remains challenging when the cluster structures have low confidence. Therefore, we propose a novel method, multilevel reliable guidance for UMC (MRG-UMC), which integrates multilevel clustering and reliable view guidance to learn consistent and confident cluster structures from three perspectives. Specifically, inner-view multilevel clustering exploits high-confidence sample pairs across different levels to reduce the impact of boundary samples, resulting in more confident cluster structures. Synthesized-view alignment leverages a synthesized view to mitigate cross-view discrepancies and promote consistency. Cross-view guidance employs a reliable view guidance strategy to enhance the clustering confidence of poorly clustered views. These three modules are jointly optimized across multiple levels to achieve consistent and confident cluster structures. Furthermore, theoretical analyses verify the effectiveness of MRG-UMC in enhancing clustering confidence. Extensive experimental results show that MRG-UMC outperforms state-of-the-art UMC methods, achieving an average NMI improvement of 12.95% on multiview datasets. The source code is available at https://anonymous.4open.science/r/MRG-UMC-5E20.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":"18968-18982"},"PeriodicalIF":8.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144649401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-Supervised Anomaly Detection Using Restricted Distribution Transformation.","authors":"Feng Xiao, Youqing Wang, S Joe Qin, Jicong Fan","doi":"10.1109/TNNLS.2025.3583320","DOIUrl":"10.1109/TNNLS.2025.3583320","url":null,"abstract":"<p><p>Anomaly detection (AD) is typically regarded as an unsupervised learning task, where the training data either do not contain any anomalous samples or contain only a few unlabeled anomalous samples. In fact, in many real scenarios such as fault diagnosis and disease detection, a small number of anomalous samples labeled by domain experts are often available during the training phase, which makes semi-supervised AD (SAD) more appealing, though the related study is quite limited. Existing semi-supervised AD methods directly add optimization terms for anomalous samples to the optimization objective of unsupervised AD (UAD), where the effect of the limited labeled anomalous data on the optimization process becomes trivial and the data cannot fully contribute to the detection task. To address this shortcoming, in this work, we propose a novel semi-supervised AD method that makes full use of the limited labeled anomalous data to boost detection performance. The proposed method learns a nonlinear transformation that projects normal data into a compact target distribution and simultaneously projects exposed anomalous samples into another target distribution, where the two target distributions do not overlap each other. This goal is difficult to achieve because of the scarcity of anomalous samples. To address this problem, we propose to generate a large number of intermediate samples interpolating between normal and anomalous data and project them into a third target distribution lying between the aforementioned two target distributions. Empirical results on multiple benchmarks with varying domains demonstrate the superiority of our method over existing supervised and semi-supervised AD methods.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":"17966-17977"},"PeriodicalIF":8.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144564746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"All-to-All Connected Oscillator Ising Machines and Their Application as Associative Memory","authors":"Yi Cheng, Zongli Lin","doi":"10.1109/tnnls.2025.3609571","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3609571","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"95 4 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145203299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decentralized Consensus Inference-Based Hierarchical Reinforcement Learning for Multiconstrained UAV Pursuit-Evasion Game.","authors":"Yuming Xiang, Sizhao Li, Rongpeng Li, Zhifeng Zhao, Honggang Zhang","doi":"10.1109/TNNLS.2025.3582909","DOIUrl":"10.1109/TNNLS.2025.3582909","url":null,"abstract":"<p><p>Multiple quadrotor uncrewed aerial vehicle (UAV) systems have garnered widespread research interest and fostered numerous interesting applications, especially in multiconstrained pursuit-evasion games (MC-PEGs). The cooperative evasion and formation coverage (CEFC) task, where the UAV swarm aims to maximize formation coverage across multiple target zones while collaboratively evading predators, is one of the most challenging problems in MC-PEGs, especially under communication-limited constraints. This multifaceted problem, which intertwines responses to obstacles, adversaries, target zones, and formation dynamics, introduces significant high-dimensional complexity in finding a solution. In this article, we propose a novel two-level framework [i.e., consensus inference-based hierarchical reinforcement learning (CI-HRL)], which delegates target localization to a high-level policy, while adopting a low-level policy to manage obstacle avoidance, navigation, and formation. Specifically, in the high-level policy, we develop a novel multiagent reinforcement learning (RL) module, consensus-oriented multiagent communication (ConsMAC), to enable agents to perceive global information and establish consensus from local states by effectively aggregating neighbor messages. Meanwhile, we leverage an alternative training-based MAPPO (AT-M) and policy distillation to accomplish the low-level control. The experimental results, including high-fidelity software-in-the-loop (SITL) simulations, validate that CI-HRL provides a superior solution with enhanced collaborative evasion and task completion capabilities for the swarm.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":"18229-18243"},"PeriodicalIF":8.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144663899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing the Output of Long Short-Term Memory Cell for High-Frequency Forecasting in Financial Markets","authors":"Adamantios Ntakaris, Moncef Gabbouj, Juho Kanniainen","doi":"10.1109/tnnls.2025.3611887","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3611887","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"78 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145203296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Spatiotemporal Prototype Learning for Spiking Neural Networks.","authors":"Wuque Cai, Hongze Sun, Qianqian Liao, Jiayi He, Duo Chen, Dezhong Yao, Daqing Guo","doi":"10.1109/TNNLS.2025.3583747","DOIUrl":"10.1109/TNNLS.2025.3583747","url":null,"abstract":"<p><p>Spiking neural networks (SNNs) leverage their spike-driven nature to achieve high energy efficiency, positioning them as a promising alternative to traditional artificial neural networks (ANNs). The spiking decoder, a crucial component for output, significantly affects the performance of SNNs. However, current rate coding schemes for SNN decoding often lack robustness and do not have a training framework suitable for robust learning, while alternatives to rate coding generally produce worse overall performance. To address these challenges, we propose spatiotemporal prototype (STP) learning for SNNs, which uses multiple learnable binarized prototypes for distance-based decoding. In addition, we introduce a cotraining framework that jointly optimizes prototypes and model parameters, enabling mutual adaptation of the two components. STP learning clusters feature centers through supervised learning to ensure effective aggregation around the prototypes, while maintaining enough spacing between prototypes to handle noise and interference. This dual capability results in superior stability and robustness. On eight benchmark datasets with diverse challenges, the STP-SNN model achieves performance comparable or superior to state-of-the-art methods. Notably, STP learning demonstrates exceptional robustness and stability in multitask experiments. Overall, these findings reveal that STP learning is an effective means of improving the performance and robustness of SNNs.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":"18995-19009"},"PeriodicalIF":8.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144564745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Protecting Deep Learning Model Copyrights With Adversarial Example-Free Reuse Detection.","authors":"Xiaokun Luan, Xiyue Zhang, Jingyi Wang, Meng Sun","doi":"10.1109/TNNLS.2025.3578664","DOIUrl":"10.1109/TNNLS.2025.3578664","url":null,"abstract":"<p><p>Model reuse techniques can reduce the resource requirements for training high-performance deep neural networks (DNNs) by leveraging existing models. However, unauthorized reuse and replication of DNNs can lead to copyright infringement and economic loss to the model owner. This underscores the need to analyze the reuse relation between DNNs and develop copyright protection techniques to safeguard intellectual property rights. Existing DNN copyright protection approaches suffer from several inherent limitations hindering their effectiveness in practical scenarios. For instance, existing white-box fingerprinting approaches cannot address the common heterogeneous reuse case where the model architecture is changed, and DNN fingerprinting approaches heavily rely on generating adversarial examples with good transferability, which is known to be challenging in the black-box setting. To bridge the gap, we propose a neuron functionality (NF) analysis-based reuse detector (NFARD), which only requires normal test samples to detect reuse relations by measuring the models' differences on a newly proposed model characterization, i.e., NF. A set of NF-based distance metrics is designed to make NFARD applicable to both white-box and black-box settings. Moreover, we devise a linear transformation method to handle heterogeneous reuse cases by constructing the optimal projection matrix for dimension consistency, significantly extending the application scope of NFARD. To the best of our knowledge, this is the first adversarial example-free method that exploits NF for DNN copyright protection. As a side contribution, we constructed a reuse detection benchmark named Reuse Zoo that covers various practical reuse techniques and popular datasets. Extensive evaluations on this comprehensive benchmark show that NFARD achieves F1 scores of 0.984 and 1.0 for detecting reuse relationships in black-box and white-box settings, respectively, while generating test suites 2-99 times faster than previous methods.</p>","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"PP ","pages":"19187-19199"},"PeriodicalIF":8.9,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144583810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}