{"title":"Chest X-Ray Visual Saliency Modeling: Eye-Tracking Dataset and Saliency Prediction Model.","authors":"Jianxun Lou,Huasheng Wang,Xinbo Wu,John Cho Hui Ng,Richard White,Kaveri A Thakoor,Padraig Corcoran,Ying Chen,Hantao Liu","doi":"10.1109/tnnls.2025.3564292","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3564292","url":null,"abstract":"Radiologists' eye movements during medical image interpretation reflect the perceptual-cognitive processes behind their diagnostic decisions. These eye movement data can be modeled to represent clinically relevant regions in a medical image and potentially integrated into an artificial intelligence (AI) system for automatic diagnosis in medical imaging. In this article, we first conduct a large-scale eye-tracking study involving 13 radiologists interpreting 191 chest X-ray (CXR) images, establishing a best-of-its-kind CXR visual saliency benchmark. We then perform analyses to quantify the reliability and clinical relevance of saliency maps (SMs) generated for CXR images. We develop a CXR image saliency prediction model (CXRSalNet) that leverages radiologists' gaze information to optimize the use of unlabeled CXR images, enhancing training and mitigating data scarcity. We also demonstrate the application of our CXR saliency model in enhancing the performance of AI-powered diagnostic imaging systems.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"20 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143926508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning to Coordinate With Different Teammates via Team Probing.","authors":"Hao Ding,Chengxing Jia,Zongzhang Zhang,Cong Guan,Feng Chen,Lei Yuan,Yang Yu","doi":"10.1109/tnnls.2025.3563773","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3563773","url":null,"abstract":"Coordinating with different teammates is essential in cooperative multiagent systems (MASs). However, most multiagent reinforcement learning (MARL) methods assume fixed team compositions, which leads to agents overfitting to their training partners and failing to cooperate well with different teams during the deployment phase. A common way to mitigate this problem is to anticipate teammate behaviors and adapt policies accordingly during cooperation. However, these methods use the same policy both to collect information for modeling teammates and to maximize cooperation performance. We argue that these two goals may conflict and reduce the effectiveness of both. In this work, we propose coordinating with different teammates via team probing (CDP), a novel approach that rapidly adapts to different teams by disentangling the probing and adaptation phases. Specifically, we first generate a diverse population of teams as training partners with a novel value-based diversity objective. Then, we train a probing module to probe and reveal the coordination pattern of each team with policy-dynamics reconstruction and obtain a representation space of the population. Finally, we train a generalist meta-policy consisting of several expert policies, with module selection based on the clustering of the learned representation space. We empirically show that CDP surpasses existing policy adaptation methods in various complex multiagent scenarios with both seen and unseen teammates.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"102 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-Correlation-Guided Anchor Learning for Scalable Incomplete Multi-View Clustering.","authors":"Wen-Jue He,Zheng Zhang,Xiaofeng Zhu","doi":"10.1109/tnnls.2025.3562297","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3562297","url":null,"abstract":"Efficiently learning informative yet compact representations from heterogeneous data remains challenging in incomplete multi-view clustering (IMC). The prevalent resource-efficient IMC models excel at constructing small-size anchors for fast similarity learning and data partition. However, existing anchor-based methods still suffer from shared deficiencies: 1) unstable and less informative anchor generation caused by random anchor selection or clueless learning and 2) imbalanced coherence and versatility of the learned anchors across different views. To mitigate these issues, we propose a novel dual-correlation-guided anchor learning (DCGA) method for scalable IMC, which learns informative anchor spaces that simultaneously incorporate both intra-view and inter-view correlations. Specifically, the intra-view anchor space is constructed and stabilized by compressing the view-specific data under the guidance of the conceived anchors-as-a-bottleneck (A3B) strategy, supported by a rigorous theoretical analysis. Importantly, we, for the first time, build an unsupervised anchor learning scheme for incomplete multi-view data guided by the bottleneck of information flow under the well-defined information bottleneck (IB) principle. As such, our model can simultaneously eliminate information redundancy and preserve the versatile knowledge derived from each view. Moreover, to endow the learned anchors with coherence, an informative anchor constraint (IAC) is imposed to align the anchor spaces across different views. Extensive experiments on seven datasets against 11 state-of-the-art IMC methods validate the effectiveness and efficiency of our method. Code is available at https://github.com/DarrenZZhang/TNNLS25-DCGA.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"138 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey and Evaluation of Adversarial Attacks in Object Detection.","authors":"Khoi Nguyen Tiet Nguyen,Wenyu Zhang,Kangkang Lu,Yu-Huan Wu,Xingjian Zheng,Hui Li Tan,Liangli Zhen","doi":"10.1109/tnnls.2025.3561225","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3561225","url":null,"abstract":"Deep learning models achieve remarkable accuracy in computer vision tasks yet remain vulnerable to adversarial examples: carefully crafted perturbations to input images that can deceive these models into making confident but incorrect predictions. This vulnerability poses significant risks in high-stakes applications such as autonomous vehicles, security surveillance, and safety-critical inspection systems. While the existing literature extensively covers adversarial attacks in image classification, comprehensive analyses of such attacks on object detection systems remain limited. This article presents a novel taxonomic framework for categorizing adversarial attacks specific to object detection architectures, synthesizes existing robustness metrics, and provides a comprehensive empirical evaluation of state-of-the-art attack methodologies on popular object detection models, including both traditional detectors and modern detectors with vision-language pretraining. Through rigorous analysis of open-source attack implementations and their effectiveness across diverse detection architectures, we derive key insights into attack characteristics. Furthermore, we delineate critical research gaps and emerging challenges to guide future investigations in securing object detection systems against adversarial threats. Our findings establish a foundation for developing more robust detection models while highlighting the urgent need for standardized evaluation protocols in this rapidly evolving domain.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"39 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143915053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}