IEEE Transactions on Emerging Topics in Computational Intelligence: Latest Articles

Co-Occurrence Relationship Driven Hierarchical Attention Network for Brain CT Report Generation
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-06-18. DOI: 10.1109/TETCI.2024.3413002
Xiaodan Zhang;Shixin Dou;Junzhong Ji;Ying Liu;Zheng Wang
Abstract: Automatic generation of medical reports for Brain Computed Tomography (CT) imaging is crucial for helping radiologists make more accurate clinical diagnoses efficiently. Brain CT imaging typically contains rich pathological information, including common pathologies that often co-occur in one report and rare pathologies that appear in medical reports with lower frequency. However, current research ignores the potential co-occurrence between common pathologies and pays insufficient attention to rare pathologies, severely restricting the accuracy and diversity of the generated medical reports. In this paper, we propose a Co-occurrence Relationship Driven Hierarchical Attention Network (CRHAN) to improve Brain CT report generation by mining common and rare pathologies in Brain CT imaging. Specifically, the proposed CRHAN follows a general encoder-decoder framework with two novel attention modules. In the encoder, a co-occurrence relationship guided semantic attention (CRSA) module is proposed to extract the critical semantic features by embedding the co-occurrence relationship of common pathologies into semantic attention. In the decoder, a common-rare topic driven visual attention (CRVA) module is proposed to fuse the common and rare semantic features as sentence topic vectors, and then guide the visual attention to capture important lesion features for medical report generation. Experiments on the Brain CT dataset demonstrate the effectiveness of the proposed method. (Vol. 8, No. 5, pp. 3643-3653)
Citations: 0
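To make the CRSA idea above concrete, here is a minimal, hypothetical sketch (not the authors' implementation; the module name, shapes, and the use of a row-normalised co-occurrence prior are assumptions) of semantic attention over pathology-tag embeddings whose logits are biased by tag co-occurrence statistics:

```python
# Hypothetical sketch: co-occurrence-biased semantic attention over pathology tags.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoOccurrenceSemanticAttention(nn.Module):
    def __init__(self, num_tags: int, dim: int, cooccur: torch.Tensor):
        super().__init__()
        self.tag_embed = nn.Embedding(num_tags, dim)   # one embedding per pathology tag
        self.query = nn.Linear(dim, dim)               # query computed from the image/context feature
        # Row-normalised co-occurrence statistics act as a fixed attention prior.
        self.register_buffer("prior", cooccur / cooccur.sum(dim=1, keepdim=True).clamp(min=1e-6))

    def forward(self, context: torch.Tensor, detected: torch.Tensor):
        """context: (B, dim) image/context feature; detected: (B,) index of the most salient tag."""
        tags = self.tag_embed.weight                              # (T, dim)
        logits = self.query(context) @ tags.t()                   # (B, T) content-based scores
        logits = logits + torch.log(self.prior[detected] + 1e-6)  # bias toward co-occurring tags
        attn = F.softmax(logits, dim=-1)
        return attn @ tags                                        # (B, dim) semantic feature

# toy usage
cooc = torch.rand(20, 20)
module = CoOccurrenceSemanticAttention(num_tags=20, dim=64, cooccur=cooc)
feat = module(torch.randn(2, 64), torch.tensor([3, 7]))
print(feat.shape)  # torch.Size([2, 64])
```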
Sparse Graph Tensor Learning for Multi-View Spectral Clustering
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-06-12. DOI: 10.1109/TETCI.2024.3409724
Man-Sheng Chen;Zhi-Yuan Li;Jia-Qi Lin;Chang-Dong Wang;Dong Huang
Abstract: Multi-view spectral clustering has achieved impressive performance by learning multiple robust and meaningful similarity graphs for clustering. Generally, the existing literature often constructs multiple similarity graphs by a certain similarity measure (e.g., the Euclidean distance), which lacks the desired ability to learn sparse and reliable connections that carry critical information in graph learning while preserving the low-rank structure. Regarding these challenges, a novel Sparse Graph Tensor Learning for Multi-view Spectral Clustering (SGTL) method is designed in this paper, where multiple similarity graphs are seamlessly coupled with the cluster indicators and constrained with a low-rank graph tensor. Specifically, a novel graph learning paradigm is designed by establishing an explicit theoretical connection between the similarity matrices and the cluster indicator matrices, in order that the constructed similarity graphs enjoy the desired block diagonal and sparse property for learning a small portion of reliable links. Then, we stack multiple similarity matrices into a low-rank graph tensor to better preserve the low-rank structure of the reliable links in graph learning, where the key knowledge conveyed by singular values from different views is explicitly considered. Extensive experiments on several benchmark datasets demonstrate the superiority of SGTL. (Vol. 8, No. 5, pp. 3534-3543)
Citations: 0
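As a rough illustration of the overall pipeline, and not the SGTL algorithm itself, the sketch below builds a sparse k-NN similarity graph per view, stacks the graphs into a 3-way tensor, applies a crude low-rank surrogate (soft-thresholding the singular values of the unfolded tensor), and spectrally clusters the averaged graph; the coupling with cluster indicators in a single objective is omitted:

```python
# Hypothetical sketch: sparse per-view graphs, a low-rank tensor surrogate, then spectral clustering.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

def sparse_knn_graph(X, k=10):
    W = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
    W = np.exp(-W**2 / (np.mean(W[W > 0])**2 + 1e-12)) * (W > 0)  # Gaussian weights on sparse support
    return (W + W.T) / 2                                          # symmetrise

def low_rank_tensor(graphs, tau=1.0):
    T = np.stack(graphs, axis=0)                  # (views, n, n)
    V, n, _ = T.shape
    U, s, Vt = np.linalg.svd(T.reshape(V, -1), full_matrices=False)
    s = np.maximum(s - tau, 0.0)                  # soft-threshold singular values
    return (U @ np.diag(s) @ Vt).reshape(V, n, n)

rng = np.random.default_rng(0)
views = [rng.normal(size=(100, 20)) for _ in range(3)]   # three toy views of 100 samples
graphs = [sparse_knn_graph(X) for X in views]
fused = low_rank_tensor(graphs).mean(axis=0)
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(np.abs(fused))
print(labels[:10])
```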
A Bi-Search Evolutionary Algorithm for High-Dimensional Bi-Objective Feature Selection
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-04-30. DOI: 10.1109/TETCI.2024.3393388
Hang Xu;Bing Xue;Mengjie Zhang
Abstract: High dimensionality often challenges the efficiency and accuracy of a classifier, while evolutionary feature selection is an effective method for data preprocessing and dimensionality reduction. However, with the exponential expansion of the search space as the number of features increases, traditional evolutionary feature selection methods can still find it difficult to search for optimal or near-optimal solutions in the large-scale search space. To overcome this issue, in this paper, we propose a bi-search evolutionary algorithm (termed BSEA) for tackling high-dimensional feature selection in classification, with two contradictory optimization objectives (i.e., minimizing both selected features and classification errors). In BSEA, a bi-search evolutionary mode combining the forward and backward searching tasks is adopted to enhance the search ability in the large-scale search space; in addition, an adaptive feature analysis mechanism is also designed to explore promising features for efficiently reproducing more diverse offspring. In the experiments, BSEA is comprehensively compared with 9 recent or classic state-of-the-art MOEAs on a series of 11 high-dimensional datasets with no fewer than 2000 features. The empirical results suggest that BSEA generally performs the best on most of the datasets in terms of all performance metrics, along with high computational efficiency, while each of its essential components takes positive effect on boosting the search ability and together they make the best contribution. (Vol. 8, No. 5, pp. 3489-3502)
Citations: 0
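The following toy sketch (not BSEA; the operators, dataset, and parameters are illustrative assumptions) shows the flavor of bi-objective feature selection with forward and backward search operators, keeping the non-dominated set of (number of selected features, validation error):

```python
# Hypothetical sketch: bi-objective feature selection with forward/backward bit flips and a Pareto archive.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=50, n_informative=8, random_state=0)

def evaluate(mask):
    if mask.sum() == 0:
        return int(mask.sum()), 1.0
    err = 1 - cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return int(mask.sum()), float(err)

def dominated(a, b):  # does b dominate a?
    return all(bi <= ai for ai, bi in zip(a, b)) and any(bi < ai for ai, bi in zip(a, b))

pop = [rng.random(50) < 0.2 for _ in range(20)]          # boolean selection masks
for _ in range(30):
    parent = pop[rng.integers(len(pop))].copy()
    if rng.random() < 0.5 and parent.sum() < 50:         # forward: add one unselected feature
        parent[rng.choice(np.flatnonzero(~parent))] = True
    elif parent.sum() > 0:                               # backward: drop one selected feature
        parent[rng.choice(np.flatnonzero(parent))] = False
    pop.append(parent)
    objs = [evaluate(m) for m in pop]
    pop = [m for m, o in zip(pop, objs)                  # keep only the non-dominated masks
           if not any(dominated(o, q) for q in objs)]

print(sorted(evaluate(m) for m in pop))                  # (num features, error) trade-off front
```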
Multi-modal Authentication Model for Occluded Faces in a Challenging Environment
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-04-30. DOI: 10.1109/TETCI.2024.3390058
Dahye Jeong;Eunbeen Choi;Hyeongjin Ahn;Ester Martinez-Martin;Eunil Park;Angel P. del Pobil
Abstract: Authentication systems are crucial in the digital era, providing reliable protection of personal information. Most authentication systems rely on a single modality, such as the face, fingerprints, or password sensors. When an authentication system is based on a single modality, its performance degrades whenever the information of that modality is occluded; in particular, face identification does not work well when masks are worn, as in the COVID-19 situation. In this paper, we focus on the multi-modality approach to improve the performance of occluded face identification. Multi-modal authentication systems are crucial in building a robust authentication system because they can compensate for the missing modality in a uni-modal authentication system. In this light, we propose DemoID, a multi-modal authentication system based on face and voice for human identification in a challenging environment. Moreover, we build a demographic module to efficiently handle the demographic information of individual faces. The experimental results showed an accuracy of 99% when using all modalities and an overall improvement of 5.41%-10.77% relative to uni-modal face models. Furthermore, our model demonstrated the highest performance compared to existing multi-modal models and also showed promising results on the real-world dataset constructed for this study. (Vol. 8, No. 5, pp. 3463-3473)
Citations: 0
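A minimal late-fusion sketch of the face-plus-voice idea, assuming precomputed embeddings (the dimensions, module names, and the omission of the demographic branch are assumptions, not DemoID itself):

```python
# Hypothetical sketch: concatenate face and voice embeddings and classify identity jointly.
import torch
import torch.nn as nn

class LateFusionAuthenticator(nn.Module):
    def __init__(self, face_dim=512, voice_dim=192, n_identities=100):
        super().__init__()
        self.face_proj = nn.Sequential(nn.Linear(face_dim, 256), nn.ReLU())
        self.voice_proj = nn.Sequential(nn.Linear(voice_dim, 256), nn.ReLU())
        self.head = nn.Linear(512, n_identities)      # joint identity classifier

    def forward(self, face_emb, voice_emb):
        fused = torch.cat([self.face_proj(face_emb), self.voice_proj(voice_emb)], dim=-1)
        return self.head(fused)                        # identity logits

model = LateFusionAuthenticator()
logits = model(torch.randn(4, 512), torch.randn(4, 192))
print(logits.shape)  # torch.Size([4, 100])
```

The point of late fusion here is that when the face embedding is degraded (e.g., by a mask), the voice branch still contributes discriminative evidence to the shared head.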
PV-SSD: A Multi-Modal Point Cloud 3D Object Detector Based on Projection Features and Voxel Features
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-04-29. DOI: 10.1109/TETCI.2024.3389710
Yongxin Shao;Aihong Tan;Zhetao Sun;Enhui Zheng;Tianhong Yan;Peng Liao
Abstract: 3D object detection using LiDAR is critical for autonomous driving. However, the point cloud data in autonomous driving scenarios is sparse. Converting the sparse point cloud into regular data representations (voxels or projection) often leads to information loss due to downsampling or excessive compression of feature information. This kind of information loss adversely affects detection accuracy, especially for objects with fewer reflective points, like cyclists. This paper proposes a multi-modal point cloud 3D object detector based on projection features and voxel features, which consists of two branches. One, called the voxel branch, is used to extract fine-grained local features. The other, called the projection branch, is used to extract projection features from a bird's-eye view and focus on the correlation of local features in the voxel branch. By feeding voxel features into the projection branch, we can compensate for the information loss in the projection branch while focusing on the correlation between neighboring local features in the voxel features. To achieve comprehensive feature fusion of voxel features and projection features, we propose a multi-modal feature fusion module (MSSFA). To further mitigate the loss of crucial features caused by downsampling, we propose a voxel feature extraction method (VR-VFE), which samples feature points based on their importance for the detection task. To validate the effectiveness of our method, we tested it on the KITTI dataset and the ONCE dataset. The experimental results show that our method achieves significant improvement in the detection accuracy of objects with fewer reflection points, like cyclists. (Vol. 8, No. 5, pp. 3436-3449)
Citations: 0
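A schematic sketch of the two-branch idea, not PV-SSD itself: per-voxel features are scattered onto a bird's-eye-view grid and fused with projection-branch features on the same grid (grid size, channel counts, and module names are assumptions):

```python
# Hypothetical sketch: scatter voxel features onto a BEV canvas and fuse with projection features.
import torch
import torch.nn as nn

class TwoBranchBEVFusion(nn.Module):
    def __init__(self, voxel_dim=64, bev_channels=64, grid=(128, 128)):
        super().__init__()
        self.grid = grid
        self.voxel_mlp = nn.Sequential(nn.Linear(voxel_dim, bev_channels), nn.ReLU())
        self.proj_conv = nn.Conv2d(bev_channels, bev_channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * bev_channels, bev_channels, 1)

    def forward(self, voxel_feats, voxel_xy, bev_feats):
        """voxel_feats: (N, voxel_dim); voxel_xy: (N, 2) integer BEV cells; bev_feats: (1, C, H, W)."""
        H, W = self.grid
        canvas = voxel_feats.new_zeros(self.voxel_mlp[0].out_features, H * W)
        flat = voxel_xy[:, 0] * W + voxel_xy[:, 1]
        canvas[:, flat] = self.voxel_mlp(voxel_feats).t()   # scatter voxel features to the BEV grid
        voxel_bev = canvas.view(1, -1, H, W)
        return self.fuse(torch.cat([voxel_bev, self.proj_conv(bev_feats)], dim=1))

m = TwoBranchBEVFusion()
out = m(torch.randn(500, 64), torch.randint(0, 128, (500, 2)), torch.randn(1, 64, 128, 128))
print(out.shape)  # torch.Size([1, 64, 128, 128])
```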
Generalized Population-Based Training for Hyperparameter Optimization in Reinforcement Learning
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-04-26. DOI: 10.1109/TETCI.2024.3389777
Hui Bai;Ran Cheng
Abstract: Hyperparameter optimization plays a key role in the machine learning domain. Its significance is especially pronounced in reinforcement learning (RL), where agents continuously interact with and adapt to their environments, requiring dynamic adjustments in their learning trajectories. To cater to this dynamicity, Population-Based Training (PBT) was introduced, leveraging the collective intelligence of a population of agents learning simultaneously. However, PBT tends to favor high-performing agents, potentially neglecting the explorative potential of agents on the brink of significant advancements. To mitigate the limitations of PBT, we present Generalized Population-Based Training (GPBT), a refined framework designed for enhanced granularity and flexibility in hyperparameter adaptation. Complementing GPBT, we further introduce Pairwise Learning (PL). Instead of merely focusing on elite agents, PL employs a comprehensive pairwise strategy to identify performance differentials and provide holistic guidance to underperforming agents. By integrating the capabilities of GPBT and PL, our approach significantly improves upon traditional PBT in terms of adaptability and computational efficiency. Rigorous empirical evaluations across a range of RL benchmarks confirm that our approach consistently outperforms not only the conventional PBT but also its Bayesian-optimized variant. (Vol. 8, No. 5, pp. 3450-3462)
Citations: 0
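A toy sketch of the pairwise exploit-and-perturb idea behind PBT-style training (not GPBT/PL as published; the objective, pairing rule, and perturbation range are stand-ins):

```python
# Hypothetical sketch: each underperforming worker is paired with a better one and
# adopts a perturbed copy of its hyperparameter, in the spirit of population-based training.
import random

random.seed(0)

def score(lr):                    # stand-in for the RL return of a worker trained with lr
    return -(lr - 0.003) ** 2     # peak at lr = 0.003

population = [{"lr": random.uniform(1e-4, 1e-1)} for _ in range(8)]

for step in range(20):
    ranked = sorted(population, key=lambda w: score(w["lr"]), reverse=True)
    for worker in ranked[len(ranked) // 2:]:                     # every underperforming worker...
        partner = random.choice(ranked[: len(ranked) // 2])      # ...is paired with a better one
        worker["lr"] = partner["lr"] * random.uniform(0.8, 1.2)  # exploit, then perturb

best = max(population, key=lambda w: score(w["lr"]))
print(round(best["lr"], 5))       # drifts toward the optimum of the toy objective
```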
A Novel Multi-Source Information Fusion Method Based on Dependency Interval
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-04-25. DOI: 10.1109/TETCI.2024.3370032
Weihua Xu;Yufei Lin;Na Wang
Abstract: With the rapid development of the Big Data era, it is necessary to extract the required information from large amounts of data. Single-source information systems are often affected by extreme values and outliers, so multi-source information systems are more common and their data more reliable; information fusion is a common method for dealing with multi-source information systems. Compared with single-valued data, interval-valued data can describe the uncertainty and random variation of data more effectively. This article proposes a novel interval-valued multi-source information fusion method: a multi-source information fusion method based on dependency intervals. The method constructs a dependency function that takes into account the interval length and the number of data points in the interval, so as to make the obtained data more concentrated and eliminate the influence of outliers and extreme values. Because the boundary of the dependency interval is not fixed, a median point within the interval is selected as a bridge to simplify the acquisition of the dependency interval. Furthermore, a multi-source information system fusion algorithm based on dependency intervals is proposed, and experiments are conducted on 9 UCI datasets to compare the classification accuracy and quality of the proposed algorithm with traditional information fusion methods. The experimental results show that this method is more effective than the maximum interval method, quartile interval method, and mean interval method, and the validity of the data has been proven through hypothesis testing. (Vol. 8, No. 4, pp. 3180-3194)
Citations: 0
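The paper's exact dependency function is not reproduced here; the sketch below illustrates the general idea under an assumed score, fusing one attribute observed by several sources into an interval anchored at the median that trades coverage of source readings against interval length:

```python
# Hypothetical sketch: median-anchored interval fusion with a coverage-minus-length score.
import numpy as np

def fuse_dependency_interval(values, alpha=0.5):
    """Return [lo, hi] maximizing (#points inside) - alpha * length, anchored at the median."""
    values = np.sort(np.asarray(values, dtype=float))
    med = np.median(values)
    best, best_score = (med, med), -np.inf
    for lo in values[values <= med]:          # candidate lower bounds at or below the median
        for hi in values[values >= med]:      # candidate upper bounds at or above the median
            inside = np.sum((values >= lo) & (values <= hi))
            s = inside - alpha * (hi - lo)
            if s > best_score:
                best, best_score = (float(lo), float(hi)), s
    return best

readings = [4.9, 5.1, 5.0, 5.3, 9.8, 5.2, 4.8]   # one outlier at 9.8
print(fuse_dependency_interval(readings))        # the outlier is excluded from the fused interval
```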
Low-Contrast Medical Image Segmentation via Transformer and Boundary Perception
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-04-19. DOI: 10.1109/TETCI.2024.3353624
Yinglin Zhang;Ruiling Xi;Wei Wang;Heng Li;Lingxi Hu;Huiyan Lin;Dave Towey;Ruibin Bai;Huazhu Fu;Risa Higashita;Jiang Liu
Abstract: Low-contrast medical image segmentation is a challenging task that requires full use of local details and global context. However, existing convolutional neural networks (CNNs) cannot fully exploit global information due to limited receptive fields and local weight sharing. On the other hand, the transformer effectively establishes long-range dependencies but lacks desirable properties for modeling local details. This paper proposes a Transformer-embedded Boundary perception Network (TBNet) that combines the advantages of transformer and convolution for low-contrast medical image segmentation. Firstly, the transformer-embedded module uses convolution at the low-level layer to model local details and uses the Enhanced TRansformer (ETR) to capture long-range dependencies at the high-level layer. This module can extract robust features with semantic contexts to infer the possible target location and basic structure in low-contrast conditions. Secondly, we utilize the decoupled body-edge branch to promote general feature learning and perceive precise boundary locations. The ETR establishes long-range dependencies across the whole feature map range and is enhanced by introducing local information. We implement it in a parallel mode, i.e., the group of multi-head self-attention captures the global relationship, and the group of convolution retains local details. We compare TBNet with other state-of-the-art (SOTA) methods on the cornea endothelial cell, ciliary body, and kidney segmentation tasks. TBNet improves segmentation performance, proving its effectiveness and robustness. (Vol. 8, No. 3, pp. 2297-2309)
Citations: 0
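A minimal sketch of a parallel attention-plus-convolution block in the spirit of the ETR described above (not TBNet's implementation; the channel split, head count, and residual wiring are assumptions):

```python
# Hypothetical sketch: split channels into a self-attention group (global context)
# and a convolution group (local detail), run them in parallel, then re-merge.
import torch
import torch.nn as nn

class ParallelAttentionConvBlock(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        half = channels // 2
        self.attn = nn.MultiheadAttention(half, heads, batch_first=True)
        self.conv = nn.Sequential(nn.Conv2d(half, half, 3, padding=1),
                                  nn.BatchNorm2d(half), nn.ReLU())
        self.merge = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                        # x: (B, C, H, W)
        B, C, H, W = x.shape
        xa, xc = x.chunk(2, dim=1)               # split channels between the two groups
        tokens = xa.flatten(2).transpose(1, 2)   # (B, H*W, C/2) token sequence for attention
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(B, C // 2, H, W)
        local_feat = self.conv(xc)
        return self.merge(torch.cat([global_feat, local_feat], dim=1)) + x   # residual connection

block = ParallelAttentionConvBlock()
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```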
Compensation Atmospheric Scattering Model and Two-Branch Network for Single Image Dehazing
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-04-18. DOI: 10.1109/TETCI.2024.3386838
Xudong Wang;Xi'ai Chen;Weihong Ren;Zhi Han;Huijie Fan;Yandong Tang;Lianqing Liu
Abstract: Most existing dehazing networks rely on synthetic hazy-clear image pairs for training, and thus fail to work well in real-world scenes. In this paper, we deduce a reformulated atmospheric scattering model for a hazy image and propose a novel lightweight two-branch dehazing network. In the model, we use a Transformation Map to represent the dehazing transformation and a Compensation Map to represent variable illumination compensation. Based on this model, we design a Two-Branch Network (TBN) to jointly estimate the Transformation Map and Compensation Map. Our TBN is designed with a shared Feature Extraction Module and two Adaptive Weight Modules. The Feature Extraction Module is used to extract shared features from hazy images. The two Adaptive Weight Modules generate two groups of adaptively weighted features for the Transformation Map and Compensation Map, respectively. This design allows for a targeted conversion of features to the Transformation Map and Compensation Map. To further improve the dehazing performance in the real world, we propose a semi-supervised learning strategy for TBN. Specifically, after supervised pre-training on synthetic image pairs, we propose a Self-Enhancement method to generate pseudo-labels, and then further train our TBN with the pseudo-labels in a semi-supervised way. Extensive experiments demonstrate that the model-based TBN outperforms the state-of-the-art methods on various real-world datasets. (Vol. 8, No. 4, pp. 2880-2896)
Citations: 0
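For reference, the standard atmospheric scattering model used throughout the dehazing literature is shown first; the second equation is only one plausible reading of a reformulation with a Transformation Map and a Compensation Map, stated as an assumption rather than the paper's actual derivation:

```latex
% Standard atmospheric scattering model: hazy observation I, scene radiance J,
% transmission t, global atmospheric light A.
\[
  I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr)
\]
% Assumed reading of the reformulation (not the paper's exact model): recover the clear
% image from a pixel-wise Transformation Map T(x) and a Compensation Map C(x).
\[
  \hat{J}(x) = T(x)\,I(x) + C(x)
\]
```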
Graph Contrastive Learning for Tracking Dynamic Communities in Temporal Networks
IF 5.3 | CAS Zone 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-04-17. DOI: 10.1109/TETCI.2024.3386844
Yun Ai;Xianghua Xie;Xiaoke Ma
Abstract: Temporal networks are ubiquitous because complex systems in nature and society are evolving, and tracking dynamic communities is critical for revealing the mechanism of systems. Moreover, current algorithms utilize the temporal smoothness framework to balance clustering accuracy at the current time and clustering drift at historical times, and are criticized for failing to characterize the temporality of networks and determine its importance. To overcome these problems, we propose a novel algorithm by joining Non-negative matrix factorization and Contrastive learning for Dynamic Community detection (jNCDC). Specifically, jNCDC learns the features of vertices by projecting successive snapshots into a shared subspace to learn the low-dimensional representation of vertices with matrix factorization. Subsequently, it constructs an evolution graph to explicitly measure relations of vertices by representing vertices at the current time with features at historical times, paving a way to characterize the dynamics of networks at the vertex level. Finally, graph contrastive learning utilizes the roles of vertices to select positive and negative samples to further improve the quality of features. These procedures are seamlessly integrated into an overall objective function, and optimization rules are deduced. To the best of our knowledge, jNCDC is the first graph contrastive learning approach for dynamic community detection, providing an alternative to the current temporal smoothness framework. Experimental results demonstrate that jNCDC is superior to the state-of-the-art approaches in terms of accuracy. (Vol. 8, No. 5, pp. 3422-3435)
Citations: 0
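A simplified sketch of the shared-subspace factorization step (not jNCDC; the warm-start scheme is an assumption, and the evolution graph and contrastive component are omitted): each snapshot's adjacency matrix is factorized with NMF initialized from the previous snapshot's factors, and communities are read from the argmax of the factor rows:

```python
# Hypothetical sketch: NMF on successive snapshots with warm-started factors to track communities.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n, k, T = 60, 3, 4
labels = np.repeat(np.arange(k), n // k)
snapshots = []
for t in range(T):                                # toy temporal networks with planted communities
    probs = np.where(labels[:, None] == labels[None, :], 0.3, 0.03)
    A = (rng.random((n, n)) < probs).astype(float)
    snapshots.append(np.maximum(A, A.T))          # symmetrise the adjacency matrix

W = H = None
for t, A in enumerate(snapshots):
    if W is None:
        model = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(A); H = model.components_
    else:                                         # warm start from the previous snapshot's factors
        model = NMF(n_components=k, init="custom", max_iter=500)
        W = model.fit_transform(A, W=W, H=H); H = model.components_
    print(f"t={t}", np.bincount(W.argmax(axis=1), minlength=k))   # community sizes over time
```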