Latest Articles — IEEE Transactions on Emerging Topics in Computational Intelligence

Adversarial Examples Detection With Bayesian Neural Network
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-18 · DOI: 10.1109/TETCI.2024.3372383
Yao Li; Tongyi Tang; Cho-Jui Hsieh; Thomas C. M. Lee
Abstract: In this paper, we propose a new framework for detecting adversarial examples, motivated by the observations that random components can improve the smoothness of predictors and make it easier to simulate the output distribution of a deep neural network. Building on these observations, we propose a novel Bayesian adversarial example detector, BATer for short, to improve the performance of adversarial example detection. Specifically, we study the distributional difference in hidden-layer output between natural and adversarial examples, and propose to use the randomness of a Bayesian neural network to simulate the hidden-layer output distribution, leveraging its dispersion to detect adversarial examples. The advantage of a Bayesian neural network is that its output is stochastic, whereas a deep neural network without random components has no such characteristic. Empirical results on several benchmark datasets against popular attacks show that the proposed BATer outperforms state-of-the-art detectors in adversarial example detection. (Vol. 8, No. 5, pp. 3654-3664)
Citations: 0
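The detection idea in the BATer abstract — run a model with random components several times on the same input and flag inputs whose outputs spread out too much — can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's method: `stochastic_forward`, the toy predictors, and the threshold value are all invented for the example.

```python
import numpy as np

def dispersion_score(stochastic_forward, x, n_samples=20):
    """Run a stochastic predictor several times on the same input and
    measure how much its outputs spread out across the repeated passes."""
    outputs = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    # Dispersion: mean per-dimension standard deviation across the samples.
    return outputs.std(axis=0).mean()

def detect_adversarial(stochastic_forward, x, threshold, n_samples=20):
    """Flag an input as adversarial when the dispersion of the stochastic
    outputs exceeds a threshold calibrated on natural examples."""
    return dispersion_score(stochastic_forward, x, n_samples) > threshold
```

In practice the threshold would be calibrated on held-out natural examples; here two toy predictors with different noise levels stand in for the network's behaviour on natural versus adversarial inputs.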
Local Dimming for Video Based on an Improved Surrogate Model Assisted Evolutionary Algorithm
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-18 · DOI: 10.1109/TETCI.2024.3370033
Yahui Cao; Tao Zhang; Xin Zhao; Yuzheng Yan; Shuxin Cui
Abstract: Compared with traditional liquid crystal display (LCD) systems, local dimming systems achieve higher display quality with lower power consumption. By treating local dimming of a static image as an optimization problem and solving it with an evolutionary algorithm, an optimal backlight matrix can be obtained. However, evolutionary local dimming is not applicable to video sequences because the computation is very time-consuming. This paper proposes a local dimming algorithm based on an improved surrogate model assisted evolutionary algorithm (ISAEA-LD), in which a surrogate model reduces the cost of individual fitness evaluation. First, a surrogate model based on a convolutional neural network is adopted to improve the accuracy of fitness evaluation. Second, the algorithm introduces a backlight update strategy based on the content correlation between adjacent frames of the video sequence, together with a model transfer strategy based on transfer learning, to improve efficiency. Experimental results show that the proposed ISAEA-LD algorithm obtains better visual quality and higher algorithm efficiency. (Vol. 8, No. 4, pp. 3166-3179)
Citations: 0
Reinforcement Learning and Transformer for Fast Magnetic Resonance Imaging Scan
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-18 · DOI: 10.1109/TETCI.2024.3358180
Yiming Liu; Yanwei Pang; Ruiqi Jin; Yonghong Hou; Xuelong Li
Abstract: A major drawback of Magnetic Resonance Imaging (MRI) is the long scan time needed to acquire complete K-space matrices using phase encoding. This paper proposes a transformer-based deep Reinforcement Learning (RL) framework, called TITLE, that reduces the scan time by sequentially selecting partial phases in real time, so that a slice can be accurately reconstructed from the resulting slice-specific incomplete K-space matrix. As a deep learning based slice-specific method, TITLE has the following characteristics and merits: (1) It is real-time, because the decision of which phase to encode next can be made within the period between the time an echo signal is obtained and the time the next 180° RF pulse is activated. (2) It exploits the powerful feature-representation ability of the transformer, a self-attention based neural network, to predict phases within a deep reinforcement learning framework. (3) Both the historically selected phases (the phase-indicator vector) and the corresponding undersampled image of the slice being scanned are used for feature extraction by the transformer. Experimental results on the fastMRI dataset demonstrate that the proposed method is 150 times faster than the state-of-the-art reinforcement learning based method and outperforms state-of-the-art deep learning based methods in reconstruction accuracy. The source code is available. (Vol. 8, No. 3, pp. 2310-2323)
Citations: 0
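The sequential selection loop that the TITLE abstract describes — keep a phase-indicator vector and, at each step, let a policy pick the next phase to encode — can be sketched as a greedy loop. A minimal sketch only: `score_fn` is an invented stand-in for the transformer policy, which in the paper conditions on both the indicator vector and the undersampled image.

```python
import numpy as np

def select_phases(score_fn, n_phases, budget):
    """Greedily choose which K-space phases to acquire: at each step, ask
    the policy to score the phases given the acquisition history, mask out
    phases already acquired, and take the highest-scoring one."""
    acquired = np.zeros(n_phases, dtype=bool)   # phase-indicator vector
    order = []
    for _ in range(budget):
        scores = np.asarray(score_fn(acquired), dtype=float).copy()
        scores[acquired] = -np.inf              # never re-acquire a phase
        nxt = int(np.argmax(scores))
        acquired[nxt] = True
        order.append(nxt)
    return order
```

With a fixed scoring function this degenerates to picking the top-`budget` phases; the point of the RL formulation is that the scores change as the acquisition history grows.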
A Survey of Deep Learning Video Super-Resolution
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-17 · DOI: 10.1109/TETCI.2024.3398015
Arbind Agrahari Baniya; Tsz-Kwan Lee; Peter W. Eklund; Sunil Aryal
Abstract: Video super-resolution (VSR) is a prominent research topic in low-level computer vision, where deep learning technologies have played a significant role. The rapid progress of deep learning and its applications in VSR has led to a proliferation of tools and techniques in the literature. However, the usage of these methods is often not adequately explained, and design decisions are primarily driven by quantitative improvements. Given VSR's potential influence across multiple domains, it is imperative to conduct a comprehensive analysis of the elements and deep learning methodologies employed in VSR research. Such a methodical analysis facilitates the informed development of models tailored to specific application needs. In this paper, we present an overarching overview of deep learning-based video super-resolution models, investigating each component and discussing its implications. Furthermore, we provide a synopsis of the key components and technologies employed by state-of-the-art and earlier VSR models. By elucidating the underlying methodologies and categorising them systematically, we identify trends, requirements, and challenges in the domain. As a first-of-its-kind survey of deep learning-based VSR models, this work also establishes a multi-level taxonomy to guide current and future VSR research, enhancing the maturation and interpretation of VSR practices for various practical applications. (Vol. 8, No. 4, pp. 2655-2676)
Citations: 0
Intensive Class Imbalance Learning in Drifting Data Streams
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-17 · DOI: 10.1109/TETCI.2024.3399657
Muhammad Usman; Huanhuan Chen
Abstract: Streaming data analysis faces two primary challenges: concept drift and class imbalance. The co-occurrence of virtual drift and class imbalance is a common real-world scenario requiring dedicated solutions. This paper presents Intensive Class Imbalance Learning (ICIL), a novel supervised classification method for virtually drifting data streams. ICIL detects virtual drift through a feature-sensitive change detection method, and calibrates the data over time to resolve within-class imbalance, class overlap, and small-sample-size problems. A weighted voting ensemble is proposed for enhanced performance, in which the weights are continually updated based on the recent performance of the member classifiers. Experiments are conducted on 14 synthetic and real-world data streams to demonstrate the efficacy of the proposed method. A comparative analysis against 11 state-of-the-art methods shows that the proposed method outperforms the others on 9 of the 14 data streams under the G-mean metric. (Vol. 8, No. 5, pp. 3503-3517)
Citations: 0
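The weighted voting ensemble sketched in the ICIL abstract — member weights continually updated from recent performance — can be illustrated with a sliding accuracy window per member. This is a generic sketch of that mechanism, not the paper's implementation: the window size and the unit initial weight are assumptions.

```python
from collections import deque

import numpy as np

class WeightedVotingEnsemble:
    """Weighted majority vote in which each member's weight is its
    accuracy over a sliding window of recently labelled examples."""

    def __init__(self, members, window=100):
        self.members = members
        self.recent = [deque(maxlen=window) for _ in members]

    def predict(self, x):
        votes = {}
        for member, hist in zip(self.members, self.recent):
            weight = np.mean(hist) if hist else 1.0  # full weight until evidence arrives
            label = member.predict(x)
            votes[label] = votes.get(label, 0.0) + weight
        return max(votes, key=votes.get)

    def update(self, x, y):
        # Record whether each member got the newly revealed label right;
        # old outcomes fall out of the window, so weights track drift.
        for member, hist in zip(self.members, self.recent):
            hist.append(1.0 if member.predict(x) == y else 0.0)
```

The bounded `deque` is what makes the weights drift-aware: a member that was accurate long ago but fails on recent data loses influence as its window refills.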
Graph-Enabled Reinforcement Learning for Time Series Forecasting With Adaptive Intelligence
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-15 · DOI: 10.1109/TETCI.2024.3398024
Thanveer Shaik; Xiaohui Tao; Haoran Xie; Lin Li; Jianming Yong; Yuefeng Li
Abstract: Reinforcement learning (RL) is renowned for its proficiency in modeling sequential tasks and adaptively learning latent data patterns. Deep learning models have been extensively explored and adopted in regression and classification tasks, but they have limitations, such as the assumption of equally spaced and ordered data, and the inability to incorporate graph structure into time-series prediction. Graph Neural Networks (GNNs) can overcome these challenges by capturing the temporal dependencies in time-series data effectively. In this study, we propose a novel approach for predicting time-series data using a GNN augmented with Reinforcement Learning (GraphRL) for monitoring. GNNs explicitly integrate the graph structure of the data into the model, enabling them to naturally capture temporal dependencies. This facilitates more accurate predictions in complex temporal structures, as encountered in the healthcare, traffic, and weather forecasting domains. We further enhance the GraphRL model's performance through fine-tuning with Bayesian optimization. The proposed framework surpasses baseline models in time-series forecasting and monitoring. The contributions of this study include introducing a novel GraphRL framework for time-series prediction and demonstrating the efficacy of GNNs compared to traditional deep learning models such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs). Overall, this study underscores the potential of GraphRL in yielding accurate and efficient predictions within dynamic RL environments. (Vol. 8, No. 4, pp. 2908-2918)
Citations: 0
Unsupervised Low-Light Image Enhancement via Luminance Mask and Luminance-Independent Representation Decoupling
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-14 · DOI: 10.1109/TETCI.2024.3369858
Bo Peng; Jia Zhang; Zhe Zhang; Qingming Huang; Liqun Chen; Jianjun Lei
Abstract: Enhancing low-light images in an unsupervised manner has become a popular topic, owing to the challenge of obtaining paired real-world low/normal-light images. Driven by the massive availability of normal-light images, learning a low-light image enhancement network from unpaired data is more practical and valuable. This paper presents an unsupervised low-light image enhancement method (DeULLE) based on luminance mask and luminance-independent representation decoupling over unpaired data. Specifically, by estimating a luminance mask from a low-light image, a luminance mask-guided low-light image generation (LMLIG) module darkens a reference normal-light image. In addition, a luminance-independent representation-based low-light image enhancement (LRLIE) module enhances a low-light image by learning a luminance-independent representation and incorporating the luminance cue of the reference normal-light image. With the LMLIG and LRLIE modules, a bidirectional mapping-based cycle supervision (BMCS) is constructed to facilitate the decoupling of the luminance mask and the luminance-independent representation, which further promotes unsupervised low-light enhancement learning with unpaired data. Comprehensive experiments on various challenging benchmark datasets demonstrate that the proposed DeULLE exhibits superior performance. (Vol. 8, No. 4, pp. 3029-3039)
Citations: 0
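The first half of the DeULLE pipeline — estimate a luminance mask from an image and use it to darken a reference normal-light image — can be illustrated with a classic hand-crafted stand-in for the learned mask: the max-over-RGB-channels illumination prior. A sketch only; the paper learns the mask with a network rather than computing it this way.

```python
import numpy as np

def luminance_mask(img):
    """Crude per-pixel luminance estimate: the maximum over the RGB
    channels, a common hand-crafted illumination prior."""
    return img.max(axis=-1, keepdims=True)

def darken_reference(normal_img, mask):
    """Scale a normal-light image by a low-light luminance mask,
    mimicking the LMLIG module's synthetic low-light generation."""
    return normal_img * mask
```

Pairing each darkened reference with its source image is what gives the cycle supervision something to train against, despite the absence of real paired data.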
News-MESI: A Dataset for Multimodal News Excerpt Segmentation and Identification
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-14 · DOI: 10.1109/TETCI.2024.3369866
Qing Song; Zilong Jia; Wenhe Jia; Wenyi Zhao; Mengjie Hu; Chun Liu
Abstract: In complex long-form news videos, the fundamental component is the news excerpt, which consists of many studio and interview screens. Spotting and identifying the correct news excerpt in such a complex long-form video is a challenging task. Beyond the inherent temporal semantics and the complex interactions of generic events, the varied richness of semantics within the text and visual modalities further complicates matters. In this paper, we delve into the nuanced realm of video temporal understanding, examining it from a multimodal, multitask perspective. We present a fine-grained challenge which we call Multimodal News Excerpt Segmentation and Identification: segmenting news videos into individual frame-level excerpts while accurately assigning elaborate tags to each segment by exploiting multimodal semantics. As no multimodal fine-grained temporal segmentation dataset currently exists, we establish a new benchmark called News-MESI to support this research. News-MESI comprises over 150 high-quality news videos sourced from digital media, totalling approximately 150 hours and encompassing more than 2000 news excerpts. Annotated with frame-level excerpt boundaries and an elaborate categorization hierarchy, this collection offers a valuable opportunity for multimodal semantic understanding of these distinctive videos. We also present a novel algorithm employing coarse-to-fine multimodal fusion and hierarchical classification to address this problem. Extensive experiments on the benchmark show how news content evolves temporally, and further analysis shows that multimodal solutions are significantly superior to single-modality ones. (Vol. 8, No. 4, pp. 3001-3016)
Citations: 0
PATReId: Pose Apprise Transformer Network for Vehicle Re-Identification
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-14 · DOI: 10.1109/TETCI.2024.3372391
Rishi Kishore; Nazia Aslam; Maheshkumar H. Kolekar
Abstract: Vehicle re-identification is the task of identifying a vehicle across multiple non-overlapping cameras. Relying on licence plates for re-identification is limited because a licence plate may not be visible owing to viewpoint differences. Moreover, high intra-class variability (due to shape and appearance differences across viewing angles) and small inter-class variability (due to the similarity in appearance and shape of vehicles from different manufacturers) make the task more challenging. To address these issues, we propose PATReId, a novel Pose Apprise Transformer network for vehicle re-identification. The network works two-fold: (1) it generates vehicle poses using heatmaps, keypoints, and segments, which eliminates viewpoint dependencies; and (2) it jointly classifies vehicle attributes (colour and type) while performing re-identification, using multitask learning through a two-stream neural network integrated with the pose. A vision transformer and a ResNet50 network constitute the two-stream neural network. Extensive experiments on the VeRi-776, VehicleID, and VeRi-Wild datasets demonstrate the accuracy and efficacy of the proposed PATReId framework. (Vol. 8, No. 5, pp. 3691-3702)
Citations: 0
Diversity-Induced Bipartite Graph Fusion for Multiview Graph Clustering
IF 5.3 · CAS Tier 3 · Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence · Pub Date: 2024-03-14 · DOI: 10.1109/TETCI.2024.3369316
Weiqing Yan; Xinying Zhao; Guanghui Yue; Jinlai Ren; Jindong Xu; Zhaowei Liu; Chang Tang
Abstract: Multi-view graph clustering divides similar objects into the same category by learning the relationships among samples. To improve clustering efficiency, instead of learning a graph over all samples, bipartite graph learning achieves efficient clustering by building a graph between the data points and a few anchors, and has therefore become an important research topic. However, most bipartite graph-based multi-view clustering approaches focus on learning information that is consistent across views and ignore the diversity information of each view, which hinders clustering precision. To address this issue, diversity-induced bipartite graph fusion for multiview graph clustering (DiBGF-MGC) is proposed to simultaneously consider the consistency and diversity of multiple views. In our method, the diversity constraint is realized by minimizing the diversity of each view and minimizing the inconsistency of diversity across views: the former ensures the sparsity of the diversity information, and the latter ensures that the diversity information is private to each view. Specifically, we separate each bipartite graph into a consistent part and a divergent part in order to remove the diversity while preserving the consistency among multiple views. The consistent parts are used to learn the consensus bipartite graph, which yields a clear clustering structure because the diversity has been eliminated from the original bipartite graphs. The diversity part is formulated via an intra-view constraint and an inter-view inconsistency constraint, which better distinguish it from the original bipartite graph. Consistency learning and diversity learning are improved iteratively by leveraging each other's results. Experiments show that the proposed DiBGF-MGC method obtains better clustering results than state-of-the-art methods on several benchmark datasets. (Vol. 8, No. 3, pp. 2592-2601)
Citations: 0
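The decomposition at the heart of the DiBGF-MGC abstract — separate each view's bipartite graph into a part shared across views and a private residual — can be illustrated with the simplest possible split: the elementwise minimum across views as the consistent part. This is an invented stand-in for the paper's learned, constrained decomposition, shown only to make the consistency/diversity split concrete.

```python
import numpy as np

def split_consistency_diversity(views):
    """Split per-view bipartite graphs (samples x anchors) into a
    consensus part shared by all views and per-view diversity residuals,
    so that views[v] == consensus + diversity[v] for every view v."""
    consensus = np.minimum.reduce(views)
    diversity = [B - consensus for B in views]
    return consensus, diversity
```

Clustering would then operate on `consensus` alone, with the residuals regularized separately so that each view's private structure does not blur the shared clustering structure.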