International Journal of Machine Learning and Cybernetics: Latest Articles

Bgman: Boundary-Prior-Guided Multi-scale Aggregation Network for skin lesion segmentation
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-26 | DOI: 10.1007/s13042-024-02284-3
Zhenyang Huang, Yixing Zhao, Jinjiang Li, Yepeng Liu
{"title":"Bgman: Boundary-Prior-Guided Multi-scale Aggregation Network for skin lesion segmentation","authors":"Zhenyang Huang, Yixing Zhao, Jinjiang Li, Yepeng Liu","doi":"10.1007/s13042-024-02284-3","DOIUrl":"https://doi.org/10.1007/s13042-024-02284-3","url":null,"abstract":"<p>Skin lesion segmentation is a fundamental task in the field of medical image analysis. Deep learning approaches have become essential tools for segmenting medical images, as their accuracy in effectively analyzing abnormalities plays a critical role in determining the ultimate diagnostic results. Because of the inherent difficulties presented by medical images, including variations in shapes and sizes, along with the indistinct boundaries between lesions and the surrounding backgrounds, certain conventional algorithms face difficulties in fulfilling the growing requirements for elevated accuracy in processing medical images. To enhance the performance in capturing edge features and fine details of lesion processing, this paper presents the Boundary-Prior-Guided Multi-Scale Aggregation Network for skin lesion segmentation (BGMAN). The proposed BGMAN follows a basic Encoder–Decoder structure, wherein the encoder network employs prevalent CNN-based architectures to capture semantic information. We propose the Transformer Bridge Block (TBB) and employ it to enhance multi-scale features captured by the encoder. The TBB strengthens the intensity of weak feature information, establishing long-distance relationships between feature information. In order to augment BGMAN’s capability to identify boundaries, a boundary-guided decoder is designed, utilizing the Boundary Aware Block (BAB) and Cross Scale Fusion Block (CSFB) to guide the decoding learning process. BAB can acquire features embedded with explicit boundary information under the supervision of a boundary mask, while CSFB aggregates boundary features from different scales using learnable embeddings. The proposed method has been validated on the ISIC2016, ISIC2017, and <span>(PH^2)</span> datasets. It outperforms current mainstream networks with the following results: F1 92.99 and IoU 87.71 on ISIC2016, F1 86.42 and IoU 78.34 on ISIC2017, and F1 94.83 and IoU 90.26 on <span>(PH^2)</span>.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141784936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
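The abstract describes boundary supervision through a boundary mask but gives no implementation details. The sketch below shows one common way such supervision is wired up in PyTorch; the morphological derivation of the boundary mask, the auxiliary boundary prediction it assumes, and the loss weighting are illustrative assumptions, not the authors' published code.

```python
import torch
import torch.nn.functional as F

def boundary_mask(seg_mask: torch.Tensor, kernel: int = 3) -> torch.Tensor:
    """Derive a thin boundary map from a binary lesion mask (assumed approach:
    morphological gradient = dilation minus erosion), seg_mask shape (N,1,H,W)."""
    pad = kernel // 2
    dilated = F.max_pool2d(seg_mask, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-seg_mask, kernel, stride=1, padding=pad)
    return (dilated - eroded).clamp(0, 1)

def boundary_guided_loss(seg_logits, boundary_logits, seg_target, w_boundary=0.5):
    """Joint loss: main segmentation BCE plus an auxiliary boundary BCE, one
    plausible reading of 'boundary-mask supervision' in the abstract."""
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    b_target = boundary_mask(seg_target)
    b_loss = F.binary_cross_entropy_with_logits(boundary_logits, b_target)
    return seg_loss + w_boundary * b_loss
```

In this reading, the decoder exposes an auxiliary boundary prediction alongside the segmentation logits, and both are supervised jointly.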
Quasi-framelets: robust graph neural networks via adaptive framelet convolution
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-26 | DOI: 10.1007/s13042-024-02286-1
Mengxi Yang, Dai Shi, Xuebin Zheng, Jie Yin, Junbin Gao
{"title":"Quasi-framelets: robust graph neural networks via adaptive framelet convolution","authors":"Mengxi Yang, Dai Shi, Xuebin Zheng, Jie Yin, Junbin Gao","doi":"10.1007/s13042-024-02286-1","DOIUrl":"https://doi.org/10.1007/s13042-024-02286-1","url":null,"abstract":"<p>This paper aims to provide a novel design of a multiscale framelet convolution for spectral graph neural networks (GNNs). While current spectral methods excel in various graph learning tasks, they often lack the flexibility to adapt to noisy, incomplete, or perturbed graph signals, making them fragile in such conditions. Our newly proposed framelet convolution addresses these limitations by decomposing graph data into low-pass and high-pass spectra through a finely-tuned multiscale approach. Our approach directly designs filtering functions within the spectral domain, allowing for precise control over the spectral components. The proposed design excels in filtering out unwanted spectral information and significantly reduces the adverse effects of noisy graph signals. Our approach not only enhances the robustness of GNNs but also preserves crucial graph features and structures. Through extensive experiments on diverse, real-world graph datasets, we demonstrate that our framelet convolution achieves superior performance in node classification tasks. It exhibits remarkable resilience to noisy data and adversarial attacks, highlighting its potential as a robust solution for real-world graph applications. This advancement opens new avenues for more adaptive and reliable spectral GNN architectures.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141770180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
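The abstract states that filtering functions are designed directly in the spectral domain and that the graph signal is split into low-pass and high-pass components. The NumPy toy below illustrates that idea on a small dense graph; the specific filter functions and the full eigendecomposition are assumptions for illustration, and the paper's framelet construction is more elaborate and scalable.

```python
import numpy as np

def spectral_framelet_filter(adj: np.ndarray, x: np.ndarray,
                             g_low=lambda lam: np.cos(np.pi * lam / 4) ** 2,
                             g_high=lambda lam: np.sin(np.pi * lam / 4) ** 2):
    """Toy decomposition of a graph signal x (n, d) into low-pass and high-pass
    components via filter functions defined on the Laplacian spectrum."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt  # normalized Laplacian
    lam, u = np.linalg.eigh(lap)
    x_hat = u.T @ x                          # graph Fourier transform
    low = u @ (g_low(lam)[:, None] * x_hat)  # low-pass reconstruction
    high = u @ (g_high(lam)[:, None] * x_hat)  # high-pass reconstruction
    return low, high
```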
Visible-infrared person re-identification with complementary feature fusion and identity consistency learning
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-24 | DOI: 10.1007/s13042-024-02282-5
Yiming Wang, Xiaolong Chen, Yi Chai, Kaixiong Xu, Yutao Jiang, Bowen Liu
{"title":"Visible-infrared person re-identification with complementary feature fusion and identity consistency learning","authors":"Yiming Wang, Xiaolong Chen, Yi Chai, Kaixiong Xu, Yutao Jiang, Bowen Liu","doi":"10.1007/s13042-024-02282-5","DOIUrl":"https://doi.org/10.1007/s13042-024-02282-5","url":null,"abstract":"<p>The dual-mode 24/7 monitoring systems continuously obtain visible and infrared images in a real scene. However, differences such as color and texture between these cross-modality images pose challenges for visible-infrared person re-identification (ReID). Currently, the general method is modality-shared feature learning or modal-specific information compensation based on style transfer, but the modality differences often result in the inevitable loss of valuable feature information in the training process. To address this issue, A complementary feature fusion and identity consistency learning (<b>CFF-ICL</b>) method is proposed. On the one hand, the multiple feature fusion mechanism based on cross attention is used to promote the features extracted by the two groups of networks in the same modality image to show a more obvious complementary relationship to improve the comprehensiveness of feature information. On the other hand, the designed collaborative adversarial mechanism between dual discriminators and feature extraction network is designed to remove the modality differences, and then construct the identity consistency between visible and infrared images. Experimental results by testing on SYSU-MM01 and RegDB datasets verify the method’s effectiveness and superiority.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141770210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
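The abstract mentions a multiple feature fusion mechanism based on cross attention between two groups of networks. A generic cross-attention fusion of two feature streams is sketched below; the module name, dimensions, and the concatenation-plus-projection fusion are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Generic cross-attention fusion of two feature streams: queries come from
    one stream, keys and values from the other, so each stream is enriched with
    what the other captured."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (batch, tokens, dim)
        a_enriched, _ = self.attn_ab(feat_a, feat_b, feat_b)  # A attends to B
        b_enriched, _ = self.attn_ba(feat_b, feat_a, feat_a)  # B attends to A
        return self.proj(torch.cat([a_enriched, b_enriched], dim=-1))
```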
Text semantic matching algorithm based on the introduction of external knowledge under contrastive learning
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-24 | DOI: 10.1007/s13042-024-02285-2
Jie Hu, Yinglian Zhu, Lishan Wu, Qilei Luo, Fei Teng, Tianrui Li
{"title":"Text semantic matching algorithm based on the introduction of external knowledge under contrastive learning","authors":"Jie Hu, Yinglian Zhu, Lishan Wu, Qilei Luo, Fei Teng, Tianrui Li","doi":"10.1007/s13042-024-02285-2","DOIUrl":"https://doi.org/10.1007/s13042-024-02285-2","url":null,"abstract":"<p>Measuring the semantic similarity between two texts is a fundamental aspect of text semantic matching. Each word in the texts holds a weighted meaning, and it is essential for the model to effectively capture the most crucial knowledge. However, current text matching methods based on BERT have limitations in acquiring professional domain knowledge. BERT requires extensive domain-specific training data to perform well in specialized fields such as medicine, where obtaining labeled data is challenging. In addition, current text matching models that inject domain knowledge often rely on creating new training tasks to fine-tune the model, which is time-consuming. Although existing works have directly injected domain knowledge into BERT through similarity matrices, they struggle to handle the challenge of small sample sizes in professional fields. Contrastive learning trains a representation learning model by generating instances that exhibit either similarity or dissimilarity, so that a more general representation can be learned with a small number of samples. In this paper, we propose to directly integrate the word similarity matrix into BERT’s multi-head attention mechanism under a contrastive learning framework to align similar words during training. Furthermore, in the context of Chinese medical applications, we propose an entity MASK approach to enhance the understanding of medical terms by pre-trained models. The proposed method helps BERT acquire domain knowledge to better learn text representations in professional fields. Extensive experimental results have shown that the algorithm significantly improves the performance of the text matching model, especially when training data is limited.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141770181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
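The abstract proposes integrating a word similarity matrix into BERT's multi-head attention. One plausible realization is to add the similarity matrix as a bias on the attention logits, as sketched below; the injection point, the scaling factor alpha, and the tensor shapes are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def similarity_biased_attention(q, k, v, sim_matrix, alpha=1.0):
    """Scaled dot-product attention with an additive word-similarity bias.
    q, k, v: (batch, heads, seq, d); sim_matrix: (batch, seq, seq)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5        # (batch, heads, seq, seq)
    scores = scores + alpha * sim_matrix.unsqueeze(1)  # broadcast the bias over heads
    return F.softmax(scores, dim=-1) @ v
```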
Condensed-gradient boosting
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-23 | DOI: 10.1007/s13042-024-02279-0
Seyedsaman Emami, Gonzalo Martínez-Muñoz
{"title":"Condensed-gradient boosting","authors":"Seyedsaman Emami, Gonzalo Martínez-Muñoz","doi":"10.1007/s13042-024-02279-0","DOIUrl":"https://doi.org/10.1007/s13042-024-02279-0","url":null,"abstract":"<p>This paper presents a computationally efficient variant of Gradient Boosting (GB) for multi-class classification and multi-output regression tasks. Standard GB uses a 1-vs-all strategy for classification tasks with more than two classes. This strategy entails that one tree per class and iteration has to be trained. In this work, we propose the use of multi-output regressors as base models to handle the multi-class problem as a single task. In addition, the proposed modification allows the model to learn multi-output regression problems. An extensive comparison with other multi-output based Gradient Boosting methods is carried out in terms of generalization and computational efficiency. The proposed method showed the best trade-off between generalization ability and training and prediction speeds. Furthermore, an analysis of space and time complexity was undertaken.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141784937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
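The key idea in the abstract is to replace the one-tree-per-class scheme of standard gradient boosting with a single multi-output regressor per iteration. The sketch below illustrates this with scikit-learn's natively multi-output DecisionTreeRegressor fit to the negative gradient of the softmax cross-entropy; hyperparameters and the exact update rule are illustrative, not the paper's.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fit_condensed_gb(X, y, n_classes, n_iter=50, lr=0.1, max_depth=3):
    """One multi-output tree per iteration instead of one tree per class."""
    Y = np.eye(n_classes)[y]                 # one-hot targets, shape (n, K)
    F_pred = np.zeros((X.shape[0], n_classes))
    trees = []
    for _ in range(n_iter):
        residual = Y - softmax(F_pred)       # negative gradient of cross-entropy
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual)                # multi-output regression in a single tree
        F_pred += lr * tree.predict(X)
        trees.append(tree)
    return trees

def predict_condensed_gb(trees, X, n_classes, lr=0.1):
    F_pred = np.zeros((X.shape[0], n_classes))
    for tree in trees:
        F_pred += lr * tree.predict(X)
    return F_pred.argmax(axis=1)
```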
A dual stream attention network for facial expression recognition in the wild
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-23 | DOI: 10.1007/s13042-024-02287-0
Hui Tang, Yichang Li, Zhong Jin
{"title":"A dual stream attention network for facial expression recognition in the wild","authors":"Hui Tang, Yichang Li, Zhong Jin","doi":"10.1007/s13042-024-02287-0","DOIUrl":"https://doi.org/10.1007/s13042-024-02287-0","url":null,"abstract":"<p>Facial Expression Recognition (FER) is crucial for human-computer interaction and has achieved satisfactory results on lab-collected datasets. However, occlusion and head pose variation in the real world make FER extremely challenging due to facial information deficiency. This paper proposes a novel Dual Stream Attention Network (DSAN) for occlusion and head pose robust FER. Specifically, DSAN consists of a Global Feature Element-based Attention Network (GFE-AN) and a Multi-Feature Fusion-based Attention Network (MFF-AN). A sparse attention block and a feature recalibration loss designed in GFE-AN selectively emphasize feature elements meaningful for facial expression and suppress those unrelated to facial expression. And a lightweight local feature attention block is customized in MFF-AN to extract rich semantic information from different representation sub-spaces. In addition, DSAN takes into account computation overhead minimization when designing model architecture. Extensive experiments on public benchmarks demonstrate that the proposed DSAN outperforms the state-of-the-art methods with 89.70% on RAF-DB, 89.93% on FERPlus, 65.77% on AffectNet-7, 62.13% on AffectNet-8. Moreover, the parameter size of DSAN is only 11.33M, which is lightweight compared to most of the recent in-the-wild FER algorithms.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141770211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
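The abstract describes selectively emphasizing feature elements via a sparse attention block and a feature recalibration loss, without implementation details. The sketch below shows a generic element-wise gating over a global feature vector plus an L1 sparsity penalty as one common way to realize such behavior; the module structure and the penalty are assumptions, not the paper's actual blocks or loss.

```python
import torch
import torch.nn as nn

class FeatureElementAttention(nn.Module):
    """Generic element-wise attention: a small MLP predicts per-element weights
    in [0, 1] that re-scale a global feature vector of size dim."""
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor):
        w = self.gate(feat)      # (batch, dim) weights in [0, 1]
        return feat * w, w       # re-weighted feature and the attention weights

def sparsity_penalty(weights: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    """Illustrative L1 penalty encouraging sparse attention weights."""
    return lam * weights.abs().mean()
```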
Adversarial attack method based on enhanced spatial momentum
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-22 | DOI: 10.1007/s13042-024-02290-5
Jun Hu, Guanghao Wei, Shuyin Xia, Guoyin Wang
{"title":"Adversarial attack method based on enhanced spatial momentum","authors":"Jun Hu, Guanghao Wei, Shuyin Xia, Guoyin Wang","doi":"10.1007/s13042-024-02290-5","DOIUrl":"https://doi.org/10.1007/s13042-024-02290-5","url":null,"abstract":"<p>Deep neural networks have been widely applied in many fields, but it is found that they are vulnerable to adversarial examples, which can mislead the DNN-based models with imperceptible perturbations. Many adversarial attack methods can achieve great success rates when attacking white-box models, but they usually exhibit poor transferability when attacking black-box models. Momentum iterative gradient-based methods can effectively improve the transferability of adversarial examples. Still, the momentum update mechanism of existing methods may lead to a problem of unstable gradient update direction and result in poor local optima. In this paper, we propose an enhanced spatial momentum iterative gradient-based adversarial attack method. Specifically, we introduce the spatial domain momentum accumulation mechanism. Instead of only accumulating the gradients of data points on the optimization path in the gradient update process, we additionally accumulate the average gradients of multiple sampling points within the neighborhood of data points. This mechanism fully utilizes the contextual gradient information of different regions within the image to smooth the accumulated gradients and find a more stable gradient update direction, thus escaping from poor local optima. Empirical results on the standard ImageNet dataset demonstrate that our method can significantly improve the attack success rate of momentum iterative gradient-based methods and shows excellent attack performance not only against normally trained models but also against adversarial training and defense models, outperforming the state-of-the-art methods.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141738346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
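The abstract describes accumulating, at each iteration, the average gradient of several points sampled in the neighborhood of the current adversarial example, on top of the usual momentum term. A minimal PyTorch sketch of that loop is given below; the sampling distribution, neighborhood radius, and step-size schedule are assumptions for illustration, not the paper's exact settings.

```python
import torch

def spatial_momentum_attack(model, loss_fn, x, y, eps=8/255, steps=10,
                            mu=1.0, n_samples=5, radius=8/255):
    """Momentum iterative attack whose per-step gradient is averaged over
    random samples drawn around the current adversarial point."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        grad_avg = torch.zeros_like(x)
        for _ in range(n_samples):
            noise = torch.empty_like(x).uniform_(-radius, radius)
            x_near = (x_adv + noise).detach().requires_grad_(True)
            loss = loss_fn(model(x_near), y)
            grad_avg += torch.autograd.grad(loss, x_near)[0]
        grad_avg /= n_samples
        g = mu * g + grad_avg / (grad_avg.abs().mean() + 1e-12)  # momentum on normalized gradient
        x_adv = x_adv + alpha * g.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1).detach()
    return x_adv
```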
Deep feature dendrite with weak mapping for small-sample hyperspectral image classification
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-22 | DOI: 10.1007/s13042-024-02272-7
Gang Liu, Jiaying Xu, Shanshan Zhao, Rui Zhang, Xiaoyuan Li, Shanshan Guo, Yajing Pang
{"title":"Deep feature dendrite with weak mapping for small-sample hyperspectral image classification","authors":"Gang Liu, Jiaying Xu, Shanshan Zhao, Rui Zhang, Xiaoyuan Li, Shanshan Guo, Yajing Pang","doi":"10.1007/s13042-024-02272-7","DOIUrl":"https://doi.org/10.1007/s13042-024-02272-7","url":null,"abstract":"<p>Hyperspectral image (HSI) classification faces the challenges of large and complex data and costly training labels. Existing methods for small-sample HSI classification may not achieve good generalization because they pursue powerful feature extraction and nonlinear mapping abilities. We argue that small samples need deep feature extraction but weak nonlinear mapping to achieve generalization. Based on this, we propose a Deep Feature Dendrite (DFD) method, which consists of two parts: a deep feature extraction part that uses a convolution-tokenization-attention module to effectively extract spatial-spectral features, and a controllable mapping part that uses a residual dendrite network to perform weak mapping and enhance generalization ability. We conducted experiments on four standard datasets, and the results show that our method has higher classification accuracy than other existing methods. Significance: This paper pioneers and verifies weak mapping and generalization for HSI classification (new ideas). DFD code is available at https://github.com/liugang1234567/DFD</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141738345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SPERM: sequential pairwise embedding recommendation with MI-FGSM
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-19 | DOI: 10.1007/s13042-024-02288-z
Agyemang Paul, Yuxuan Wan, Boyu Chen, Zhefu Wu
{"title":"SPERM: sequential pairwise embedding recommendation with MI-FGSM","authors":"Agyemang Paul, Yuxuan Wan, Boyu Chen, Zhefu Wu","doi":"10.1007/s13042-024-02288-z","DOIUrl":"https://doi.org/10.1007/s13042-024-02288-z","url":null,"abstract":"<p>Visual recommendation systems have shown remarkable performance by leveraging consumer feedback and the visual attributes of products. However, recent concerns have arisen regarding the decline in recommendation quality when these systems are subjected to attacks that compromise the model parameters. While the fast gradient sign method (FGSM) and iterative FGSM (I-FGSM) are well-studied attack strategies, the momentum iterative FGSM (MI-FGSM), known for its superiority in the computer vision (CV) domain, has been overlooked. This oversight raises the possibility that visual recommender systems may be vulnerable to MI-FGSM, leading to significant vulnerabilities. Adversarial training, a regularization technique designed to withstand MI-FGSM attacks, could be a promising solution to bolster model resilience. In this research, we introduce MI-FGSM for visual recommendation. We propose the Sequential Pairwise Embedding Recommender with MI-FGSM (SPERM), a model that incorporates visual, temporal, and sequential information for visual recommendations through adversarial training. Specifically, we employ higher-order Markov chains to capture consumers’ sequential behaviors and utilize visual pairwise ranking to discern their visual preferences. To optimize the SPERM model, we employ a learning method based on AdaGrad. Moreover, we fortify the SPERM approach with adversarial training, where the primary objective is to train the model to withstand adversarial inputs introduced by MI-FGSM. Finally, we evaluate the effectiveness of our approach by conducting experiments on three Amazon datasets, comparing it with existing visual and adversarial recommendation algorithms. Our results demonstrate the efficacy of the proposed SPERM model in addressing adversarial attacks while enhancing visual recommendation performance.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141738483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
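The abstract combines visual pairwise ranking with adversarial training driven by MI-FGSM. The sketch below shows the simpler single-step version of that idea on a BPR-style pairwise loss, with perturbations computed from detached copies of the item embeddings; the eps and lam values and the single-step (rather than momentum iterative) perturbation are simplifications made here for illustration.

```python
import torch
import torch.nn.functional as F

def bpr_loss(user_emb, pos_emb, neg_emb):
    """Bayesian Personalized Ranking loss on (user, positive item, negative item)."""
    diff = (user_emb * pos_emb).sum(-1) - (user_emb * neg_emb).sum(-1)
    return -F.logsigmoid(diff).mean()

def adversarial_bpr_loss(user_emb, pos_emb, neg_emb, eps=0.05, lam=1.0):
    """Clean BPR loss plus a BPR loss on adversarially perturbed item embeddings.
    Perturbation directions are computed on detached copies so the main graph
    is left intact."""
    p = pos_emb.detach().clone().requires_grad_(True)
    n = neg_emb.detach().clone().requires_grad_(True)
    g_p, g_n = torch.autograd.grad(bpr_loss(user_emb.detach(), p, n), (p, n))
    clean = bpr_loss(user_emb, pos_emb, neg_emb)
    adv = bpr_loss(user_emb, pos_emb + eps * g_p.sign(), neg_emb + eps * g_n.sign())
    return clean + lam * adv
```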
One-step graph-based multi-view clustering via specific and unified nonnegative embeddings
IF 5.6 | CAS Tier 3 | Computer Science
International Journal of Machine Learning and Cybernetics | Pub Date: 2024-07-17 | DOI: 10.1007/s13042-024-02280-7
Sally El Hajjar, Fahed Abdallah, Hichem Omrani, Alain Khaled Chaaban, Muhammad Arif, Ryan Alturki, Mohammed J. AlGhamdi
{"title":"One-step graph-based multi-view clustering via specific and unified nonnegative embeddings","authors":"Sally El Hajjar, Fahed Abdallah, Hichem Omrani, Alain Khaled Chaaban, Muhammad Arif, Ryan Alturki, Mohammed J. AlGhamdi","doi":"10.1007/s13042-024-02280-7","DOIUrl":"https://doi.org/10.1007/s13042-024-02280-7","url":null,"abstract":"<p>Multi-view clustering techniques, especially spectral clustering methods, are quite popular today in the fields of machine learning and data science owing to the ever-growing diversity in data types and information sources. As the landscape of data continues to evolve, the need for advanced clustering approaches becomes increasingly crucial. In this context, the research in this study addresses the challenges posed by traditional multi-view spectral clustering techniques, offering a novel approach that simultaneously learns nonnegative embedding matrices and spectral embeddings. Moreover, the cluster label matrix, also known as the nonnegative embedding matrix, is split into two different types of matrices: (1) the shared nonnegative embedding matrix, which reflects the common cluster structure, (2) the individual nonnegative embedding matrices, which represent the unique cluster structure of each view. The proposed strategy allows us to effectively deal with noise and outliers in multiple views. The simultaneous optimization of the proposed model is solved efficiently with an alternating minimization scheme. The proposed method exhibits significant improvements, with an average accuracy enhancement of 4% over existing models, as demonstrated through extensive experiments on various real datasets. This highlights the efficacy of the approach in achieving superior clustering results.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141738484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
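The abstract splits the nonnegative embedding into a shared matrix plus one view-specific matrix per view, optimized by alternating minimization. The toy NumPy sketch below illustrates only that shared/specific decomposition, using projected gradient steps on a least-squares surrogate; the paper's actual objective, constraints, and update rules differ.

```python
import numpy as np

def shared_specific_embeddings(views, n_clusters, n_iter=200, lr=0.01, seed=0):
    """Approximate each view's spectral embedding F_v by a shared nonnegative
    embedding H plus a view-specific nonnegative embedding H_v.
    views: list of arrays, each of shape (n_samples, n_clusters)."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[0]
    H = rng.random((n, n_clusters))
    H_specific = [rng.random((n, n_clusters)) for _ in views]
    for _ in range(n_iter):
        for v, F_v in enumerate(views):
            resid = (H + H_specific[v]) - F_v
            H_specific[v] = np.maximum(H_specific[v] - lr * resid, 0)  # view-specific update
            H = np.maximum(H - lr * resid / len(views), 0)             # shared update, averaged over views
    labels = H.argmax(axis=1)  # cluster labels from the shared embedding
    return H, H_specific, labels
```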