International Journal of Machine Learning and Cybernetics: Latest Articles

ASR-Fed: agnostic straggler-resilient semi-asynchronous federated learning technique for secured drone network
IF 3.1 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-15 · DOI: 10.1007/s13042-024-02238-9
Vivian Ukamaka Ihekoronye, C. I. Nwakanma, Dong‐Seong Kim, Jae Min Lee
{"title":"ASR-Fed: agnostic straggler-resilient semi-asynchronous federated learning technique for secured drone network","authors":"Vivian Ukamaka Ihekoronye, C. I. Nwakanma, Dong‐Seong Kim, Jae Min Lee","doi":"10.1007/s13042-024-02238-9","DOIUrl":"https://doi.org/10.1007/s13042-024-02238-9","url":null,"abstract":"","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141648194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Privacy-preserving matrix factorization for recommendation systems using Gaussian mechanism and functional mechanism
IF 3.1 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-14 · DOI: 10.1007/s13042-024-02276-3
Sohan Salahuddin Mugdho, Hafiz Imtiaz
{"title":"Privacy-preserving matrix factorization for recommendation systems using Gaussian mechanism and functional mechanism","authors":"Sohan Salahuddin Mugdho, Hafiz Imtiaz","doi":"10.1007/s13042-024-02276-3","DOIUrl":"https://doi.org/10.1007/s13042-024-02276-3","url":null,"abstract":"","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":3.1,"publicationDate":"2024-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141649484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Industrial product surface defect detection via the fast denoising diffusion implicit model
IF 5.6 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-11 · DOI: 10.1007/s13042-024-02213-4
Yue Wang, Yong Yang, Mingsheng Liu, Xianghong Tang, Haibin Wang, Zhifeng Hao, Ze Shi, Gang Wang, Botao Jiang, Chunyang Liu
{"title":"Industrial product surface defect detection via the fast denoising diffusion implicit model","authors":"Yue Wang, Yong Yang, Mingsheng Liu, Xianghong Tang, Haibin Wang, Zhifeng Hao, Ze Shi, Gang Wang, Botao Jiang, Chunyang Liu","doi":"10.1007/s13042-024-02213-4","DOIUrl":"https://doi.org/10.1007/s13042-024-02213-4","url":null,"abstract":"<p>In the age of intelligent manufacturing, surface defect detection plays a pivotal role in the automated quality control of industrial products, constituting a fundamental aspect of smart factory evolution. Considering the diverse sizes and feature scales of surface defects on industrial products and the difficulty in procuring high-quality training samples, the achievement of real-time and high-quality surface defect detection through artificial intelligence technologies remains a formidable challenge. To address this, we introduce a defect detection approach grounded in the Fast Denoising Probabilistic Implicit Models. Firstly, we propose a noise predictor influenced by the spectral radius feature tensor of images. This enhancement augments the ability of generative model to capture nuanced details in non-defective areas, thus overcoming limitations in model versatility and detail portrayal. Furthermore, we present a loss function constraint based on the Perron-root. This is designed to incorporate the constraint within the representational space, ensuring the denoising model consistently produces high-quality samples. Lastly, comprehensive experiments on both the Magnetic Tile and Market-PCB datasets, benchmarked against nine most representative models, underscore the exemplary detection efficacy of our proposed approach.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141587467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
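For readers unfamiliar with the sampling scheme this entry builds on, here is a minimal sketch of a deterministic DDIM denoising step plus a reconstruction-error defect score. It is a generic illustration, not the paper's model: the spectral-radius noise predictor and Perron-root loss are omitted, and `eps_pred`, `alpha_bar_t`, and `alpha_bar_prev` are assumed to come from a trained noise predictor and its noise schedule (as 0-dim tensors).

```python
import torch

@torch.no_grad()
def ddim_step(x_t, eps_pred, alpha_bar_t, alpha_bar_prev):
    # Deterministic DDIM update (eta = 0): estimate the clean image x0 from
    # the predicted noise, then re-noise it to the previous timestep.
    x0_pred = (x_t - torch.sqrt(1 - alpha_bar_t) * eps_pred) / torch.sqrt(alpha_bar_t)
    return torch.sqrt(alpha_bar_prev) * x0_pred + torch.sqrt(1 - alpha_bar_prev) * eps_pred

def defect_score(x, x_recon):
    # Per-pixel reconstruction error: a model trained only on defect-free
    # surfaces reconstructs defects poorly, so large errors flag anomalies.
    return (x - x_recon).abs().mean(dim=1, keepdim=True)
```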
Joint features-guided linear transformer and CNN for efficient image super-resolution
IF 5.6 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-09 · DOI: 10.1007/s13042-024-02277-2
Bufan Wang, Yongjun Zhang, Wei Long, Zhongwei Cui
{"title":"Joint features-guided linear transformer and CNN for efficient image super-resolution","authors":"Bufan Wang, Yongjun Zhang, Wei Long, Zhongwei Cui","doi":"10.1007/s13042-024-02277-2","DOIUrl":"https://doi.org/10.1007/s13042-024-02277-2","url":null,"abstract":"<p>Integrating convolutional neural networks (CNNs) and transformers has notably improved lightweight single image super-resolution (SISR) tasks. However, existing methods lack the capability to exploit multi-level contextual information, and transformer computations inherently add quadratic complexity. To address these issues, we propose a <b>J</b>oint features-<b>G</b>uided <b>L</b>inear <b>T</b>ransformer and CNN <b>N</b>etwork (JGLTN) for efficient SISR, which is constructed by cascading modules composed of CNN layers and linear transformer layers. Specifically, in the CNN layer, our approach employs an inter-scale feature integration module (IFIM) to extract critical latent information across scales. Then, in the linear transformer layer, we design a joint feature-guided linear attention (JGLA). It jointly considers adjacent and extended regional features, dynamically assigning weights to convolutional kernels for contextual feature selection. This process garners multi-level contextual information, which is used to guide linear attention for effective information interaction. Moreover, we redesign the method of computing feature similarity within the self-attention, reducing its computational complexity to linear. Extensive experiments shows that our proposal outperforms state-of-the-art models while balancing performance and computational costs.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141577378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
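The quadratic-to-linear reduction the abstract refers to can be illustrated with the standard kernelized linear attention of Katharopoulos et al.; the sketch below shows that generic formulation, not the paper's JGLA module, which additionally uses joint feature guidance.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k: (B, N, d); v: (B, N, dv). The feature map phi(x) = elu(x) + 1
    # keeps similarities positive, letting the attention factorize so the
    # cost is O(N * d * dv) rather than the O(N^2) of softmax attention.
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = torch.einsum('bnd,bne->bde', k, v)             # sum_n phi(k_n) v_n^T
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + eps)
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)
```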
Inherit or discard: learning better domain-specific child networks from the general domain for multi-domain NMT
IF 5.6 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-08 · DOI: 10.1007/s13042-024-02253-w
Jinlei Xu, Yonghua Wen, Yan Xiang, Shuting Jiang, Yuxin Huang, Zhengtao Yu
{"title":"Inherit or discard: learning better domain-specific child networks from the general domain for multi-domain NMT","authors":"Jinlei Xu, Yonghua Wen, Yan Xiang, Shuting Jiang, Yuxin Huang, Zhengtao Yu","doi":"10.1007/s13042-024-02253-w","DOIUrl":"https://doi.org/10.1007/s13042-024-02253-w","url":null,"abstract":"<p>Multi-domain NMT aims to develop a parameter-sharing model for translating general and specific domains, such as biology, legal, etc., which often struggle with the parameter interference problem. Existing approaches typically tackle this issue by learning a domain-specific sub-network for each domain equally, but they ignore the significant data imbalance problem across domains. For instance, the training data for the general domain often outweighs the biological domain tenfold. In this paper, we observe a natural similarity between the general and specific domains, including shared vocabulary or similar sentence structure. We propose a novel parameter inheritance strategy to adaptively learn domain-specific child networks from the general domain. Our approach employs gradient similarity as the criterion for determining which parameters should be inherited or discarded between the general and specific domains. Extensive experiments on several multi-domain NMT corpora demonstrate that our method significantly outperforms several strong baselines. In addition, our method exhibits remarkable generalization performance in adapting to few-shot multi-domain NMT scenarios. Further investigations reveal that our method achieves good interpretability because the parameters learned by the child network from the general domain depend on the interconnectedness between the specific domain and the general domain.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141568634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
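The inherit-or-discard criterion can be sketched as a per-tensor gradient-similarity test. This is an assumption-laden illustration (tensor-level granularity, cosine similarity, zero threshold are all my choices), not the paper's exact procedure, which may decide at a finer granularity.

```python
import torch
import torch.nn.functional as F

def inherit_mask(grads_general, grads_specific, threshold=0.0):
    # For each named parameter tensor, inherit the general-domain weights
    # when the two domains' gradients point in a similar direction;
    # otherwise discard (e.g. re-initialize) that tensor in the child net.
    decisions = {}
    for name, g in grads_general.items():
        s = grads_specific[name]
        cos = F.cosine_similarity(g.flatten(), s.flatten(), dim=0)
        decisions[name] = bool(cos > threshold)  # True -> inherit
    return decisions
```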
Self-representation with adaptive loss minimization via doubly stochastic graph regularization for robust unsupervised feature selection
IF 5.6 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-06 · DOI: 10.1007/s13042-024-02275-4
Xiangfa Song
{"title":"Self-representation with adaptive loss minimization via doubly stochastic graph regularization for robust unsupervised feature selection","authors":"Xiangfa Song","doi":"10.1007/s13042-024-02275-4","DOIUrl":"https://doi.org/10.1007/s13042-024-02275-4","url":null,"abstract":"<p>Unsupervised feature selection (UFS), which involves selecting representative features from unlabeled high-dimensional data, has attracted much attention. Numerous self-representation-based models have been recently developed successfully for UFS. However, these models have two main problems. First, existing self-representation-based UFS models cannot effectively handle noise and outliers. Second, many graph-regularized self-representation-based UFS models typically construct a fixed graph to maintain the local structure of data. To overcome the above shortcomings, we propose a novel robust UFS model called self-representation with adaptive loss minimization via doubly stochastic graph regularization (SRALDS). Specifically, SRALDS uses an adaptive loss function to minimize the representation residual term, which may enhance the robustness of the model and diminish the effect of noise and outliers. Besides, rather than utilizing a fixed graph, SRALDS learns a high-quality doubly stochastic graph that more accurately captures the local structure of data. Finally, an efficient optimization algorithm is designed to obtain the optimal solution for SRALDS. Extensive experiments demonstrate the superior performance of SRALDS over several well-known UFS methods.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141568636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
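A common form of "adaptive loss" in the robust feature-selection literature interpolates between an L2 penalty for small residuals and an L1 penalty for large ones; a sketch of that standard form follows. Whether SRALDS uses exactly this function is an assumption, and `sigma` is a hypothetical smoothing parameter.

```python
import numpy as np

def adaptive_loss(residual_matrix, sigma=1.0):
    # Row-wise adaptive loss: approximately quadratic when a sample's
    # residual norm is small, approximately linear when it is large, so
    # outlying samples contribute less than under a plain L2 loss.
    norms = np.linalg.norm(residual_matrix, axis=1)
    return np.sum((1.0 + sigma) * norms**2 / (norms + sigma))
```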
A multi-strategy hybrid cuckoo search algorithm with specular reflection based on a population linear decreasing strategy
IF 5.6 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-05 · DOI: 10.1007/s13042-024-02273-6
Chengtian Ouyang, Xin Liu, Donglin Zhu, Yangyang Zheng, Changjun Zhou, Chengye Zou
{"title":"A multi-strategy hybrid cuckoo search algorithm with specular reflection based on a population linear decreasing strategy","authors":"Chengtian Ouyang, Xin Liu, Donglin Zhu, Yangyang Zheng, Changjun Zhou, Chengye Zou","doi":"10.1007/s13042-024-02273-6","DOIUrl":"https://doi.org/10.1007/s13042-024-02273-6","url":null,"abstract":"<p>The cuckoo search algorithm (CS), an algorithm inspired by the nest-parasitic breeding behavior of cuckoos, has proved its own effectiveness as a problem-solving approach in many fields since it was proposed. Nevertheless, the cuckoo search algorithm still suffers from an imbalance between exploration and exploitation as well as a tendency to fall into local optimization. In this paper, we propose a new hybrid cuckoo search algorithm (LHCS) based on linear decreasing of populations, and in order to optimize the local search of the algorithm and make the algorithm converge quickly, we mix the solution updating strategy of the Grey Yours sincerely, wolf optimizer (GWO) and use the linear decreasing rule to adjust the calling ratio of the strategy in order to balance the global exploration and the local exploitation; Second, the addition of a specular reflection learning strategy enhances the algorithm's ability to jump out of local optima; Finally, the convergence ability of the algorithm on different intervals and the adaptive ability of population diversity are improved using a population linear decreasing strategy. The experimental results on 29 benchmark functions from the CEC2017 test set show that the LHCS algorithm has significant superiority and stability over other algorithms when the quality of all solutions is considered together. In order to further verify the performance of the proposed algorithm in this paper, we applied the algorithm to engineering problems, functional tests, and Wilcoxon test results show that the comprehensive performance of the LHCS algorithm outperforms the other 14 state-of-the-art algorithms. In several engineering optimization problems, the practicality and effectiveness of the LHCS algorithm are verified, and the design cost can be greatly reduced by applying it to real engineering problems.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141551462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
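Two ingredients named in the abstract are easy to sketch: the Lévy-flight step that drives exploration in standard cuckoo search (via Mantegna's algorithm) and a linearly decreasing population schedule. The GWO hybrid and specular-reflection strategies are omitted; this is a generic sketch, not the LHCS implementation.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    # Mantegna's algorithm: draws a heavy-tailed Levy-stable step, the
    # standard move cuckoo search uses to generate new candidate nests.
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def population_size(t, t_max, n_max=50, n_min=20):
    # Linear decrease from n_max to n_min over the run, trading early
    # diversity for cheaper, more focused late-stage iterations.
    return round(n_max - (n_max - n_min) * t / t_max)
```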
Low-dimensional intrinsic dimension reveals a phase transition in gradient-based learning of deep neural networks
IF 5.6 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-04 · DOI: 10.1007/s13042-024-02244-x
Chengli Tan, Jiangshe Zhang, Junmin Liu, Zixiang Zhao
{"title":"Low-dimensional intrinsic dimension reveals a phase transition in gradient-based learning of deep neural networks","authors":"Chengli Tan, Jiangshe Zhang, Junmin Liu, Zixiang Zhao","doi":"10.1007/s13042-024-02244-x","DOIUrl":"https://doi.org/10.1007/s13042-024-02244-x","url":null,"abstract":"<p>Deep neural networks complete a feature extraction task by propagating the inputs through multiple modules. However, how the representations evolve with the gradient-based optimization remains unknown. Here we leverage the intrinsic dimension of the representations to study the learning dynamics and find that the training process undergoes a phase transition from expansion to compression under disparate training regimes. Surprisingly, this phenomenon is ubiquitous across a wide variety of model architectures, optimizers, and data sets. We demonstrate that the variation in the intrinsic dimension is consistent with the complexity of the learned hypothesis, which can be quantitatively assessed by the critical sample ratio that is rooted in adversarial robustness. Meanwhile, we mathematically show that this phenomenon can be analyzed in terms of the mutable correlation between neurons. Although the evoked activities obey a power-law decaying rule in biological circuits, we identify that the power-law exponent of the representations in deep neural networks predicted adversarial robustness well only at the end of the training but not during the training process. These results together suggest that deep neural networks are prone to producing robust representations by adaptively eliminating or retaining redundancies. The code is publicly available at https://github.com/cltan023/learning2022.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141551465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
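Intrinsic dimension can be tracked over training with the TwoNN estimator of Facco et al. (2017), which needs only each point's two nearest-neighbor distances. Whether this paper uses TwoNN specifically is an assumption; it is simply a standard choice for this kind of analysis.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_intrinsic_dimension(X):
    # TwoNN: the ratios mu = r2 / r1 of second- to first-nearest-neighbor
    # distances follow a Pareto law whose exponent is the intrinsic
    # dimension; this is the closed-form MLE. Assumes no duplicate points.
    dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    mu = dists[:, 2] / dists[:, 1]       # dists[:, 0] is the point itself
    return len(X) / np.sum(np.log(mu))
```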
A novel abstractive summarization model based on topic-aware and contrastive learning
IF 5.6 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-04 · DOI: 10.1007/s13042-024-02263-8
Huanling Tang, Ruiquan Li, Wenhao Duan, Quansheng Dou, Mingyu Lu
{"title":"A novel abstractive summarization model based on topic-aware and contrastive learning","authors":"Huanling Tang, Ruiquan Li, Wenhao Duan, Quansheng Dou, Mingyu Lu","doi":"10.1007/s13042-024-02263-8","DOIUrl":"https://doi.org/10.1007/s13042-024-02263-8","url":null,"abstract":"<p>The majority of abstractive summarization models are designed based on the Sequence-to-Sequence(Seq2Seq) architecture. These models are able to capture syntactic and contextual information between words. However, Seq2Seq-based summarization models tend to overlook global semantic information. Moreover, there exist inconsistency between the objective function and evaluation metrics of this model. To address these limitations, a novel model named ASTCL is proposed in this paper. It integrates the neural topic model into the Seq2Seq framework innovatively, aiming to capture the text’s global semantic information and guide the summary generation. Additionally, it incorporates contrastive learning techniques to mitigate the discrepancy between the objective loss and the evaluation metrics through scoring multiple candidate summaries. On CNN/DM XSum and NYT datasets, the experimental results demonstrate that the ASTCL model outperforms the other generic models in summarization task.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141551461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
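Contrastive scoring of multiple candidate summaries can be sketched with a pairwise margin ranking loss in the style of BRIO (Liu et al., 2022); whether ASTCL uses this exact form is an assumption.

```python
import torch

def candidate_ranking_loss(scores, margin=0.01):
    # scores: 1-D tensor of model scores for candidate summaries, pre-sorted
    # best-to-worst by a reference metric such as ROUGE. Each pair incurs a
    # hinge penalty unless the better candidate outscores the worse one by a
    # margin that grows with their rank gap.
    loss = scores.new_zeros(())
    n = scores.numel()
    for i in range(n):
        for j in range(i + 1, n):
            loss = loss + torch.relu(scores[j] - scores[i] + margin * (j - i))
    return loss
```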
Undersampling based on generalized learning vector quantization and natural nearest neighbors for imbalanced data
IF 5.6 · CAS Zone 3 · Computer Science
International Journal of Machine Learning and Cybernetics · Pub Date: 2024-07-03 · DOI: 10.1007/s13042-024-02261-w
Long-Hui Wang, Qi Dai, Jia-You Wang, Tony Du, Lifang Chen
{"title":"Undersampling based on generalized learning vector quantization and natural nearest neighbors for imbalanced data","authors":"Long-Hui Wang, Qi Dai, Jia-You Wang, Tony Du, Lifang Chen","doi":"10.1007/s13042-024-02261-w","DOIUrl":"https://doi.org/10.1007/s13042-024-02261-w","url":null,"abstract":"<p>Imbalanced datasets can adversely affect classifier performance. Conventional undersampling approaches may lead to the loss of essential information, while oversampling techniques could introduce noise. To address this challenge, we propose an undersampling algorithm called GLNDU (Generalized Learning Vector Quantization and Natural Nearest Neighbors-based Undersampling). GLNDU utilizes Generalized Learning Vector Quantization (GLVQ) for computing the centroids of positive and negative instances. It also utilizes the concept of Natural Nearest Neighbors to identify majority-class instances in the overlapping region of the centroids of minority-class instances. Afterwards, these majority-class instances are removed, resulting in a new balanced training dataset that is used to train a foundational classifier. We conduct extensive experiments on 29 publicly available datasets, evaluating the performance using AUC and G_mean values. GLNDU demonstrates significant advantages over established methods such as SVM, CART, and KNN across different types of classifiers. Additionally, the results of the Friedman ranking and Nemenyi post-hoc test provide additional support for the findings obtained from the experiments.</p>","PeriodicalId":51327,"journal":{"name":"International Journal of Machine Learning and Cybernetics","volume":null,"pages":null},"PeriodicalIF":5.6,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141551464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
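The centroid-and-overlap idea can be sketched with plain class means standing in for trained GLVQ prototypes and a nearest-centroid test standing in for the natural-nearest-neighbor overlap detection; both substitutions are simplifying assumptions, not the paper's procedure.

```python
import numpy as np

def undersample_overlap(X, y, minority_label=1):
    # Drop majority-class points that lie closer to the minority centroid
    # than to their own, i.e. points in the class-overlap region.
    c_min = X[y == minority_label].mean(axis=0)
    c_maj = X[y != minority_label].mean(axis=0)
    closer_to_minority = (np.linalg.norm(X - c_min, axis=1) <
                          np.linalg.norm(X - c_maj, axis=1))
    keep = (y == minority_label) | ~closer_to_minority
    return X[keep], y[keep]
```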