Optimized Machine Learning Algorithm using Hybrid Proximal Method with Spectral Gradient Techniques

Cherng Liin Yong, Ban-Hoe Kwan, D. Ng, Hong Seng Sim
DOI: 10.1109/CSPA52141.2021.9377294
Venue: 2021 IEEE 17th International Colloquium on Signal Processing & Its Applications (CSPA)
Published: 2021-03-05
Citations: 1

Abstract

Deep learning models are widely deployed to perform complicated tasks, so a significant amount of research effort focuses on improving their implementation. One of the key bottlenecks is the lengthy and inefficient model training process, and a large body of literature has been published on improving it. In this paper, the Spectral Proximal (SP) optimization method is studied and presented. The SP method is an optimization algorithm that combines the Multiple Damping Gradient (MDG) method with a sparsity optimizer to improve training efficiency in machine learning. The MDG algorithm utilizes a damping matrix to correct errors in the descent direction. In addition, the sparsity optimizer eliminates insignificant elements in the solution to reduce unnecessary computation. We conducted a training experiment to evaluate the SP method against the Adam method. In the experiment, both methods are used to train the You Only Look Once version 3 (YOLOv3) model on an object detection dataset known as YYMNIST, which is built from selected images of the Modified National Institute of Standards and Technology (MNIST) dataset. In the experiment, the SP method displayed a faster convergence rate and achieved a slightly higher mean Average Precision (mAP) than the Adam method. However, the SP method requires slightly longer training time due to its higher computational cost.
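The abstract describes the SP update as two coupled pieces: a gradient step whose length is corrected by a damping/spectral scaling, followed by a sparsity optimizer that zeroes out insignificant entries. The sketch below illustrates this general pattern only; it is not the authors' implementation. The Barzilai–Borwein step length stands in for the (unspecified) Multiple Damping Gradient correction, and an L1 soft-thresholding proximal operator stands in for the sparsity optimizer, with `lam` and the clipping bounds chosen arbitrarily for illustration.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm: shrinks entries toward zero
    # and eliminates those whose magnitude falls below lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sp_step(w, grad, prev_w, prev_grad, lam=1e-3, eps=1e-12):
    # Spectral (Barzilai-Borwein-style) step length computed from
    # successive iterates and gradients -- a scalar stand-in for the
    # damping correction of the descent direction.
    s = w - prev_w
    y = grad - prev_grad
    alpha = float(s @ s) / (float(s @ y) + eps)
    alpha = np.clip(alpha, 1e-4, 1.0)   # keep the step length bounded
    # Gradient step followed by the sparsity-inducing proximal step.
    return soft_threshold(w - alpha * grad, alpha * lam)
```

In use, `sp_step` would be iterated while carrying the previous iterate and gradient forward, e.g. to minimize a least-squares loss plus an L1 penalty; the proximal step is what produces the sparse solutions the abstract credits with reducing unnecessary computation.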