Pruned Genetic-NAS on GPU Accelerator Platforms with Chaos-on-Edge Hyperparameters

Anand Ravishankar, S. Natarajan, A. B. Malakreddy
{"title":"带有边缘混沌超参数的GPU加速器平台上的剪枝遗传nas","authors":"Anand Ravishankar, S. Natarajan, A. B. Malakreddy","doi":"10.1109/ICMLA52953.2021.00158","DOIUrl":null,"url":null,"abstract":"Deep Neural Networks (DNNs) is an extremely attractive subset of computational models due to their remarkable ability to provide promising results for a wide variety of problems. However, the performance delivered by DNNs often overshadows the work done before training the network, which includes Network Architecture Search (NAS) and its suitability concerning the task. This paper presents a modified Genetic-NAS framework designed to prevent network stagnation and reduce training loss. The network hyperparameters are initialized in a “Chaos on Edge” region, preventing premature convergence through reverse biases. The Genetic-NAS and parameter space exploration process is co-evolved by applying genetic operators and subjugating them to layer-wise competition. The inherent parallelism offered by both the neural network and its genetic extension is exploited by deploying the model on a GPU which improves the throughput. the GPU device provides an acceleration of 8.4x with 92.9% of the workload placed on the GPU device for the text-based datasets. On average, the task of classifying an image-based dataset takes 3 GPU hours.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"28 1","pages":"958-963"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Pruned Genetic-NAS on GPU Accelerator Platforms with Chaos-on-Edge Hyperparameters\",\"authors\":\"Anand Ravishankar, S. Natarajan, A. B. Malakreddy\",\"doi\":\"10.1109/ICMLA52953.2021.00158\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep Neural Networks (DNNs) is an extremely attractive subset of computational models due to their remarkable ability to provide promising results for a wide variety of problems. However, the performance delivered by DNNs often overshadows the work done before training the network, which includes Network Architecture Search (NAS) and its suitability concerning the task. This paper presents a modified Genetic-NAS framework designed to prevent network stagnation and reduce training loss. The network hyperparameters are initialized in a “Chaos on Edge” region, preventing premature convergence through reverse biases. The Genetic-NAS and parameter space exploration process is co-evolved by applying genetic operators and subjugating them to layer-wise competition. The inherent parallelism offered by both the neural network and its genetic extension is exploited by deploying the model on a GPU which improves the throughput. the GPU device provides an acceleration of 8.4x with 92.9% of the workload placed on the GPU device for the text-based datasets. 
On average, the task of classifying an image-based dataset takes 3 GPU hours.\",\"PeriodicalId\":6750,\"journal\":{\"name\":\"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"volume\":\"28 1\",\"pages\":\"958-963\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMLA52953.2021.00158\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA52953.2021.00158","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Deep Neural Networks (DNNs) are an extremely attractive class of computational models due to their remarkable ability to deliver promising results for a wide variety of problems. However, the performance delivered by DNNs often overshadows the work done before training the network, which includes Network Architecture Search (NAS) and assessing the architecture's suitability for the task. This paper presents a modified Genetic-NAS framework designed to prevent network stagnation and reduce training loss. The network hyperparameters are initialized in a "Chaos on Edge" region, preventing premature convergence through reverse biases. The Genetic-NAS and parameter-space exploration processes are co-evolved by applying genetic operators and subjecting the candidates to layer-wise competition. The inherent parallelism offered by both the neural network and its genetic extension is exploited by deploying the model on a GPU, which improves throughput. For the text-based datasets, the GPU provides an 8.4x acceleration, with 92.9% of the workload placed on the device. On average, classifying an image-based dataset takes 3 GPU-hours.
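The abstract does not spell out how the "Chaos on Edge" initialization is realized. A common way to obtain edge-of-chaos behavior is to drive the logistic map past its onset of chaos (near r ≈ 3.57) and map the resulting values onto hyperparameter ranges. The sketch below is a minimal illustration under that assumption; `chaos_on_edge_init` and the specific hyperparameter ranges are hypothetical, not the authors' implementation.

```python
import numpy as np

def logistic_map(x, r):
    """One iteration of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def chaos_on_edge_init(n_values, r=3.99, burn_in=100, x0=0.37):
    """Hypothetical sketch: sample values from the logistic map in its
    chaotic regime (onset of chaos near r ~= 3.57; r = 3.99 is well
    inside it). The burn-in discards the initial transient."""
    x = x0                       # any seed in (0, 1) away from fixed points
    for _ in range(burn_in):
        x = logistic_map(x, r)
    values = np.empty(n_values)
    for i in range(n_values):
        x = logistic_map(x, r)
        values[i] = x
    return values

# Map the chaotic samples in (0, 1) onto illustrative hyperparameter ranges.
s = chaos_on_edge_init(3)
learning_rate = 10 ** (-4 + 3 * s[0])   # log-scaled, roughly 1e-4 .. 1e-1
dropout_rate  = 0.5 * s[1]              # 0 .. 0.5
momentum      = 0.5 + 0.49 * s[2]       # 0.5 .. 0.99
```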
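Likewise, the genetic side can be pictured as a generational loop in which per-layer genomes compete and the weaker half of the population is pruned each round. This is a minimal stand-in: the paper's actual encoding, competition rule, and pruning criterion are not given in the abstract, so every choice below (`LAYER_CHOICES`, units-per-layer genomes, truncation selection) is illustrative.

```python
import random

# Each genome encodes one candidate architecture as per-layer choices;
# units-per-layer stands in for whatever the paper's search space encodes.
LAYER_CHOICES = [16, 32, 64, 128, 256]

def random_genome(n_layers):
    return [random.choice(LAYER_CHOICES) for _ in range(n_layers)]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.1):
    """Independently re-draw each layer gene with probability `rate`."""
    return [random.choice(LAYER_CHOICES) if random.random() < rate else g
            for g in genome]

def evolve(fitness, n_layers=4, pop_size=8, generations=10, seed=0):
    """Generational GA with pruning: the weaker half of the population
    is discarded each generation and rebuilt from the survivors."""
    random.seed(seed)
    population = [random_genome(n_layers) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: pop_size // 2]     # prune the weaker half
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

# Toy fitness standing in for validation accuracy: prefer smaller networks.
best = evolve(lambda g: -sum(g))
```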
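One way to sanity-check the reported figures (an interpretation, not something the paper states): reading the 8.4x end-to-end acceleration through Amdahl's law, S = 1 / ((1 - p) + p/s), with fraction p = 0.929 of the workload on the GPU implies a device-side speedup of s = p / (1/S - (1 - p)) = 0.929 / (1/8.4 - 0.071) ≈ 19.3x for the offloaded portion.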