PerfHD: Efficient ViT Architecture Performance Ranking using Hyperdimensional Computing

Dongning Ma, Pengfei Zhao, Xun Jiao
{"title":"PerfHD: Efficient ViT Architecture Performance Ranking using Hyperdimensional Computing","authors":"Dongning Ma, Pengfei Zhao, Xun Jiao","doi":"10.1109/CVPRW59228.2023.00217","DOIUrl":null,"url":null,"abstract":"Neural Architecture Search (NAS) aims at identifying the optimal network architecture for a specific need in an automated manner, which serves as an alternative to the manual process of model development, selection, evaluation and performance estimation. However, evaluating performance of candidate architectures in the search space during NAS, which often requires training and ranking a mass amount of architectures, is often prohibitively computation-demanding. To reduce this cost, recent works propose to estimate and rank the architecture performance with-out actual training or inference. In this paper, we present PerfHD, an efficient-while-accurate architecture performance ranking approach using hyperdimensional computing for the emerging vision transformer (ViT), which has demonstrated state-of-the-art (SOTA) performance in vision tasks. Given a set of ViT models, PerfHD can accurately and quickly rank their performance solely based on their hyper-parameters without training. We develop two encoding schemes for PerfHD, Gram-based and Record-based, to encode the features from candidate ViT architecture parameters. Using the VIMER-UFO benchmark dataset of eight tasks from a diverse range of domains, we compare PerfHD with four SOTA methods. Experimental results show that PerfHD can rank nearly 100K ViT models in about just 1 minute, which is up to 10X faster than SOTA methods, while achieving comparable or even superior ranking accuracy. We open-source PerfHD in PyTorch implementation at https://github.com/VU-DETAIL/PerfHD.","PeriodicalId":355438,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW59228.2023.00217","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Neural Architecture Search (NAS) aims to identify the optimal network architecture for a specific need in an automated manner, serving as an alternative to the manual process of model development, selection, evaluation, and performance estimation. However, evaluating the performance of candidate architectures in the search space during NAS often requires training and ranking a massive number of architectures, which is prohibitively computation-demanding. To reduce this cost, recent works propose to estimate and rank architecture performance without actual training or inference. In this paper, we present PerfHD, an efficient yet accurate architecture performance ranking approach using hyperdimensional computing for the emerging vision transformer (ViT), which has demonstrated state-of-the-art (SOTA) performance in vision tasks. Given a set of ViT models, PerfHD can accurately and quickly rank their performance based solely on their hyperparameters, without training. We develop two encoding schemes for PerfHD, Gram-based and Record-based, to encode the features from candidate ViT architecture parameters. Using the VIMER-UFO benchmark dataset of eight tasks from a diverse range of domains, we compare PerfHD with four SOTA methods. Experimental results show that PerfHD can rank nearly 100K ViT models in about one minute, which is up to 10X faster than SOTA methods, while achieving comparable or even superior ranking accuracy. We open-source our PyTorch implementation of PerfHD at https://github.com/VU-DETAIL/PerfHD.
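
The abstract names the two encoding schemes but does not detail them, so the following is a minimal PyTorch sketch of record-based encoding as it is conventionally defined in the hyperdimensional-computing literature, applied to ViT hyperparameters. The dimensionality D, the level count, and all hyperparameter names and ranges (depth, num_heads, embed_dim, mlp_ratio) are illustrative assumptions; the authors' actual encoding is in the linked repository and may differ.

```python
import torch

D = 10_000        # hypervector dimensionality (illustrative choice)
N_LEVELS = 16     # quantization levels for scalar hyperparameters (assumed)

torch.manual_seed(0)

def make_level_hvs(n_levels: int, dim: int) -> torch.Tensor:
    """Build correlated 'level' hypervectors: start from a random bipolar
    vector and flip a growing slice of positions, so nearby levels stay
    highly similar while distant levels become nearly orthogonal."""
    base = (torch.randint(0, 2, (dim,)) * 2 - 1).float()
    flips_per_level = dim // (2 * (n_levels - 1))
    hvs = torch.empty(n_levels, dim)
    hv = base.clone()
    for lvl in range(n_levels):
        hvs[lvl] = hv
        hv = hv.clone()
        start = lvl * flips_per_level
        hv[start:start + flips_per_level] *= -1
    return hvs

# Hypothetical ViT hyperparameters with assumed search-space bounds;
# the names and ranges are illustrative, not taken from the PerfHD repo.
SPACE = {"depth": (10, 14), "num_heads": (4, 12),
         "embed_dim": (192, 448), "mlp_ratio": (3, 4)}

# One random bipolar "ID" hypervector per hyperparameter name.
ID_HVS = {k: (torch.randint(0, 2, (D,)) * 2 - 1).float() for k in SPACE}
LEVEL_HVS = make_level_hvs(N_LEVELS, D)

def encode(arch: dict) -> torch.Tensor:
    """Record-based encoding: bind (elementwise multiply) each feature's
    ID hypervector with the level hypervector of its quantized value,
    then bundle (sum) the bound pairs into one architecture hypervector."""
    hv = torch.zeros(D)
    for name, value in arch.items():
        lo, hi = SPACE[name]
        lvl = round((value - lo) / (hi - lo) * (N_LEVELS - 1))
        hv += ID_HVS[name] * LEVEL_HVS[lvl]
    return hv

a = encode({"depth": 12, "num_heads": 6, "embed_dim": 384, "mlp_ratio": 4})
b = encode({"depth": 12, "num_heads": 8, "embed_dim": 384, "mlp_ratio": 4})
print(torch.cosine_similarity(a, b, dim=0))  # close architectures -> high similarity
```

Once architectures are mapped to hypervectors this way, a lightweight regressor or similarity comparison in HD space can score candidates without training any ViT, which is the property that lets an approach like PerfHD rank roughly 100K models in about a minute.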