{"title":"量化大规模模型的艺术与科学:全面概述","authors":"Yanshu Wang, Tong Yang, Xiyan Liang, Guoan Wang, Hanning Lu, Xu Zhe, Yaoming Li, Li Weitao","doi":"arxiv-2409.11650","DOIUrl":null,"url":null,"abstract":"This paper provides a comprehensive overview of the principles, challenges,\nand methodologies associated with quantizing large-scale neural network models.\nAs neural networks have evolved towards larger and more complex architectures\nto address increasingly sophisticated tasks, the computational and energy costs\nhave escalated significantly. We explore the necessity and impact of model size\ngrowth, highlighting the performance benefits as well as the computational\nchallenges and environmental considerations. The core focus is on model\nquantization as a fundamental approach to mitigate these challenges by reducing\nmodel size and improving efficiency without substantially compromising\naccuracy. We delve into various quantization techniques, including both\npost-training quantization (PTQ) and quantization-aware training (QAT), and\nanalyze several state-of-the-art algorithms such as LLM-QAT, PEQA(L4Q),\nZeroQuant, SmoothQuant, and others. Through comparative analysis, we examine\nhow these methods address issues like outliers, importance weighting, and\nactivation quantization, ultimately contributing to more sustainable and\naccessible deployment of large-scale models.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Art and Science of Quantizing Large-Scale Models: A Comprehensive Overview\",\"authors\":\"Yanshu Wang, Tong Yang, Xiyan Liang, Guoan Wang, Hanning Lu, Xu Zhe, Yaoming Li, Li Weitao\",\"doi\":\"arxiv-2409.11650\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper provides a comprehensive overview of the principles, challenges,\\nand methodologies associated with quantizing large-scale neural network models.\\nAs neural networks have evolved towards larger and more complex architectures\\nto address increasingly sophisticated tasks, the computational and energy costs\\nhave escalated significantly. We explore the necessity and impact of model size\\ngrowth, highlighting the performance benefits as well as the computational\\nchallenges and environmental considerations. The core focus is on model\\nquantization as a fundamental approach to mitigate these challenges by reducing\\nmodel size and improving efficiency without substantially compromising\\naccuracy. We delve into various quantization techniques, including both\\npost-training quantization (PTQ) and quantization-aware training (QAT), and\\nanalyze several state-of-the-art algorithms such as LLM-QAT, PEQA(L4Q),\\nZeroQuant, SmoothQuant, and others. 
Through comparative analysis, we examine\\nhow these methods address issues like outliers, importance weighting, and\\nactivation quantization, ultimately contributing to more sustainable and\\naccessible deployment of large-scale models.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11650\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11650","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Art and Science of Quantizing Large-Scale Models: A Comprehensive Overview
This paper provides a comprehensive overview of the principles, challenges,
and methodologies associated with quantizing large-scale neural network models.
As neural networks have evolved towards larger and more complex architectures
to address increasingly sophisticated tasks, the computational and energy costs
have escalated significantly. We explore the necessity and impact of model size
growth, highlighting the performance benefits as well as the computational
challenges and environmental considerations. The core focus is on model
quantization as a fundamental approach to mitigate these challenges by reducing
model size and improving efficiency without substantially compromising
accuracy. We delve into various quantization techniques, including both
post-training quantization (PTQ) and quantization-aware training (QAT), and
analyze several state-of-the-art algorithms, including LLM-QAT, PEQA (L4Q),
ZeroQuant, and SmoothQuant. Through comparative analysis, we examine
how these methods address issues such as outliers, importance weighting, and
activation quantization, ultimately contributing to more sustainable and
accessible deployment of large-scale models.
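
The abstract names post-training quantization (PTQ) as one of the two main families of techniques it surveys. As a rough illustration only, not the procedure of any specific algorithm discussed in the paper, the sketch below applies symmetric per-tensor int8 quantization to a toy weight matrix and measures the reconstruction error; the shapes, scale choice, and helper names are assumptions.

```python
# Minimal PTQ sketch: symmetric per-tensor int8 quantization of a weight
# matrix, then dequantization. Illustrative only; not the method of any
# specific algorithm surveyed in the paper.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights to int8 with a single symmetric scale."""
    scale = np.max(np.abs(w)) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(1024, 1024)).astype(np.float32)  # toy layer
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print("mean abs error:", np.mean(np.abs(w - w_hat)))  # quantization noise
print("storage: 32-bit floats -> 8-bit ints, roughly 4x smaller")
```

In practice, per-channel or group-wise scales, calibration data, and error-compensation steps are common refinements on top of this per-tensor baseline, and these refinements are where PTQ algorithms typically differ.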
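SmoothQuant is cited in the abstract as one approach to activation outliers. The hedged sketch below illustrates the scale-migration idea generally associated with it: divide outlier-heavy activation channels by a per-channel factor and fold that factor into the weights, so the layer output is unchanged but both tensors become easier to quantize. The alpha value, toy shapes, and the smooth helper are assumptions for illustration, not the paper's implementation.

```python
# Sketch of SmoothQuant-style outlier smoothing for a linear layer Y = X @ W.
# Per input channel j: s_j = max|X[:, j]|**alpha / max|W[j, :]|**(1 - alpha),
# then X' = X / s and W' = s * W, which leaves X @ W unchanged.
import numpy as np

def smooth(x: np.ndarray, w: np.ndarray, alpha: float = 0.5):
    """Migrate activation outliers into the weights (per input channel)."""
    act_max = np.max(np.abs(x), axis=0)               # per-channel activation range
    wgt_max = np.max(np.abs(w), axis=1)               # per-row weight range
    s = act_max**alpha / np.maximum(wgt_max, 1e-8)**(1.0 - alpha)
    s = np.maximum(s, 1e-8)                           # avoid division by zero
    return x / s, w * s[:, None]                      # X' = X diag(1/s), W' = diag(s) W

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 64)).astype(np.float32)
x[:, 3] *= 50.0                                       # inject an outlier channel
w = rng.normal(scale=0.02, size=(64, 32)).astype(np.float32)

x_s, w_s = smooth(x, w)
print(np.allclose(x @ w, x_s @ w_s, atol=1e-4))       # layer output preserved
print("activation max before/after:", np.abs(x).max(), np.abs(x_s).max())
```

With alpha = 0.5 the quantization difficulty is split roughly evenly between activations and weights; pushing alpha toward 1 migrates more of it into the weights.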