On Elastic Language Models
Chen Zhang, Benyou Wang, Dawei Song
ACM Transactions on Information Systems (Journal Article, published 2024-07-12)
DOI: 10.1145/3677375
Large-scale pretrained language models have achieved compelling performance in a wide range of language understanding and information retrieval tasks. While their large scale ensures capacity, it also hinders deployment. Knowledge distillation offers an opportunity to compress a large language model into a small one in order to reach a reasonable latency-performance tradeoff. However, for scenarios where the number of requests (e.g., queries submitted to a search engine) is highly variable, the static tradeoff attained by the compressed language model may not always fit. Once a model is assigned a static tradeoff, it can be inadequate: latency is too high when the number of requests is large, or performance is too low when the number of requests is small. To this end, we propose an elastic language model (ElasticLM) that elastically adjusts the tradeoff according to the request stream. The basic idea is to introduce compute elasticity into the compressed language model, so that the tradeoff can vary on the fly along a scalable and controllable compute budget. Specifically, we impose an elastic structure to equip ElasticLM with compute elasticity, and design an elastic optimization method to learn ElasticLM under compute elasticity. To serve ElasticLM, we apply an elastic schedule. Considering the specifics of information retrieval, we adapt ElasticLM to dense retrieval and reranking, presenting ElasticDenser and ElasticRanker, respectively. Offline evaluation is conducted on the language understanding benchmark GLUE and on several information retrieval tasks, including Natural Questions, TriviaQA, and MS MARCO. The results show that ElasticLM, along with ElasticDenser and ElasticRanker, performs correctly and competitively compared with an array of static baselines. Furthermore, an online simulation with concurrency is also carried out; the results demonstrate that ElasticLM can provide elastic tradeoffs with respect to varying request streams.
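The core serving idea — stepping to a cheaper sub-model when the request stream surges and back to a larger one when load drops — can be sketched as a simple load-aware dispatch rule. The following is a minimal illustrative sketch, not the paper's actual elastic schedule: the function name, the depth levels, and the queue-length thresholds are all hypothetical.

```python
def elastic_schedule(queue_length, levels=(12, 6, 3), thresholds=(8, 32)):
    """Map the number of pending requests to a compute level.

    levels: hypothetical sub-model depths (e.g., transformer layers to
            execute), ordered from highest quality to cheapest.
    thresholds: queue lengths at which to step down to the next
            cheaper level; values here are purely illustrative.
    """
    # Walk the quality-ordered levels and keep the best one whose
    # threshold the current load still satisfies.
    for depth, limit in zip(levels, thresholds):
        if queue_length < limit:
            return depth
    # Heaviest load: fall back to the cheapest sub-model.
    return levels[-1]
```

Under light load (`elastic_schedule(2)`), the full 12-layer model runs; under heavy load (`elastic_schedule(100)`), the scheduler falls back to the 3-layer sub-model, trading performance for latency on the fly.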
About the journal:
The ACM Transactions on Information Systems (TOIS) publishes papers on information retrieval (such as search engines, recommender systems) that contain:
new principled information retrieval models or algorithms with sound empirical validation;
observational, experimental and/or theoretical studies yielding new insights into information retrieval or information seeking;
accounts of applications of existing information retrieval techniques that shed light on the strengths and weaknesses of the techniques;
formalization of new information retrieval or information seeking tasks and of methods for evaluating the performance on those tasks;
development of content (text, image, speech, video, etc) analysis methods to support information retrieval and information seeking;
development of computational models of user information preferences and interaction behaviors;
creation and analysis of evaluation methodologies for information retrieval and information seeking; or
surveys of existing work that propose a significant synthesis.
The information retrieval scope of ACM Transactions on Information Systems (TOIS) appeals to industry practitioners for its wealth of creative ideas, and to academic researchers for its descriptions of their colleagues' work.