A Survey of Research Progress and Theory Foundation in Large Model
Dong Xiaofei, Zhang Xueqiang, Zhang Dan, Cao Feng, Bai Bingfeng
2022 IEEE 8th International Conference on Cloud Computing and Intelligent Systems (CCIS), 26 November 2022
DOI: 10.1109/ccis57298.2022.10016400
Abstract
In recent years, with the rapid development of the key elements and core technologies of artificial intelligence, large-scale pre-trained models (large models) have achieved remarkable results. As the practice of large models progresses, they help realize the universality and generalizability of artificial intelligence and serve the strategic goal of building a strong model framework. From a theoretical perspective, this article examines the support points of large models in the theories of intrinsic subspace, effective model complexity, and low-rank decomposition. We discuss the research findings, implications, and limitations of large-model development, and put forward suggestions for future trends.
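
The abstract names low-rank decomposition as one of the theoretical support points for large models. As a minimal illustrative sketch (not taken from the paper; the matrix sizes d, k and target rank r below are hypothetical), the core idea can be shown with a truncated SVD of a weight matrix: keeping only r singular directions replaces d*k parameters with r*(d+k) while approximating the original matrix.

```python
import numpy as np

# Illustrative sketch only: rank-r approximation of a dense weight matrix
# via truncated SVD, the basic operation behind low-rank decomposition.
rng = np.random.default_rng(0)
d, k, r = 512, 512, 16           # hypothetical matrix shape and target rank
W = rng.standard_normal((d, k))  # stand-in for a pre-trained weight matrix

U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_r = U[:, :r] @ np.diag(S[:r]) @ Vt[:r, :]  # rank-r approximation of W

full_params = d * k
low_rank_params = r * (d + k)
rel_error = np.linalg.norm(W - W_r) / np.linalg.norm(W)
print(f"params: {full_params} -> {low_rank_params}, relative error: {rel_error:.3f}")
```

The same factorized structure underlies parameter-efficient adaptation schemes, where a large weight matrix is updated through two small factors rather than in full.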