Edge AI: Addressing the Efficiency Paradigm
Colleen P. Bailey, Arthur C. Depoian, Ethan R. Adams
2022 IEEE MetroCon, published 2022-11-03. DOI: 10.1109/MetroCon56047.2022.9971140

Abstract: Recent years have seen a growing trend toward massive deep learning neural network algorithms, a movement further perpetuated by the rapid growth in available computation. While these giant models attain remarkable performance, their computational cost is proportionally huge. This creates a need for efficient, intelligent algorithm design that can match the performance of the current state of the art.