{"title":"基于快照的机器学习web应用卸载:工作在进行中","authors":"InChang Jeong, H. Jeong, Soo-Mook Moon","doi":"10.1145/3125503.3125625","DOIUrl":null,"url":null,"abstract":"We propose a new approach to running machine learning (ML) web app on resource-constrained embedded devices by offloading ML computations to servers. We can dynamically offload computations depending on the problem size and network status. The execution state is saved in the form of another web app called snapshot which simplifies the state migration. Some issues related to ML such as how to handle the Canvas object, the ML model, and the privacy of user data are addressed. The proposed offloading works for real web apps with a performance comparable to running the app entirely on the server.","PeriodicalId":143573,"journal":{"name":"International Conference on Embedded Software","volume":"69 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Snapshot-based offloading for machine learning web app: work-in-progress\",\"authors\":\"InChang Jeong, H. Jeong, Soo-Mook Moon\",\"doi\":\"10.1145/3125503.3125625\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a new approach to running machine learning (ML) web app on resource-constrained embedded devices by offloading ML computations to servers. We can dynamically offload computations depending on the problem size and network status. The execution state is saved in the form of another web app called snapshot which simplifies the state migration. Some issues related to ML such as how to handle the Canvas object, the ML model, and the privacy of user data are addressed. The proposed offloading works for real web apps with a performance comparable to running the app entirely on the server.\",\"PeriodicalId\":143573,\"journal\":{\"name\":\"International Conference on Embedded Software\",\"volume\":\"69 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Embedded Software\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3125503.3125625\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Embedded Software","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3125503.3125625","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Snapshot-based offloading for machine learning web app: work-in-progress
We propose a new approach to running machine learning (ML) web apps on resource-constrained embedded devices by offloading ML computations to servers. Computations can be offloaded dynamically depending on the problem size and the network status. The execution state is saved in the form of another web app, called a snapshot, which simplifies state migration. Several ML-specific issues are addressed, such as how to handle the Canvas object, the ML model, and the privacy of user data. The proposed offloading works for real web apps, with performance comparable to running the app entirely on the server.
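As a rough illustration of the snapshot idea, the sketch below captures an app's state as a self-contained script (the "snapshot") and decides at runtime whether to compute locally or ship that snapshot to a server. All identifiers here (AppState, buildSnapshot, runInference, the size threshold, the server URL) are hypothetical and are not the paper's implementation, which presumably serializes far more of the JavaScript execution state than this hand-picked state object.

```typescript
// Minimal sketch of snapshot-based offloading, under the assumptions stated above.

// State to migrate: for an ML web app this might include model parameters
// and pixel data read from a Canvas (e.g. via getImageData()).
interface AppState {
  modelWeights: number[];
  canvasPixels: number[];
}

// Build the "snapshot": a self-contained script that, when executed on the
// server, restores the state and resumes the offloaded ML computation.
// "runInference" is a hypothetical entry point assumed to exist server-side.
function buildSnapshot(state: AppState, entryPoint: string): string {
  const serialized = JSON.stringify(state);
  return [
    `const __state = ${serialized};`, // restored execution state
    `${entryPoint}(__state);`,        // resume the computation
  ].join("\n");
}

// Decide dynamically whether to offload, based on problem size and a coarse
// network check; both the threshold and the check are purely illustrative.
async function runOrOffload(
  state: AppState,
  localRun: (s: AppState) => number[],
  serverUrl: string,
): Promise<number[]> {
  const problemSize = state.canvasPixels.length;
  const goodNetwork = navigator.onLine; // a real system would measure bandwidth/latency

  if (problemSize < 10_000 || !goodNetwork) {
    return localRun(state); // small job or poor network: run on the device
  }

  // Ship the snapshot to the server, which executes it in a JS engine and
  // returns the result (or, in the paper's model, a result-carrying snapshot).
  const snapshot = buildSnapshot(state, "runInference");
  const response = await fetch(serverUrl, { method: "POST", body: snapshot });
  return response.json();
}
```

A usage example under the same assumptions would be `runOrOffload(currentState, runInferenceLocally, "https://example.com/offload")`; the key design point the abstract highlights is that the migrated state travels as another web app, so the server needs only a standard JavaScript runtime rather than app-specific deserialization code.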