An Automated Framework for Distributed Deep Learning–A Tool Demo

Gharib Gharibi, Ravi Patel, A.N. Khan, Babak Poorebrahim Gilkalaye, Praneeth Vepakomma, Ramesh Raskar, Steve Penrod, Greg Storm, Riddhiman Das
{"title":"分布式深度学习的自动化框架——一个工具演示","authors":"Gharib Gharibi, Ravi Patel, A.N. Khan, Babak Poorebrahim Gilkalaye, Praneeth Vepakomma, Ramesh Raskar, Steve Penrod, Greg Storm, Riddhiman Das","doi":"10.1109/ICDCS54860.2022.00142","DOIUrl":null,"url":null,"abstract":"Split learning (SL) is a distributed deep-learning approach that enables individual data owners to train a shared model over their joint data without exchanging it with one another. SL has been the subject of much research in recent years, leading to the development of several versions for facilitating distributed learning. However, the majority of this work mainly focuses on optimizing the training process while largely ignoring the design and implementation of practical tool support. To fill this gap, we present our automated software framework for training deep neural networks from decentralized data based on our extended version of SL, termed Blind Learning. Specifically, we shed light on the underlying optimization algorithm, explain the design and implementation details of our framework, and present our preliminary evaluation results. We demonstrate that Blind Learning is 65% more computationally efficient than SL and can produce better performing models. Moreover, we show that running the same job in our framework is at least 4.5× faster than PySyft. Our goal is to spur the development of proper tool support for distributed deep learning.","PeriodicalId":225883,"journal":{"name":"2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"An Automated Framework for Distributed Deep Learning–A Tool Demo\",\"authors\":\"Gharib Gharibi, Ravi Patel, A.N. Khan, Babak Poorebrahim Gilkalaye, Praneeth Vepakomma, Ramesh Raskar, Steve Penrod, Greg Storm, Riddhiman Das\",\"doi\":\"10.1109/ICDCS54860.2022.00142\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Split learning (SL) is a distributed deep-learning approach that enables individual data owners to train a shared model over their joint data without exchanging it with one another. SL has been the subject of much research in recent years, leading to the development of several versions for facilitating distributed learning. However, the majority of this work mainly focuses on optimizing the training process while largely ignoring the design and implementation of practical tool support. To fill this gap, we present our automated software framework for training deep neural networks from decentralized data based on our extended version of SL, termed Blind Learning. Specifically, we shed light on the underlying optimization algorithm, explain the design and implementation details of our framework, and present our preliminary evaluation results. We demonstrate that Blind Learning is 65% more computationally efficient than SL and can produce better performing models. Moreover, we show that running the same job in our framework is at least 4.5× faster than PySyft. 
Our goal is to spur the development of proper tool support for distributed deep learning.\",\"PeriodicalId\":225883,\"journal\":{\"name\":\"2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDCS54860.2022.00142\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDCS54860.2022.00142","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Split learning (SL) is a distributed deep-learning approach that enables individual data owners to train a shared model over their joint data without exchanging that data with one another. SL has attracted considerable research attention in recent years, leading to several variants that facilitate distributed learning. However, most of this work focuses on optimizing the training process while largely ignoring the design and implementation of practical tool support. To fill this gap, we present our automated software framework for training deep neural networks from decentralized data, based on our extended version of SL, termed Blind Learning. Specifically, we shed light on the underlying optimization algorithm, explain the design and implementation details of our framework, and present our preliminary evaluation results. We demonstrate that Blind Learning is 65% more computationally efficient than SL and can produce better-performing models. Moreover, we show that running the same job in our framework is at least 4.5× faster than in PySyft. Our goal is to spur the development of proper tool support for distributed deep learning.
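For readers unfamiliar with the split-learning setup the abstract builds on, the sketch below illustrates the standard SL cut-layer exchange: the data owner computes activations up to a cut layer, only those activations (not the raw data) reach the server, and the server returns gradients at the cut so each side updates its own half of the model. This is a minimal single-process illustration of vanilla SL, not the paper's Blind Learning protocol or its framework's API; all model and variable names are hypothetical.

```python
# Minimal single-process sketch of split learning's cut-layer exchange
# (vanilla SL, not the paper's Blind Learning; names are illustrative).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Data owner holds the first part of the network, up to the "cut layer".
client_net = nn.Sequential(nn.Linear(20, 64), nn.ReLU())
# Server holds the remainder, from the cut layer to the prediction.
server_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

opt_client = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_server = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 20)         # toy private batch (never leaves the client)
y = torch.randint(0, 2, (16,))  # toy labels (shared in vanilla SL)

# Forward: the client sends only the cut-layer activations ("smashed data").
smashed = client_net(x)
smashed_remote = smashed.detach().requires_grad_()  # server-side copy
loss = loss_fn(server_net(smashed_remote), y)

# Backward on the server: updates server weights, yields cut-layer gradients.
opt_server.zero_grad()
loss.backward()
opt_server.step()

# Backward on the client: resume backprop from the returned gradients.
opt_client.zero_grad()
smashed.backward(smashed_remote.grad)
opt_client.step()

print(f"training loss: {loss.item():.4f}")
```

In a real deployment, `smashed.detach()` and `smashed_remote.grad` are the only tensors that cross the network, which is what lets each data owner keep its raw data local; the abstract positions Blind Learning as an extension of this scheme, with its optimization algorithm detailed in the paper.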