{"title":"基于边缘深度学习的动态环境高效体积估计","authors":"Chandan Kumar, Yamini Mathur, A. Jannesari","doi":"10.1109/IPDPSW55747.2022.00159","DOIUrl":null,"url":null,"abstract":"The utility of edge devices has increased in volume estimation of uneven terrains. Existing techniques utilize several geo-tagged images of the landscape, captured in-flight by an edge device mounted over a UAV, to generate 3D models and perform volume estimation through manual boundary marking. These methods, although accurate, require significant time, human effort and are heavily dependent on GPS. We present an efficient deep learning framework that detects the object of interest and automatically determines the volume (independent of GPS) of the detected object on-the-fly. Our method employs a stereo camera for depth sensing of the object and overlays a unit mesh grid over the object's boundary to perform volume estimation. We explore the accuracy vs computational complexity trade-off on variations of our technique. Experiments indicate that our method reduces the time for volume estimation by several orders of magnitude in contrast to existing methods and is independent of GPS as well. Also, to the best of our knowledge, this is the first method that can perform volume analysis in a dynamic environment.","PeriodicalId":286968,"journal":{"name":"2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","volume":"72 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Efficient Volume Estimation for Dynamic Environments using Deep Learning on the Edge\",\"authors\":\"Chandan Kumar, Yamini Mathur, A. Jannesari\",\"doi\":\"10.1109/IPDPSW55747.2022.00159\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The utility of edge devices has increased in volume estimation of uneven terrains. Existing techniques utilize several geo-tagged images of the landscape, captured in-flight by an edge device mounted over a UAV, to generate 3D models and perform volume estimation through manual boundary marking. These methods, although accurate, require significant time, human effort and are heavily dependent on GPS. We present an efficient deep learning framework that detects the object of interest and automatically determines the volume (independent of GPS) of the detected object on-the-fly. Our method employs a stereo camera for depth sensing of the object and overlays a unit mesh grid over the object's boundary to perform volume estimation. We explore the accuracy vs computational complexity trade-off on variations of our technique. Experiments indicate that our method reduces the time for volume estimation by several orders of magnitude in contrast to existing methods and is independent of GPS as well. 
Also, to the best of our knowledge, this is the first method that can perform volume analysis in a dynamic environment.\",\"PeriodicalId\":286968,\"journal\":{\"name\":\"2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)\",\"volume\":\"72 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IPDPSW55747.2022.00159\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPDPSW55747.2022.00159","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Efficient Volume Estimation for Dynamic Environments using Deep Learning on the Edge
Edge devices are increasingly used for volume estimation of uneven terrain. Existing techniques use several geo-tagged images of the landscape, captured in flight by an edge device mounted on a UAV, to generate 3D models and perform volume estimation through manual boundary marking. These methods, although accurate, require significant time and human effort, and are heavily dependent on GPS. We present an efficient deep learning framework that detects the object of interest and automatically determines its volume on the fly, independent of GPS. Our method employs a stereo camera for depth sensing of the object and overlays a unit mesh grid over the object's boundary to perform volume estimation. We explore the accuracy versus computational complexity trade-off across variations of our technique. Experiments indicate that our method reduces the time for volume estimation by several orders of magnitude compared to existing methods and is independent of GPS. Also, to the best of our knowledge, this is the first method that can perform volume analysis in a dynamic environment.
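The grid-overlay idea described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical Python example, not the authors' implementation: it assumes a stereo depth map and a detection mask are already available, and the function name estimate_volume, the ground-plane heuristic, the focal lengths, and the cell size cell_m are all illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (not the authors' code) of grid-based volume estimation from a
# stereo depth map and a detected object mask. The cell size, focal lengths, and
# ground-plane heuristic below are illustrative assumptions.
import numpy as np

def estimate_volume(depth_m, object_mask, fx, fy, cell_m=0.05):
    """Approximate object volume by overlaying a unit grid on the masked region.

    depth_m     : HxW depth map in meters (e.g., from stereo disparity).
    object_mask : HxW boolean mask of the detected object (e.g., from a detector).
    fx, fy      : camera focal lengths in pixels, used to convert pixels to meters.
    cell_m      : side length of each grid cell in meters (assumed parameter).
    """
    ys, xs = np.nonzero(object_mask)
    if ys.size == 0:
        return 0.0

    z = depth_m[ys, xs]                        # depth of each object pixel
    ground_z = np.percentile(z, 95)            # crude base-plane estimate
    height = np.clip(ground_z - z, 0.0, None)  # height of the pile above the base

    # Back-project pixel coordinates to metric X/Y at each pixel's depth.
    x_m = (xs - depth_m.shape[1] / 2.0) * z / fx
    y_m = (ys - depth_m.shape[0] / 2.0) * z / fy

    # Snap points to grid cells and keep the maximum height seen in each cell.
    cells = {}
    for cx, cy, h in zip(np.floor(x_m / cell_m).astype(int),
                         np.floor(y_m / cell_m).astype(int),
                         height):
        key = (cx, cy)
        cells[key] = max(cells.get(key, 0.0), h)

    # Volume = sum over cells of (cell area) * (cell height).
    return cell_m * cell_m * sum(cells.values())

# Usage example with synthetic data: a mound bulging toward the camera on a flat plane.
if __name__ == "__main__":
    H, W = 240, 320
    yy, xx = np.mgrid[0:H, 0:W]
    r = np.sqrt((xx - W / 2) ** 2 + (yy - H / 2) ** 2)
    depth = np.full((H, W), 5.0)
    mound = r < 60
    depth[mound] -= 0.5 * np.sqrt(1 - (r[mound] / 60) ** 2)
    print(f"estimated volume: {estimate_volume(depth, mound, fx=300, fy=300):.3f} m^3")
```

In this sketch the grid plays the role of the "unit mesh grid" mentioned in the abstract: each occupied cell contributes a column whose footprint is the cell area and whose height comes from the depth map, so the estimate is independent of GPS and requires no manual boundary marking beyond the detection mask.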