S. Dorafshan, R. Thomas, C. Coopmans, Marc Maguire
{"title":"深度学习神经网络用于suas辅助结构检测:可行性与应用","authors":"S. Dorafshan, R. Thomas, C. Coopmans, Marc Maguire","doi":"10.1109/ICUAS.2018.8453409","DOIUrl":null,"url":null,"abstract":"This paper investigates the feasibility of using a Deep Learning Convolutional Neural Network (DLCNN) in inspection of concrete decks and buildings using small Unmanned Aerial Systems (sUAS). The training dataset consists of images of lab-made bridge decks taken with a point-and-shoot high resolution camera. The network is trained on this dataset in two modes: fully trained (94.7% validation accuracy) and transfer learning (97.1% validation accuracy). The testing datasets consist of 1620 sub-images from bridge decks with the same cracks, 2340 sub-images from bridge decks with similar cracks, and 3600 sub-images from a building with different cracks, all taken by sUAS. The sUAS used in the first dataset has a low-resolution camera whereas the sUAS used in the second and third datasets has a camera comparable to the point-and-shoot camera. In this study it has been shown that it is feasible to apply DLCNNs in autonomous civil structural inspections with comparable results to human inspectors when using off-the-shelf sUAS and training datasets collected with point-and-shoot handheld cameras.","PeriodicalId":246293,"journal":{"name":"2018 International Conference on Unmanned Aircraft Systems (ICUAS)","volume":"159 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"39","resultStr":"{\"title\":\"Deep Learning Neural Networks for sUAS-Assisted Structural Inspections: Feasibility and Application\",\"authors\":\"S. Dorafshan, R. Thomas, C. 
Coopmans, Marc Maguire\",\"doi\":\"10.1109/ICUAS.2018.8453409\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper investigates the feasibility of using a Deep Learning Convolutional Neural Network (DLCNN) in inspection of concrete decks and buildings using small Unmanned Aerial Systems (sUAS). The training dataset consists of images of lab-made bridge decks taken with a point-and-shoot high resolution camera. The network is trained on this dataset in two modes: fully trained (94.7% validation accuracy) and transfer learning (97.1% validation accuracy). The testing datasets consist of 1620 sub-images from bridge decks with the same cracks, 2340 sub-images from bridge decks with similar cracks, and 3600 sub-images from a building with different cracks, all taken by sUAS. The sUAS used in the first dataset has a low-resolution camera whereas the sUAS used in the second and third datasets has a camera comparable to the point-and-shoot camera. In this study it has been shown that it is feasible to apply DLCNNs in autonomous civil structural inspections with comparable results to human inspectors when using off-the-shelf sUAS and training datasets collected with point-and-shoot handheld cameras.\",\"PeriodicalId\":246293,\"journal\":{\"name\":\"2018 International Conference on Unmanned Aircraft Systems (ICUAS)\",\"volume\":\"159 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"39\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 International Conference on Unmanned Aircraft Systems 
(ICUAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICUAS.2018.8453409\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 International Conference on Unmanned Aircraft Systems (ICUAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICUAS.2018.8453409","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Learning Neural Networks for sUAS-Assisted Structural Inspections: Feasibility and Application
This paper investigates the feasibility of using a Deep Learning Convolutional Neural Network (DLCNN) for the inspection of concrete bridge decks and buildings with small Unmanned Aerial Systems (sUAS). The training dataset consists of images of laboratory-made bridge decks taken with a high-resolution point-and-shoot camera. The network is trained on this dataset in two modes: fully trained (94.7% validation accuracy) and transfer learning (97.1% validation accuracy). The testing datasets consist of 1,620 sub-images from bridge decks with the same cracks, 2,340 sub-images from bridge decks with similar cracks, and 3,600 sub-images from a building with different cracks, all captured by sUAS. The sUAS used for the first dataset carries a low-resolution camera, whereas the sUAS used for the second and third datasets carries a camera comparable to the point-and-shoot camera. The study shows that applying DLCNNs to autonomous civil structural inspections is feasible, with results comparable to human inspectors, when using off-the-shelf sUAS and training datasets collected with handheld point-and-shoot cameras.
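The evaluation described above operates on fixed-size sub-images cropped from full sUAS frames (1,620, 2,340, and 3,600 tiles per dataset), so each tile can be classified independently as cracked or uncracked. A minimal sketch of such a tiling step is below; the tile size and frame dimensions are illustrative assumptions, not values reported in the paper:

```python
def tile_image(width, height, tile):
    """Return (x, y, w, h) crop boxes covering a frame with fixed-size tiles.

    Partial tiles at the right/bottom edges are dropped, a common choice
    when the downstream classifier expects a fixed input size.
    """
    boxes = []
    for y in range(0, height - tile + 1, tile):
        for x in range(0, width - tile + 1, tile):
            boxes.append((x, y, tile, tile))
    return boxes

# Example: a hypothetical 1080x720 frame with 256x256 tiles
# yields 4 columns x 2 rows = 8 sub-images.
boxes = tile_image(1080, 720, 256)
```

Each box would then be cropped from the source frame and fed to the trained network, and per-tile predictions can be mapped back to deck coordinates to localize cracking.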