Marianne Bakken, Vignesh R. Ponnambalam, R. Moore, J. G. Gjevestad, P. From
{"title":"作物行分割的机器人监督学习*","authors":"Marianne Bakken, Vignesh R. Ponnambalam, R. Moore, J. G. Gjevestad, P. From","doi":"10.1109/ICRA48506.2021.9560815","DOIUrl":null,"url":null,"abstract":"We propose an approach for robot-supervised learning that automates label generation for semantic segmentation with Convolutional Neural Networks (CNNs) for crop row detection in a field. Using a training robot equipped with RTK GNSS and RGB camera, we train a neural network that can later be used for pure vision-based navigation. We test our approach on an agri-robot in a strawberry field and successfully train crop row segmentation without any hand-drawn image labels. Our main finding is that the resulting segmentation output of the CNN shows better performance than the noisy labels it was trained on. Finally, we conduct open-loop field trials with our agri-robot and show that row-following based on the segmentation result is accurate enough for closed-loop guidance. We conclude that training with noisy segmentation labels is a promising approach for learning vision-based crop row following.","PeriodicalId":108312,"journal":{"name":"2021 IEEE International Conference on Robotics and Automation (ICRA)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Robot-supervised Learning of Crop Row Segmentation*\",\"authors\":\"Marianne Bakken, Vignesh R. Ponnambalam, R. Moore, J. G. Gjevestad, P. From\",\"doi\":\"10.1109/ICRA48506.2021.9560815\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose an approach for robot-supervised learning that automates label generation for semantic segmentation with Convolutional Neural Networks (CNNs) for crop row detection in a field. 
Using a training robot equipped with RTK GNSS and RGB camera, we train a neural network that can later be used for pure vision-based navigation. We test our approach on an agri-robot in a strawberry field and successfully train crop row segmentation without any hand-drawn image labels. Our main finding is that the resulting segmentation output of the CNN shows better performance than the noisy labels it was trained on. Finally, we conduct open-loop field trials with our agri-robot and show that row-following based on the segmentation result is accurate enough for closed-loop guidance. We conclude that training with noisy segmentation labels is a promising approach for learning vision-based crop row following.\",\"PeriodicalId\":108312,\"journal\":{\"name\":\"2021 IEEE International Conference on Robotics and Automation (ICRA)\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Robotics and Automation (ICRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICRA48506.2021.9560815\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICRA48506.2021.9560815","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Robot-supervised Learning of Crop Row Segmentation*
We propose an approach for robot-supervised learning that automates label generation for semantic segmentation with Convolutional Neural Networks (CNNs) for crop row detection in a field. Using a training robot equipped with RTK GNSS and an RGB camera, we train a neural network that can later be used for pure vision-based navigation. We test our approach on an agri-robot in a strawberry field and successfully train crop row segmentation without any hand-drawn image labels. Our main finding is that the resulting segmentation output of the CNN shows better performance than the noisy labels it was trained on. Finally, we conduct open-loop field trials with our agri-robot and show that row-following based on the segmentation result is accurate enough for closed-loop guidance. We conclude that training with noisy segmentation labels is a promising approach for learning vision-based crop row following.
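The automatic label generation described above can be illustrated with a minimal sketch. Assuming (hypothetically; the paper's actual pipeline is not reproduced here) that RTK GNSS gives crop-row positions in world coordinates and the camera pose is known, a standard pinhole projection can map the row into the image and rasterise it into a noisy segmentation mask. All function names, camera parameters, and the fixed stroke width below are illustrative assumptions:

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project Nx3 world points to pixel coordinates with a pinhole model."""
    cam = (R @ points_world.T + t.reshape(3, 1)).T   # world -> camera frame
    pix = (K @ cam.T).T                              # camera frame -> image plane
    return pix[:, :2] / pix[:, 2:3]                  # perspective divide

def row_label_mask(row_points_world, K, R, t, shape, width_px=8):
    """Rasterise projected crop-row points into a binary label mask.

    width_px is a crude fixed half-width in pixels; a real pipeline would
    account for row width in metres and perspective scaling.
    """
    h, w = shape
    mask = np.zeros(shape, dtype=np.uint8)
    uv = np.round(project_points(row_points_world, K, R, t)).astype(int)
    for u, v in uv:
        u0, u1 = max(u - width_px, 0), min(u + width_px + 1, w)
        v0, v1 = max(v - 1, 0), min(v + 2, h)
        mask[v0:v1, u0:u1] = 1                       # paint a short stroke per point
    return mask

# Toy example: camera at the origin looking down +z, row points ahead of it.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
row = np.array([[0.0, 0.2, z] for z in (1.0, 2.0, 3.0, 4.0, 5.0)])
mask = row_label_mask(row, K, R, t, shape=(480, 640))
```

Such masks are exactly the kind of noisy labels the abstract refers to: geometrically plausible but imperfect, which the paper finds is still sufficient for the CNN to learn a cleaner segmentation than its training labels.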