USING IMAGE PROCESSING AND YOLO TO BUILD A DRONE NAVIGATION SYSTEM WITH AN APPLICATION IN INDUSTRY

João Vitor Pereira Sabino, Francisco Assis da Silva, Leandro Luiz de Almeida, Danillo Roberto Pereira, A. O. Artero

Colloquium Exactarum, published 2021-12-20. DOI: 10.5747/ce.2021.v13.n4.e375
In this work we developed a semi-autonomous drone navigation system for a cardboard box factory, to assist in counting the stock of cardboard reels. The methodology has four main steps: QR Code decoding, optical marker detection, the navigation system, and drone movement. The pyzbar library was used for QR Code decoding. For optical marker detection, YOLOv4 Tiny was used, a network that applies machine learning techniques to detect objects in real time; it was trained on a custom dataset of labeled optical marker images captured in a closed simulation environment, achieving a hit rate of 92.10%. The navigation system step is fed by the response of the neural network, with each marker associated with a navigation function. The last step depends on the navigation system, which selects the command the drone must follow; the movement step then sends this command to the drone.
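The abstract describes the flow of the four steps but not the implementation details. The sketch below shows one way the pipeline could be wired together, assuming YOLOv4 Tiny inference is run through OpenCV's DNN module; the file names, marker class names, command strings, and the send_command() stub are hypothetical placeholders, not the authors' actual configuration.

```python
# Sketch of the four-step pipeline: QR Code decoding (pyzbar), optical marker
# detection (YOLOv4 Tiny via OpenCV's DNN module), a navigation step that maps
# each detected marker to a command, and a movement step that forwards the
# command to the drone. Names and commands below are illustrative assumptions.
import cv2
from pyzbar import pyzbar

# Load a YOLOv4 Tiny network trained on the custom optical-marker dataset
# (file names are placeholders; substitute the actual .weights/.cfg/.names).
net = cv2.dnn.readNet("yolov4-tiny-markers.weights", "yolov4-tiny-markers.cfg")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1.0 / 255.0, swapRB=True)

with open("markers.names") as f:
    class_names = [line.strip() for line in f]

# Hypothetical navigation table: each marker class triggers one drone action.
MARKER_TO_COMMAND = {
    "turn_left": "ccw 90",
    "turn_right": "cw 90",
    "go_forward": "forward 50",
    "land": "land",
}

def send_command(command: str) -> None:
    """Movement step: forward the chosen command to the drone (stub)."""
    print(f"sending to drone: {command}")

def process_frame(frame) -> None:
    # Step 1: decode any QR Codes in view (used to identify cardboard reels).
    for qr in pyzbar.decode(frame):
        print("reel id:", qr.data.decode("utf-8"))

    # Step 2: detect optical markers with YOLOv4 Tiny.
    classes, scores, boxes = model.detect(
        frame, confThreshold=0.5, nmsThreshold=0.4
    )

    # Steps 3-4: map each detected marker to its navigation command and send it.
    for class_id in classes:
        name = class_names[int(class_id)]
        command = MARKER_TO_COMMAND.get(name)
        if command is not None:
            send_command(command)

cap = cv2.VideoCapture(0)  # drone video stream (device index is a placeholder)
ok, frame = cap.read()
if ok:
    process_frame(frame)
cap.release()
```

In the actual system, process_frame would run on each frame of the drone's video feed, and the command strings would follow whatever control API the target drone exposes.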