Convolutional neural network applied for object recognition in a warehouse of an electric company
P. Piratelo, Rodrigo Negri de Azeredo, Eduardo M. Yamão, Gabriel Maidl, Laércio de Jesus, Renato de Arruda Penteado Neto, L. dos Santos Coelho, G. Leandro
2021 14th IEEE International Conference on Industry Applications (INDUSCON), 2021-08-15. DOI: 10.1109/INDUSCON51756.2021.9529716
Citations: 1
Abstract
This paper presents a computational tool that analyzes the quality of captured color images and classifies products in an electrical company’s warehouse using a Convolutional Neural Network (CNN). The tool is part of a prototype aimed at automated flow control and inventory, saving time and cost. During data acquisition, the tool examines image quality with an Image Quality Assessment (IQA) algorithm named BRISQUE. Two classes of materials were chosen to compose the dataset, which underwent resampling and data augmentation. The binary object classification is performed by a residual neural network (ResNet). Starting from a pre-trained model, a transfer learning method called feature extraction was applied, adapting the network to the task at hand by updating the final layer’s weights, biases, and number of neurons. An extensive test was conducted to find the best set of hyperparameters for the application. The network was tested 10 times for every combination of hyperparameter values, and the average accuracy on the test dataset defined the best set. The adaptive moment estimation (ADAM) optimizer, a learning rate of 0.001, and a batch size of 16 outperformed the other combinations, achieving an average accuracy of 92.876% on the test set. Feature extraction proved powerful, since the training accuracy was close to 90% even in the first epochs and full training of the architecture was not required. The tool combines automation, deep learning, and computer vision applied to a real engineering problem.
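To make the feature-extraction setup described in the abstract concrete, the sketch below shows one way it could be implemented with PyTorch/torchvision: freeze a pretrained ResNet, replace its final fully connected layer with a two-class head, and train with ADAM, a learning rate of 0.001, and a batch size of 16. This is not the authors' code; the ResNet depth (ResNet-18 here), the dataset path, the preprocessing, and the number of epochs are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): feature-extraction transfer learning
# with a pretrained ResNet for binary classification of warehouse materials.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

DATA_DIR = "warehouse_dataset/train"  # hypothetical folder, one subfolder per class

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder(DATA_DIR, transform=transform)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)  # batch size from the paper

# Load a pretrained ResNet and freeze all layers (feature extraction):
# only the replaced final fully connected layer is trained.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # two material classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=0.001)  # ADAM, lr from the paper

model.train()
for epoch in range(5):  # number of epochs is illustrative only
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Because only the final layer's parameters are passed to the optimizer, the backbone acts as a fixed feature extractor, which is consistent with the abstract's observation that near-90% training accuracy is reached within the first epochs without training the full architecture.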