{"title":"Development of AI-based System for Classification of Objects in Farms Using Deep Learning by Chainer and a Template-Matching Based Detection Method","authors":"Shinji Kawakura, R. Shibasaki","doi":"10.18178/joaat.6.3.175-179","DOIUrl":null,"url":null,"abstract":"—It has generally been difficult for agri-system developers to identify diverse objects automatically and accurately before the harvesting without touching something dangerous (e.g., poisonous creatures, toxic substances). Such objects could include harvestings for sale, stems, leaves, artificial stiff frames, unnecessary weeds, agri-tools, and creatures, especially in Japanese traditional small-medium sized, insufficiently trimmed (messed) farmlands. Scientists, agri-managers, and workers have been trying to solve these problems. On the other side, researchers have been advancing robot systems, mainly based on automatic machines for harvesting and pulling up weeds utilizing visual-data analysis systems. These studies have captured a significant amount of visual data, identified objects with short time delay. However, previous products have not yet met these requirements. We have considered the achievements of recent technologies to develop and test new systems. The purpose of this research is proving the utility of this visual-data analysis system by classifying and outputting datasets from an AI-based image system that obtained field pictures in outdoor farmlands. We then apply Chainer for deep learning, and focus on computing methodologies relating to template-matching and deep learning to classify the captured objects. The presented sets of results confirm the utility of the methodologies to some extent.","PeriodicalId":222254,"journal":{"name":"Journal of Advanced Agricultural Technologies","volume":"97 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Advanced Agricultural Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18178/joaat.6.3.175-179","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
It has generally been difficult for agricultural system developers to identify diverse objects automatically and accurately before harvesting without touching something dangerous (e.g., poisonous creatures or toxic substances). Such objects include produce for sale, stems, leaves, rigid artificial frames, unwanted weeds, agricultural tools, and creatures, particularly in traditional Japanese small- to medium-sized farmlands that are insufficiently trimmed. Scientists, farm managers, and workers have long been trying to solve these problems. Meanwhile, researchers have been advancing robotic systems, mainly automatic machines for harvesting and weeding that rely on visual-data analysis. These studies have captured large amounts of visual data and identified objects with short time delays, yet previous products have not fully met the requirements above. Considering the achievements of recent technologies, we developed and tested new systems. The purpose of this research is to demonstrate the utility of a visual-data analysis system by classifying and outputting datasets from an AI-based image system that captures field pictures of outdoor farmlands. We apply Chainer for deep learning and focus on computing methodologies related to template matching and deep learning to classify the captured objects. The presented results confirm the utility of these methodologies to some extent.
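The abstract names two concrete computational steps: candidate detection via template matching and object classification via a Chainer-based network. The sketch below shows how such a pipeline could be wired together; it is not the authors' implementation. The OpenCV calls, the small CNN architecture, and names such as SimpleFarmCNN, detect_candidates, and classify_patch are assumptions introduced only for illustration.

```python
# Minimal sketch of a template-matching + Chainer-CNN pipeline (illustrative only;
# not the paper's actual code). Assumes OpenCV for detection and Chainer for classification.
import cv2
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L


class SimpleFarmCNN(chainer.Chain):
    """Small CNN that scores image patches against object classes
    (e.g., produce, stems, leaves, frames, weeds, tools, creatures)."""

    def __init__(self, n_classes=7):
        super(SimpleFarmCNN, self).__init__()
        with self.init_scope():
            self.conv1 = L.Convolution2D(None, 16, ksize=3, pad=1)
            self.conv2 = L.Convolution2D(16, 32, ksize=3, pad=1)
            self.fc = L.Linear(None, n_classes)

    def __call__(self, x):
        h = F.max_pooling_2d(F.relu(self.conv1(x)), 2)
        h = F.max_pooling_2d(F.relu(self.conv2(h)), 2)
        return self.fc(h)  # raw class scores (logits)


def detect_candidates(field_gray, template_gray, threshold=0.7):
    """Return top-left corners where the template correlates strongly with the field image."""
    scores = cv2.matchTemplate(field_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= threshold)
    return list(zip(xs, ys))


def classify_patch(model, patch_bgr, size=32):
    """Resize a detected patch and run it through the CNN; returns the predicted class index."""
    patch = cv2.resize(patch_bgr, (size, size)).astype(np.float32) / 255.0
    x = patch.transpose(2, 0, 1)[np.newaxis]  # NCHW batch of one
    with chainer.using_config('train', False), chainer.no_backprop_mode():
        logits = model(x)
    return int(F.argmax(logits, axis=1).array[0])
```

In practice, the coordinates returned by detect_candidates would be used to crop patches of the original color frame before calling classify_patch; the template size, matching threshold, and number of classes are placeholders, not values reported in the paper.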