Distinction of Edible and Inedible Harvests Using a Fine-Tuning-Based Deep Learning System

Shinji Kawakura, R. Shibasaki
Journal of Advanced Agricultural Technologies
DOI: 10.18178/joaat.6.4.236-240
Citations: 3

Abstract

Effectively detecting and removing inedible harvests before or after harvesting is important for many agri-workers. Recent studies have proposed diverse measures, including robot-arm-based machines that harvest vegetables and pull up weeds, using camera systems to detect the relevant coordinates. Although some of these systems include monitoring and identification tools for edible and inedible targets, their accuracy has not been sufficient for practical use. Further improvements have therefore incorporated computing based on human perception and common-sense reasoning, considering up-to-date technologies and how solutions can reflect the experience of traditional agri-workers. Our focus is on Japanese small- to middle-sized farms. We developed a fine-tuning (transfer-learning)-based deep learning system that gathers field pictures and performs static visual data analyses using artificial intelligence (AI)-based computing. In this study, the pictures included kiwi fruits, eggplants, and mini tomatoes in outdoor farmland. We focused on several program-based applications of deep learning systems with multiple hidden layers. In line with current technical trends, results are presented for two patterns with different target layers: (1) retraining all fully connected ("bonding") layers, and (2) additionally fine-tuning some convolution layers of a Visual Geometry Group (VGG) 16 network with a picture classifier built as a convolutional neural network (CNN). Our results confirmed the utility of the fine-tuning methodologies, supporting similar analyses in other academic research fields. In the future, these results could assist the development of automatic agricultural harvesting systems and other high-tech agri-systems.
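The two fine-tuning patterns named in the abstract can be sketched in code. The following is a minimal illustration only, with assumptions the paper does not state: the framework (tf.keras here), the input size (224×224), the size of the added dense layer (256 units), and the choice of `block5` as the unfrozen convolution block. `weights=None` keeps the sketch offline; in practice the VGG16 base would be loaded with pretrained `weights="imagenet"` so that fine-tuning has something to transfer.

```python
import tensorflow as tf

def build_fine_tuned_vgg16(num_classes=2, unfreeze_last_block=False):
    # Convolutional base: VGG16 without its original classifier head.
    # weights=None avoids a download in this sketch; use "imagenet" in practice.
    base = tf.keras.applications.VGG16(
        weights=None, include_top=False, input_shape=(224, 224, 3))

    if unfreeze_last_block:
        # Pattern 2: also fine-tune the last convolution block (block5_*),
        # keeping the earlier convolution layers frozen.
        base.trainable = True
        for layer in base.layers:
            layer.trainable = layer.name.startswith("block5")
    else:
        # Pattern 1: freeze the whole convolutional base and retrain only
        # the new fully connected ("bonding") layers added below.
        base.trainable = False

    # New CNN-based picture classifier head for edible vs. inedible targets.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In pattern 1 only the two new dense layers carry trainable weights; in pattern 2 the three `block5` convolution layers are trainable as well, which is the usual trade-off between training cost and adaptation to the target crops.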