Vision-based Postharvest Analysis of Musa Acuminata Using Feature-based Machine Learning and Deep Transfer Networks

Christan Hail R. Mendigoria, Heinrick L. Aquino, Ronnie S. Concepcion, Oliver John Y. Alajas, E. Dadios, E. Sybingco
{"title":"Vision-based Postharvest Analysis of Musa Acuminata Using Feature-based Machine Learning and Deep Transfer Networks","authors":"Christan Hail R. Mendigoria, Heinrick L. Aquino, Ronnie S. Concepcion, Oliver John Y. Alajas, E. Dadios, E. Sybingco","doi":"10.1109/R10-HTC53172.2021.9641575","DOIUrl":null,"url":null,"abstract":"Traditional practice of classifying postharvest crops creates variability in quality assessment due to human-related limitations including individual disparities in visual recognition. As a solution, computer vision approach was adapted. This study aims to classify the native banana fruit to South and Southern Asia (Musa acuminata) using the image-based deep transfer networks of ResNet101, MobileNetV2, and InceptionV3, and machine learning algorithms, including classification tree (CTree), Naïve Bayes algorithm (NB), k-nearest neighbors (KNN) and support vector machine (SVM). A total of 1,164 images, derived from 194 banana tier subjects with different orientations were utilized. Color channel thresholding in CIELab(L*a*b*) color space was applied to extract the spectral (RGB, HSV, YCbCr, L*a*b*), textural (correlation, contrast, entropy, homogeneity, energy) and morphological (total area) features. These 18-feature vectors were further simplified into two most significant features (S and V) using combined neighborhood component analysis and principal component analysis (hybrid NCA-PCA). The length of the top middle finger of the banana tier was added to the features. The classification tree (CTree), regardless of feature set, was validated to have the best performance, with accuracy of 91.28% and inference time of 19.34seconds. In addition, NB, KNN and SVM models provided acceptable performance with 89.72%, 89.30%, and 89.36% accuracies, respectively. However, the deep transfer networks did not provide acceptable classification results (with ResNet101 having the highest accuracy of 50.01% among the networks used). Lastly, the proposed machine learning models served as a feasible approach in the postharvest classification of Musa acuminata.","PeriodicalId":117626,"journal":{"name":"2021 IEEE 9th Region 10 Humanitarian Technology Conference (R10-HTC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 9th Region 10 Humanitarian Technology Conference (R10-HTC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/R10-HTC53172.2021.9641575","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

The traditional practice of classifying postharvest crops introduces variability into quality assessment because of human-related limitations, including individual differences in visual recognition. As a solution, a computer vision approach was adopted. This study aims to classify the banana fruit native to South and Southeast Asia (Musa acuminata) using the image-based deep transfer networks ResNet101, MobileNetV2, and InceptionV3, and feature-based machine learning algorithms, including the classification tree (CTree), Naïve Bayes (NB), k-nearest neighbors (KNN), and support vector machine (SVM). A total of 1,164 images, derived from 194 banana-tier subjects captured in different orientations, were utilized. Color-channel thresholding in the CIELab (L*a*b*) color space was applied to extract spectral (RGB, HSV, YCbCr, L*a*b*), textural (correlation, contrast, entropy, homogeneity, energy), and morphological (total area) features. These 18-element feature vectors were further reduced to the two most significant features (S and V) using combined neighborhood component analysis and principal component analysis (hybrid NCA-PCA). The length of the top middle finger of the banana tier was added to the feature set. The classification tree (CTree), regardless of feature set, was validated to have the best performance, with an accuracy of 91.28% and an inference time of 19.34 seconds. The NB, KNN, and SVM models also provided acceptable performance, with accuracies of 89.72%, 89.30%, and 89.36%, respectively. However, the deep transfer networks did not provide acceptable classification results, with ResNet101 achieving the highest accuracy among them at only 50.01%. The proposed machine learning models therefore serve as a feasible approach to the postharvest classification of Musa acuminata.
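The abstract outlines a pipeline of feature extraction, hybrid NCA-PCA reduction, and classical classifier comparison. The following minimal Python sketch illustrates only the reduction-and-comparison stage using scikit-learn; it is not the authors' implementation. The synthetic 1,164 x 18 matrix stands in for the extracted spectral, textural, and morphological features, and all hyperparameters (NCA output dimension, KNN neighbors, SVM kernel) are illustrative assumptions rather than the paper's settings.

```python
# Sketch of the hybrid NCA-PCA reduction and feature-based classifier comparison.
# Placeholder data is used in place of the banana-tier feature vectors.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder for the 1,164 x 18 feature matrix (spectral, textural, morphological).
X, y = make_classification(n_samples=1164, n_features=18, n_informative=6,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Hybrid NCA-PCA: a supervised NCA projection followed by PCA, keeping the two
# most significant components (mirroring the two retained features in the paper).
reducer = Pipeline([
    ("scale", StandardScaler()),
    ("nca", NeighborhoodComponentsAnalysis(n_components=6, random_state=0)),
    ("pca", PCA(n_components=2)),
])
X_train_r = reducer.fit_transform(X_train, y_train)
X_test_r = reducer.transform(X_test)

# The four feature-based learners compared in the study.
models = {
    "CTree": DecisionTreeClassifier(random_state=0),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    model.fit(X_train_r, y_train)
    acc = accuracy_score(y_test, model.predict(X_test_r))
    print(f"{name}: {acc:.2%}")
```

Applying NCA before PCA keeps the class labels in the loop during the first projection, so the final two components retain discriminative rather than purely high-variance directions; the ordering and dimensions here are assumptions about how the hybrid reduction could be arranged.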