Performance Analysis of Deep Transfer Learning Models for the Automated Detection of Cotton Plant Diseases
Sohail Anwar, Shoaib Rehman Soomro, Shadi Khan Baloch, Aamir Ali Patoli, Abdul Rahim Kolachi
Engineering, Technology & Applied Science Research, published 2023-10-13. DOI: 10.48084/etasr.6187 (https://doi.org/10.48084/etasr.6187)
Abstract
Cotton is one of the most important agricultural products and is closely linked to the economic development of Pakistan. However, the cotton plant is susceptible to bacterial and viral diseases that can spread quickly, damage plants, and ultimately reduce cotton yield. Automated and early detection of affected plants can significantly limit the potential spread of disease. This paper presents the implementation and performance analysis of bacterial blight and curl virus disease detection in cotton crops using deep learning techniques. Automated disease detection was performed through transfer learning of six pre-trained deep learning models, namely DenseNet121, DenseNet169, MobileNetV2, ResNet50V2, VGG16, and VGG19. A total of 1362 images of local agricultural fields and 1292 images from online resources were used to train and validate the models. Image augmentation techniques were applied to increase the diversity and size of the dataset. Transfer learning was implemented for image resolutions ranging from 32×32 to 256×256 pixels. Performance metrics such as accuracy, precision, recall, F1 score, and prediction time were evaluated for each implemented model. The results indicate accuracy of up to 96% for the DenseNet169 and ResNet50V2 models when trained on the 256×256 pixel image dataset. The lowest accuracy, 52%, was obtained by the MobileNetV2 model when trained on low-resolution, 32×32 pixel images. The confusion matrix analysis indicates true-positive prediction rates higher than 91% for fresh leaves, 87% for bacterial blight, and 76% for curl virus for all implemented models when trained and tested on image datasets of 128×128 pixels or higher resolution.
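To make the transfer-learning setup described in the abstract concrete, the following is a minimal sketch of how one of the six pre-trained backbones (here DenseNet169) could be adapted for the three-class task (fresh leaf, bacterial blight, curl virus) in TensorFlow/Keras. The class names, directory layout ("images/<class_name>/"), augmentation choices, and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: transfer learning with an ImageNet-pretrained DenseNet169
# backbone for three-class cotton leaf classification. Assumed, not from the paper:
# directory layout, augmentation settings, dropout rate, optimizer, and epoch count.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (256, 256)   # one of the resolutions studied (32x32 up to 256x256)
NUM_CLASSES = 3         # fresh leaf, bacterial blight, curl virus

# Load the pre-trained convolutional base without its classification head
# and freeze it, so only the new head is trained (transfer learning).
base = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

# Simple augmentation pipeline to increase dataset diversity.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.densenet.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed directory layout: images/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "images", validation_split=0.2, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "images", validation_split=0.2, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=32)

model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The same pattern applies to the other backbones (DenseNet121, MobileNetV2, ResNet50V2, VGG16, VGG19) by swapping the application class and its matching preprocess_input function; accuracy, precision, recall, and F1 score can then be computed from the validation predictions, for example with scikit-learn's classification_report.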