Efficient identification, localization and quantification of grapevine inflorescences and flowers in unprepared field images using Fully Convolutional Networks
Robert Rudolph, Katja Herzog, R. Töpfer, V. Steinhage
{"title":"Efficient identification, localization and quantification of grapevine inflorescences and flowers in unprepared field images using Fully Convolutional Networks","authors":"Robert Rudolph, Katja Herzog, R. Töpfer, V. Steinhage","doi":"10.5073/VITIS.2019.58.95-104","DOIUrl":null,"url":null,"abstract":"Yield and its prediction is one of the most important tasks in grapevine breeding purposes and vineyard management. Commonly, this trait is estimated manually right before harvest by extrapolation, which mostly is labor-intensive, destructive and inaccurate. In the present study an automated image-based workflow was developed for quantifying inflorescences and single flowers in unprepared field images of grapevines, i.e. no artificial background or light was applied. It is a novel approach for non-invasive, inexpensive and objective phenotyping with high-throughput. First, image regions depicting inflorescences were identified and localized. This was done by segmenting the images into the classes \"inflorescence\" and \"non-inflorescence\" using a Fully Convolutional Network (FCN). Efficient image segmentation hereby is the most challenging step regarding the small geometry and dense distribution of single flowers (several hundred single flowers per inflorescence), similar color of all plant organs in the fore- and background as well as the circumstance that only approximately 5 % of an image show inflorescences. The trained FCN achieved a mean Intersection Over Union (IOU) of 87.6 % on the test data set. Finally, single flowers were extracted from the \"inflorescence\"-areas using Circular Hough Transform. The flower extraction achieved a recall of 80.3 % and a precision of 70.7 % using the segmentation derived by the trained FCN model. 
Summarized, the presented approach is a promising strategy in order to predict yield potential automatically in the earliest stage of grapevine development which is applicable for objective monitoring and evaluations of breeding material, genetic repositories or commercial vineyards.","PeriodicalId":23613,"journal":{"name":"Vitis: Journal of Grapevine Research","volume":"20 1","pages":"95-104"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Vitis: Journal of Grapevine Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5073/VITIS.2019.58.95-104","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 15
Abstract
Yield prediction is one of the most important tasks in grapevine breeding and vineyard management. Commonly, this trait is estimated manually right before harvest by extrapolation, which is mostly labor-intensive, destructive and inaccurate. In the present study, an automated image-based workflow was developed for quantifying inflorescences and single flowers in unprepared field images of grapevines, i.e. no artificial background or lighting was applied. It is a novel approach for non-invasive, inexpensive and objective high-throughput phenotyping. First, image regions depicting inflorescences were identified and localized by segmenting the images into the classes "inflorescence" and "non-inflorescence" using a Fully Convolutional Network (FCN). This segmentation is the most challenging step, owing to the small size and dense distribution of single flowers (several hundred per inflorescence), the similar color of all plant organs in the fore- and background, and the fact that only approximately 5 % of an image shows inflorescences. The trained FCN achieved a mean Intersection over Union (IoU) of 87.6 % on the test data set. Finally, single flowers were extracted from the "inflorescence" areas using a Circular Hough Transform. Using the segmentation derived by the trained FCN model, the flower extraction achieved a recall of 80.3 % and a precision of 70.7 %. In summary, the presented approach is a promising strategy for predicting yield potential automatically at the earliest stage of grapevine development, applicable to objective monitoring and evaluation of breeding material, genetic repositories and commercial vineyards.
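The evaluation metrics reported in the abstract can be illustrated with a minimal sketch. The function and variable names below are illustrative, not taken from the paper: IoU compares a predicted binary segmentation mask against a ground-truth mask, and recall/precision summarize flower detections from counts of true positives, false negatives and false positives.

```python
def iou(pred, truth):
    """Intersection over Union between two binary masks, given as
    equal-length sequences of 0/1 pixel labels."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    # By convention, two empty masks match perfectly.
    return inter / union if union else 1.0


def recall_precision(true_pos, false_neg, false_pos):
    """Detection recall and precision from matched flower counts:
    recall = TP / (TP + FN), precision = TP / (TP + FP)."""
    recall = true_pos / (true_pos + false_neg)
    precision = true_pos / (true_pos + false_pos)
    return recall, precision
```

For example, a detector that finds 8 of 10 annotated flowers while producing 2 spurious detections has a recall of 0.8 and a precision of 0.8; the paper's reported 80.3 % recall and 70.7 % precision are computed in this way over all test images.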