Leaf only SAM: A segment anything pipeline for zero-shot automated leaf segmentation

Journal: Smart agricultural technology (JCR Q1, Agricultural Engineering; Impact Factor 6.3)
DOI: 10.1016/j.atech.2024.100515
Publication date: 2024-08-01 (Journal Article)
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S2772375524001205/pdfft?md5=f2f846805dc88ddc407945516f183172&pid=1-s2.0-S2772375524001205-main.pdf
Citations: 0
Abstract
The Segment Anything Model (SAM) is a new “foundation model” that can perform zero-shot object segmentation, guided by prompts such as bounding boxes, polygons, or points. Alternatively, SAM can segment everything in an image, after which additional post-processing steps can be used to identify the objects of interest. Here a method, called Leaf Only SAM, is presented that uses Segment Anything together with a series of post-processing steps to segment potato leaves. The advantage of the proposed method is that it requires no training data to produce its results, so it has many applications across the field of plant phenotyping, where high-quality annotated data is limited. The performance of Leaf Only SAM is compared to a Mask R-CNN model fine-tuned on a small novel potato leaf dataset. On the evaluation dataset, Leaf Only SAM achieves an average recall of 73.1 and an average precision of 73.9, compared to a recall of 87.6 and a precision of 84.4 for Mask R-CNN. Leaf Only SAM does not perform better than the fine-tuned Mask R-CNN model on the potato leaf dataset, but the SAM-based model requires no extra training or annotation. This shows there is potential to use SAM as a zero-shot classifier with the addition of post-processing steps.
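The abstract does not spell out the individual post-processing steps, but the overall pipeline it describes (segment everything with SAM, then filter the resulting masks down to leaves) can be sketched as follows. The green-colour and minimum-area filters below are assumed illustrative criteria, not the paper's actual steps; `filter_leaf_masks` and its thresholds are hypothetical names introduced here.

```python
import numpy as np

# With the official segment-anything package, the "segment everything" stage
# would look roughly like this (requires a downloaded checkpoint, so it is
# left as a comment):
#   from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
#   sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
#   masks = [m["segmentation"] for m in SamAutomaticMaskGenerator(sam).generate(image)]

def filter_leaf_masks(masks, image, green_ratio=1.1, min_area=500):
    """Keep masks that plausibly correspond to leaves.

    masks: list of boolean (H, W) arrays, e.g. the "segmentation" field of
           SAM's automatic mask generator output.
    image: (H, W, 3) RGB array.
    green_ratio, min_area: assumed example thresholds, not from the paper.
    """
    leaf_masks = []
    for m in masks:
        if m.sum() < min_area:            # drop tiny fragments
            continue
        r, g, b = image[m].mean(axis=0)   # mean colour inside the mask
        if g > green_ratio * r and g > green_ratio * b:
            leaf_masks.append(m)          # predominantly green: keep as leaf
    return leaf_masks
```

Because the filtering operates purely on SAM's mask outputs, this stage needs no training data, which is the property the abstract highlights for plant-phenotyping settings with little annotated data.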