IngredSAM: Open-World Food Ingredient Segmentation via a Single Image Prompt
Leyi Chen, Bowen Wang, Jiaxin Zhang
Journal of Imaging, 10(12), published 2024-11-26
DOI: 10.3390/jimaging10120305
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11677470/pdf/
Abstract
Food semantic segmentation is of great significance in computer vision and artificial intelligence, particularly for food image analysis. Because of the complexity and variety of food, supervised methods struggle to handle this task effectively. We therefore introduce IngredSAM, a novel approach to open-world food ingredient semantic segmentation that extends the capabilities of the Segment Anything Model (SAM). Leveraging visual foundation models (VFMs) and prompt engineering, IngredSAM matches discriminative semantic features between a single clean image prompt of a specific ingredient and open-world images to guide the generation of accurate segmentation masks in real-world scenarios. This addresses the difficulty traditional supervised models face with the diverse appearances and class imbalance of food ingredients. Without any training, our framework outperforms previous state-of-the-art methods by 2.85% and 6.01% on the FoodSeg103 and UECFoodPix datasets, respectively. IngredSAM exemplifies a successful application of one-shot, open-world segmentation, paving the way for downstream applications such as nutritional analysis and the monitoring of consumer dietary trends.
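The abstract describes matching semantic features between a single clean ingredient prompt image and an open-world image to guide SAM's mask generation. The following is a minimal sketch of one plausible form of that matching step, assuming patch features have already been extracted by a VFM such as DINOv2; the function name, the mean-pooled ingredient descriptor, and the top-k selection are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def match_prompt_points(prompt_feats, target_feats, grid_hw, k=3):
    """Pick point prompts in the target image whose patch features
    best match the ingredient shown in the prompt image.

    prompt_feats: (Np, D) patch features from the clean prompt image
    target_feats: (Nt, D) patch features from the open-world image
    grid_hw: (H, W) patch-grid shape of the target image, H * W == Nt
    Returns k (row, col) patch coordinates, usable as SAM point prompts.
    """
    # Average the prompt patches into one ingredient descriptor
    # (an illustrative simplification).
    query = prompt_feats.mean(axis=0)
    query = query / np.linalg.norm(query)

    # Cosine similarity between the descriptor and every target patch.
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sims = t @ query

    # Keep the k most similar patches as positive point prompts.
    top = np.argsort(sims)[::-1][:k]
    h, w = grid_hw
    return [(int(i // w), int(i % w)) for i in top]
```

In a full pipeline, these patch coordinates would be scaled back to pixel space and passed to SAM's point-prompt interface (e.g. as `point_coords` with positive `point_labels`) to produce the final ingredient mask.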