{"title":"Performance Comparison of Saliency Detection Methods for Food Region Extraction","authors":"Takuya Futagami, N. Hayasaka, T. Onoye","doi":"10.1145/3406971.3406974","DOIUrl":null,"url":null,"abstract":"Several methods for extracting food regions from food images use visual saliency to improve accuracy. The effectiveness of saliency detection methods for food extraction, however, has not been discussed sufficiently. Thus, the effectiveness of well-known saliency detection methods is compared thoroughly for the future development of highly accurate food-extraction methods. Ten saliency detection methods, which consisted of seven handcrafted feature-based approaches and three deep learning-based approaches, were tested by applying them to 240 food images. The results suggest that MSI, which uses only neural networks without the assumption that food regions tend to be found at the center of images, predicted food regions most accurately in terms of areas under a receiver operating characteristic curve (AUC). Additionally, GMR, which assumes that food regions tend not to be found around the four sides of an image, was also effective on the food extraction task. The AUCs of these methods were more than 4% larger than that of a center model that is frequently used as a baseline for saliency detection. Furthermore, this paper supports these results by comparing other methods and determining the properties of food images.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th International Conference on Graphics and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3406971.3406974","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Several methods for extracting food regions from food images use visual saliency to improve accuracy. However, the effectiveness of saliency detection methods for food extraction has not been discussed sufficiently. Thus, this paper thoroughly compares well-known saliency detection methods to inform the future development of highly accurate food-extraction methods. Ten saliency detection methods, consisting of seven handcrafted feature-based approaches and three deep learning-based approaches, were tested on 240 food images. The results suggest that MSI, which relies only on neural networks and does not assume that food regions tend to appear at the center of an image, predicted food regions most accurately in terms of the area under the receiver operating characteristic curve (AUC). Additionally, GMR, which assumes that food regions tend not to appear along the four sides of an image, was also effective for the food-extraction task. The AUCs of these two methods were more than 4% larger than that of the center model frequently used as a baseline in saliency detection. Furthermore, this paper supports these findings by comparing the remaining methods and by examining the properties of food images.
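The evaluation described in the abstract treats each saliency map as a pixel-wise classifier of food versus background and scores it by the area under the ROC curve, with a center model serving as the baseline. The sketch below is not taken from the paper; it illustrates this setup under simple assumptions (an isotropic Gaussian center prior, a hypothetical `sigma_scale` parameter, and a toy ground-truth mask) using scikit-learn's `roc_auc_score`.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def center_prior_map(height, width, sigma_scale=0.5):
    """Isotropic Gaussian centered on the image, a common center-model baseline.

    sigma_scale is an assumed illustrative parameter, not a value from the paper.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sigma = sigma_scale * min(height, width)
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))


def saliency_auc(saliency_map, food_mask):
    """AUC of a saliency map used as a pixel-wise detector of the food region."""
    scores = saliency_map.ravel().astype(np.float64)
    labels = (food_mask.ravel() > 0).astype(np.uint8)
    return roc_auc_score(labels, scores)


# Toy usage: a hypothetical rectangular food region as ground truth.
h, w = 240, 320
mask = np.zeros((h, w), dtype=np.uint8)
mask[80:160, 100:220] = 1  # assumed food-region mask for illustration only
baseline = center_prior_map(h, w)
print(f"center-model AUC: {saliency_auc(baseline, mask):.3f}")
```

In an evaluation like the one reported, the map produced by each saliency detection method would replace `baseline`, and the resulting AUCs would be averaged over the image set and compared against the center-model score.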