{"title":"CRF-driven multi-compartment geometric model","authors":"Sepehr Farhand, F. Andreopoulos, G. Tsechpenakis","doi":"10.1109/ISBI.2013.6556669","DOIUrl":null,"url":null,"abstract":"We present a hybrid framework for segmenting structures consisting of distinct inter-connected parts. We combine the robustness of Conditional Random Fields in appearance classification with the shape constraints of geometric models and the relative part topology constraints that multi-compartment modeling provides. We demonstrate the performance of our method in cell segmentation from fluorescent microscopic images, where the compartments of interest are the cell nucleus, cytoplasm, and the negative hypothesis (background). We compare our results with the most relevant model- and appearance-based segmentation methods.","PeriodicalId":178011,"journal":{"name":"2013 IEEE 10th International Symposium on Biomedical Imaging","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE 10th International Symposium on Biomedical Imaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISBI.2013.6556669","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
We present a hybrid framework for segmenting structures consisting of distinct, interconnected parts. We combine the robustness of Conditional Random Fields in appearance classification with the shape constraints of geometric models and the relative part-topology constraints that multi-compartment modeling provides. We demonstrate the performance of our method on cell segmentation from fluorescence microscopy images, where the compartments of interest are the cell nucleus, the cytoplasm, and the negative hypothesis (background). We compare our results with the most relevant model- and appearance-based segmentation methods.
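As a rough illustration of how a CRF appearance term can be coupled with a multi-compartment topology constraint, the sketch below classifies pixels into background, cytoplasm, and nucleus with a Potts-style CRF and then applies a toy containment rule (nucleus must lie inside cytoplasm). This is not the authors' model: the Gaussian intensity models, the ICM inference loop, and the neighbourhood-based containment rule stand in for the paper's geometric shape model and are illustrative assumptions only.

```python
# Minimal sketch (not the paper's implementation): CRF-style appearance
# classification plus a toy part-topology constraint for cell compartments.
import numpy as np

LABELS = {"background": 0, "cytoplasm": 1, "nucleus": 2}

def unary_costs(image, means, sigmas):
    """Per-pixel appearance costs: negative Gaussian log-likelihood per label."""
    costs = np.empty(image.shape + (len(means),))
    for k, (mu, sigma) in enumerate(zip(means, sigmas)):
        costs[..., k] = 0.5 * ((image - mu) / sigma) ** 2 + np.log(sigma)
    return costs

def icm_crf(image, means, sigmas, beta=1.0, n_iters=5):
    """Iterated conditional modes on a Potts CRF: unary appearance term
    plus a pairwise smoothness penalty between 4-connected neighbours."""
    costs = unary_costs(image, means, sigmas)
    labels = costs.argmin(axis=-1)
    H, W, K = costs.shape
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                local = costs[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        # Potts penalty for disagreeing with a neighbour's label
                        local += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = local.argmin()
    return labels

def enforce_containment(labels):
    """Toy topology constraint: a nucleus pixel with no cytoplasm anywhere in
    its neighbourhood is relabelled as cytoplasm (nucleus must sit inside it)."""
    out = labels.copy()
    H, W = labels.shape
    for i in range(H):
        for j in range(W):
            if labels[i, j] == LABELS["nucleus"]:
                patch = labels[max(i - 2, 0):i + 3, max(j - 2, 0):j + 3]
                if not np.any(patch == LABELS["cytoplasm"]):
                    out[i, j] = LABELS["cytoplasm"]
    return out

if __name__ == "__main__":
    # Synthetic "cell": bright nucleus inside a dimmer cytoplasm on a dark background.
    yy, xx = np.mgrid[0:64, 0:64]
    r = np.hypot(yy - 32, xx - 32)
    image = np.where(r < 8, 0.9, np.where(r < 20, 0.5, 0.1))
    image += 0.05 * np.random.default_rng(0).standard_normal(image.shape)

    labels = icm_crf(image, means=[0.1, 0.5, 0.9], sigmas=[0.1, 0.1, 0.1])
    labels = enforce_containment(labels)
    print({name: int((labels == k).sum()) for name, k in LABELS.items()})
```

In the paper the shape and topology constraints come from a geometric (deformable) multi-compartment model rather than the crude relabelling rule used here; the sketch only conveys how appearance classification and part-topology information can be combined.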