3D Nuclei Segmentation through Deep Learning
Roberto Rojas, Carlos F. Navarro, Gabriel A. Orellana, Carmen Gloria Lemus C., V. Castañeda
2023 IEEE Conference on Artificial Intelligence (CAI), June 2023
DOI: 10.1109/CAI54212.2023.00137
Abstract
Deep learning has been used successfully to solve difficult problems in the field of fluorescence microscopy. In this work, we propose a 3D nuclei segmentation method for Drosophila based on a pipeline that first detects nuclei centers and then segments each detected nucleus individually, using separate 3D U-Nets for the detection and segmentation steps. Our method is among the top-3 performers in the Cell Tracking Challenge segmentation benchmark on the Light Sheet Microscopy Drosophila dataset, reaching a final score of 0.827. The proposed methodology: i) allows a U-Net model to be used for a detection task, and ii) requires far fewer training samples than direct segmentation of the entire volume, reducing the manual annotation effort.
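The two-stage detect-then-segment pipeline described in the abstract can be sketched in outline. The sketch below is illustrative only, not the authors' implementation: the local-maximum peak finder standing in for the detection U-Net's post-processing, the per-nucleus cropping, and the `seg_net` callable (a placeholder for the segmentation U-Net, here defaulting to simple thresholding) are all assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter


def detect_centers(prob_map, threshold=0.5, min_distance=3):
    """Stage 1 (sketch): find nucleus centers as local maxima of a
    center-probability map (assumed to come from a detection 3D U-Net)."""
    size = 2 * min_distance + 1
    local_max = maximum_filter(prob_map, size=size) == prob_map
    return np.argwhere(local_max & (prob_map > threshold))


def segment_instances(volume, centers, patch=8, seg_net=None):
    """Stage 2 (sketch): crop a patch around each detected center,
    segment each nucleus individually, and paste the binary masks back
    into a label volume with unique instance IDs.

    `seg_net` is a hypothetical placeholder for the segmentation U-Net;
    without it, a crude mean-threshold is used for illustration."""
    labels = np.zeros(volume.shape, dtype=np.int32)
    for instance_id, (z, y, x) in enumerate(centers, start=1):
        sl = tuple(slice(max(c - patch, 0), c + patch + 1) for c in (z, y, x))
        crop = volume[sl]
        mask = seg_net(crop) if seg_net else crop > crop.mean()
        region = labels[sl]                      # view into the label volume
        region[mask & (region == 0)] = instance_id
    return labels
```

Because segmentation operates on small per-nucleus patches rather than the whole volume, training data can consist of a handful of annotated nuclei instead of densely annotated stacks, which is the annotation saving the abstract refers to.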