Ronald Xie, Ben Mulcahy, Ali Darbandi, Sagar Marwah, Fez Ali, Yuna Lee, Gunes Parlakgul, Gokhan S Hotamisligil, Bo Wang, Sonya MacParland, Mei Zhen, Gary D Bader
{"title":"迁移学习提高了容量电镜细胞器跨组织分割的性能。","authors":"Ronald Xie, Ben Mulcahy, Ali Darbandi, Sagar Marwah, Fez Ali, Yuna Lee, Gunes Parlakgul, Gokhan S Hotamisligil, Bo Wang, Sonya MacParland, Mei Zhen, Gary D Bader","doi":"10.1093/bioadv/vbaf021","DOIUrl":null,"url":null,"abstract":"<p><strong>Motivation: </strong>Volumetric electron microscopy (VEM) enables nanoscale resolution three-dimensional imaging of biological samples. Identification and labeling of organelles, cells, and other structures in the image volume is required for image interpretation, but manual labeling is extremely time-consuming. This can be automated using deep learning segmentation algorithms, but these traditionally require substantial manual annotation for training and typically these labeled datasets are unavailable for new samples.</p><p><strong>Results: </strong>We show that transfer learning can help address this challenge. By pretraining on VEM data from multiple mammalian tissues and organelle types and then fine-tuning on a target dataset, we segment multiple organelles at high performance, yet require a relatively small amount of new training data. We benchmark our method on three published VEM datasets and a new rat liver dataset we imaged over a 56×56×11 <math><mi>μ</mi></math> m volume measuring 7000×7000×219 px using serial block face scanning electron microscopy with corresponding manually labeled mitochondria and endoplasmic reticulum structures. 
We further benchmark our approach against the Segment Anything Model 2 and MitoNet in zero-shot, prompted, and fine-tuned settings.</p><p><strong>Availability and implementation: </strong>Our rat liver dataset's raw image volume, manual ground truth annotation, and model predictions are freely shared at github.com/Xrioen/cross-tissue-transfer-learning-in-VEM.</p>","PeriodicalId":72368,"journal":{"name":"Bioinformatics advances","volume":"5 1","pages":"vbaf021"},"PeriodicalIF":2.4000,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974384/pdf/","citationCount":"0","resultStr":"{\"title\":\"Transfer learning improves performance in volumetric electron microscopy organelle segmentation across tissues.\",\"authors\":\"Ronald Xie, Ben Mulcahy, Ali Darbandi, Sagar Marwah, Fez Ali, Yuna Lee, Gunes Parlakgul, Gokhan S Hotamisligil, Bo Wang, Sonya MacParland, Mei Zhen, Gary D Bader\",\"doi\":\"10.1093/bioadv/vbaf021\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Motivation: </strong>Volumetric electron microscopy (VEM) enables nanoscale resolution three-dimensional imaging of biological samples. Identification and labeling of organelles, cells, and other structures in the image volume is required for image interpretation, but manual labeling is extremely time-consuming. This can be automated using deep learning segmentation algorithms, but these traditionally require substantial manual annotation for training and typically these labeled datasets are unavailable for new samples.</p><p><strong>Results: </strong>We show that transfer learning can help address this challenge. By pretraining on VEM data from multiple mammalian tissues and organelle types and then fine-tuning on a target dataset, we segment multiple organelles at high performance, yet require a relatively small amount of new training data. 
We benchmark our method on three published VEM datasets and a new rat liver dataset we imaged over a 56×56×11 <math><mi>μ</mi></math> m volume measuring 7000×7000×219 px using serial block face scanning electron microscopy with corresponding manually labeled mitochondria and endoplasmic reticulum structures. We further benchmark our approach against the Segment Anything Model 2 and MitoNet in zero-shot, prompted, and fine-tuned settings.</p><p><strong>Availability and implementation: </strong>Our rat liver dataset's raw image volume, manual ground truth annotation, and model predictions are freely shared at github.com/Xrioen/cross-tissue-transfer-learning-in-VEM.</p>\",\"PeriodicalId\":72368,\"journal\":{\"name\":\"Bioinformatics advances\",\"volume\":\"5 1\",\"pages\":\"vbaf021\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2025-04-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974384/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Bioinformatics advances\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/bioadv/vbaf021\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICAL & COMPUTATIONAL BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bioinformatics advances","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/bioadv/vbaf021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"MATHEMATICAL & COMPUTATIONAL BIOLOGY","Score":null,"Total":0}
Transfer learning improves performance in volumetric electron microscopy organelle segmentation across tissues.
Motivation: Volumetric electron microscopy (VEM) enables nanoscale-resolution three-dimensional imaging of biological samples. Image interpretation requires identifying and labeling organelles, cells, and other structures in the image volume, but manual labeling is extremely time-consuming. This can be automated using deep learning segmentation algorithms, but such algorithms traditionally require substantial manual annotation for training, and labeled datasets are typically unavailable for new samples.
Results: We show that transfer learning can help address this challenge. By pretraining on VEM data from multiple mammalian tissues and organelle types and then fine-tuning on a target dataset, we segment multiple organelles at high performance while requiring relatively little new training data. We benchmark our method on three published VEM datasets and on a new rat liver dataset, imaged with serial block-face scanning electron microscopy over a 56×56×11 μm volume (7000×7000×219 px) with corresponding manually labeled mitochondria and endoplasmic reticulum structures. We further benchmark our approach against the Segment Anything Model 2 and MitoNet in zero-shot, prompted, and fine-tuned settings.
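As a quick sanity check on the acquisition figures above, the stated physical extent (56×56×11 μm) and voxel grid (7000×7000×219 px) imply an anisotropic voxel size of roughly 8×8×50 nm, consistent with typical serial block-face SEM section thicknesses. This back-of-envelope sketch is an illustration only, not part of the authors' released code:

```python
# Derive nm-per-voxel in each axis from the abstract's stated dimensions.
extent_um = (56.0, 56.0, 11.0)   # physical volume (x, y, z) in micrometres
shape_px = (7000, 7000, 219)     # voxel grid (x, y, z)

# 1 μm = 1000 nm, so resolution per axis is 1000 * extent / pixel count.
voxel_nm = tuple(1000.0 * um / px for um, px in zip(extent_um, shape_px))
print([round(v, 1) for v in voxel_nm])  # → [8.0, 8.0, 50.2]
```

The ~6× coarser axial resolution is worth keeping in mind when comparing segmentation performance across datasets, since thin structures such as endoplasmic reticulum tubules can fall below the z-sampling limit.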
Availability and implementation: Our rat liver dataset's raw image volume, manual ground truth annotation, and model predictions are freely shared at github.com/Xrioen/cross-tissue-transfer-learning-in-VEM.