Transfer learning improves performance in volumetric electron microscopy organelle segmentation across tissues.

IF 2.4 · Q2 (Mathematical & Computational Biology)
Bioinformatics Advances · Pub Date: 2025-04-02 · eCollection Date: 2025-01-01 · DOI: 10.1093/bioadv/vbaf021
Ronald Xie, Ben Mulcahy, Ali Darbandi, Sagar Marwah, Fez Ali, Yuna Lee, Gunes Parlakgul, Gokhan S Hotamisligil, Bo Wang, Sonya MacParland, Mei Zhen, Gary D Bader

Abstract

Motivation: Volumetric electron microscopy (VEM) enables nanoscale-resolution three-dimensional imaging of biological samples. Identification and labeling of organelles, cells, and other structures in the image volume are required for image interpretation, but manual labeling is extremely time-consuming. This can be automated using deep learning segmentation algorithms, but such algorithms traditionally require substantial manual annotation for training, and labeled datasets are typically unavailable for new samples.

Results: We show that transfer learning can help address this challenge. By pretraining on VEM data from multiple mammalian tissues and organelle types and then fine-tuning on a target dataset, we segment multiple organelles with high performance while requiring only a relatively small amount of new training data. We benchmark our method on three published VEM datasets and a new rat liver dataset we imaged over a 56×56×11 μm volume measuring 7000×7000×219 px using serial block face scanning electron microscopy, with corresponding manually labeled mitochondria and endoplasmic reticulum structures. We further benchmark our approach against the Segment Anything Model 2 and MitoNet in zero-shot, prompted, and fine-tuned settings.
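The pretrain-then-fine-tune recipe described above can be illustrated with a toy sketch. This is not the paper's actual 3D U-Net pipeline; it uses NumPy stand-ins, where a frozen random projection plays the role of a multi-tissue pretrained encoder and a logistic-regression head is fine-tuned on a small labeled set from the target tissue. All names and sizes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained encoder: a fixed (frozen) projection.
# In practice this would be a segmentation network pretrained on VEM
# data from multiple mammalian tissues and organelle types.
W_pretrained = rng.normal(size=(64, 16))

def encode(patches):
    """Map flattened image patches to 16-d features with the frozen encoder."""
    return np.tanh(patches @ W_pretrained)

# Simulate a small fine-tuning set: 200 labeled patches from the target tissue.
X = rng.normal(size=(200, 64))
true_w = rng.normal(size=16)
y = (encode(X) @ true_w > 0).astype(float)  # binary organelle-mask labels

# Fine-tune only a lightweight head (logistic regression) on top of the
# frozen features -- the transfer-learning step that needs few new labels.
F = encode(X)
w = np.zeros(16)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-F @ w))   # predicted foreground probability
    w -= lr * F.T @ (p - y) / len(y)   # gradient step on logistic loss

pred = (1.0 / (1.0 + np.exp(-F @ w)) > 0.5).astype(float)
accuracy = (pred == y).mean()
print(f"training accuracy after fine-tuning: {accuracy:.2f}")
```

Because the pretrained features already separate the classes, only the small head needs training, which is why a relatively small amount of target-tissue annotation suffices.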

Availability and implementation: Our rat liver dataset's raw image volume, manual ground truth annotation, and model predictions are freely shared at github.com/Xrioen/cross-tissue-transfer-learning-in-VEM.
