AnyStar: Domain randomized universal star-convex 3D instance segmentation.

Neel Dey, S Mazdak Abulnaga, Benjamin Billot, Esra Abaci Turk, P Ellen Grant, Adrian V Dalca, Polina Golland
{"title":"AnyStar: Domain randomized universal star-convex 3D instance segmentation.","authors":"Neel Dey, S Mazdak Abulnaga, Benjamin Billot, Esra Abaci Turk, P Ellen Grant, Adrian V Dalca, Polina Golland","doi":"10.1109/wacv57701.2024.00742","DOIUrl":null,"url":null,"abstract":"<p><p>Star-convex shapes arise across bio-microscopy and radiology in the form of nuclei, nodules, metastases, and other units. Existing instance segmentation networks for such structures train on densely labeled instances for each dataset, which requires substantial and often impractical manual annotation effort. Further, significant reengineering or finetuning is needed when presented with new datasets and imaging modalities due to changes in contrast, shape, orientation, resolution, and density. We present AnyStar, a domain-randomized generative model that simulates synthetic training data of blob-like objects with randomized appearance, environments, and imaging physics to train general-purpose star-convex instance segmentation networks. As a result, networks trained using our generative model do not require annotated images from unseen datasets. A single network trained on our synthesized data accurately 3D segments C. elegans and P. dumerilii nuclei in fluorescence microscopy, mouse cortical nuclei in <math><mi>μ</mi> <mi>C</mi> <mi>T</mi></math> , zebrafish brain nuclei in EM, and placental cotyledons in human fetal MRI, all without any retraining, finetuning, transfer learning, or domain adaptation. Code is available at https://github.com/neel-dey/AnyStar.</p>","PeriodicalId":73325,"journal":{"name":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","volume":"2024 ","pages":"7578-7588"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12381811/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/wacv57701.2024.00742","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/4/9 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Star-convex shapes arise across bio-microscopy and radiology in the form of nuclei, nodules, metastases, and other units. Existing instance segmentation networks for such structures train on densely labeled instances for each dataset, which requires substantial and often impractical manual annotation effort. Further, significant reengineering or finetuning is needed when presented with new datasets and imaging modalities due to changes in contrast, shape, orientation, resolution, and density. We present AnyStar, a domain-randomized generative model that simulates synthetic training data of blob-like objects with randomized appearance, environments, and imaging physics to train general-purpose star-convex instance segmentation networks. As a result, networks trained using our generative model do not require annotated images from unseen datasets. A single network trained on our synthesized data accurately 3D segments C. elegans and P. dumerilii nuclei in fluorescence microscopy, mouse cortical nuclei in μCT, zebrafish brain nuclei in EM, and placental cotyledons in human fetal MRI, all without any retraining, finetuning, transfer learning, or domain adaptation. Code is available at https://github.com/neel-dey/AnyStar.
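To make the idea concrete, a minimal sketch of such a domain-randomized generator is shown below: a label map of random blob-like instances is sampled first, then per-structure intensities, blur, a bias field, and noise are each drawn at random so the synthesized images span arbitrary contrasts, resolutions, and noise conditions. This is a toy NumPy/SciPy illustration under assumed parameter ranges, with hypothetical helper names (synth_labels, synth_image); it is not the AnyStar pipeline itself, which is available in the linked repository.

```python
# Illustrative sketch only, NOT the authors' generative model: the function
# names and all parameter ranges below are assumptions made for this example.
# The actual pipeline is at https://github.com/neel-dey/AnyStar.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)


def synth_labels(shape=(64, 64, 64), n_blobs=30, r_range=(3.0, 8.0)):
    """Scatter random ellipsoid-like blobs into an integer label map."""
    labels = np.zeros(shape, dtype=np.int32)
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    for i in range(1, n_blobs + 1):
        center = rng.uniform([0, 0, 0], shape)        # random position
        radii = rng.uniform(*r_range, size=3)         # random anisotropic radii (axis-aligned for simplicity)
        mask = (((zz - center[0]) / radii[0]) ** 2
                + ((yy - center[1]) / radii[1]) ** 2
                + ((xx - center[2]) / radii[2]) ** 2) <= 1.0
        labels[mask & (labels == 0)] = i              # earlier instances take precedence
    return labels


def synth_image(labels):
    """Render labels with randomized appearance and crude simulated imaging physics."""
    img = np.zeros(labels.shape, dtype=np.float32)
    for i in np.unique(labels):
        # arbitrary intensity per structure (background included), so contrast
        # and polarity vary freely across synthesized images
        img[labels == i] = rng.uniform(0.0, 1.0)
    img = ndimage.gaussian_filter(img, sigma=rng.uniform(0.5, 2.0))   # random blur / PSF
    z = np.linspace(-1.0, 1.0, labels.shape[0])[:, None, None]
    img *= np.exp(-rng.uniform(0.0, 1.0) * z ** 2)                    # smooth bias field
    img += rng.normal(0.0, rng.uniform(0.01, 0.1), size=img.shape)    # random noise level
    return img


labels = synth_labels()
image = synth_image(labels)
# (image, labels) is one synthetic training pair for a 3D star-convex
# instance segmentation network (e.g., a StarDist-3D-style model).
```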
