Multi-Domain Data Aggregation for Axon and Myelin Segmentation in Histology Images

Armand Collin, Arthur Boschet, Mathieu Boudreau, Julien Cohen-Adad
arXiv:2409.11552 · arXiv - EE - Image and Video Processing · Published 2024-09-17
Citations: 0

Abstract

Quantifying axon and myelin properties (e.g., axon diameter, myelin thickness, g-ratio) in histology images can provide useful information about microstructural changes caused by neurodegenerative diseases. Automatic tissue segmentation is an important tool for these datasets, as a single stained section can contain up to thousands of axons. Advances in deep learning have made this task quick and reliable with minimal overhead, but a deep learning model trained by one research group will hardly ever be usable by other groups due to differences in their histology training data. This is partly due to subject diversity (different body parts, species, genetics, pathologies) and also to the range of modern microscopy imaging techniques resulting in a wide variability of image features (i.e., contrast, resolution). There is a pressing need to make AI accessible to neuroscience researchers to facilitate and accelerate their workflow, but publicly available models are scarce and poorly maintained. Our approach is to aggregate data from multiple imaging modalities (bright field, electron microscopy, Raman spectroscopy) and species (mouse, rat, rabbit, human), to create an open-source, durable tool for axon and myelin segmentation. Our generalist model makes it easier for researchers to process their data and can be fine-tuned for better performance on specific domains. We study the benefits of different aggregation schemes. This multi-domain segmentation model performs better than single-modality dedicated learners (p=0.03077), generalizes better on out-of-distribution data and is easier to use and maintain. Importantly, we package the segmentation tool into a well-maintained open-source software ecosystem (see https://github.com/axondeepseg/axondeepseg).
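The g-ratio mentioned in the abstract is a standard morphometric measure relating the inner (axon) diameter to the outer (fiber) diameter. As a minimal illustrative sketch of how it follows from the two segmented quantities (the function name and units are illustrative and not part of the AxonDeepSeg API):

```python
def g_ratio(axon_diameter: float, myelin_thickness: float) -> float:
    """Compute the g-ratio: inner (axon) diameter over outer (fiber) diameter.

    The fiber diameter adds one myelin thickness on each side of the axon,
    so fiber_diameter = axon_diameter + 2 * myelin_thickness.
    """
    fiber_diameter = axon_diameter + 2 * myelin_thickness
    return axon_diameter / fiber_diameter

# Example: a 1.0 um axon wrapped in 0.25 um of myelin
print(round(g_ratio(1.0, 0.25), 3))  # 0.667
```

In practice these diameters come from the per-axon segmentation masks (e.g., as equivalent-circle diameters), which is why accurate axon and myelin segmentation directly determines g-ratio accuracy.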