Human microscopic vagus nerve anatomy using deep learning on 3D-MUSE images.

Naomi Joseph, Chaitanya Kolluru, James Seckler, Jun Chen, Justin Kim, Michael Jenkins, Andrew Shofstall, Nikki Pelot, David L Wilson
{"title":"Human microscopic vagus nerve anatomy using deep learning on 3D-MUSE images.","authors":"Naomi Joseph, Chaitanya Kolluru, James Seckler, Jun Chen, Justin Kim, Michael Jenkins, Andrew Shofstall, Nikki Pelot, David L Wilson","doi":"10.1117/12.3009682","DOIUrl":null,"url":null,"abstract":"<p><p>We are microscopically imaging and analyzing the human vagus nerve (VN) anatomy to create the first ever VN connectome to support modeling of neuromodulation therapies. Although micro-CT and MRI roughly identify vagus nerve anatomy, they lack the spatial resolution required to identify small fascicle splitting and merging, and perineurium boundaries. We developed 3D serial block-face Microscopy with Ultraviolet Surface Excitation (3D-MUSE), with 0.9-μm in-plane resolution and 3-μm cut thickness. 3D-MUSE is ideal for VN imaging, capturing large myelinated fibers, connective sheaths, fascicle dynamics, and nerve bundle tractography. Each 3-mm 3D-MUSE ROI generates ~1,000 grayscale images, necessitating automatic segmentation as over 50-hrs were spent manually annotating fascicles, perineurium, and epineurium in every 20th image, giving 50 images. We trained three types of multi-class deep learning segmentation models. First, 25 annotated images trained a 2D U-Net and Attention U-Net. Second, we trained a Vision Transformer (ViT) using self-supervised learning with 200 unlabeled images before refining the ViT's initialized weights of a U-Net Transformer with 25 training images and labels. Third, we created pseudo-3D images by concatenating each annotated image with an image ±k slices apart (k=1,10), and trained a 2D U-Net similarly. All models were tested on 25 held-out images and evaluated using Dice. While all trained models performed comparably, the 2D U-Net model trained on pseudo-3D images demonstrated highest Dice values (0.936). With sample-based-training, one obtains very promising results on thousands of images in terms of segmentation and nerve fiber tractography estimation. Additional training from more samples could obtain excellent results.</p>","PeriodicalId":74505,"journal":{"name":"Proceedings of SPIE--the International Society for Optical Engineering","volume":"12930 ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12433149/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of SPIE--the International Society for Optical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.3009682","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/4/2 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We are microscopically imaging and analyzing human vagus nerve (VN) anatomy to create the first-ever VN connectome to support modeling of neuromodulation therapies. Although micro-CT and MRI roughly identify vagus nerve anatomy, they lack the spatial resolution required to identify small fascicle splitting and merging and perineurium boundaries. We developed 3D serial block-face Microscopy with Ultraviolet Surface Excitation (3D-MUSE), with 0.9-μm in-plane resolution and 3-μm cut thickness. 3D-MUSE is well suited to VN imaging, capturing large myelinated fibers, connective sheaths, and fascicle dynamics, and enabling nerve bundle tractography. Each 3-mm 3D-MUSE ROI generates ~1,000 grayscale images, necessitating automatic segmentation: over 50 hours were spent manually annotating fascicles, perineurium, and epineurium in every 20th image, yielding 50 annotated images. We trained three types of multi-class deep learning segmentation models. First, 25 annotated images were used to train a 2D U-Net and an Attention U-Net. Second, we pre-trained a Vision Transformer (ViT) with self-supervised learning on 200 unlabeled images, then used the pre-trained weights to initialize the ViT encoder of a U-Net Transformer, which we fine-tuned on the 25 training images and labels. Third, we created pseudo-3D images by concatenating each annotated image with an image ±k slices away (k = 1 or 10) and trained a 2D U-Net in the same manner. All models were tested on 25 held-out images and evaluated with the Dice coefficient. While all trained models performed comparably, the 2D U-Net trained on pseudo-3D images achieved the highest Dice score (0.936). With sample-based training, one obtains very promising segmentation and nerve fiber tractography results on thousands of images. Additional training with more samples could yield excellent results.
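The best-performing configuration pairs each annotated slice with a neighboring slice ±k sections away (k = 1 or 10) to form a two-channel "pseudo-3D" input for a 2D U-Net, and scores predictions with the Dice coefficient. Below is a minimal Python/NumPy sketch of that input construction and a per-class Dice metric; the function names (build_pseudo3d_input, dice_score), the boundary clipping at the ends of the stack, and the toy data are illustrative assumptions, not the authors' code.

```python
# Sketch (not the authors' code): pseudo-3D input construction and Dice scoring,
# assuming the volume is an array of grayscale slices and labels are integer masks.
import numpy as np


def build_pseudo3d_input(volume: np.ndarray, index: int, k: int) -> np.ndarray:
    """Stack slice `index` with its neighbor ±k slices away into a 2-channel image.

    volume : (num_slices, H, W) grayscale stack (e.g., ~1,000 3D-MUSE sections).
    k      : slice offset used in the paper's experiments (k = 1 or 10).
    The neighbor index is clipped at the volume boundary (an assumption; the
    abstract does not state how edge slices were handled).
    """
    neighbor = int(np.clip(index + k, 0, volume.shape[0] - 1))
    return np.stack([volume[index], volume[neighbor]], axis=0)  # shape (2, H, W)


def dice_score(pred: np.ndarray, target: np.ndarray, num_classes: int) -> list[float]:
    """Per-class Dice between integer label maps `pred` and `target`."""
    scores = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        denom = p.sum() + t.sum()
        scores.append(1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom)
    return scores


if __name__ == "__main__":
    # Toy stand-ins: 32 slices of 64x64 "images" and two 4-class label maps.
    rng = np.random.default_rng(0)
    volume = rng.random((32, 64, 64)).astype(np.float32)
    x = build_pseudo3d_input(volume, index=5, k=10)   # 2-channel network input
    pred = rng.integers(0, 4, (64, 64))
    target = rng.integers(0, 4, (64, 64))
    print(x.shape, dice_score(pred, target, num_classes=4))
```

In the paper's pipeline, the resulting two-channel array would presumably feed a standard 2D U-Net configured with two input channels and one output channel per class (e.g., background, epineurium, perineurium, fascicle); the abstract does not state which U-Net implementation was used.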
