Human microscopic vagus nerve anatomy using deep learning on 3D-MUSE images

Naomi Joseph, Chaitanya Kolluru, James Seckler, Jun Chen, Justin Kim, Michael Jenkins, Andrew Shofstall, Nikki Pelot, David L Wilson

Proceedings of SPIE--the International Society for Optical Engineering, vol. 12930, February 2024 (Epub April 2, 2024). DOI: 10.1117/12.3009682. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12433149/pdf/
Abstract
We are microscopically imaging and analyzing the human vagus nerve (VN) anatomy to create the first-ever VN connectome to support modeling of neuromodulation therapies. Although micro-CT and MRI can roughly identify vagus nerve anatomy, they lack the spatial resolution required to identify small fascicle splitting and merging and perineurium boundaries. We developed 3D serial block-face Microscopy with Ultraviolet Surface Excitation (3D-MUSE), with 0.9-μm in-plane resolution and 3-μm cut thickness. 3D-MUSE is well suited to VN imaging, capturing large myelinated fibers, connective sheaths, and fascicle dynamics, and enabling nerve bundle tractography. Each 3-mm 3D-MUSE region of interest (ROI) generates ~1,000 grayscale images, necessitating automatic segmentation: manually annotating fascicles, perineurium, and epineurium in every 20th image (50 images) took over 50 hours. We trained three types of multi-class deep learning segmentation models. First, we trained a 2D U-Net and an Attention U-Net on 25 annotated images. Second, we pretrained a Vision Transformer (ViT) with self-supervised learning on 200 unlabeled images, then used the pretrained ViT weights to initialize a U-Net Transformer, which we fine-tuned on the 25 training images and labels. Third, we created pseudo-3D images by concatenating each annotated image with images ±k slices away (k = 1, 10) and trained a 2D U-Net in the same way. All models were tested on 25 held-out images and evaluated using the Dice coefficient. While all trained models performed comparably, the 2D U-Net trained on pseudo-3D images demonstrated the highest Dice value (0.936). With sample-based training, we obtain very promising segmentation and nerve fiber tractography results on thousands of images. Additional training on more samples could yield excellent results.
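The pseudo-3D input construction described in the abstract can be illustrated with a short sketch. This is a minimal example and not the authors' code: the function name build_pseudo3d and the array shapes are hypothetical, and it assumes each slice is stacked as channels with both its -k and +k neighbors (the abstract leaves the exact channel layout unspecified), clamping indices at the ends of the stack.

```python
import numpy as np

def build_pseudo3d(volume: np.ndarray, k: int) -> np.ndarray:
    """Stack each slice with the slices +/- k positions away as channels.

    volume: (num_slices, H, W) grayscale image stack.
    Returns: (num_slices, 3, H, W) pseudo-3D inputs for a 2D U-Net.
    """
    n = volume.shape[0]
    idx = np.arange(n)
    below = np.clip(idx - k, 0, n - 1)  # clamp at stack boundaries (assumption)
    above = np.clip(idx + k, 0, n - 1)
    return np.stack([volume[below], volume, volume[above]], axis=1)

# Example: a 1,000-slice stack with k = 10, one of the offsets in the abstract.
stack = np.random.rand(1000, 256, 256).astype(np.float32)
inputs = build_pseudo3d(stack, k=10)  # shape (1000, 3, 256, 256)
```

Feeding neighboring slices as extra channels gives the 2D network a cheap form of through-plane context without the memory cost of a full 3D model, which is consistent with the pseudo-3D variant scoring highest here.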
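For completeness, a minimal sketch of the multi-class Dice coefficient used for evaluation. It assumes integer label maps; the class set (background, epineurium, perineurium, fascicle) and the function name dice_per_class are illustrative, not taken from the paper.

```python
import numpy as np

def dice_per_class(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> np.ndarray:
    """Per-class Dice coefficient: 2|P ∩ T| / (|P| + |T|).

    pred, truth: integer label maps of identical shape.
    """
    scores = np.empty(num_classes)
    for c in range(num_classes):
        p = pred == c
        t = truth == c
        denom = p.sum() + t.sum()
        # If the class is absent from both maps, score it as perfect agreement.
        scores[c] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores

# Example with 4 assumed classes over 25 held-out images.
pred = np.random.randint(0, 4, (25, 256, 256))
truth = np.random.randint(0, 4, (25, 256, 256))
print(dice_per_class(pred, truth, num_classes=4))
```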