Learning the heterogeneous representation of brain's structure from serial SEM images using a masked autoencoder.

IF 2.5 · JCR Q2 (Mathematical & Computational Biology) · CAS Tier 4 (Medicine)
Frontiers in Neuroinformatics Pub Date : 2023-06-08 eCollection Date: 2023-01-01 DOI:10.3389/fninf.2023.1118419
Ao Cheng, Jiahao Shi, Lirong Wang, Ruobing Zhang
Frontiers in Neuroinformatics, vol. 17, article 1118419. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10285402/pdf/
Citations: 1

Abstract

Introduction: The exorbitant cost of accurately annotating large-scale serial scanning electron microscope (SEM) images as the ground truth for training has long been a major obstacle to brain map reconstruction by deep learning methods in neural connectome studies. The representation ability of a model is strongly correlated with the number of such high-quality labels. Recently, the masked autoencoder (MAE) has been shown to effectively pre-train Vision Transformers (ViT) and improve their representational capabilities.

Methods: In this paper, we investigated a self-pre-training paradigm for serial SEM images with MAE to support downstream segmentation tasks. We randomly masked voxels in three-dimensional brain image patches and trained an autoencoder to reconstruct the neuronal structures.
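The random masking step described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the `mask_3d_patches` helper, the cubic patch size of 8, and the 75% masking ratio are all assumptions based on typical MAE setups, not details taken from the paper.

```python
import numpy as np

def mask_3d_patches(volume, patch_size=8, mask_ratio=0.75, seed=0):
    """Split a cubic volume into non-overlapping 3D patches and randomly
    mask a fraction of them, MAE-style. Returns the visible patch tokens
    (fed to the encoder) and a boolean mask over all tokens."""
    d, h, w = volume.shape
    p = patch_size
    assert d % p == 0 and h % p == 0 and w % p == 0, "volume must tile evenly"
    # Rearrange (D, H, W) -> (N, p*p*p): one flattened row per patch token.
    patches = (volume
               .reshape(d // p, p, h // p, p, w // p, p)
               .transpose(0, 2, 4, 1, 3, 5)
               .reshape(-1, p * p * p))
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    rng = np.random.default_rng(seed)
    keep_idx = np.sort(rng.permutation(n)[:n_keep])  # visible tokens
    mask = np.ones(n, dtype=bool)
    mask[keep_idx] = False  # True = masked, to be reconstructed by the decoder
    return patches[keep_idx], mask

# A 16^3 volume with 8^3 patches yields 2x2x2 = 8 tokens; a 0.75 ratio
# leaves 2 visible tokens and 6 masked ones for reconstruction.
vol = np.arange(16 ** 3, dtype=np.float32).reshape(16, 16, 16)
visible, mask = mask_3d_patches(vol, patch_size=8, mask_ratio=0.75)
```

During pre-training, only the visible tokens would pass through the ViT encoder, and a lightweight decoder would reconstruct the voxels at the masked positions.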

Results and discussion: We tested different pre-training and fine-tuning configurations on three serial SEM datasets of mouse brains: two public ones, SNEMI3D and MitoEM-R, and one acquired in our lab. A series of masking ratios was examined, and the optimal ratio for pre-training efficiency in 3D segmentation was identified. The MAE pre-training strategy significantly outperformed supervised learning from scratch. Our work shows that the general MAE framework can serve as a unified approach for effectively learning representations of heterogeneous neural structural features in serial SEM images, greatly facilitating brain connectome reconstruction.


Source journal

Frontiers in Neuroinformatics (Mathematical & Computational Biology; Neurosciences)

CiteScore: 4.80
Self-citation rate: 5.70%
Articles per year: 132
Review time: 14 weeks
Journal description: Frontiers in Neuroinformatics publishes rigorously peer-reviewed research on the development and implementation of numerical/computational models and analytical tools used to share, integrate and analyze experimental data and advance theories of the nervous system functions. Specialty Chief Editors Jan G. Bjaalie at the University of Oslo and Sean L. Hill at the École Polytechnique Fédérale de Lausanne are supported by an outstanding Editorial Board of international experts. This multidisciplinary open-access journal is at the forefront of disseminating and communicating scientific knowledge and impactful discoveries to researchers, academics and the public worldwide. Neuroscience is being propelled into the information age as the volume of information explodes, demanding organization and synthesis. Novel synthesis approaches are opening up a new dimension for the exploration of the components of brain elements and systems and the vast number of variables that underlie their functions. Neural data is highly heterogeneous with complex inter-relations across multiple levels, driving the need for innovative organizing and synthesizing approaches from genes to cognition, and covering a range of species and disease states. Frontiers in Neuroinformatics therefore welcomes submissions on existing neuroscience databases, development of data and knowledge bases for all levels of neuroscience, applications and technologies that can facilitate data sharing (interoperability, formats, terminologies, and ontologies), and novel tools for data acquisition, analyses, visualization, and dissemination of nervous system data. Our journal welcomes submissions on new tools (software and hardware) that support brain modeling, and the merging of neuroscience databases with brain models used for simulation and visualization.