Euijeong Song, Minsuh Kim, Siyoung Lee, Hui-Wen Liu, Jihyun Kim, Dong-Hee Choi, Roger Kamm, Seok Chung, Ji Hun Yang, Tae Hwan Kwak
{"title":"VONet:用最少的共聚焦图像进行类器官结构三维重建的深度学习网络。","authors":"Euijeong Song, Minsuh Kim, Siyoung Lee, Hui-Wen Liu, Jihyun Kim, Dong-Hee Choi, Roger Kamm, Seok Chung, Ji Hun Yang, Tae Hwan Kwak","doi":"10.1016/j.patter.2024.101063","DOIUrl":null,"url":null,"abstract":"<p><p>Organoids and 3D imaging techniques are crucial for studying human tissue structure and function, but traditional 3D reconstruction methods are expensive and time consuming, relying on complete z stack confocal microscopy data. This paper introduces VONet, a deep learning-based system for 3D organoid rendering that uses a fully convolutional neural network to reconstruct entire 3D structures from a minimal number of z stack images. VONet was trained on a library of over 39,000 virtual organoids (VOs) with diverse structural features and achieved an average intersection over union of 0.82 in performance validation. Remarkably, VONet can predict the structure of deeper focal plane regions, unseen by conventional confocal microscopy. This innovative approach and VO dataset offer significant advancements in 3D bioimaging technologies.</p>","PeriodicalId":36242,"journal":{"name":"Patterns","volume":"5 10","pages":"101063"},"PeriodicalIF":6.7000,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11573902/pdf/","citationCount":"0","resultStr":"{\"title\":\"VONet: A deep learning network for 3D reconstruction of organoid structures with a minimal number of confocal images.\",\"authors\":\"Euijeong Song, Minsuh Kim, Siyoung Lee, Hui-Wen Liu, Jihyun Kim, Dong-Hee Choi, Roger Kamm, Seok Chung, Ji Hun Yang, Tae Hwan Kwak\",\"doi\":\"10.1016/j.patter.2024.101063\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Organoids and 3D imaging techniques are crucial for studying human tissue structure and function, but traditional 3D reconstruction methods are expensive and time consuming, relying on complete z stack confocal microscopy data. This paper introduces VONet, a deep learning-based system for 3D organoid rendering that uses a fully convolutional neural network to reconstruct entire 3D structures from a minimal number of z stack images. VONet was trained on a library of over 39,000 virtual organoids (VOs) with diverse structural features and achieved an average intersection over union of 0.82 in performance validation. Remarkably, VONet can predict the structure of deeper focal plane regions, unseen by conventional confocal microscopy. 
This innovative approach and VO dataset offer significant advancements in 3D bioimaging technologies.</p>\",\"PeriodicalId\":36242,\"journal\":{\"name\":\"Patterns\",\"volume\":\"5 10\",\"pages\":\"101063\"},\"PeriodicalIF\":6.7000,\"publicationDate\":\"2024-09-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11573902/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Patterns\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1016/j.patter.2024.101063\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/10/11 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Patterns","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1016/j.patter.2024.101063","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/10/11 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
VONet: A deep learning network for 3D reconstruction of organoid structures with a minimal number of confocal images.
Abstract
Organoids and 3D imaging techniques are crucial for studying human tissue structure and function, but traditional 3D reconstruction methods are expensive and time-consuming because they rely on complete z-stack confocal microscopy data. This paper introduces VONet, a deep-learning-based system for 3D organoid rendering that uses a fully convolutional neural network to reconstruct entire 3D structures from a minimal number of z-stack images. VONet was trained on a library of over 39,000 virtual organoids (VOs) with diverse structural features and achieved an average intersection over union (IoU) of 0.82 in performance validation. Remarkably, VONet can predict the structure of deeper focal-plane regions that conventional confocal microscopy cannot capture. This innovative approach and the VO dataset offer significant advancements in 3D bioimaging technologies.
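For reference, the intersection over union score reported above (0.82 on average) is the standard volumetric overlap metric between a reconstructed and a reference 3D mask. The sketch below shows one way such a score can be computed with NumPy; the function name, array shapes, and binarization threshold are illustrative assumptions, not taken from the VONet code.

```python
# Minimal sketch: volumetric IoU between a predicted and a ground-truth 3D mask.
# Names, shapes, and the 0.5 threshold are assumptions for illustration only.
import numpy as np

def volumetric_iou(pred: np.ndarray, truth: np.ndarray, threshold: float = 0.5) -> float:
    """IoU between two 3D volumes given as probability maps or binary masks."""
    pred_mask = pred >= threshold      # binarize the predicted volume
    truth_mask = truth >= threshold    # binarize the reference volume
    intersection = np.logical_and(pred_mask, truth_mask).sum()
    union = np.logical_or(pred_mask, truth_mask).sum()
    # Two empty masks are treated as a perfect match.
    return float(intersection / union) if union > 0 else 1.0

# Example with random volumes standing in for reconstructed and reference z-stacks.
rng = np.random.default_rng(0)
pred = rng.random((64, 128, 128))                        # (z, y, x) occupancy probabilities
truth = (pred + 0.1 * rng.random((64, 128, 128))) > 0.5  # perturbed reference mask
print(f"IoU = {volumetric_iou(pred, truth):.2f}")
```

In an evaluation like the one described, such a per-organoid score would presumably be averaged over a held-out set of reconstructed volumes to give the reported mean IoU.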