4D-ONIX for reconstructing 3D movies from sparse X-ray projections via deep learning

Yuhe Zhang, Zisheng Yao, Robert Klöfkorn, Tobias Ritschel, Pablo Villanueva-Perez

Communications Engineering 4, 54 (2025). DOI: 10.1038/s44172-025-00390-w
Published 2025-03-21. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11928503/pdf/
Abstract
The X-ray flux from X-ray free-electron lasers and storage rings enables new spatiotemporal opportunities for studying in-situ and operando dynamics, even with single pulses. X-ray multi-projection imaging is a technique that provides volumetric information from single pulses while avoiding the centrifugal forces induced by the rapid sample rotation required by conventional time-resolved 3D methods, such as time-resolved tomography, and it can acquire 3D movies (4D) at least three orders of magnitude faster than existing techniques. However, reconstructing 4D information from highly sparse projections remains a challenge for current algorithms. Here we present 4D-ONIX, a deep-learning-based approach that reconstructs 3D movies from an extremely limited number of projections. It combines a computational physical model of X-ray interaction with matter with state-of-the-art deep learning methods. We demonstrate its ability to reconstruct high-quality 4D movies by generalizing over multiple experiments with only two to three projections per timestamp, using simulations of water droplet collisions and experimental data from additive manufacturing. Our results establish 4D-ONIX as an enabling tool for 4D analysis, offering high-quality image reconstruction of dynamics three orders of magnitude faster than tomography can resolve.
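To make the underlying idea concrete, the sketch below (not taken from the paper's code) illustrates one way a learned reconstruction from sparse projections can combine a physical forward model with deep learning: an implicit neural field f(x, y, z, t) predicts density, a differentiable projector produces synthetic projections, and the field is fit to the few measured projections available per timestamp. The class names, layer sizes, parallel-beam projector, and placeholder data are all illustrative assumptions; for brevity the projector integrates along a single fixed axis, whereas the experiments described above use two to three distinct viewing angles per timestamp.

```python
# Minimal sketch, assuming a PyTorch implementation (not the authors' released code):
# a neural field trained against sparse line-integral projections per time step.
import torch
import torch.nn as nn

class DensityField(nn.Module):
    """MLP mapping spatio-temporal coordinates (x, y, z, t) to a non-negative density."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # densities are >= 0
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)

def project(field: DensityField, t: float, n: int = 32) -> torch.Tensor:
    """Parallel-beam projection along z: sample the field on an n^3 grid at time t
    and sum along the beam axis, approximating one n x n projection image."""
    axis = torch.linspace(-1.0, 1.0, n)
    x, y, z = torch.meshgrid(axis, axis, axis, indexing="ij")
    coords = torch.stack([x, y, z, torch.full_like(x, t)], dim=-1).reshape(-1, 4)
    density = field(coords).reshape(n, n, n)
    return density.sum(dim=2)  # line integral approximated by a discrete sum

# Toy training loop: fit the field to (time, measured projection) pairs.
field = DensityField()
optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)
measurements = [(0.0, torch.rand(32, 32)), (1.0, torch.rand(32, 32))]  # placeholder data

for step in range(200):
    optimizer.zero_grad()
    loss = sum(((project(field, t) - proj) ** 2).mean() for t, proj in measurements)
    loss.backward()
    optimizer.step()
```

In this toy setup each timestamp contributes only a handful of projection images, so the network's ability to share structure across space, time, and experiments is what regularizes the otherwise severely underdetermined reconstruction, which is the role the generalization across experiments plays in the abstract above.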