Nanosurveyor: a framework for real-time data processing
Benedikt J. Daurer, Hari Krishnan, Talita Perciano, Filipe R. N. C. Maia, David A. Shapiro, James A. Sethian, Stefano Marchesini
Advanced Structural and Chemical Imaging, vol. 3, no. 1 (2017). DOI: 10.1186/s40679-017-0039-0. Published 2017-01-31.

The ever-improving brightness of accelerator-based sources is enabling novel observations and discoveries with faster frame rates, larger fields of view, higher resolution, and higher dimensionality.

Here we present an integrated software and algorithmic framework designed to capitalize on high-throughput experiments through efficient kernels and load-balanced workflows that are scalable by design. We describe the streamlined processing pipeline for ptychography data analysis.

The pipeline provides high throughput, data compression, and high resolution, as well as rapid feedback to the microscope operators.

Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Tekin Bicer, Doğa Gürsoy, Vincent De Andrade, Rajkumar Kettimuthu, William Scullin, Francesco De Carlo, Ian T. Foster
Advanced Structural and Chemical Imaging, vol. 3, no. 1 (2017). DOI: 10.1186/s40679-017-0040-7. Published 2017-01-28.

Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time on a medium-sized workstation, which hinders the scientific progress that relies on the results of the analysis.

We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms on parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source.

Our experimental evaluations show that our optimizations and parallelization techniques can provide a 158× speedup on 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with dimensions 4501 × 1 × 22,400) from 12.5 h to less than 5 min per iteration.

The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

SYRMEP Tomo Project: a graphical user interface for customizing CT reconstruction workflows
Francesco Brun, Lorenzo Massimi, Michela Fratini, Diego Dreossi, Fulvio Billé, Agostino Accardo, Roberto Pugliese, Alessia Cedola
Advanced Structural and Chemical Imaging, vol. 3, no. 1 (2017). DOI: 10.1186/s40679-016-0036-8. Published 2017-01-19.

When acquiring experimental synchrotron radiation (SR) X-ray CT data, the reconstruction workflow cannot be limited to the essential computational steps of flat fielding and filtered back projection (FBP). More refined image processing is often required, usually to compensate for artifacts and enhance the quality of the reconstructed images. In principle, it would be desirable to optimize the reconstruction workflow at the facility during the experiment (beamtime). However, several practical factors affect the image reconstruction part of the experiment, and users are likely to conclude the beamtime with sub-optimal reconstructed images. Through an example application, this article presents SYRMEP Tomo Project (STP), an open-source software tool conceived to let users design custom CT reconstruction workflows. STP has been designed for post-beamtime (off-line) use and for re-reconstruction of archived data at the user's home institution, where modest computing resources are available. Releases of the software can be downloaded from the Elettra Scientific Computing group GitHub repository: https://github.com/ElettraSciComp/STP-Gui.

{"title":"Efficient implementation of a local tomography reconstruction algorithm","authors":"Pierre Paleo, Alessandro Mirone","doi":"10.1186/s40679-017-0038-1","DOIUrl":"https://doi.org/10.1186/s40679-017-0038-1","url":null,"abstract":"<p>We propose an efficient implementation of an interior tomography reconstruction method based on a known subregion. This method iteratively refines a reconstruction, aiming at reducing the local tomography artifacts. To cope with the ever increasing data volumes, this method is highly optimized on two aspects: firstly, the problem is reformulated to reduce the number of variables, and secondly, the operators involved in the optimization algorithms are efficiently implemented. Results show that <span>(4096^2)</span> slices can be processed in tens of seconds, while being beyond the reach of equivalent exact local tomography method.</p>","PeriodicalId":460,"journal":{"name":"Advanced Structural and Chemical Imaging","volume":"3 1","pages":""},"PeriodicalIF":3.56,"publicationDate":"2017-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40679-017-0038-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4750240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data systems for the Linac coherent light source
J. Thayer, D. Damiani, C. Ford, M. Dubrovin, I. Gaponenko, C. P. O’Grady, W. Kroeger, J. Pines, T. J. Lane, A. Salnikov, D. Schneider, T. Tookey, M. Weaver, C. H. Yoon, A. Perazzo
Advanced Structural and Chemical Imaging, vol. 3, no. 1 (2017). DOI: 10.1186/s40679-016-0037-7. Published 2017-01-14.

The data systems for X-ray free-electron laser (FEL) experiments at the Linac coherent light source (LCLS) are described. These systems are designed to acquire and to reliably transport shot-by-shot data at a peak throughput of 5 GB/s to the offline data storage, where experimental data and the relevant metadata are archived and made available for user analysis. The analysis and monitoring implementation (AMI) and Photon Science ANAlysis (psana) software packages are described. Psana is open source and freely available.

Towards on-the-fly data post-processing for real-time tomographic imaging at TOMCAT
Federica Marone, Alain Studer, Heiner Billich, Leonardo Sala, Marco Stampanoni
Advanced Structural and Chemical Imaging, vol. 3, no. 1 (2017). DOI: 10.1186/s40679-016-0035-9. Published 2017-01-03.

Sub-second full-field tomographic microscopy at third-generation synchrotron sources is a reality, opening up new possibilities for the study of dynamic systems in different fields. Sustained data rates of multiple GB/s in tomographic experiments will become even more common at diffraction-limited storage rings, which will come into operation soon. The computational tools needed for the post-processing of raw tomographic projections have generally not experienced the same efficiency increase as the experimental facilities, hindering optimal exploitation of this new potential. We present here a fast, flexible, and user-friendly post-processing pipeline that overcomes this efficiency mismatch and delivers reconstructed tomographic datasets just a few seconds after the data have been acquired, enabling fast parameter and image-quality evaluation as well as efficient post-processing of terabytes of tomographic data. With this new tool, which can also accept a stream of data directly from a detector, a few selected tomographic slices are available in less than half a second, providing advanced previewing capabilities and paving the way to new concepts for on-the-fly control of dynamic experiments.

Applying shot boundary detection for automated crystal growth analysis during in situ transmission electron microscope experiments
W. A. Moeglein, R. Griswold, B. L. Mehdi, N. D. Browning, J. Teuton
Advanced Structural and Chemical Imaging, vol. 3, no. 1 (2017). DOI: 10.1186/s40679-016-0034-x. Published 2017-01-03.

In situ scanning transmission electron microscopy is being developed for numerous applications in the study of nucleation and growth under electrochemical driving forces. For this type of experiment, one of the key parameters is identifying when nucleation initiates. Typically, identifying the moment that crystals begin to form is a manual process requiring the user to observe the process and respond accordingly (adjust focus, magnification, translate the stage, etc.). However, as the speed of the cameras used for these observations increases, the ability of a user to “catch” the important initial stage of nucleation decreases (more of the relevant information lies in the first few milliseconds of the process). Here, we show that video shot boundary detection can automatically detect frames where a change in the image occurs. We show that this method can be applied to quickly and accurately identify points of change during crystal growth. This technique allows for automated segmentation of a digital stream for further analysis and the assignment of arbitrary time stamps for the initiation of processes, independent of the user's ability to observe and react.

A distributed ASTRA toolbox
Willem Jan Palenstijn, Jeroen Bédorf, Jan Sijbers, K. Joost Batenburg
Advanced Structural and Chemical Imaging, vol. 2, no. 1 (2016). DOI: 10.1186/s40679-016-0032-z. Published 2016-12-07.

While iterative reconstruction algorithms for tomography have several advantages compared to standard backprojection methods, the adoption of such algorithms in large-scale imaging facilities is still limited, one of the key obstacles being their high computational load. Although GPU-enabled computing clusters are, in principle, powerful enough to carry out iterative reconstructions on large datasets in reasonable time, creating efficient distributed algorithms has so far remained a complex task, requiring low-level programming to deal with memory management and network communication. The ASTRA toolbox is a software toolbox that enables rapid development of GPU-accelerated tomography algorithms. It contains GPU implementations of forward and backprojection operations for many scanning geometries, as well as a set of algorithms for iterative reconstruction. These algorithms are currently limited to using GPUs in a single workstation. In this paper, we present an extension of the ASTRA toolbox and its Python interface with implementations of forward projection, backprojection, and the SIRT algorithm that can be distributed over multiple GPUs and multiple workstations, as well as the tools to write distributed versions of custom reconstruction algorithms, making it feasible to process larger datasets with ASTRA. As a result, algorithms that are implemented in a high-level conceptual script can run seamlessly on GPU-enabled computing clusters with 32 GPUs or more. Our approach is not limited to slice-based reconstruction, facilitating direct portability of algorithms coded for parallel-beam synchrotron tomography to cone-beam laboratory tomography setups without changes to the reconstruction algorithm.

Analyzing microtomography data with Python and the scikit-image library
Emmanuelle Gouillart, Juan Nunez-Iglesias, Stéfan van der Walt
Advanced Structural and Chemical Imaging, vol. 2, no. 1 (2016). DOI: 10.1186/s40679-016-0031-0. Published 2016-12-07.

The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.

{"title":"Improved tomographic reconstruction of large-scale real-world data by filter optimization","authors":"Daniël M. Pelt, Vincent De Andrade","doi":"10.1186/s40679-016-0033-y","DOIUrl":"https://doi.org/10.1186/s40679-016-0033-y","url":null,"abstract":"<p>In advanced tomographic experiments, large detector sizes and large numbers of acquired datasets can make it difficult to process the data in a reasonable time. At the same time, the acquired projections are often limited in some way, for example having a low number of projections or a low signal-to-noise ratio. Direct analytical reconstruction methods are able to produce reconstructions in very little time, even for large-scale data, but the quality of these reconstructions can be insufficient for further analysis in cases with limited data. Iterative reconstruction methods typically produce more accurate reconstructions, but take significantly more time to compute, which limits their usefulness in practice. In this paper, we present the application of the SIRT-FBP method to large-scale real-world tomographic data. The SIRT-FBP method is able to accurately approximate the simultaneous iterative reconstruction technique (SIRT) method by the computationally efficient filtered backprojection (FBP) method, using precomputed experiment-specific filters. We specifically focus on the many implementation details that are important for application on large-scale real-world data, and give solutions to common problems that occur with experimental data. We show that SIRT-FBP filters can be computed in reasonable time, even for large problem sizes, and that precomputed filters can be reused for future experiments. Reconstruction results are given for three different experiments, and are compared with results of popular existing methods. The results show that the SIRT-FBP method is able to accurately approximate iterative reconstructions of experimental data. Furthermore, they show that, in practice, the SIRT-FBP method can produce more accurate reconstructions than standard direct analytical reconstructions with popular filters, without increasing the required computation time.</p>","PeriodicalId":460,"journal":{"name":"Advanced Structural and Chemical Imaging","volume":"2 1","pages":""},"PeriodicalIF":3.56,"publicationDate":"2016-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s40679-016-0033-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4112682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}