{"title":"From industry-wide parameters to aircraft-centric on-flight inference: Improving aeronautics performance prediction with machine learning","authors":"F. Dewez, Benjamin Guedj, V. Vandewalle","doi":"10.1017/dce.2020.12","DOIUrl":"https://doi.org/10.1017/dce.2020.12","url":null,"abstract":"Abstract Aircraft performance models play a key role in airline operations, especially in planning a fuel-efficient flight. In practice, manufacturers provide guidelines which are slightly modified throughout the aircraft life cycle via the tuning of a single factor, enabling better fuel predictions. However, this has limitations, in particular, they do not reflect the evolution of each feature impacting the aircraft performance. Our goal here is to overcome this limitation. The key contribution of the present article is to foster the use of machine learning to leverage the massive amounts of data continuously recorded during flights performed by an aircraft and provide models reflecting its actual and individual performance. We illustrate our approach by focusing on the estimation of the drag and lift coefficients from recorded flight data. As these coefficients are not directly recorded, we resort to aerodynamics approximations. As a safety check, we provide bounds to assess the accuracy of both the aerodynamics approximation and the statistical performance of our approach. We provide numerical results on a collection of machine learning algorithms. We report excellent accuracy on real-life data and exhibit empirical evidence to support our modeling, in coherence with aerodynamics principles. Impact Statement Current airline operations in both flight preparation (on-ground) and flight management (in-air) are mainly based on the performance of an aircraft. For instance, a trajectory is set before the flight and managed in-air by the Flight Management System using the manufacturer’s performance model. This numerical model is calibrated during in-service period using monitoring systems developed by manufacturers. However, the calibration is based on the tuning of a single parameter, and this overly simplified modeling leads to a lack of precision in optimizing fuel consumption. In this paper, we propose performance models that take into account real flight conditions and switch from industry-wide to aircraft-centric calibration of relevant parameters. To do this, we use massive collections of in-air data recorded by the Quick Access Recorder, which reflect the actual behavior of the aircraft. We then resort to machine learning algorithms to learn sufficiently accurate models from this data to infer the actual performance of the aircraft. The present paper describes our overall approach and its application to predicting the lift and drag coefficients. Bounds for the prediction errors are provided to assess the accuracy of the models. 
We aim for a twofold impact: (a) improving the inference of in-flight parameters to optimize trajectories (e.g., regarding fuel consumption) and (b) providing a principled data-centric modeling approach that could be replicated in other data-intensive industries.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44999153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
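The drag and lift estimation described in this abstract rests on standard quasi-steady aerodynamic approximations. Below is a minimal Python sketch of such a pipeline, assuming steady level flight (lift ≈ weight, thrust ≈ drag); the QAR column names, the synthetic data, and the mass and wing-area values are hypothetical placeholders, not values from the paper.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def lift_drag_coefficients(df, mass_kg, wing_area_m2):
    """Approximate C_L and C_D from recorded flight data, assuming
    quasi-steady level flight so that lift ~ weight and thrust ~ drag.
    Column names ('rho', 'tas', 'thrust') are hypothetical stand-ins
    for the actual QAR parameters."""
    g = 9.81
    q_s = 0.5 * df["rho"] * df["tas"] ** 2 * wing_area_m2  # dynamic pressure times S
    c_l = mass_kg * g / q_s
    c_d = df["thrust"] / q_s
    return c_l, c_d

# Synthetic stand-in for recorded cruise data; mass and wing area are placeholders.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rho": rng.uniform(0.3, 0.6, 500),       # air density at cruise (kg/m^3)
    "tas": rng.uniform(200, 250, 500),       # true airspeed (m/s)
    "thrust": rng.uniform(4e4, 1.2e5, 500),  # total thrust (N)
    "mach": rng.uniform(0.5, 0.85, 500),
})
c_l, c_d = lift_drag_coefficients(df, mass_kg=60_000.0, wing_area_m2=122.0)

# Aircraft-centric model: learn C_D as a function of flight conditions and C_L.
features = pd.DataFrame({"mach": df["mach"], "c_l": c_l})
model = GradientBoostingRegressor().fit(features, c_d)
```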
G. Gonçalves, A. Batchvarov, Yuyi Liu, Yuxin Liu, L. Mason, Indranil Pan, O. Matar
{"title":"Data-driven surrogate modeling and benchmarking for process equipment","authors":"G. Gonçalves, A. Batchvarov, Yuyi Liu, Yuxin Liu, L. Mason, Indranil Pan, O. Matar","doi":"10.1017/dce.2020.8","DOIUrl":"https://doi.org/10.1017/dce.2020.8","url":null,"abstract":"Abstract In chemical process engineering, surrogate models of complex systems are often necessary for tasks of domain exploration, sensitivity analysis of the design parameters, and optimization. A suite of computational fluid dynamics (CFD) simulations geared toward chemical process equipment modeling has been developed and validated with experimental results from the literature. Various regression-based active learning strategies are explored with these CFD simulators in-the-loop under the constraints of a limited function evaluation budget. Specifically, five different sampling strategies and five regression techniques are compared, considering a set of four test cases of industrial significance and varying complexity. Gaussian process regression was observed to have a consistently good performance for these applications. The present quantitative study outlines the pros and cons of the different available techniques and highlights the best practices for their adoption. The test cases and tools are available with an open-source license to ensure reproducibility and engage the wider research community in contributing to both the CFD models and developing and benchmarking new improved algorithms tailored to this field. Impact Statement The recommendations provided here can be used for engineers interested in building computationally inexpensive surrogate models for fluid systems for design or optimization purposes. The test cases can be used by researchers to test and benchmark new algorithms for active learning for this class of problems. An open-source library with tools and scripts has been provided in order to support derived work.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2020.8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46511686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jarkko Suuronen, M. Emzir, Sari Lasanen, S. Särkkä, L. Roininen
{"title":"Enhancing industrial X-ray tomography by data-centric statistical methods","authors":"Jarkko Suuronen, M. Emzir, Sari Lasanen, S. Särkkä, L. Roininen","doi":"10.1017/dce.2020.10","DOIUrl":"https://doi.org/10.1017/dce.2020.10","url":null,"abstract":"Abstract X-ray tomography has applications in various industrial fields such as sawmill industry, oil and gas industry, as well as chemical, biomedical, and geotechnical engineering. In this article, we study Bayesian methods for the X-ray tomography reconstruction. In Bayesian methods, the inverse problem of tomographic reconstruction is solved with the help of a statistical prior distribution which encodes the possible internal structures by assigning probabilities for smoothness and edge distribution of the object. We compare Gaussian random field priors, that favor smoothness, to non-Gaussian total variation (TV), Besov, and Cauchy priors which promote sharp edges and high- and low-contrast areas in the object. We also present computational schemes for solving the resulting high-dimensional Bayesian inverse problem with 100,000–1,000,000 unknowns. We study the applicability of a no-U-turn variant of Hamiltonian Monte Carlo (HMC) methods and of a more classical adaptive Metropolis-within-Gibbs (MwG) algorithm to enable full uncertainty quantification of the reconstructions. We use maximum a posteriori (MAP) estimates with limited-memory BFGS (Broyden–Fletcher–Goldfarb–Shanno) optimization algorithm. As the first industrial application, we consider sawmill industry X-ray log tomography. The logs have knots, rotten parts, and even possibly metallic pieces, making them good examples for non-Gaussian priors. Secondly, we study drill-core rock sample tomography, an example from oil and gas industry. In that case, we compare the priors without uncertainty quantification. We show that Cauchy priors produce smaller number of artefacts than other choices, especially with sparse high-noise measurements, and choosing HMC enables systematic uncertainty quantification, provided that the posterior is not pathologically multimodal or heavy-tailed. Impact Statement Industrial X-ray tomography reconstruction accuracy depends on various factors, like the equipment, measurement geometry, and constraints of the target. For example, dynamical systems are harder targets than static ones. The harder and noisier the setting becomes, the more emphasis goes on mathematical modeling of the targets. Bayesian statistical inversion is a common choice for difficult measurement settings, and its limitations mainly come from the choice of the a priori models. Gaussian models are widely studied, but they provide smooth reconstructions. Total variation priors are not invariant under mesh changes, so doing systematic uncertainty quantification, like data-centric sensor optimization, cannot be done with them. Besov and Cauchy priors however provide systematic non-Gaussian random field models, which can be used for contrast-boosting tomography. The drawback is higher computational cost. Hence, the techniques developed here are useful for non–time-critical applications with difficult measurement settings. 
In these cases, the methods developed may provide significantly better reconstructions.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44458078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
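For the MAP estimation step mentioned in the abstract, here is a minimal sketch: L-BFGS minimization of a negative log-posterior with a Cauchy difference prior, on a toy 1D deconvolution problem standing in for the X-ray projection operator. The forward model, noise level, and prior scale are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear forward model (Gaussian blur) standing in for the X-ray
# projection operator; y is the simulated noisy measurement.
rng = np.random.default_rng(1)
n = 100
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.zeros(n)
x_true[30:60] = 1.0                  # piecewise-constant target with sharp edges
sigma = 0.01
y = A @ x_true + sigma * rng.normal(size=n)

gamma = 0.05                         # Cauchy prior scale (assumed)

def neg_log_posterior(x):
    # Gaussian likelihood plus a Cauchy prior on increments: it penalizes
    # small wiggles but tolerates occasional large jumps (sharp edges).
    r = A @ x - y
    d = np.diff(x)
    return 0.5 * (r @ r) / sigma**2 + np.sum(np.log(gamma**2 + d**2))

res = minimize(neg_log_posterior, np.zeros(n), method="L-BFGS-B")
x_map = res.x                        # MAP reconstruction
```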
{"title":"MCMC for a hyperbolic Bayesian inverse problem in traffic flow modelling – ADDENDUM","authors":"Jeremie Coullon, Y. Pokern","doi":"10.1017/dce.2022.9","DOIUrl":"https://doi.org/10.1017/dce.2022.9","url":null,"abstract":"As work on hyperbolic Bayesian inverse problems remains rare in the literature, we explore empirically the sampling challenges these offer which have to do with shock formation in the solution of the PDE. Furthermore, we provide a unified statistical model to estimate using motorway data both boundary conditions and fundamental diagram parameters in LWR, a well known motorway traffic flow model. This allows us to provide a traffic flow density estimation method that is shown to be superior to two methods found in the traffic flow literature. Finally, we highlight how emph{Population Parallel Tempering} - a modification of Parallel Tempering - is a scalable method that can increase the mixing speed of the sampler by a factor of 10.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":"3 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43148841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ali Girayhan Ozbay, A. Hamzehloo, S. Laizet, Panagiotis Tzirakis, Georgios Rizos, B. Schuller
{"title":"Poisson CNN: Convolutional neural networks for the solution of the Poisson equation on a Cartesian mesh","authors":"Ali Girayhan Ozbay, A. Hamzehloo, S. Laizet, Panagiotis Tzirakis, Georgios Rizos, B. Schuller","doi":"10.1017/dce.2021.7","DOIUrl":"https://doi.org/10.1017/dce.2021.7","url":null,"abstract":"Abstract The Poisson equation is commonly encountered in engineering, for instance, in computational fluid dynamics (CFD) where it is needed to compute corrections to the pressure field to ensure the incompressibility of the velocity field. In the present work, we propose a novel fully convolutional neural network (CNN) architecture to infer the solution of the Poisson equation on a 2D Cartesian grid with different resolutions given the right-hand side term, arbitrary boundary conditions, and grid parameters. It provides unprecedented versatility for a CNN approach dealing with partial differential equations. The boundary conditions are handled using a novel approach by decomposing the original Poisson problem into a homogeneous Poisson problem plus four inhomogeneous Laplace subproblems. The model is trained using a novel loss function approximating the continuous $ {L}^p $ norm between the prediction and the target. Even when predicting on grids denser than previously encountered, our model demonstrates encouraging capacity to reproduce the correct solution profile. The proposed model, which outperforms well-known neural network models, can be included in a CFD solver to help with solving the Poisson equation. Analytical test cases indicate that our CNN architecture is capable of predicting the correct solution of a Poisson problem with mean percentage errors below 10%, an improvement by comparison to the first step of conventional iterative methods. Predictions from our model, used as the initial guess to iterative algorithms like Multigrid, can reduce the root mean square error after a single iteration by more than 90% compared to a zero initial guess.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2021.7","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48803625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lawrence M. Murray, Sumeetpal S. Singh, Anthony Lee
{"title":"Anytime Monte Carlo","authors":"Lawrence M. Murray, Sumeetpal S. Singh, Anthony Lee","doi":"10.1017/dce.2021.6","DOIUrl":"https://doi.org/10.1017/dce.2021.6","url":null,"abstract":"Abstract Monte Carlo algorithms simulates some prescribed number of samples, taking some random real time to complete the computations necessary. This work considers the converse: to impose a real-time budget on the computation, which results in the number of samples simulated being random. To complicate matters, the real time taken for each simulation may depend on the sample produced, so that the samples themselves are not independent of their number, and a length bias with respect to compute time is apparent. This is especially problematic when a Markov chain Monte Carlo (MCMC) algorithm is used and the final state of the Markov chain—rather than an average over all states—is required, which is the case in parallel tempering implementations of MCMC. The length bias does not diminish with the compute budget in this case. It also occurs in sequential Monte Carlo (SMC) algorithms, which is the focus of this paper. We propose an anytime framework to address the concern, using a continuous-time Markov jump process to study the progress of the computation in real time. We first show that for any MCMC algorithm, the length bias of the final state’s distribution due to the imposed real-time computing budget can be eliminated by using a multiple chain construction. The utility of this construction is then demonstrated on a large-scale SMC$ {}^2 $ implementation, using four billion particles distributed across a cluster of 128 graphics processing units on the Amazon EC2 service. The anytime framework imposes a real-time budget on the MCMC move steps within the SMC$ {}^2 $ algorithm, ensuring that all processors are simultaneously ready for the resampling step, demonstrably reducing idleness to due waiting times and providing substantial control over the total compute budget.","PeriodicalId":34169,"journal":{"name":"DataCentric Engineering","volume":"2 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/dce.2021.6","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57162361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}