A Deep Learning approach to Reduced Order Modelling of Parameter Dependent Partial Differential Equations.

Published on Mar 10, 2021 in arXiv: Numerical Analysis
Nicola Rares Franco (estimated H-index: 1), Andrea Manzoni (estimated H-index: 22), Paolo Zunino (estimated H-index: 28)
Abstract
Within the framework of parameter-dependent PDEs, we develop a constructive approach based on deep neural networks for the efficient approximation of the parameter-to-solution map. The research is motivated by the limitations and drawbacks of state-of-the-art algorithms, such as the Reduced Basis method, when addressing problems that show a slow decay in the Kolmogorov n-width. Our work is based on the use of deep autoencoders, which we employ for encoding and decoding a high-fidelity approximation of the solution manifold. In order to fully exploit the approximation capabilities of neural networks, we consider a nonlinear version of the Kolmogorov n-width, on which we base the concept of a minimal latent dimension. We show that this minimal dimension is intimately related to the topological properties of the solution manifold, and we provide some theoretical results with particular emphasis on second-order elliptic PDEs. Finally, we report numerical experiments in which we compare the proposed approach with classical POD-Galerkin reduced order models. In particular, we consider parametrized advection-diffusion PDEs, and we test the methodology in the presence of strong transport fields, singular terms and stochastic coefficients.
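To make the autoencoder idea concrete, the following is a minimal illustrative sketch in PyTorch, not the authors' implementation: an encoder compresses high-fidelity snapshots in R^{N_h} to a latent code of dimension n, and a decoder reconstructs them. All sizes, layer widths, and names (N_h, n, Autoencoder) are hypothetical.

```python
# Minimal autoencoder sketch for nonlinear reduced order modelling.
# Hypothetical sizes: N_h = full-order degrees of freedom, n = latent dimension.
import torch
import torch.nn as nn

N_h, n = 4096, 8  # illustrative values, not taken from the paper

class Autoencoder(nn.Module):
    def __init__(self, full_dim: int, latent_dim: int):
        super().__init__()
        # Encoder: high-fidelity snapshot -> latent code
        self.encoder = nn.Sequential(
            nn.Linear(full_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: latent code -> reconstructed snapshot
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, full_dim),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(u))

model = Autoencoder(N_h, n)
snapshots = torch.randn(32, N_h)          # placeholder snapshot batch
loss = nn.functional.mse_loss(model(snapshots), snapshots)
loss.backward()                           # one training gradient step
```

In a full pipeline one would additionally learn a map from the parameters to the latent codes, so that composing it with the decoder approximates the parameter-to-solution map.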
References (54)
Deep learning-based reduced order models (DL-ROMs) have been recently proposed to overcome common limitations shared by conventional reduced order models (ROMs) - built, e.g., through proper orthogonal decomposition (POD) - when applied to nonlinear time-dependent parametrized partial differential equations (PDEs). These might be related to (i) the need to deal with projections onto high dimensional linear approximating trial manifolds, (ii) expensive hyper-reduction strategies, or (iii) the int...
3 Citations
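For contrast with such DL-ROMs, the linear trial subspace used by conventional POD-based ROMs can be built from a snapshot matrix via a truncated SVD. The sketch below (NumPy, with placeholder data and sizes) shows that construction.

```python
# POD via truncated SVD: the linear trial subspace of conventional ROMs.
import numpy as np

S = np.random.rand(4096, 200)      # placeholder snapshot matrix (N_h x n_snapshots)
U, sigma, _ = np.linalg.svd(S, full_matrices=False)

n = 8                              # illustrative reduced dimension
V = U[:, :n]                       # POD basis: best rank-n linear approximation
u_reduced = V.T @ S[:, 0]          # project a snapshot onto the subspace
u_approx = V @ u_reduced           # reconstruct; error governed by sigma[n:]
```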
Jonathan W. Siegel (PSU: Pennsylvania State University, H-index: 5), Jinchao Xu (PSU: Pennsylvania State University, H-index: 60)
We study the approximation properties of shallow neural networks (NN) whose activation function is a power of the rectified linear unit. Specifically, we consider the dependence of the approximation rate on the dimension and the smoothness of the underlying function to be approximated. Like the finite element method, such networks represent piecewise polynomial functions. However, we show that for sufficiently smooth functions the approximation properties of shallow ReLU^k networks are mu...
5 Citations
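As an illustration, a shallow network with the ReLU^k activation described above takes only a few lines; the sketch (PyTorch, with a hypothetical width and k) evaluates a sum of terms of the form a_i * max(0, w_i.x + b_i)^k.

```python
# Shallow ReLU^k network: a single hidden layer with activation max(0, t)^k.
import torch
import torch.nn as nn

class ShallowReLUk(nn.Module):
    def __init__(self, in_dim: int, width: int, k: int):
        super().__init__()
        self.k = k
        self.hidden = nn.Linear(in_dim, width)   # inner weights w_i, b_i
        self.outer = nn.Linear(width, 1)         # outer coefficients a_i

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.outer(torch.relu(self.hidden(x)) ** self.k)

net = ShallowReLUk(in_dim=2, width=64, k=2)      # k=2: piecewise quadratic
y = net(torch.rand(10, 2))
```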
Nikolaj Takata Mücke (CWI: Centrum Wiskunde & Informatica, H-index: 1), Sander M. Bohte (CWI, H-index: 22), Cornelis W. Oosterlee (CWI, H-index: 39)
We present a novel reduced order model (ROM) approach for parameterized time-dependent PDEs based on modern learning. The ROM is suitable for multi-query problems and is nonintrusive. It is divided into two distinct stages: A nonlinear dimensionality reduction stage that handles the spatially distributed degrees of freedom based on convolutional autoencoders, and a parameterized time-stepping stage based on memory aware neural networks (NNs), specifically causal convolutional and long short-term...
2 Citations
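The time-stepping stage mentioned above can be realized with a causal 1-D convolution over the latent trajectory, so that the prediction at time t only sees times up to t. A minimal sketch with hypothetical sizes:

```python
# Causal Conv1d over a latent trajectory: left-pad so the output at step t
# depends only on steps <= t.
import torch
import torch.nn as nn

latent_dim, kernel = 8, 4
conv = nn.Conv1d(latent_dim, latent_dim, kernel_size=kernel)

z = torch.randn(1, latent_dim, 50)                # (batch, channels, time)
z_padded = nn.functional.pad(z, (kernel - 1, 0))  # pad on the left only
z_next = conv(z_padded)                           # causal latent prediction
```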
Albert Cohen (H-index: 80), Ronald A. DeVore (H-index: 64), ..., Przemysław Wojtaszczyk (H-index: 21) (4 authors)
While it is well known that nonlinear methods of approximation can often perform dramatically better than linear methods, there are still questions on how to measure the optimal performance possible for such methods. This paper studies nonlinear methods of approximation that are compatible with numerical implementation in that they are required to be numerically stable. A measure of optimal performance, called "stable manifold widths", for approximating a model class K in a Banach space $X...
5 Citations
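For reference, the stable manifold width of a model class K in X is defined along the following lines (a sketch of the definition, with notation assumed here: a is the encoder, M the decoder, and gamma the Lipschitz stability constant imposed on both):

\[
  \delta_{n,\gamma}(K)_X \;=\;
  \inf_{\substack{a:\,X \to \mathbb{R}^n,\; M:\,\mathbb{R}^n \to X \\ a,\,M \ \gamma\text{-Lipschitz}}}
  \;\sup_{f \in K}\; \bigl\| f - M(a(f)) \bigr\|_X .
\]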
Wenqian Chen (H-index: 1), Qian Wang (H-index: 6), ..., Chuhua Zhang (H-index: 1) (4 authors)
4 Citations
Physics informed neural networks (PINNs) have recently been very successfully applied for efficiently approximating inverse problems for PDEs. We focus on a particular class of inverse problems, the so-called data assimilation or unique continuation problems, and prove rigorous estimates on the generalization error of PINNs approximating them. An abstract framework is presented and conditional stability estimates for the underlying inverse problem are employed to derive the estimate on the PINN ...
9 Citations
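As a concrete, purely illustrative instance of such a PINN for data assimilation, the loss combines a PDE residual over the domain with a data-mismatch term on the observation subdomain; the problem, network, sampling, and measurements below are all assumptions.

```python
# Illustrative PINN loss for data assimilation on the Poisson problem
# -u'' = f in (0,1), with measurements of u available only on (0, 0.3).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

x_pde = torch.rand(128, 1, requires_grad=True)   # interior collocation points
u = net(x_pde)
du = torch.autograd.grad(u.sum(), x_pde, create_graph=True)[0]
d2u = torch.autograd.grad(du.sum(), x_pde, create_graph=True)[0]
f = torch.ones_like(x_pde)                       # hypothetical source term
pde_residual = ((-d2u - f) ** 2).mean()

x_obs = 0.3 * torch.rand(32, 1)                  # observation subdomain
u_obs = -0.5 * x_obs**2                          # placeholder measurements
data_mismatch = ((net(x_obs) - u_obs) ** 2).mean()

loss = pde_residual + data_mismatch              # no boundary term: ill-posed
loss.backward()
```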
Kaushik Bhattacharya (California Institute of Technology, H-index: 54), Bamdad Hosseini (H-index: 7), ..., Andrew M. Stuart (H-index: 63) (4 authors)
We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for com...
22 Citations
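One common way to realize such an input-output map between function spaces, used here only as an illustrative sketch and not necessarily the authors' exact architecture, is to compress the input and output functions with PCA and learn a network between the coefficient vectors:

```python
# PCA-based operator learning sketch: reduce input/output functions with PCA,
# then learn a map between the coefficient spaces. Data and sizes are placeholders.
import numpy as np
import torch
import torch.nn as nn

X = np.random.rand(500, 1024)   # 500 input functions sampled on a 1024-point grid
Y = np.random.rand(500, 1024)   # corresponding output functions

def pca_basis(A: np.ndarray, r: int) -> np.ndarray:
    U, _, _ = np.linalg.svd(A.T @ A)   # eigenvectors span the function space
    return U[:, :r]

Vx, Vy = pca_basis(X, 20), pca_basis(Y, 20)
cx = torch.tensor(X @ Vx, dtype=torch.float32)   # input PCA coefficients
cy = torch.tensor(Y @ Vy, dtype=torch.float32)   # output PCA coefficients

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 20))
loss = nn.functional.mse_loss(net(cx), cy)       # coefficients-to-coefficients map
loss.backward()
```

Because the network acts only on the PCA coefficients, its size is independent of the grid resolution, which is what makes the approximation robust to the dimension of the finite-dimensional discretization.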
Moritz Geist (H-index: 1), Philipp Petersen (H-index: 14), ..., Gitta Kutyniok (H-index: 47) (5 authors)
We perform a comprehensive numerical study of the effect of approximation-theoretical results for neural networks on practical learning problems in the context of numerical analysis. As the underlying model, we study the machine-learning-based solution of parametric partial differential equations. Here, approximation theory predicts that the performance of the model should depend only very mildly on the dimension of the parameter space and is determined by the intrinsic dimension of the solution...
13 Citations
Yeonjong Shin (H-index: 9), Jérôme Darbon (H-index: 21), George Em Karniadakis (H-index: 6)
Physics informed neural networks (PINNs) are deep learning based techniques for solving partial differential equations (PDEs). Guided by data and physical laws, PINNs find a neural network that approximates the solution to a system of PDEs. Such a neural network is obtained by minimizing a loss function in which any prior knowledge of PDEs and data are encoded. Despite their remarkable empirical success, there is little theoretical justification for PINNs. In this paper, we establish a mathematica...
23 Citations
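The loss functional referred to above typically has the following generic shape (a schematic form with notation assumed here: $\mathcal{N}$ is the differential operator with data $f$, $\mathcal{B}$ the boundary operator with data $g$, and $u_\theta$ the network):

\[
  \mathcal{L}(\theta) \;=\;
  \frac{1}{N_r}\sum_{i=1}^{N_r} \bigl| \mathcal{N}[u_\theta](x_i) - f(x_i) \bigr|^2
  \;+\;
  \frac{1}{N_b}\sum_{j=1}^{N_b} \bigl| \mathcal{B}[u_\theta](y_j) - g(y_j) \bigr|^2 ,
\]

with the two sums taken over interior collocation points and boundary/data points, respectively.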
Kookjin Lee (SNL: Sandia National Laboratories, H-index: 5), Kevin Carlberg (SNL: Sandia National Laboratories, H-index: 17)
Nearly all model-reduction techniques project the governing equations onto a linear subspace of the original state space. Such subspaces are typically computed using methods such as balanced truncation, rational interpolation, the reduced-basis method, and (balanced) proper orthogonal decomposition (POD). Unfortunately, restricting the state to evolve in a linear subspace imposes a fundamental limitation to the accuracy of the resulting reduced-order model (ROM). In particular, linear-s...
124 Citations
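The remedy proposed in that line of work is to replace the linear subspace with a nonlinear manifold parametrized by a decoder, with the reduced state evolved through the decoder's Jacobian. A toy sketch of that idea follows; the decoder, the full-order dynamics, and all names are assumptions, not the paper's implementation.

```python
# Toy manifold-ROM step: evolve a latent state z so that the decoded state
# g(z) tracks full-order dynamics du/dt = F(u), via the decoder Jacobian.
import torch

def g(z: torch.Tensor) -> torch.Tensor:          # decoder R^n -> R^{N_h}
    return torch.tanh(z @ torch.ones(2, 16))     # placeholder nonlinear decoder

def F(u: torch.Tensor) -> torch.Tensor:          # placeholder full-order RHS
    return -u

z = 0.1 * torch.ones(2)
J = torch.autograd.functional.jacobian(g, z)     # (N_h x n) decoder Jacobian
# Galerkin-style latent velocity: dz/dt = pinv(J) @ F(g(z))
z_dot = torch.linalg.pinv(J) @ F(g(z))
z = z + 0.01 * z_dot                             # one explicit Euler step
```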
Cited By (0)