# Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations

Cameron Meaney

## Introduction

In recent years, there has been enormous growth in the amount of data and computing power available to researchers. Unfortunately, in many real-world scenarios the cost of data acquisition is simply too high to collect enough data to guarantee robustness or convergence of training algorithms. In such situations, researchers face the challenge of generating results from partial or incomplete datasets. Regularization techniques, and methods that can artificially inflate the dataset, become particularly useful here; however, such techniques are often highly dependent on the specifics of the problem.

Luckily, in the important real-world scenarios that we endeavor to analyze, there is often a wealth of existing information from which we can draw. This existing information commonly manifests in the form of a mathematical model, particularly a set of partial differential equations (PDEs). In this paper, the authors provide a technique for incorporating the information about a physical system contained in a PDE into the optimization of a deep neural network. This technique is most useful in situations where established PDE models exist, but where the available data are too scarce for conventional neural network training. In essence, the accompanying PDE model acts as a regularization agent, constraining the space of acceptable solutions to help the optimization converge more quickly and more accurately.

## Problem Setup

Consider the following general PDE

\begin{align*} u_t + N[u; \vec{\lambda}] = 0 \end{align*}

where $u$ is the function we wish to find, subscripts denote partial derivatives, $\vec{\lambda}$ is the set of parameters on which the PDE depends, and $N$ is a differential operator, potentially nonlinear. This general form encompasses a wide array of PDEs used across the physical sciences, including conservation laws, diffusion processes, advection-diffusion-reaction systems, and kinetic equations. Suppose that we have noisy measurements of the PDE solution $u$ scattered across the spatio-temporal input domain. Then we are interested in answering two questions about the physical system:

(1) Given fixed model parameters $\vec{\lambda}$, what can be said about the unknown hidden state $u(t,x)$?

and

(2) What parameters $\vec{\lambda}$ best describe the observed data?
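For concreteness, the viscous Burgers equation used in the examples below fits this general form with a two-parameter operator,

\begin{align*} N[u; \vec{\lambda}] = \lambda_1 u u_x - \lambda_2 u_{xx}, \qquad \vec{\lambda} = (\lambda_1, \lambda_2), \end{align*}

so that question (1) fixes $\vec{\lambda}$ and asks for the hidden state $u(t,x)$, while question (2) infers $\lambda_1$ and $\lambda_2$ from the observed data.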

## Data-Driven Solutions of PDEs

We will begin by attempting to answer the first of the questions above. Specifically, if given a small amount of noisy measurements of the solution of the PDE

\begin{align*} u_t + N[u] = 0, \end{align*}

can we estimate the full solution, $u(t,x)$, by approximating it with a deep neural network? Approximating the solution of the PDE with a neural network results in what the authors refer to as a 'physics-informed neural network' (PINN). Importantly, this technique is most useful in the small-data regime: if we had abundant data, it simply wouldn't be necessary to include information from the PDE, because the data alone would suffice. In the examples below, we seek to learn from a very small number of data points, which makes the information contained in the PDE essential.

### Continuous-Time Models

Consider the case where our noisy measurements of the solution are randomly scattered across the spatio-temporal input domain. This case is referred to as the 'continuous-time case.' Define the function

\begin{align*} f = u_t + N[u] \end{align*}

as the left-hand side of the PDE above. Now assume that $u(t,x)$ can be approximated by a deep neural network. Then $f(t,x)$ can also be approximated by a neural network, since it is simply a function of $u(t,x)$; for the same reason, the two networks share their weights. Calculating $f(t,x)$ from $u(t,x)$ requires derivatives of $u(t,x)$ with respect to the inputs, which is accomplished using a technique called automatic differentiation [?]. To find the shared set of weights, we create a loss function with two distinct parts. The first part quantifies how well the neural network satisfies the known data points and is given by:

\begin{align*} MSE_u = \frac{1}{N_u} \sum_{i=1}^{N_u} [u(t_u^i,x_u^i) - u^i]^2 \end{align*}

where the summation is over the set of known data points. The second part of the loss function quantifies how well the neural network satisfies the PDE and is given by:

\begin{align*} MSE_f = \frac{1}{N_f} \sum_{i=1}^{N_f} [f(t_f^i,x_f^i)]^2 \end{align*}

where the summation is over the set of domain collocation points. The full loss function used in the optimization is then taken to be the sum of these two functions:

\begin{align*} MSE = MSE_u + MSE_f. \end{align*}

By using this loss function in the optimization, information from both the known data and the known physics (via the PDE) is incorporated into the neural network. This allows the network to approximate the function after training on only a small number of data points.
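The automatic differentiation mentioned above can be illustrated in miniature with forward-mode dual numbers. This is a toy sketch of the principle, not the reverse-mode machinery that deep learning frameworks actually use: every operation propagates an exact derivative alongside its value, so no finite-difference approximation is involved.

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0; `der` carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Differentiate u(x) = x * sin(x) at x = 1: seed dx/dx = 1 and evaluate.
x = Dual(1.0, 1.0)
u = x * sin(x)
# u.der now holds the exact derivative sin(1) + cos(1), to machine precision
```

Frameworks such as those used in the paper apply the same idea in reverse mode, which is far more efficient when one output (the loss) depends on many weights.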

For an example of this method in action, consider a problem involving Burgers' equation, given by:

\begin{align*} &u_t + uu_x - (0.01/\pi)u_{xx} = 0, ~ x \in [-1,1], ~ t \in [0,1], \\ &u(0,x) = -\sin(\pi x), \\ &u(t, -1) = u(t,1) = 0. \end{align*}

Notably, Burgers' equation is known to be challenging to solve because of the shock (discontinuity) that forms after sufficiently long time. Using PINNs, however, this shockwave is easily handled. An example of this is given below in figure ?.
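The loss construction above can be sketched end-to-end for Burgers' equation. In this illustrative sketch, a hand-picked trial function stands in for the trained network, and central finite differences stand in for automatic differentiation; the trial function, data points, and collocation points are all assumptions made for the example.

```python
import math

NU = 0.01 / math.pi  # viscosity from the problem statement

def u_hat(t, x):
    # frozen initial profile: matches the initial/boundary data but NOT the PDE
    return -math.sin(math.pi * x)

def f_hat(t, x, h=1e-5):
    # PDE residual f = u_t + u*u_x - NU*u_xx via central differences
    u_t = (u_hat(t + h, x) - u_hat(t - h, x)) / (2 * h)
    u_x = (u_hat(t, x + h) - u_hat(t, x - h)) / (2 * h)
    u_xx = (u_hat(t, x + h) - 2 * u_hat(t, x) + u_hat(t, x - h)) / h**2
    return u_t + u_hat(t, x) * u_x - NU * u_xx

# "measured" points (t, x, u): initial and boundary data, which u_hat matches
data = [(0.0, 0.25, -math.sin(0.25 * math.pi)),
        (0.7, -1.0, 0.0),
        (0.7, 1.0, 0.0)]
# interior collocation points, where only the PDE residual is enforced
colloc = [(0.5, 0.3), (0.2, -0.6), (0.9, 0.1)]

mse_u = sum((u_hat(t, x) - u) ** 2 for t, x, u in data) / len(data)
mse_f = sum(f_hat(t, x) ** 2 for t, x in colloc) / len(colloc)
loss = mse_u + mse_f
# mse_u is ~0 (the data are matched), but mse_f is large: the PDE term
# exposes that the frozen profile is not a solution, which is exactly the
# signal gradient descent would use to improve the network's weights.
```

The point of the sketch is that a candidate fitting the data perfectly can still incur a large loss: the $MSE_f$ term is where the physics enters.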

### Discrete-Time Models

Now consider the case where our available data are not randomly scattered across the spatio-temporal domain, but rather present only at two particular times. This is known as the discrete-time case and occurs frequently in real-world settings, for instance when dealing with discrete pictures or medical images with no data between them. These cases can be dealt with in the same manner as the continuous case with a few small adjustments. To adapt the PINN technique to discrete-time models, we must leverage Runge-Kutta methods for the numerical solution of differential equations [?]. Runge-Kutta methods approximate the solution of a differential equation at the next numerical time step by first approximating the solution at a set of intermediate points between the time steps, then using these values to predict the full time step. The general form of a Runge-Kutta method with $q$ stages is given by:

\begin{align*} u^{n+c_i} &= u^n - \Delta t \sum^q_{j=1} a_{ij} N[u^{n+c_j}], ~ i = 1,...,q \\ u^{n+1} &= u^n - \Delta t \sum^q_{j=1} b_j N[u^{n+c_j}] \end{align*}

where $u^{n+c_j} = u(t^n + c_j \Delta t, x)$, and the general form includes both explicit and implicit time-stepping schemes. For more information on Runge-Kutta methods, see [?].

In the continuous-time case, we approximated the function $u(t,x)$ by a neural network with two inputs and one output. In the discrete-time case, instead of creating a neural network that takes $t$ and $x$ as input and outputs the value of $u(t,x)$, we create a neural network that takes only $x$ and outputs the values of the solution at the intermediate points of the Runge-Kutta time-stepping scheme, $u^{n+c_j}$ for $j=1,...,q$.
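At the level of shapes, the discrete-time network is simply a map from one spatial input to $q$ outputs. The sketch below fixes arbitrary deterministic weights purely for illustration; in practice the weights are trained so that the $q$ outputs satisfy the Runge-Kutta relations and the data at the two measurement times.

```python
import math

def stage_network(x, q=4, hidden=8):
    """Toy MLP: one spatial input x -> q stage values u^{n+c_1}(x),...,u^{n+c_q}(x).

    Weights are arbitrary deterministic values, NOT trained parameters.
    """
    # one hidden tanh layer
    h = [math.tanh(0.1 * (i + 1) * x + 0.01 * i) for i in range(hidden)]
    # linear output layer with q units, one per Runge-Kutta stage
    return [sum(0.05 * (i + j + 1) * h[j] for j in range(hidden))
            for i in range(q)]

stages = stage_network(0.3)  # the q stage values at x = 0.3
```

A single forward pass thus yields every intermediate stage at once, which is what lets the discrete-time PINN use Runge-Kutta schemes with very many stages at no extra cost per evaluation.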