PRESENTED BY: DAVID DOV

TRANSCRIPT

Page 1: PRESENTED BY: DAVID DOV

Page 2:

Problem setting

▪ Consider a partial differential equation (PDE) of the form:

  u_t + N[u; λ] = 0, x ∈ Ω, t ∈ [0, T]

▪ Where:

▪ u(t, x) is the latent solution

▪ N[·; λ] is a nonlinear operator parameterized by λ

▪ Ω is a subset of R^D

Page 3:

Problem setting – cont'd

▪ Problem 1: data-driven solution of PDEs
▪ Given a fixed λ, find u(t, x)

▪ Problem 2: system identification / data-driven discovery of PDEs
▪ Find the λ that best describes the observed data

▪ Both problems are considered in continuous and discrete time settings

Page 4:

Data-driven solution of PDEs (problem 1) – continuous time setting

▪ Assuming λ is given, the PDE reads: u_t + N[u] = 0

▪ Let u(t, x) be approximated by a neural network

▪ Let f(t, x) be the physics-informed neural network defined by:

  f := u_t + N[u]

▪ The key idea of this work:
▪ Calculate the derivatives of u with automatic differentiation
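The "automatic differentiation" step can be illustrated with a minimal forward-mode sketch based on dual numbers. This is not the paper's implementation (which relies on the autodiff built into deep learning frameworks), and the function u below is a toy stand-in for a trained network:

```python
import math

class Dual:
    """Forward-mode AD value: carries f(x) and df/dx together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule: d/dx sin(x) = cos(x) * x'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# Toy stand-in for the network: u(t, x) = x * sin(t)
def u(t, x):
    return x * sin(t)

# u_t at (t, x) = (1, 2): seed t with derivative 1 and x with 0
u_t = u(Dual(1.0, 1.0), Dual(2.0, 0.0)).dot   # exact value: 2 * cos(1)
```

The same mechanism, applied through the network's layers, is what yields the derivatives entering the residual f.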

Page 5:

Training

▪ Proposed loss function:

  MSE = MSE_u + MSE_f

▪ Where:

  MSE_u = (1/N_u) Σ_{i=1}^{N_u} |u(t_u^i, x_u^i) − u^i|²

▪ Here {t_u^i, x_u^i, u^i} is the training data on the initial and boundary conditions

▪ And:

  MSE_f = (1/N_f) Σ_{i=1}^{N_f} |f(t_f^i, x_f^i)|²

▪ Where {t_f^i, x_f^i} are collocation points
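As a concrete (hypothetical) instance of this loss, the sketch below evaluates MSE_u + MSE_f for the heat equation u_t = u_xx, with a candidate function whose derivatives are written in closed form. In an actual PINN, u would be a neural network and the derivatives would come from automatic differentiation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate u(t, x) = exp(-pi^2 t) sin(pi x): the exact solution of
# u_t = u_xx on [0, 1] with u(0, x) = sin(pi x).
def u(t, x):    return np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
def u_t(t, x):  return -np.pi**2 * u(t, x)
def u_xx(t, x): return -np.pi**2 * u(t, x)

# MSE_f: squared PDE residual f = u_t - u_xx at random collocation points
t_f, x_f = rng.uniform(0, 1, 100), rng.uniform(0, 1, 100)
mse_f = np.mean((u_t(t_f, x_f) - u_xx(t_f, x_f)) ** 2)

# MSE_u: squared data mismatch on the initial slice t = 0
x_u = rng.uniform(0, 1, 100)
mse_u = np.mean((u(0.0, x_u) - np.sin(np.pi * x_u)) ** 2)

loss = mse_u + mse_f  # zero here, since u is the exact solution
```

Training would minimize this quantity over the network parameters rather than merely evaluating it.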

Page 6:

Example – Schrödinger equation

The equation:

  i·h_t + 0.5·h_xx + |h|²·h = 0, x ∈ [−5, 5], t ∈ [0, π/2]

Initial and boundary conditions:

  h(0, x) = 2 sech(x)
  h(t, −5) = h(t, 5), h_x(t, −5) = h_x(t, 5) (periodic)

Goal: find h(t, x), the complex-valued solution

Page 7:

Example – Schrödinger equation – cont'd

Define h = [u, v], where u and v are the real and imaginary parts of the solution, and the residual:

  f := i·h_t + 0.5·h_xx + |h|²·h

Loss function:

  MSE = MSE_0 + MSE_b + MSE_f

Page 8:

Example – Schrödinger equation – cont'd

Understanding the loss function and the simulation

Initial conditions: MSE_0 compares the outputs of the NN against the initial data at randomly sampled initial collocation points

Page 9:

Example – Schrödinger equation – cont'd

Understanding the loss function and the simulation

Boundary conditions: MSE_b enforces the periodic boundary conditions at randomly sampled boundary collocation points, using outputs of the NN and derivatives obtained with automatic differentiation

Page 10:

Example – Schrödinger equation – cont'd

Understanding the loss function and the simulation

Imposing the structure of the differential equation: MSE_f penalizes the residual f at randomly sampled collocation points, using the outputs of the physics-informed NN (calculated with automatic differentiation)

Page 11:

Example – Schrödinger equation – cont'd

Results

Ground truth: a numerical ("exact") solution for each x and t, simulated on a fine grid using a Runge–Kutta method

Page 12:

Example – Schrödinger equation – cont'd

Limitations

▪ Absence of theoretical error/convergence estimates

▪ The interplay between the neural architecture/training procedure and the differential equation is not understood

▪ Requires dense sampling of collocation points – a bottleneck in high-dimensional problems

Page 13:

Data-driven solution of PDEs (problem 1)

Related work:

1. DEQGAN [Randle et al. '20] (presented last week)
▪ Limited to problem 1 in the continuous time setting
▪ New loss function based on a GAN instead of the L2 loss

2. Learning data-driven discretizations for partial differential equations [Bar-Sinai et al. '19]
▪ Uses a parametric model to approximate the spatial derivatives in the PDE
▪ A NN is used to estimate the parameters
▪ A classical approach is used to integrate the equation over time

3. FiniteNet [Stevens et al. '20]
▪ Extends 2 by using an LSTM to model the evolution of the solution over time

4. PDE-Net 2.0 [Long et al. '19]
▪ Solves problem 2 (PDE/system identification)
▪ Reveals the PDE itself, rather than only its parameters as in this work
▪ Assumes measurements over multiple time steps (on a grid)

Page 14:

Discrete time setting

Background: the Runge–Kutta (RK) method for numerical solution of PDEs via integration in time

▪ Recall the PDE: u_t + N[u] = 0

▪ Discretization: let u^n(x) = u(t^n, x), where t^n = n·Δt

▪ Goal: calculate u^{n+1} given u^n

▪ RK1 – the simplest form (the Euler method):

  u^{n+1} = u^n − Δt·N[u^n]

Page 15:

Discrete time setting

Background: the Runge–Kutta (RK) method for numerical solution of PDEs via integration in time

▪ RK4 – a very common variant, using four stages per step; for u_t = f(t, u):

  k_1 = f(t^n, u^n)
  k_2 = f(t^n + Δt/2, u^n + (Δt/2)·k_1)
  k_3 = f(t^n + Δt/2, u^n + (Δt/2)·k_2)
  k_4 = f(t^n + Δt, u^n + Δt·k_3)
  u^{n+1} = u^n + (Δt/6)(k_1 + 2k_2 + 2k_3 + k_4)

  (for the PDE above, f = −N[u])

Image credit: https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#Implicit_Runge%E2%80%93Kutta_methods
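As a sanity check of the RK4 update, here is a sketch for a scalar ODE du/dt = f(t, u); for a PDE, f would be −N[u] evaluated on a spatial grid:

```python
import math

def rk4_step(f, t, u, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate du/dt = -u from u(0) = 1 to t = 1; the exact answer is e^{-1}
f = lambda t, u: -u
u, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    u = rk4_step(f, t, u, dt)
    t += dt
```

With only ten steps the result already matches e^{-1} to roughly single-precision accuracy, reflecting the method's fourth-order error.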

Page 16:

Discrete time setting

Background: the Runge–Kutta (RK) method for numerical solution of PDEs via integration in time

▪ RK in its most general (implicit) form with q stages:

  u^{n+c_i} = u^n − Δt Σ_{j=1}^q a_{ij} N[u^{n+c_j}],  i = 1, …, q
  u^{n+1} = u^n − Δt Σ_{j=1}^q b_j N[u^{n+c_j}]

▪ Where the coefficients {a_{ij}, b_j, c_j} are given by the Butcher tableau

▪ The classical approach: solve a (nonlinear) system of equations
▪ The unknowns are u^{n+c_1}, …, u^{n+c_q} and u^{n+1}

▪ Guarantees on the temporal accumulated error: a q-stage implicit RK scheme can attain accuracy of order O(Δt^{2q})

▪ Computational bottleneck: the system of equations grows with q

Image credit: https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods#Implicit_Runge%E2%80%93Kutta_methods
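The role of the implicit stage equations can be seen in the simplest case, q = 1 with a_11 = c_1 = 1/2 and b_1 = 1 (the implicit midpoint rule). The sketch below solves the stage equation by fixed-point iteration, which is a stand-in for the per-step system the classical approach must solve:

```python
import math

def implicit_midpoint_step(f, u, dt, iters=50):
    """One step of the q = 1 implicit RK scheme (midpoint rule) for du/dt = f(u).

    The stage value k solves the implicit equation k = f(u + dt/2 * k);
    here it is found by fixed-point iteration, which converges for small dt.
    """
    k = f(u)
    for _ in range(iters):
        k = f(u + 0.5 * dt * k)
    return u + dt * k

# du/dt = -u from u(0) = 1 to t = 1; the exact answer is e^{-1}
u, dt = 1.0, 0.1
for _ in range(10):
    u = implicit_midpoint_step(lambda v: -v, u, dt)
```

For stiff problems one would solve the stage system with Newton's method instead; the point is that every time step hides a nonlinear solve, which is exactly the bottleneck the paper's approach sidesteps.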

Page 17:

Discrete time setting

Proposed solution:

▪ Rewrite the equations:

  u_i^n := u^{n+c_i} + Δt Σ_{j=1}^q a_{ij} N[u^{n+c_j}],  i = 1, …, q
  u_{q+1}^n := u^{n+1} + Δt Σ_{j=1}^q b_j N[u^{n+c_j}]

  so that each of u_1^n, …, u_{q+1}^n should equal u^n

▪ Define a neural network with the multiple outputs:

  [u^{n+c_1}(x), …, u^{n+c_q}(x), u^{n+1}(x)]

▪ Define the physics-informed neural network:

  [u_1^n(x), …, u_{q+1}^n(x)]

Page 18:

Example – the Allen–Cahn equation:

  u_t − 0.0001·u_xx + 5u³ − 5u = 0, x ∈ [−1, 1], t ∈ [0, 1]

  u(0, x) = x²·cos(πx), with periodic boundary conditions

Page 19:

Example – Allen–Cahn equation – cont'd

▪ Loss function: the sum of squared errors matching all q+1 outputs of the physics-informed network to the snapshot data u^n, plus a term enforcing the periodic boundary conditions:

  SSE = SSE_n + SSE_b

Page 20:

Example – Allen–Cahn equation – cont'd

▪ Observation: the method suggests training a neural network for a single time step

▪ Consider a large step: Δt = 0.8

▪ This requires a large number of Runge–Kutta stages: q = 100

▪ Results:

Page 21:

Discrete time setting

Notes

▪ q = 100 stages is far beyond the highest order ever used with classical Runge–Kutta methods

▪ Huge time step of Δt = 0.8, versus the much smaller steps required in standard simulations

▪ No theoretical guarantees

▪ Every time step requires retraining the network

Page 22:

Data-driven discovery of PDEs (problem 2)

Continuous time

▪ Example – the Navier–Stokes equations in 2D:

  u_t + λ_1(u·u_x + v·u_y) = −p_x + λ_2(u_xx + u_yy)
  v_t + λ_1(u·v_x + v·v_y) = −p_y + λ_2(v_xx + v_yy)

▪ Constraint – divergence-free velocity field: u_x + v_y = 0

▪ Assume:

  u = ψ_y, v = −ψ_x

  where ψ(t, x, y) is a latent function (potential), so the constraint holds automatically

▪ Goal: given the measurements {t^i, x^i, y^i, u^i, v^i},

  find the parameters λ = (λ_1, λ_2) and the pressure p(t, x, y)

Page 23:

Data-driven discovery of PDEs (problem 2)

Proposed approach

▪ Define the residuals:

  f := u_t + λ_1(u·u_x + v·u_y) + p_x − λ_2(u_xx + u_yy)
  g := v_t + λ_1(u·v_x + v·v_y) + p_y − λ_2(v_xx + v_yy)

▪ Use a neural network with two outputs: (ψ(t, x, y), p(t, x, y))

▪ λ_1 and λ_2 are learnable parameters, trained jointly with the network weights

▪ Physics-informed neural network: outputs (f, g), obtained via automatic differentiation

▪ Loss function: the mean squared error on the velocity measurements plus the mean squared residuals f and g
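The essence of treating λ as a learnable parameter can be shown on a toy one-parameter analogue: identify λ in u_t = λ·u from observed data by gradient descent on the mean squared residual. The time derivative here comes from finite differences on the data; an actual PINN would obtain it from automatic differentiation and learn λ jointly with the network weights:

```python
import numpy as np

# Observations of u(t) = exp(lam_true * t) on a uniform grid
t = np.linspace(0.0, 1.0, 100)
lam_true = -1.5
u_obs = np.exp(lam_true * t)

# Estimate u_t from the data (second-order central finite differences)
u_t = np.gradient(u_obs, t)

# Gradient descent on the residual loss mean((u_t - lam * u)^2)
lam, lr = 0.0, 0.5
for _ in range(300):
    resid = u_t - lam * u_obs
    grad = np.mean(-2.0 * resid * u_obs)   # d(loss)/d(lam)
    lam -= lr * grad
# lam now approximates lam_true
```

The same idea scales up: in the Navier–Stokes example, λ_1 and λ_2 simply receive gradients through the residuals f and g like any other trainable parameter.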

Page 24:

Data-driven discovery of PDEs (problem 2)

Example simulation: incompressible flow past a circular cylinder

▪ Training and testing data are sampled from a rectangular region in the wake of the cylinder

Page 25:

Data-driven discovery of PDEs (problem 2)

Results

Page 26:

Data-driven discovery of PDEs (problem 2)

Observation

▪ The continuous time setting assumes the availability of measurements throughout the entire spatio-temporal domain

▪ In many cases, measurements are available only at distinct time instants

▪ Next: the discrete time setting

▪ Assume: only two data snapshots are available

▪ Solution: utilize the Runge–Kutta method, as in problem 1

Page 27:

Data-driven discovery of PDEs (problem 2) – discrete time setting

▪ Recall the general form of the Runge–Kutta method

▪ Define, for the two snapshots at t^n and t^{n+1}:

  u_i^n := u^{n+c_i} + Δt Σ_{j=1}^q a_{ij} N[u^{n+c_j}; λ]
  u_i^{n+1} := u^{n+c_i} + Δt Σ_{j=1}^q (a_{ij} − b_j) N[u^{n+c_j}; λ]

▪ Note the difference from problem 1: λ is unknown, and the same stages are propagated both backward to t^n and forward to t^{n+1}

Page 28:

Data-driven discovery of PDEs (problem 2) – discrete time setting

▪ Define the multi-output neural network:

  [u^{n+c_1}(x), …, u^{n+c_q}(x)]

▪ Define the two physics-informed neural networks:

  [u_1^n(x), …, u_q^n(x)] and [u_1^{n+1}(x), …, u_q^{n+1}(x)]

Page 29:

Data-driven discovery of PDEs (problem 2) – discrete time setting

▪ Given the measurements: the two snapshots at t^n and t^{n+1}

▪ The loss function is defined by the sum of squared errors of both physics-informed networks against their respective snapshots:

  SSE = SSE_n + SSE_{n+1}

Page 30:

Data-driven discovery of PDEs (problem 2)

Example: the Korteweg–de Vries (KdV) equation

  u_t + λ_1·u·u_x + λ_2·u_xxx = 0

▪ Initial condition: u(0, x) = cos(πx)

▪ Periodic boundary conditions

Page 31:

Data-driven discovery of PDEs (problem 2)

Data:

▪ Measurements at a set of spatial points

▪ Taken at the two time snapshots t^n and t^{n+1}

Page 32:

Data-driven discovery of PDEs (problem 2)

Results

Page 33:

Conclusions

▪ A very nice application of neural networks to PDEs

▪ More questions than answers:
▪ Architecture selection – why do some architectures work while others fail?
▪ How would the method work in high dimensions?
▪ How can the uncertainty of the predictions be quantified?