Solving PDEs with ModelingToolkit and NeuralPDE

This example solves a 2D Poisson equation, following the NeuralPDE tutorial Solving Poisson PDE Systems (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/):

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin (\pi x) \sin (\pi y) \]

with boundary conditions

\[\begin{split} \begin{align} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \\ \end{align} \end{split}\]

where

\(x ∈ [0, 1], y ∈ [0, 1]\)
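For later comparison, this problem has the closed-form solution

\[ u(x, y) = \frac{\sin (\pi x) \sin (\pi y)}{2 \pi^{2}} \]

which satisfies the equation because \(\partial^{2}_{x}u + \partial^{2}_{y}u = -2\pi^{2} u = -\sin (\pi x) \sin (\pi y)\), and vanishes on all four boundaries.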

using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using ModelingToolkit: Interval
using LineSearches
using Plots
@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
Differential(y) ∘ Differential(y)
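The printed value shows that squaring a Differential composes the operator with itself. A quick sanity check on a concrete expression (a minimal sketch using Symbolics.jl, which ModelingToolkit builds on; the variable z is introduced only for this check):

using Symbolics

# Sketch: Differential(z)^2 is d/dz ∘ d/dz, so applying it to z^3
# and expanding the derivatives yields 6z.
@variables z
D2 = Differential(z)^2
expand_derivatives(D2(z^3))  # 6z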

2D PDE

eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
\[ \begin{equation} \frac{\partial^{2}}{\partial y^{2}} u\left( x, y \right) + \frac{\partial^{2}}{\partial x^{2}} u\left( x, y \right) = - \sin\left( \pi x \right) \sin\left( \pi y \right) \end{equation} \]

Boundary conditions

bcs = [
    u(0, y) ~ 0.0,
    u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0,
    u(x, 1) ~ 0.0
]
\[\begin{split} \begin{align} u\left( 0, y \right) &= 0 \\ u\left( 1, y \right) &= 0 \\ u\left( x, 0 \right) &= 0 \\ u\left( x, 1 \right) &= 0 \end{align} \end{split}\]

Space domains

domains = [
    x ∈ Interval(0.0, 1.0),
    y ∈ Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}:
 Symbolics.VarDomainPairing(x, 0.0 .. 1.0)
 Symbolics.VarDomainPairing(y, 0.0 .. 1.0)

Build a neural network for the PDE solver: a 2-dimensional input (x, y), two hidden layers of 16 neurons each, and a single output u(x, y).

dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain(
    layer_1 = Dense(2 => 16, σ),        # 48 parameters
    layer_2 = Dense(16 => 16, σ),       # 272 parameters
    layer_3 = Dense(16 => 1),           # 17 parameters
)         # Total: 337 parameters,
          #        plus 0 states.
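Before handing the network to NeuralPDE, a quick forward-pass sanity check can catch shape mistakes. A minimal sketch (the seed and sample point are arbitrary, not part of the original tutorial):

using Random

# Sketch: initialize parameters and state for the Lux chain, then
# evaluate it once at (x, y) = (0.5, 0.5).
rng = Random.default_rng()
ps, st = Lux.setup(rng, chain)
y0, _ = chain([0.5, 0.5], ps, st)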

The discretization method uses PhysicsInformedNN() (a physics-informed neural network, PINN) with a QuadratureTraining strategy for evaluating the loss.

dx = 0.05
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))
NeuralPDE.PhysicsInformedNN{…}((layer_1 = Dense(2 => 16, σ), layer_2 = Dense(16 => 16, σ), layer_3 = Dense(16 => 1)), QuadratureTraining{Float64, Integrals.CubatureJLh}(Integrals.CubatureJLh(0), 1.0e-6, 1.0e-6, 1000, 200), …)
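QuadratureTraining adaptively integrates the residual loss over the domain. NeuralPDE also ships other training strategies; as one alternative, a fixed-grid sketch reusing the dx defined above:

# Sketch: sample the PDE residual on a fixed grid of spacing dx instead
# of adaptive quadrature; typically cheaper per iteration.
grid_discretization = PhysicsInformedNN(chain, GridTraining(dx))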

Build the PDE system and discretize it.

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
OptimizationProblem. In-place: true
u0: ComponentVector{Float64}(layer_1 = (weight = [-0.009443488903343678 -0.7993664145469666; … ], bias = […]), layer_2 = (weight = […], bias = […]), layer_3 = (weight = […], bias = […]))
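To inspect what discretize assembles without solving anything, NeuralPDE also provides symbolic_discretize, which returns the symbolic PINN representation including the generated loss functions (a sketch):

# Sketch: obtain the symbolic PINN problem for debugging the PDE and
# boundary-condition loss functions.
sym_prob = NeuralPDE.symbolic_discretize(pde_system, discretization)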

Define a callback function to record the loss at every iteration.

lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)
    return false
end
#1 (generic function with 1 method)
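For long runs, a more talkative callback can help; a sketch that also prints progress every 100 iterations (the reporting interval is an arbitrary choice):

# Sketch: same recording behavior, plus periodic progress output.
verbose_callback = function (p, l)
    push!(lossrecord, l)
    if length(lossrecord) % 100 == 0
        println("iteration $(length(lossrecord)): loss = $l")
    end
    return false  # returning true would stop the optimization early
end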

Solve the problem with the L-BFGS optimizer and a backtracking line search. This may take a while.

opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
retcode: Success
u: ComponentVector{Float64}(layer_1 = (weight = [0.5666522340614861 -1.173952803563179; … ], bias = […]), layer_2 = (weight = […], bias = […]), layer_3 = (weight = […], bias = […]))
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
[Figure: training loss versus iterations, log-scale y-axis]
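If the loss has not plateaued, the solve can be warm-started from the previous result via remake (a sketch; the extra iteration budget is arbitrary):

# Sketch: continue optimizing from the parameters found so far.
prob2 = remake(prob, u0 = res.u)
res2 = Optimization.solve(prob2, opt, callback = callback, maxiters = 500)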

Plot the predicted solution of the PDE and compare it with the analytic solution \(u(x, y) = \sin(\pi x)\sin(\pi y)/(2\pi^{2})\) to see the absolute error.

xs, ys = [infimum(d.domain):dx/10:supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sinpi(x)*sinpi(y))/(2pi^2)

phi = discretization.phi
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)
201×201 Matrix{Float64}:
 0.00173412  0.00163645  0.00154122  0.0014484   …  0.000284493  0.000268586
 0.00175856  0.00166186  0.00156753  0.00147557     0.000280899  0.000265752
 0.00178082  0.0016851   0.0015917   0.00150061     0.000277164  0.000262749
 0.00180092  0.00170619  0.00161374  0.00152354     0.000273292  0.000259582
 0.00181886  0.00172515  0.00163366  0.00154438     0.000269289  0.000256257
 0.00183466  0.00174199  0.00165148  0.00156314  …  0.000265157  0.000252779
 0.00184835  0.00175673  0.00166723  0.00157984     0.000260902  0.000249151
 0.00185995  0.0017694   0.00168092  0.0015945      0.000256526  0.000245377
 0.00186946  0.00178001  0.00169257  0.00160714     0.000252032  0.000241461
 0.00187692  0.00178858  0.0017022   0.00161778     0.000247421  0.000237405
 ⋮                                               ⋱               ⋮
 0.00262469  0.00257278  0.00252043  0.00246766     0.00121257   0.00130472
 0.00271862  0.00266417  0.00260932  0.00255409     0.00117181   0.0012648
 0.00281312  0.0027561   0.00269872  0.00264098     0.00112973   0.00122356
 0.00290818  0.00284856  0.0027886   0.00272832  …  0.00108632   0.001181
 0.00300378  0.00294152  0.00287895  0.0028161      0.00104157   0.00113711
 0.00309991  0.00303497  0.00296975  0.00290428     0.000995491  0.00109189
 0.00319656  0.0031289   0.00306099  0.00299286     0.000948066  0.00104534
 0.0032937   0.00322328  0.00315264  0.00308182     0.000899297  0.000997456
 0.00339133  0.0033181   0.0032447   0.00317114  …  0.000849182  0.000948239
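A quick scalar summary of the error before plotting (a sketch using the Statistics standard library):

using Statistics

# Sketch: maximum and mean pointwise absolute error.
println("max abs error  = ", maximum(diff_u))
println("mean abs error = ", mean(diff_u))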
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)
[Figure: contour plots of the analytic solution, the PINN prediction, and the pointwise error]

This notebook was generated using Literate.jl.