Solving PDEs with ModelingToolkit and NeuralPDE

This example follows the NeuralPDE tutorial Solving Poisson PDE Systems (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/). We solve the two-dimensional Poisson equation

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin (\pi x) \sin (\pi y) \]

with boundary conditions

\[\begin{split} \begin{align} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \\ \end{align} \end{split}\]

where

\(x ∈ [0, 1], y ∈ [0, 1]\)
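
This problem has a closed-form solution, which is used at the end to measure the error of the PINN. Substituting \(u(x, y) = \sin(\pi x)\sin(\pi y) / (2\pi^{2})\) into the left-hand side gives

\[ \partial^{2}_{x}u + \partial^{2}_{y}u = -\frac{\pi^{2} + \pi^{2}}{2\pi^{2}} \sin(\pi x) \sin(\pi y) = -\sin(\pi x) \sin(\pi y) \]

and this \(u\) vanishes on all four boundaries, so it satisfies the boundary conditions as well.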

using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using DomainSets
using LineSearches
using Plots

2D PDE

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
\[ \begin{equation} \frac{\partial^{2}}{\partial y^{2}} u\left( x, y \right) + \frac{\partial^{2}}{\partial x^{2}} u\left( x, y \right) = -\sin\left( \pi x \right) \sin\left( \pi y \right) \end{equation} \]

Boundary conditions

bcs = [
    u(0, y) ~ 0.0,
    u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0,
    u(x, 1) ~ 0.0
]
\[\begin{split} \begin{align} u\left( 0, y \right) &= 0 \\ u\left( 1, y \right) &= 0 \\ u\left( x, 0 \right) &= 0 \\ u\left( x, 1 \right) &= 0 \end{align} \end{split}\]

Space domains

domains = [
    x ∈ DomainSets.Interval(0.0, 1.0),
    y ∈ DomainSets.Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}:
 Symbolics.VarDomainPairing(x, 0.0 .. 1.0)
 Symbolics.VarDomainPairing(y, 0.0 .. 1.0)

Build a neural network for the PDE solver. Input: 2 dimensions (x and y). Hidden layers: two layers of 16 neurons each. Output: the single scalar u(x, y).

dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain(
    layer_1 = Dense(2 => 16, σ),                  # 48 parameters
    layer_2 = Dense(16 => 16, σ),                 # 272 parameters
    layer_3 = Dense(16 => 1),                     # 17 parameters
)         # Total: 337 parameters,
          #        plus 0 states.

The discretization method uses PhysicsInformedNN() (a physics-informed neural network, PINN) with a QuadratureTraining strategy for the loss. Note that dx is not used by QuadratureTraining; it only sets the plotting grid resolution later.

dx = 0.05
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))
NeuralPDE.PhysicsInformedNN{…}(
    Chain(layer_1 = Dense(2 => 16, σ), layer_2 = Dense(16 => 16, σ), layer_3 = Dense(16 => 1)),
    QuadratureTraining{Float64, Integrals.CubatureJLh}(Integrals.CubatureJLh(0), 1.0e-6, 1.0e-6, 1000, 200),
    …)
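
QuadratureTraining adaptively integrates the residual loss to the requested tolerances. NeuralPDE also provides simpler sampling strategies; as a sketch (GridTraining is part of the NeuralPDE API, but this variant is not used in the run below), a fixed-grid alternative reusing the dx defined above could look like:

# Alternative (not used in this run): evaluate the loss on a fixed grid
# with spacing dx instead of adaptive quadrature. Cheaper per iteration,
# but the residual is only penalized at the sampled grid points.
grid_discretization = PhysicsInformedNN(chain, GridTraining(dx))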

Build the PDE system and discretize it.

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
OptimizationProblem. In-place: true
u0: ComponentVector{Float64}(layer_1 = (weight = […], bias = […]), layer_2 = (weight = […], bias = […]), layer_3 = (weight = […], bias = […]))

Callback function to record the loss

lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)
    return false
end
#2 (generic function with 1 method)
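
For long runs it can also help to print the loss periodically. A minimal variant of the same callback (the reporting interval of 50 is an arbitrary choice, not part of the original):

callback_verbose = function (p, l)
    push!(lossrecord, l)
    # Report progress every 50 iterations
    length(lossrecord) % 50 == 0 && println("iteration $(length(lossrecord)): loss = $(l)")
    return false  # returning true would stop the optimizer early
end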

Solve the problem. It may take a long time.

opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
@time res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
1929.069740 seconds (7.59 G allocations: 520.742 GiB, 3.43% gc time, 6.49% compilation time: <1% of which was recompilation)
retcode: Success
u: ComponentVector{Float64}(layer_1 = (weight = […], bias = […]), layer_2 = (weight = […], bias = […]), layer_3 = (weight = […], bias = […]))
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
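
If L-BFGS converges slowly from a random initialization, a common pattern is to warm up with a first-order optimizer and then refine. A sketch, assuming the OptimizationOptimisers package is installed (this two-stage run is not part of the timings above):

using OptimizationOptimisers

# Stage 1: cheap Adam warm-up from the random initial parameters
res_adam = Optimization.solve(prob, OptimizationOptimisers.Adam(0.01); callback = callback, maxiters = 500)

# Stage 2: restart L-BFGS from the Adam result for fine convergence
prob_refined = remake(prob; u0 = res_adam.u)
res_refined = Optimization.solve(prob_refined, opt; callback = callback, maxiters = 500)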

Plot the predicted solution of the PDE and compare it with the analytical solution to see the relative error.

xs, ys = [DomainSets.infimum(d.domain):dx/10:DomainSets.supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sinpi(x)*sinpi(y))/(2pi^2)

phi = discretization.phi
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)
201×201 Matrix{Float64}:
 0.00175222   0.00171453   0.00167685   …  0.00183163  0.00185737  0.00188247
 0.0016468    0.00161228   0.00157772      0.00176461  0.00178871  0.00181214
 0.00154313   0.00151171   0.00148019      0.00169885  0.00172134  0.00174315
 0.00144121   0.00141281   0.00138426      0.00163431  0.00165524  0.00167547
 0.00134101   0.00131557   0.00128993      0.00157098  0.00159038  0.00160907
 0.00124254   0.00121999   0.00119718   …  0.00150884  0.00152676  0.00154395
 0.00114578   0.00112605   0.00110601      0.00144786  0.00146434  0.00148007
 0.00105073   0.00103376   0.00101642      0.00138804  0.00140311  0.00141742
 0.000957382  0.000943091  0.00092839      0.00132935  0.00134305  0.00135598
 0.000865722  0.000854051  0.000841922     0.00127177  0.00128414  0.00129572
 ⋮                                      ⋱                          ⋮
 0.0030848    0.00293719   0.00279236      0.00128146  0.00128935  0.00129669
 0.00307981   0.00292996   0.00278295      0.00132886  0.00133741  0.00134544
 0.00307226   0.00292017   0.00277098      0.00137715  0.0013864   0.00139514
 0.0030621    0.00290777   0.0027564    …  0.00142637  0.00143634  0.00144581
 0.00304927   0.00289271   0.00273917      0.00147654  0.00148726  0.00149748
 0.00303375   0.00287495   0.00271924      0.00152768  0.00153917  0.00155018
 0.00301549   0.00285445   0.00269657      0.00157983  0.00159212  0.00160393
 0.00299443   0.00283117   0.00267112      0.001633    0.00164611  0.00165876
 0.00297055   0.00280506   0.00264285   …  0.00168722  0.00170119  0.0017147
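
Before plotting, scalar summaries of the error matrix give a quick accuracy check (plain Base Julia, no extra packages needed):

# Worst-case and average absolute error on the evaluation grid
max_err = maximum(diff_u)
mean_err = sum(diff_u) / length(diff_u)
println("max |u_predict - u_real| = $(max_err)")
println("mean |u_predict - u_real| = $(mean_err)")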
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)

This notebook was generated using Literate.jl.