Solving PDEs with ModelingToolkit and NeuralPDE

This example solves the Poisson equation with a physics-informed neural network, following the NeuralPDE tutorial Solving Poisson PDE Systems (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/):

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin (\pi x) \sin (\pi y) \]

with boundary conditions

\[\begin{split} \begin{align} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \\ \end{align} \end{split}\]

where

\(x ∈ [0, 1], y ∈ [0, 1]\)

using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using ModelingToolkit: Interval
using LineSearches
using Plots
@parameters x y           # independent variables
@variables u(..)          # unknown function u(x, y)
Dxx = Differential(x)^2   # second-order derivative operator in x
Dyy = Differential(y)^2   # second-order derivative operator in y
Differential(y) ∘ Differential(y)

2D PDE

eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
\[ \begin{equation} \frac{\mathrm{d}}{\mathrm{d}y} \frac{\mathrm{d}}{\mathrm{d}y} u\left( x, y \right) + \frac{\mathrm{d}}{\mathrm{d}x} \frac{\mathrm{d}}{\mathrm{d}x} u\left( x, y \right) = - sinpi\left( x \right) sinpi\left( y \right) \end{equation} \]

Boundary conditions

bcs = [
    u(0, y) ~ 0.0,
    u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0,
    u(x, 1) ~ 0.0
]
\[\begin{split} \begin{align} u\left( 0, y \right) &= 0 \\ u\left( 1, y \right) &= 0 \\ u\left( x, 0 \right) &= 0 \\ u\left( x, 1 \right) &= 0 \end{align} \end{split}\]

Space domains

domains = [
    x ∈ Interval(0.0, 1.0),
    y ∈ Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}:
 Symbolics.VarDomainPairing(x, 0.0 .. 1.0)
 Symbolics.VarDomainPairing(y, 0.0 .. 1.0)

Build a neural network for the PDE solver. Input: 2 dimensions (x and y). Hidden layers: two layers of 16 neurons each with sigmoid activation. Output: a single value u(x, y).

dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain(
    layer_1 = Dense(2 => 16, σ),        # 48 parameters
    layer_2 = Dense(16 => 16, σ),       # 272 parameters
    layer_3 = Dense(16 => 1),           # 17 parameters
)         # Total: 337 parameters,
          #        plus 0 states.
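
As a quick sanity check (not part of the original tutorial; rng, ps, st, and y0 are illustrative names), the untrained network can be initialized and evaluated directly with Lux:

using Random
rng = Random.default_rng()
ps, st = Lux.setup(rng, chain)      # random initial parameters and (empty) layer states
y0, _ = chain([0.5, 0.5], ps, st)   # forward pass at (x, y) = (0.5, 0.5); y0 is a 1-element vector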

The discretization method uses PhysicsInformedNN() (a physics-informed neural network, PINN) with a quadrature-based training strategy. Note that dx is not used for training here; it only sets the plotting grid later.

dx = 0.05
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))
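QuadratureTraining adaptively samples the loss by numerical integration with the given tolerances. A simpler alternative, sketched below, is NeuralPDE's GridTraining, which evaluates the loss on a fixed grid of spacing dx:

discretization_grid = PhysicsInformedNN(chain, GridTraining(dx))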

Build the PDE system and discretize it.

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
OptimizationProblem. In-place: true

Define a callback function to record the loss at every iteration.

lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)
    return false
end
#1 (generic function with 1 method)
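
A variant of the callback (a sketch; callback_verbose is an illustrative name) can also print progress every 100 iterations, which helps with long solves; pass callback = callback_verbose to solve to use it.

callback_verbose = function (p, l)
    push!(lossrecord, l)
    length(lossrecord) % 100 == 0 && println("iteration $(length(lossrecord)): loss = $l")
    return false
end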

Solve the problem with LBFGS and a backtracking line search. It may take a while.

opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
retcode: Failure
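A Failure retcode from LBFGS often just means the line search stalled or the iteration budget ran out, not that the result is unusable; the loss curve below is the better diagnostic. If more training is wanted, one option (a sketch, using remake from SciMLBase, which the packages above re-export) is to warm-start a second solve from the trained parameters:

prob2 = remake(prob, u0 = res.u)  # warm start: reuse the trained parameters as the initial guess
res = Optimization.solve(prob2, opt, callback = callback, maxiters = 500)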
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
(Figure: training loss versus iteration, log-scale y axis.)

Plot the predicted solution of the PDE and compare it with the analytical solution to see the relative error.
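
For reference, the closed-form solution used below, \(u(x, y) = \sin(\pi x)\sin(\pi y)/(2\pi^{2})\), can be verified by substitution: each second derivative contributes a factor of \(-\pi^{2}\),

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = 2 \cdot \left(-\pi^{2}\right) \frac{\sin(\pi x)\sin(\pi y)}{2\pi^{2}} = -\sin(\pi x)\sin(\pi y), \]

and it vanishes whenever \(x\) or \(y\) is 0 or 1, satisfying the boundary conditions.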

xs, ys = [infimum(d.domain):dx/10:supremum(d.domain) for d in domains]  # fine plotting grid, 10x finer than dx
analytic_sol_func(x, y) = (sinpi(x) * sinpi(y)) / (2pi^2)  # closed-form solution

phi = discretization.phi  # the trained network as a callable phi(input, parameters)
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)  # pointwise absolute error
201×201 Matrix{Float64}:
 0.000823937  0.00079177   0.000759477  …  0.00268841   0.00277819
 0.000765216  0.000735278  0.000705168     0.00267022   0.00275853
 0.000706991  0.000679238  0.000651269     0.00265109   0.00273793
 0.000649285  0.000623675  0.000597805     0.00263103   0.00271642
 0.00059212   0.000568611  0.000544799     0.00261006   0.00269401
 0.00053552   0.00051407   0.000492274  …  0.00258819   0.00267069
 0.000479508  0.000460075  0.000440254     0.00256542   0.0026465
 0.000424107  0.000406649  0.000388762     0.00254177   0.00262142
 0.00036934   0.000353815  0.000337821     0.00251725   0.00259549
 0.00031523   0.000301595  0.000287453     0.00249187   0.00256871
 ⋮                                      ⋱               ⋮
 4.07285e-5   1.00342e-6   4.05966e-5      0.000538996  0.000600649
 2.1776e-5    6.31063e-5   0.000102267     0.000490528  0.000552032
 8.59558e-5   0.000126867  0.000165578     0.000440722  0.000502057
 0.000151799  0.000192274  0.000230517  …  0.000389553  0.000450699
 0.000219295  0.000259316  0.000297075     0.000337     0.000397937
 0.000288429  0.000327981  0.000365239     0.000283038  0.000343746
 0.000359189  0.000398256  0.000434996     0.000227643  0.000288101
 0.000431559  0.000470126  0.000506334     0.000170793  0.000230979
 0.000505525  0.000543577  0.000579238  …  0.000112461  0.000172354
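The raw error matrix is hard to eyeball; a couple of scalar summaries (a sketch, computed with Base functions only) make the accuracy concrete:

max_err = maximum(diff_u)                           # worst-case absolute error on the grid
rms_err = sqrt(sum(abs2, diff_u) / length(diff_u))  # root-mean-square error
println("max error = $max_err, RMS error = $rms_err")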
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)
(Figure: contour plots of the analytic solution, the PINN prediction, and the absolute error.)

This notebook was generated using Literate.jl.