Solving PDEs with ModelingToolkit and NeuralPDE


This example solves the 2D Poisson equation from the NeuralPDE tutorial Solving Poisson PDE Systems (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/):

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin (\pi x) \sin (\pi y) \]

with boundary conditions

\[\begin{split} \begin{align} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \\ \end{align} \end{split}\]

where

\(x ∈ [0, 1], y ∈ [0, 1]\)
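The problem has a closed-form solution, which is used at the end to measure the PINN error. Substituting \(u(x, y) = \sin(\pi x)\sin(\pi y)/(2\pi^2)\) into the equation gives

\[ \partial^{2}_{x}u + \partial^{2}_{y}u = \frac{-\pi^{2}\sin (\pi x) \sin (\pi y) - \pi^{2}\sin (\pi x) \sin (\pi y)}{2\pi^{2}} = -\sin (\pi x) \sin (\pi y), \]

and \(u\) vanishes whenever \(x\) or \(y\) equals 0 or 1, so all four boundary conditions hold as well.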

using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using DomainSets
using LineSearches
using Plots

2D PDE

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
\[ \begin{equation} \frac{\partial^{2} u\left( x, y \right)}{\partial y^{2}} + \frac{\partial^{2} u\left( x, y \right)}{\partial x^{2}} = -\sin\left( \pi x \right) \sin\left( \pi y \right) \end{equation} \]

Boundary conditions

bcs = [
    u(0, y) ~ 0.0,
    u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0,
    u(x, 1) ~ 0.0
]
\[\begin{split} \begin{align} u\left( 0, y \right) &= 0 \\ u\left( 1, y \right) &= 0 \\ u\left( x, 0 \right) &= 0 \\ u\left( x, 1 \right) &= 0 \end{align} \end{split}\]

Space domains

domains = [
    x ∈ DomainSets.Interval(0.0, 1.0),
    y ∈ DomainSets.Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}:
 Symbolics.VarDomainPairing(x, 0.0 .. 1.0)
 Symbolics.VarDomainPairing(y, 0.0 .. 1.0)

Build a neural network for the PDE solver. Input: 2 dimensions (x and y). Hidden layers: two layers of 16 neurons each. Output: the single value u(x, y).

dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain(
    layer_1 = Dense(2 => 16, σ),        # 48 parameters
    layer_2 = Dense(16 => 16, σ),       # 272 parameters
    layer_3 = Dense(16 => 1),           # 17 parameters
)         # Total: 337 parameters,
          #        plus 0 states.

The discretization method uses PhysicsInformedNN() (a physics-informed neural network, PINN) with a QuadratureTraining strategy. Note that dx is not used by the quadrature strategy itself; it sets the plotting grid resolution later.

dx = 0.05
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))
NeuralPDE.PhysicsInformedNN{...}(Chain(Dense(2 => 16, σ), Dense(16 => 16, σ), Dense(16 => 1)), QuadratureTraining{Float64, Integrals.CubatureJLh}(CubatureJLh(0), 1.0e-6, 1.0e-6, 1000, 200), ...) (type parameters and fields truncated)
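QuadratureTraining integrates the residual loss adaptively; a fixed-grid strategy is a simpler alternative. A minimal sketch, assuming the dx defined above as the grid spacing (this would replace, not supplement, the quadrature discretization):

# Alternative: evaluate the PDE residual on a fixed grid with spacing dx
discretization_grid = PhysicsInformedNN(chain, GridTraining(dx))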

Build the PDE system and discretize it.

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
OptimizationProblem. In-place: true
u0: ComponentVector{Float64}(layer_1 = (weight = [...], bias = [...]), layer_2 = (weight = [...], bias = [...]), layer_3 = (weight = [...], bias = [...])) (initial values truncated)
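To inspect what discretize builds before solving, NeuralPDE.symbolic_discretize returns the intermediate symbolic representation of the loss functions. A sketch; the exact fields of the returned object vary across NeuralPDE versions:

# Optional: look at the symbolic loss functions generated from the PDE system
sym_prob = NeuralPDE.symbolic_discretize(pde_system, discretization)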

Callback function to record the loss at each iteration; returning false keeps the optimization running.

lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)  # save the current loss value
    return false          # returning true would stop the solver early
end
#1 (generic function with 1 method)
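For long runs it helps to see progress. A minimal variant of the same callback that also prints every 100th loss value (the interval is an arbitrary choice); pass it to solve in place of callback:

callback_verbose = function (p, l)
    push!(lossrecord, l)
    # print occasionally so long runs show progress
    length(lossrecord) % 100 == 0 && println("iteration $(length(lossrecord)): loss = $l")
    return false
end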

Solve the problem. It may take a long time.

opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
@time res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
1533.640660 seconds (6.06 G allocations: 585.671 GiB, 4.43% gc time, 8.20% compilation time: <1% of which was recompilation)
retcode: Success
u: ComponentVector{Float64}(layer_1 = (weight = [...], bias = [...]), layer_2 = (weight = [...], bias = [...]), layer_3 = (weight = [...], bias = [...])) (trained values truncated)
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
(Figure: training loss versus iterations, log-scale y-axis.)
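If the loss curve has not plateaued, training can continue from the trained parameters by rebuilding the problem with remake. A sketch, assuming another 1000 iterations with the same optimizer:

# Warm-start a second optimization run from the trained parameters
prob2 = remake(prob, u0 = res.u)
res = Optimization.solve(prob2, opt, callback = callback, maxiters = 1000)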

Evaluate the predicted solution on a fine grid, compare it with the analytical solution, and plot the pointwise absolute error.

xs, ys = [DomainSets.infimum(d.domain):dx/10:DomainSets.supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sinpi(x)*sinpi(y))/(2pi^2)

phi = discretization.phi
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)
201×201 Matrix{Float64}:
 0.00443926  0.00434157  0.00424394  0.00414641  …  0.000185977  0.000157556
 0.00433847  0.0042435   0.00414856  0.00405371     0.000135954  0.000107526
 0.00423699  0.00414473  0.00405249  0.00396032     8.69768e-5   5.85448e-5
 0.00413486  0.00404531  0.00395577  0.00386628     3.9046e-5    1.06137e-5
 0.00403214  0.0039453   0.00385845  0.00377162     7.83837e-6   3.62673e-5
 0.00392887  0.00384473  0.00376056  0.0036764   …  5.36765e-5   8.20982e-5
 0.00382509  0.00374365  0.00366216  0.00358066     9.84688e-5   0.000126879
 0.00372085  0.0036421   0.00356328  0.00348444     0.000142216  0.000170612
 0.00361619  0.00354012  0.00346398  0.00338778     0.00018492   0.000213296
 0.00351116  0.00343777  0.00336428  0.00329072     0.00022658   0.000254934
 ⋮                                               ⋱               ⋮
 0.00270872  0.00267146  0.00263358  0.0025951      0.00157908   0.00164728
 0.00281419  0.00277507  0.00273533  0.002695       0.0015831    0.00165203
 0.00291988  0.0028789   0.00283731  0.00279513     0.00158609   0.00165575
 0.00302576  0.00298293  0.00293949  0.00289546  …  0.00158803   0.00165839
 0.0031318   0.00308713  0.00304184  0.00299596     0.00158889   0.00165993
 0.00323797  0.00319146  0.00314433  0.00309661     0.00158862   0.00166034
 0.00334426  0.00329591  0.00324694  0.00319738     0.00158721   0.00165957
 0.00345061  0.00340044  0.00334964  0.00329825     0.00158461   0.00165759
 0.00355702  0.00350503  0.00345241  0.00339919  …  0.00158078   0.00165438
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)
(Figure: contour plots of the analytic solution, the PINN prediction, and the absolute error.)
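A quick numeric summary of the comparison above, a sketch using the arrays already computed:

# Maximum and mean absolute error over the evaluation grid
println("max abs error:  ", maximum(diff_u))
println("mean abs error: ", sum(diff_u) / length(diff_u))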

This notebook was generated using Literate.jl.