Solving PDEs with ModelingToolkit and NeuralPDE

Following the NeuralPDE tutorial Solving Poisson PDE Systems (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/), we solve the two-dimensional Poisson equation

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin (\pi x) \sin (\pi y) \]

with boundary conditions

\[ \begin{aligned} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \end{aligned} \]

where

\(x ∈ [0, 1], y ∈ [0, 1]\)
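
This problem has a closed-form solution, which the final section uses as the reference. Each second derivative of \(u(x, y) = \sin(\pi x)\sin(\pi y)/(2\pi^2)\) contributes a factor of \(-\pi^2\), so

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -2\pi^{2} \cdot \frac{\sin (\pi x) \sin (\pi y)}{2\pi^{2}} = -\sin (\pi x) \sin (\pi y), \]

and the sine factors vanish at the four boundaries, so the boundary conditions hold as well.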

using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using DomainSets
using LineSearches
using Plots

2D PDE

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
\[ \partial^{2}_{y}u(x,y) + \partial^{2}_{x}u(x,y) = -\sin (\pi x) \sin (\pi y) \]

Boundary conditions

bcs = [
    u(0, y) ~ 0.0,
    u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0,
    u(x, 1) ~ 0.0
]
\[ \begin{aligned} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \end{aligned} \]

Space domains

domains = [
    x ∈ DomainSets.Interval(0.0, 1.0),
    y ∈ DomainSets.Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}:
 Symbolics.VarDomainPairing(x, 0.0 .. 1.0)
 Symbolics.VarDomainPairing(y, 0.0 .. 1.0)

Build a neural network for the PDE solver: two inputs (x and y), two hidden layers of 16 neurons each, and a single output u(x, y).

dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain(
    layer_1 = Dense(2 => 16, σ),                  # 48 parameters
    layer_2 = Dense(16 => 16, σ),                 # 272 parameters
    layer_3 = Dense(16 => 1),                     # 17 parameters
)         # Total: 337 parameters,
          #        plus 0 states.
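
As a sanity check, the untrained network can be initialized and evaluated at a sample point. A minimal sketch, assuming the standard Lux.setup API; the input point (0.5, 0.5) is arbitrary:

using Random
rng = Random.default_rng()
ps, st = Lux.setup(rng, chain)      # random initial parameters and (empty) states
out, _ = chain([0.5, 0.5], ps, st)  # forward pass returns (output, state)
@show out                           # one-element vector: the raw network output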

The discretization method uses PhysicsInformedNN() (a physics-informed neural network, PINN) with a QuadratureTraining strategy.

dx = 0.05
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))
NeuralPDE.PhysicsInformedNN{…}(Chain(Dense(2 => 16, σ), Dense(16 => 16, σ), Dense(16 => 1)), QuadratureTraining{Float64, Integrals.CubatureJLh}(Integrals.CubatureJLh(0), 1.0e-6, 1.0e-6, 1000, 200), …)
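
Note that dx is not consumed by QuadratureTraining; it is only reused below to build the plotting grid. As a simpler (typically less accurate) alternative, NeuralPDE's fixed-grid strategy could use that spacing directly; a sketch:

# Alternative discretization: sample residuals on a fixed grid with spacing dx
discretization_grid = PhysicsInformedNN(chain, GridTraining(dx))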

Build the PDE system and discretize it.

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
OptimizationProblem. In-place: true
u0: ComponentVector{Float64}(layer_1 = (weight = […], bias = […]), layer_2 = (weight = […], bias = […]), layer_3 = (weight = […], bias = […]))

Callback function to record the loss

lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)
    return false
end
#1 (generic function with 1 method)
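
For a long run, a variant callback that also prints progress can be handy; a minimal sketch (the Ref-based counter and the 100-iteration interval are illustrative choices):

iter = Ref(0)
callback_verbose = function (p, l)
    push!(lossrecord, l)
    iter[] += 1                      # count iterations across calls
    iter[] % 100 == 0 && println("iteration $(iter[]): loss = $l")
    return false                     # returning true would stop the solver
end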

Solve the problem. It may take a long time.

opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
@time res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
997.075587 seconds (4.02 G allocations: 303.315 GiB, 3.75% gc time, 12.42% compilation time: <1% of which was recompilation)
retcode: Success
u: ComponentVector{Float64}(layer_1 = (weight = […], bias = […]), layer_2 = (weight = […], bias = […]), layer_3 = (weight = […], bias = […]))
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
[Figure: training loss versus iteration, log scale on the loss axis]
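
If the loss curve has not plateaued, optimization can be resumed from the trained parameters instead of restarting from scratch; a sketch assuming SciMLBase's remake:

prob2 = remake(prob, u0 = res.u)   # warm-start from the trained weights
res2 = Optimization.solve(prob2, opt, callback = callback, maxiters = 500)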

Plot the predicted solution of the PDE and compare it with the analytical solution to see the pointwise absolute error.

xs, ys = [DomainSets.infimum(d.domain):dx/10:DomainSets.supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sinpi(x)*sinpi(y))/(2pi^2)

phi = discretization.phi
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)
201×201 Matrix{Float64}:
 0.00246386  0.00242635  0.00238977  …  0.00480502  0.00485943  0.00491334
 0.00243076  0.00239399  0.00235814     0.00467159  0.00472378  0.00477548
 0.00239742  0.0023614   0.00232626     0.00453856  0.00458854  0.00463802
 0.00236384  0.00232857  0.00229414     0.00440595  0.00445373  0.00450099
 0.00233003  0.00229548  0.00226177     0.00427379  0.00431935  0.0043644
 0.00229597  0.00226215  0.00222913  …  0.00414209  0.00418544  0.00422827
 0.00226166  0.00222856  0.00219624     0.00401087  0.00405202  0.00409263
 0.00222709  0.00219471  0.00216308     0.00388014  0.00391909  0.00395749
 0.00219227  0.00216059  0.00212966     0.00374994  0.00378669  0.00382288
 0.00215718  0.00212622  0.00209597     0.00362028  0.00365482  0.0036888
 ⋮                                   ⋱                          ⋮
 0.00205923  0.00192203  0.00178713     0.00300274  0.00301729  0.00302986
 0.0019893   0.0018514   0.00171581     0.00315484  0.00317152  0.00318621
 0.00191685  0.00177827  0.001642       0.00330762  0.00332643  0.00334324
 0.00184181  0.00170258  0.00156567  …  0.00346101  0.00348197  0.00350091
 0.00176417  0.00162431  0.00148677     0.00361498  0.00363809  0.00365915
 0.00168388  0.00154341  0.00140528     0.00376946  0.00379472  0.00381791
 0.0016009   0.00145986  0.00132115     0.00392441  0.00395181  0.00397714
 0.0015152   0.00137361  0.00123435     0.00407976  0.00410932  0.00413678
 0.00142674  0.00128463  0.00114485  …  0.00423547  0.00426718  0.00429677
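
The matrix above can be condensed into summary error metrics; a short sketch using the standard-library LinearAlgebra for the norm:

using LinearAlgebra
@show maximum(diff_u)              # worst-case pointwise absolute error
@show norm(diff_u) / norm(u_real)  # relative L2 error over the grid

Finally, compare the three fields side by side:
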
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)
[Figure: contour plots of the analytic solution, the predicted solution, and the error]

This notebook was generated using Literate.jl.