Solving PDEs with ModelingToolkit and NeuralPDE

Adapted from the NeuralPDE tutorial "Solving Poisson PDE Systems" (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/): solve the two-dimensional Poisson equation

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin (\pi x) \sin (\pi y) \]

with boundary conditions

\[\begin{split} \begin{align} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \\ \end{align} \end{split}\]

where

\(x ∈ [0, 1], y ∈ [0, 1]\)
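
This problem has the closed-form solution \(u(x, y) = \sin(\pi x)\sin(\pi y) / (2\pi^{2})\), which is used later to measure the error. Substituting it into the equation confirms this:

\[ \partial^{2}_{x}u + \partial^{2}_{y}u = -\frac{\pi^{2}\sin(\pi x)\sin(\pi y)}{2\pi^{2}} - \frac{\pi^{2}\sin(\pi x)\sin(\pi y)}{2\pi^{2}} = -\sin(\pi x)\sin(\pi y), \]

and \(u\) vanishes on all four boundaries since \(\sin(0) = \sin(\pi) = 0\).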

using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using DomainSets
using LineSearches
using Plots

2D PDE

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
\[ \begin{equation} \frac{\mathrm{d}}{\mathrm{d}y} \frac{\mathrm{d}}{\mathrm{d}y} u\left( x, y \right) + \frac{\mathrm{d}}{\mathrm{d}x} \frac{\mathrm{d}}{\mathrm{d}x} u\left( x, y \right) = - sinpi\left( x \right) sinpi\left( y \right) \end{equation} \]
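
The same check can be done symbolically (a sketch; it assumes Symbolics, which ModelingToolkit builds on, has derivative rules for sinpi):

u_exact = sinpi(x) * sinpi(y) / (2π^2)                 # closed-form solution
lhs = expand_derivatives(Dxx(u_exact) + Dyy(u_exact))  # apply the differential operators
# lhs should simplify to -sinpi(x)*sinpi(y), matching the right-hand side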

Boundary conditions

bcs = [
    u(0, y) ~ 0.0,
    u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0,
    u(x, 1) ~ 0.0
]
\[\begin{split} \begin{align} u\left( 0, y \right) &= 0 \\ u\left( 1, y \right) &= 0 \\ u\left( x, 0 \right) &= 0 \\ u\left( x, 1 \right) &= 0 \end{align} \end{split}\]

Space domains

domains = [
    x ∈ DomainSets.Interval(0.0, 1.0),
    y ∈ DomainSets.Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}:
 Symbolics.VarDomainPairing(x, 0.0 .. 1.0)
 Symbolics.VarDomainPairing(y, 0.0 .. 1.0)

Build a neural network for the PDE solver: 2 inputs (x and y), two hidden layers of 16 neurons each, and a single output u(x, y).

dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain(
    layer_1 = Dense(2 => 16, σ),        # 48 parameters
    layer_2 = Dense(16 => 16, σ),       # 272 parameters
    layer_3 = Dense(16 => 1),           # 17 parameters
)         # Total: 337 parameters,
          #        plus 0 states.
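
Before training, the network can be evaluated directly through the Lux API (a minimal sketch; outputs vary with the random initialization):

using Random
ps, st = Lux.setup(Random.default_rng(), chain)  # initialize parameters and states
y0, _ = chain([0.5, 0.5], ps, st)                # untrained prediction at (x, y) = (0.5, 0.5)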

The discretization method is PhysicsInformedNN() (PINN) with a QuadratureTraining strategy. (dx is not used by QuadratureTraining; it only sets the resolution of the plotting grid later.)

dx = 0.05
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))
NeuralPDE.PhysicsInformedNN{Lux.Chain{…}, NeuralPDE.QuadratureTraining{Float64, Integrals.CubatureJLh}, …}(…) (verbose type printout truncated)
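
QuadratureTraining integrates the PDE residual adaptively to the requested tolerances, which is accurate but can be slow. A cheaper alternative, sketched below but not used in this run, samples the residual on a fixed grid of spacing dx:

discretization_grid = PhysicsInformedNN(chain, GridTraining(dx))  # fixed-grid residual sampling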

Build the PDE system and discretize it.

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
OptimizationProblem. In-place: true
u0: ComponentVector{Float64}(layer_1 = (weight = …, bias = …), layer_2 = (weight = …, bias = …), layer_3 = (weight = …, bias = …)) (initial parameter values truncated)
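
For debugging, NeuralPDE also offers symbolic_discretize, which returns the lower-level symbolic representation of the PINN, including the individual loss functions (an optional step, sketched here but not run):

sym_prob = NeuralPDE.symbolic_discretize(pde_system, discretization)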

Callback function to record the loss

lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)
    return false
end
#1 (generic function with 1 method)
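
A slightly more informative variant (an assumption, not what was run here) also prints the loss every 50 iterations:

callback_verbose = function (p, l)
    push!(lossrecord, l)
    length(lossrecord) % 50 == 0 && println("Iteration ", length(lossrecord), ": loss = ", l)
    return false
end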

Solve the problem. It may take a long time.

opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
@time res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
1956.384686 seconds (7.91 G allocations: 769.076 GiB, 4.94% gc time, 6.38% compilation time: <1% of which was recompilation)
retcode: Success
u: ComponentVector{Float64}(layer_1 = (weight = …, bias = …), layer_2 = (weight = …, bias = …), layer_3 = (weight = …, bias = …)) (trained parameter values truncated)
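
If the loss has not plateaued, optimization can be continued from the trained parameters with remake (a sketch; not executed in this notebook):

prob2 = remake(prob, u0 = res.u)  # warm-start from the trained parameters
res2 = Optimization.solve(prob2, opt, callback = callback, maxiters = 500)
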
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
Figure: training loss versus iteration (log scale).

Plot the predicted solution of the PDE and compare it with the analytical solution to see the absolute error.

xs, ys = [DomainSets.infimum(d.domain):dx/10:DomainSets.supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sinpi(x)*sinpi(y))/(2pi^2)

phi = discretization.phi
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)
201×201 Matrix{Float64} (full printout omitted; displayed entries are on the order of 1e-3)
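
A scalar summary of the fit can be computed from the error matrix (a small addition; exact values depend on the random initialization):

max_abs_err = maximum(diff_u)                  # worst-case absolute error
rel_err = max_abs_err / maximum(abs.(u_real))  # relative to the solution's amplitude
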
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)
Figure: contour plots of the analytic solution, the PINN prediction, and the absolute error.

This notebook was generated using Literate.jl.