Solving PDEs with ModelingToolkit and NeuralPDE

This example, adapted from Solving Poisson PDE Systems (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/), solves the 2D Poisson equation

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin (\pi x) \sin (\pi y) \]

with boundary conditions

\[\begin{split} \begin{align} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \\ \end{align} \end{split}\]

where

\(x ∈ [0, 1], y ∈ [0, 1]\)
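Because the forcing term is separable, this problem has a closed-form solution, which is used at the end to measure the error. Substituting the ansatz \(u(x, y) = C \sin(\pi x)\sin(\pi y)\) (which satisfies all four boundary conditions) into the equation gives

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -2\pi^{2} C \sin(\pi x)\sin(\pi y) = -\sin(\pi x)\sin(\pi y) \]

so \(C = 1/(2\pi^{2})\) and \(u(x, y) = \sin(\pi x)\sin(\pi y)/(2\pi^{2})\).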

using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using ModelingToolkit: Interval
using LineSearches
using Plots
@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
Differential(y) ∘ Differential(y)
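As the printed form `Differential(y) ∘ Differential(y)` shows, raising a `Differential` to a power means composing the derivative operator with itself. The same idea can be sketched numerically in plain Julia with a central-difference operator and function composition (the names `D` and `D2` and the step `h` are illustrative, not part of the API):

```julia
h = 1e-5
# First-derivative operator: maps a function to its central-difference derivative
D(f) = x -> (f(x + h) - f(x - h)) / (2h)

D2 = D ∘ D                # second derivative by composition, like Differential(x)^2
println(D2(sin)(1.0))     # ≈ -sin(1.0), since (sin)'' = -sin
```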

2D PDE

eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
\[ \begin{equation} \frac{\mathrm{d}}{\mathrm{d}y} \frac{\mathrm{d}}{\mathrm{d}y} u\left( x, y \right) + \frac{\mathrm{d}}{\mathrm{d}x} \frac{\mathrm{d}}{\mathrm{d}x} u\left( x, y \right) = - sinpi\left( x \right) sinpi\left( y \right) \end{equation} \]

Boundary conditions

bcs = [
    u(0, y) ~ 0.0,
    u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0,
    u(x, 1) ~ 0.0
]
\[\begin{split} \begin{align} u\left( 0, y \right) &= 0 \\ u\left( 1, y \right) &= 0 \\ u\left( x, 0 \right) &= 0 \\ u\left( x, 1 \right) &= 0 \end{align} \end{split}\]

Space domains

domains = [
    x ∈ Interval(0.0, 1.0),
    y ∈ Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}:
 Symbolics.VarDomainPairing(x, 0.0 .. 1.0)
 Symbolics.VarDomainPairing(y, 0.0 .. 1.0)

Build the neural network for the PDE solver: 2 inputs (x and y), two hidden layers of 16 neurons each, and a single output u(x, y).

dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain(
    layer_1 = Dense(2 => 16, σ),        # 48 parameters
    layer_2 = Dense(16 => 16, σ),       # 272 parameters
    layer_3 = Dense(16 => 1),           # 17 parameters
)         # Total: 337 parameters,
          #        plus 0 states.
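The parameter count reported by Lux can be reproduced by hand: a `Dense(m => n)` layer holds an `n × m` weight matrix plus `n` biases. A minimal Base-Julia sanity check (the helper name `dense_params` is illustrative):

```julia
# Parameters of a Dense(m => n) layer: an n×m weight matrix plus n biases
dense_params(m, n) = n * m + n

layers = [(2, 16), (16, 16), (16, 1)]   # (in, out) for each layer above
counts = [dense_params(m, n) for (m, n) in layers]
total  = sum(counts)

println(counts)  # [48, 272, 17]
println(total)   # 337, matching the Lux summary
```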

The discretization method uses PhysicsInformedNN() (a physics-informed neural network, PINN) with a QuadratureTraining strategy, which evaluates the loss integrals by adaptive quadrature.

dx = 0.05  # grid spacing; used below to build the plotting mesh
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))

Build the PDE system and discretize it.

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
OptimizationProblem. In-place: true

Callback function to record the loss

lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)
    return false
end
#1 (generic function with 1 method)

Solve the problem. It may take a long time.

opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
retcode: Success
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
[Figure: training loss vs. iteration (log scale)]

Plot the predicted solution of the PDE and compare it with the analytical solution to see the relative error.

xs, ys = [infimum(d.domain):dx/10:supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sinpi(x)*sinpi(y))/(2pi^2)

phi = discretization.phi
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)
201×201 Matrix{Float64}:
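The comparison above takes `analytic_sol_func` on trust; as an independent check, a second-order central finite difference in plain Julia confirms that it satisfies the Poisson equation. The names (`u_exact`, `rhs`, `uxx`, `uyy`) and the step size `h` are illustrative choices, not part of any package:

```julia
# Check that the analytic solution satisfies ∂²ₓu + ∂²ᵧu = -sin(πx)sin(πy)
u_exact(x, y) = sinpi(x) * sinpi(y) / (2 * pi^2)
rhs(x, y) = -sinpi(x) * sinpi(y)

h = 1e-4  # finite-difference step (free choice)
uxx(x, y) = (u_exact(x + h, y) - 2u_exact(x, y) + u_exact(x - h, y)) / h^2
uyy(x, y) = (u_exact(x, y + h) - 2u_exact(x, y) + u_exact(x, y - h)) / h^2

residual = maximum(abs(uxx(x, y) + uyy(x, y) - rhs(x, y))
                   for x in 0.1:0.2:0.9, y in 0.1:0.2:0.9)
println(residual)  # small; the PDE holds up to O(h²) discretization error
```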
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)
[Figure: contour plots of the analytic solution, the PINN prediction, and their absolute error]

This notebook was generated using Literate.jl.