Solving PDEs with ModelingToolkit and NeuralPDE
Solving a Poisson PDE system, adapted from the NeuralPDE.jl tutorial (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/). We solve the 2D Poisson equation
\[
\partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin (\pi x) \sin (\pi y)
\]
with boundary conditions
\[\begin{split}
\begin{align}
u(0, y) &= 0 \\
u(1, y) &= 0 \\
u(x, 0) &= 0 \\
u(x, 1) &= 0 \\
\end{align}
\end{split}\]
where
\(x ∈ [0, 1], y ∈ [0, 1]\)
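For reference, this problem has the closed-form solution used later to validate the network; substituting it into the PDE recovers the right-hand side:
\[
u(x, y) = \frac{\sin(\pi x) \sin(\pi y)}{2 \pi^{2}},
\qquad
\partial^{2}_{x} u + \partial^{2}_{y} u = -2 \pi^{2} u = -\sin(\pi x) \sin(\pi y)
\]
First, load the required packages.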
using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using DomainSets
using LineSearches
using Plots
Define the 2D PDE symbolically with ModelingToolkit.
@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
eq = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
\[ \begin{equation}
\frac{\partial^{2}}{\partial y^{2}} u\left( x, y \right) + \frac{\partial^{2}}{\partial x^{2}} u\left( x, y \right) = - \sin\left( \pi x \right) \sin\left( \pi y \right)
\end{equation}
\]
Homogeneous Dirichlet boundary conditions on all four edges of the domain.
bcs = [
u(0, y) ~ 0.0,
u(1, y) ~ 0.0,
u(x, 0) ~ 0.0,
u(x, 1) ~ 0.0
]
\[\begin{split} \begin{align}
u\left( 0, y \right) &= 0 \\
u\left( 1, y \right) &= 0 \\
u\left( x, 0 \right) &= 0 \\
u\left( x, 1 \right) &= 0
\end{align}
\end{split}\]
Spatial domains for x and y.
domains = [
x ∈ DomainSets.Interval(0.0, 1.0),
y ∈ DomainSets.Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}:
Symbolics.VarDomainPairing(x, 0.0 .. 1.0)
Symbolics.VarDomainPairing(y, 0.0 .. 1.0)
Build a neural network for the PDE solver.
Input: 2 dimensions (x and y).
Hidden layers: 2 layers of 16 neurons each, with sigmoid (σ) activations.
Output: a single value approximating u(x, y).
dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain(
layer_1 = Dense(2 => 16, σ), # 48 parameters
layer_2 = Dense(16 => 16, σ), # 272 parameters
layer_3 = Dense(16 => 1), # 17 parameters
) # Total: 337 parameters,
# plus 0 states.
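As a quick sanity check (not part of the original tutorial), the untrained network can be initialized and evaluated directly; Lux models take an explicit parameter/state pair:
using Random
ps, st = Lux.setup(Random.default_rng(), chain)  # random initial parameters, empty state
y0, _ = chain([0.5, 0.5], ps, st)                # evaluate the untrained net at (x, y) = (0.5, 0.5)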
The discretization method uses PhysicsInformedNN() (PINN) with a QuadratureTraining strategy. The spacing dx is used later to build the plotting grid.
dx = 0.05
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))
NeuralPDE.PhysicsInformedNN{Lux.Chain{…}, NeuralPDE.QuadratureTraining{Float64, Integrals.CubatureJLh}, …} (lengthy type printout truncated)
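QuadratureTraining adaptively integrates the PDE residual over the domain until the loss estimate meets the given tolerances. A simpler, usually cheaper alternative provided by NeuralPDE is GridTraining, which evaluates the residual on a fixed grid; a sketch using the dx defined above:
discretization_grid = PhysicsInformedNN(chain, GridTraining(dx))  # fixed-grid sampling instead of adaptive quadrature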
Build the PDE system and discretize it.
@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
OptimizationProblem. In-place: true
u0: ComponentVector{Float64}(layer_1 = (weight = …, bias = …), layer_2 = (weight = …, bias = …), layer_3 = (weight = …, bias = …))
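To inspect the generated loss functions before solving, NeuralPDE also offers symbolic_discretize; a sketch following the NeuralPDE debugging docs (field names may change between versions):
sym_prob = symbolic_discretize(pde_system, discretization)
pde_losses = sym_prob.loss_functions.pde_loss_functions  # residual loss for the PDE
bc_losses = sym_prob.loss_functions.bc_loss_functions    # one loss per boundary condition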
Define a callback function to record the loss at each iteration.
lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)  # store the current loss value
    return false          # returning false lets the optimization continue
end
#2 (generic function with 1 method)
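A hypothetical variant that also prints progress every 50 iterations, in case you want feedback during the long solve:
callback_verbose = function (p, l)
    push!(lossrecord, l)
    length(lossrecord) % 50 == 0 && println("iteration ", length(lossrecord), ": loss = ", l)
    return false
end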
Solve the problem with L-BFGS and a backtracking line search. It may take a long time.
opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
@time res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
1001.294878 seconds (3.81 G allocations: 258.139 GiB, 2.56% gc time, 12.32% compilation time: <1% of which was recompilation)
retcode: Success
u: ComponentVector{Float64}(layer_1 = (weight = …, bias = …), layer_2 = (weight = …, bias = …), layer_3 = (weight = …, bias = …))
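A common PINN recipe (not used in this run) is to pre-train with a first-order optimizer such as Adam before refining with L-BFGS; OptimizationOptimisers is already in the environment, so a sketch could look like:
using OptimizationOptimisers
res_adam = Optimization.solve(prob, OptimizationOptimisers.Adam(0.01); callback = callback, maxiters = 500)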
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
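If the loss curve has not plateaued, training can be continued from the returned parameters by remaking the problem (a sketch using SciMLBase's remake):
prob2 = remake(prob, u0 = res.u)  # warm-start from the trained parameters
res2 = Optimization.solve(prob2, opt; callback = callback, maxiters = 500)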
Plot the predicted solution of the PDE and compare it with the analytical solution \(u(x, y) = \sin(\pi x) \sin(\pi y) / (2\pi^2)\) to see the absolute error.
xs, ys = [DomainSets.infimum(d.domain):dx/10:DomainSets.supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sinpi(x)*sinpi(y))/(2pi^2)
phi = discretization.phi
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)
201×201 Matrix{Float64} (absolute error on the plotting grid; entries on the order of 1e-3 to 1e-2; full printout truncated)
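To summarize the error as single numbers, one can reduce the matrix (a small addition using the Statistics standard library):
using Statistics
println("max abs error = ", maximum(diff_u))
println("RMS error     = ", sqrt(mean(abs2, diff_u)))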
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)
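The same fields can also be drawn as surfaces instead of filled contours (an optional variant; Plots accepts a :surface series the same way):
plot(
    plot(xs, ys, u_real, linetype=:surface, title="analytic"),
    plot(xs, ys, u_predict, linetype=:surface, title="predicted"),
)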
Runtime environment
using Pkg
Pkg.status()
Status `~/work/jl-pde/jl-pde/Project.toml`
[b0b7db55] ComponentArrays v0.15.30
⌃ [aae7a2af] DiffEqFlux v4.4.0
[5b8099bc] DomainSets v0.7.16
[7da242da] Enzyme v0.13.108
[f6369f11] ForwardDiff v1.3.0
[d3d80556] LineSearches v7.5.1
⌃ [b2108857] Lux v1.21.1
[94925ecb] MethodOfLines v0.11.9
[961ee093] ModelingToolkit v10.31.0
[315f7962] NeuralPDE v5.20.0
[8913a72c] NonlinearSolve v4.12.0
⌅ [7f7a1694] Optimization v4.8.0
⌃ [36348300] OptimizationOptimJL v0.4.5
⌃ [42dfb2eb] OptimizationOptimisers v0.3.11
⌃ [500b13db] OptimizationPolyalgorithms v0.3.1
[1dea7af3] OrdinaryDiffEq v6.103.0
[91a5bcdd] Plots v1.41.2
[ce78b400] SimpleUnPack v1.1.0
[37e2e46d] LinearAlgebra v1.12.0
[9a3f8284] Random v1.11.0
Info Packages marked with ⌃ and ⌅ have new versions available. Those with ⌃ may be upgradable, but those with ⌅ are restricted by compatibility constraints from upgrading. To see why use `status --outdated`
using InteractiveUtils
InteractiveUtils.versioninfo()
Julia Version 1.12.2
Commit ca9b6662be4 (2025-11-20 16:25 UTC)
Build Info:
Official https://julialang.org release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 4 × AMD EPYC 7763 64-Core Processor
WORD_SIZE: 64
LLVM: libLLVM-18.1.7 (ORCJIT, znver3)
GC: Built with stock GC
Threads: 4 default, 1 interactive, 4 GC (on 4 virtual cores)
Environment:
JULIA_CPU_TARGET = generic;icelake-server,clone_all;znver3,clone_all
JULIA_CONDAPKG_OFFLINE = true
JULIA_CONDAPKG_BACKEND = Null
JULIA_CI = true
LD_LIBRARY_PATH = /opt/hostedtoolcache/Python/3.13.9/x64/lib
JULIA_NUM_THREADS = auto
This notebook was generated using Literate.jl.