Solving PDEs with ModelingToolkit and NeuralPDE

Adapted from the NeuralPDE tutorial Solving Poisson PDE Systems (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/). We solve the 2D Poisson equation

\[ \partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin (\pi x) \sin (\pi y) \]

with boundary conditions

\[\begin{split} \begin{align} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \\ \end{align} \end{split}\]

where \(x ∈ [0, 1]\) and \(y ∈ [0, 1]\).
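For this forcing term the problem has a closed-form solution, which the error comparison at the end relies on:

\[ u(x, y) = \frac{\sin (\pi x) \sin (\pi y)}{2 \pi^{2}} \]

Each second derivative brings down a factor of \(-\pi^{2}\), so \(\partial^{2}_{x}u + \partial^{2}_{y}u = -2\pi^{2}\,u = -\sin (\pi x) \sin (\pi y)\), and the boundary conditions hold because \(\sin 0 = \sin \pi = 0\).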

using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using ModelingToolkit: Interval
using LineSearches
using Plots
@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
Differential(y) ∘ Differential(y)
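As an aside, these operators act lazily on symbolic expressions; a derivative is only evaluated once expanded. A minimal sketch (assuming expand_derivatives, which ModelingToolkit re-exports from Symbolics):

Dx = Differential(x)
expand_derivatives(Dx(x^2 + y))  # evaluates the derivative symbolically: 2x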

2D PDE

eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
\[ \begin{equation} \frac{\partial^{2}}{\partial y^{2}} u\left( x, y \right) + \frac{\partial^{2}}{\partial x^{2}} u\left( x, y \right) = - \sin \left( \pi x \right) \sin \left( \pi y \right) \end{equation} \]

Boundary conditions

bcs = [
    u(0, y) ~ 0.0,
    u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0,
    u(x, 1) ~ 0.0
]
\[\begin{split} \begin{align} u\left( 0, y \right) &= 0 \\ u\left( 1, y \right) &= 0 \\ u\left( x, 0 \right) &= 0 \\ u\left( x, 1 \right) &= 0 \end{align} \end{split}\]

Space domains

domains = [
    x ∈ Interval(0.0, 1.0),
    y ∈ Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}:
 Symbolics.VarDomainPairing(x, 0.0 .. 1.0)
 Symbolics.VarDomainPairing(y, 0.0 .. 1.0)

Build a neural network for the PDE solver: 2 input dimensions (x and y), two hidden layers with 16 neurons each, and a single scalar output u(x, y).

dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain(
    layer_1 = Dense(2 => 16, σ),        # 48 parameters
    layer_2 = Dense(16 => 16, σ),       # 272 parameters
    layer_3 = Dense(16 => 1),           # 17 parameters
)         # Total: 337 parameters,
          #        plus 0 states.
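If you want to inspect or pin down the initial weights instead of letting NeuralPDE initialize them, a sketch using Lux's standard setup (Random is a standard library; passing ps to PhysicsInformedNN via its init_params keyword is optional):

using Random
ps, st = Lux.setup(Random.Xoshiro(0), chain)  # initial parameters and states
size(ps.layer_1.weight)                       # (16, 2): 16 neurons, 2 inputs
# discretization = PhysicsInformedNN(chain, strategy; init_params = ps)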

The discretization method uses PhysicsInformedNN() (a physics-informed neural network, PINN) with a QuadratureTraining strategy. Note that dx defined below is not used by the quadrature strategy; it only sets the resolution of the evaluation grid later on.

dx = 0.05
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))
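QuadratureTraining adaptively samples the residual through numerical integration, which is why dx is not passed to it. A simpler, usually less accurate alternative is to evaluate the loss on a fixed grid; a sketch:

# alternative training strategy: fixed grid of spacing dx instead of adaptive quadrature
discretization_grid = PhysicsInformedNN(chain, GridTraining(dx))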

Build the PDE system and discretize it.

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
OptimizationProblem. In-place: true
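To inspect what discretize generated, NeuralPDE also offers symbolic_discretize; a sketch (field names as in the current NeuralPDE docs):

sym_prob = symbolic_discretize(pde_system, discretization)
sym_prob.loss_functions.pde_loss_functions  # one loss function per PDE
sym_prob.loss_functions.bc_loss_functions   # one loss function per boundary condition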

Callback function to record the loss

lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)
    return false
end
#1 (generic function with 1 method)
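For progress output during a long solve, a variant of the callback (the printing interval of 50 is arbitrary) could be:

callback_verbose = function (p, l)
    push!(lossrecord, l)
    # report the loss every 50 iterations
    length(lossrecord) % 50 == 0 && println("iteration $(length(lossrecord)): loss = $l")
    return false
end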

Solve the problem. It may take a long time.

opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
retcode: Failure
A Failure return code after hitting maxiters usually just means the optimizer reached its iteration limit before satisfying the convergence tolerance, not that the result is unusable; the recorded loss shows how far training actually progressed.
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
[Figure: training loss versus iteration, log-scale y axis]
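If the loss curve has not flattened out, training can be resumed from the previous optimum. A sketch using SciMLBase's remake to warm-start from res.u:

prob2 = remake(prob, u0 = res.u)  # reuse the trained weights as the initial guess
res2 = Optimization.solve(prob2, opt, callback = callback, maxiters = 500)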

Plot the predicted solution of the PDE and compare it with the analytic solution to see the absolute error.

xs, ys = [infimum(d.domain):dx/10:supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sinpi(x)*sinpi(y))/(2pi^2)

phi = discretization.phi
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)
201×201 Matrix{Float64} (output truncated; the displayed absolute errors are on the order of 10⁻³ or smaller)
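A compact numeric summary complements the contour plots; a sketch assuming the Statistics standard library:

using Statistics
println("max abs error  = ", maximum(diff_u))
println("mean abs error = ", mean(diff_u))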
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)
[Figure: contour plots of the analytic solution, the predicted solution, and the absolute error]

This notebook was generated using Literate.jl.