
Solving PDEs with ModelingToolkit and NeuralPDE

Solving Poisson PDE Systems (https://docs.sciml.ai/NeuralPDE/stable/tutorials/pdesystem/)

$$\partial^{2}_{x}u(x,y) + \partial^{2}_{y}u(x,y) = -\sin(\pi x)\sin(\pi y)$$

with boundary conditions

u(0,y)=0u(1,y)=0u(x,0)=0u(x,1)=0\begin{align} u(0, y) &= 0 \\ u(1, y) &= 0 \\ u(x, 0) &= 0 \\ u(x, 1) &= 0 \\ \end{align}

where

$$x \in [0, 1], \quad y \in [0, 1]$$
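This problem has a closed-form solution, which is used later as the reference for the error plots. A short derivation via a separable ansatz:

```latex
% Ansatz: u(x, y) = A sin(pi x) sin(pi y), which satisfies all four
% homogeneous boundary conditions for any constant A.
\partial^{2}_{x}u + \partial^{2}_{y}u = -2\pi^{2} A \sin(\pi x)\sin(\pi y)
% Matching the right-hand side -sin(pi x) sin(pi y) gives A = 1/(2 pi^2):
u(x, y) = \frac{\sin(\pi x)\sin(\pi y)}{2\pi^{2}}
```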

using NeuralPDE
using Lux
using Optimization
using OptimizationOptimJL
using ModelingToolkit
using DomainSets
using LineSearches
using Plots

2D PDE

@parameters x y
@variables u(..)
Dxx = Differential(x)^2
Dyy = Differential(y)^2
eq  = Dxx(u(x, y)) + Dyy(u(x, y)) ~ -sinpi(x) * sinpi(y)
Differential(x, 2)(u(x, y)) + Differential(y, 2)(u(x, y)) ~ -sinpi(x)*sinpi(y)

Boundary conditions

bcs = [
    u(0, y) ~ 0.0,
    u(1, y) ~ 0.0,
    u(x, 0) ~ 0.0,
    u(x, 1) ~ 0.0
]
4-element Vector{Symbolics.Equation}: u(0, y) ~ 0.0 u(1, y) ~ 0.0 u(x, 0) ~ 0.0 u(x, 1) ~ 0.0

Space domains

domains = [
    x ∈ DomainSets.Interval(0.0, 1.0),
    y ∈ DomainSets.Interval(0.0, 1.0)
]
2-element Vector{Symbolics.VarDomainPairing}: Symbolics.VarDomainPairing(x, 0.0 .. 1.0) Symbolics.VarDomainPairing(y, 0.0 .. 1.0)

Build a neural network for the PDE solver. Input: 2 dimensions (x, y). Hidden layers: two layers of 16 neurons each. Output: the single scalar u(x, y).

dim = 2
chain = Lux.Chain(Dense(dim, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
Chain( layer_1 = Dense(2 => 16, σ), # 48 parameters layer_2 = Dense(16 => 16, σ), # 272 parameters layer_3 = Dense(16 => 1), # 17 parameters ) # Total: 337 parameters, # plus 0 states.
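As a quick sanity check (a minimal standalone sketch, separate from the tutorial pipeline), the chain can be initialized with `Lux.setup` and evaluated at a single point; the parameter count matches the 337 reported above:

```julia
using Lux, Random

# Same architecture as in the tutorial: 2 -> 16 -> 16 -> 1
chain = Lux.Chain(Dense(2, 16, Lux.σ), Dense(16, 16, Lux.σ), Dense(16, 1))
ps, st = Lux.setup(Random.default_rng(), chain)

# Evaluate the untrained network at one (x, y) point
out, _ = chain([0.5, 0.5], ps, st)
println(length(out))                 # single scalar output
println(Lux.parameterlength(chain))  # 48 + 272 + 17 = 337
```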

The discretization method uses PhysicsInformedNN(), i.e. a physics-informed neural network (PINN).

dx = 0.05  # grid spacing for the plotting grid below; not passed to QuadratureTraining
discretization = PhysicsInformedNN(chain, QuadratureTraining(; batch = 200, abstol = 1e-6, reltol = 1e-6))
NeuralPDE.PhysicsInformedNN{…}(Chain(layer_1 = Dense(2 => 16, σ), layer_2 = Dense(16 => 16, σ), layer_3 = Dense(16 => 1)), QuadratureTraining{Float64, Integrals.CubatureJLh}(CubatureJLh(0), 1.0e-6, 1.0e-6, 1000, 200), …)  (lengthy type printout truncated)

Build the PDE system and discretize it.

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x, y)])
prob = discretize(pde_system, discretization)
┌ Warning: `sys.indvars` is deprecated, please use `get_ivs(sys)`
└ @ NeuralPDE ~/.julia/packages/NeuralPDE/KbHbA/src/discretize.jl:417
┌ Warning: `sys.depvars` is deprecated, please use `get_dvs(sys)`
└ @ NeuralPDE ~/.julia/packages/NeuralPDE/KbHbA/src/discretize.jl:417
OptimizationProblem. In-place: true u0: ComponentVector{Float64}(layer_1 = (weight = …, bias = …), layer_2 = (weight = …, bias = …), layer_3 = (weight = …, bias = …))  (initial parameter values truncated)

Callback function to record the loss

lossrecord = Float64[]
callback = function (p, l)
    push!(lossrecord, l)
    return false
end
#1 (generic function with 1 method)

Solve the problem with the L-BFGS optimizer and a backtracking line search. This may take a long time (about 23 minutes in the run below).

opt = OptimizationOptimJL.LBFGS(linesearch = LineSearches.BackTracking())
@time res = Optimization.solve(prob, opt, callback = callback, maxiters=1000)
1373.313692 seconds (4.82 G allocations: 401.311 GiB, 2.74% gc time, 6.02% compilation time)
retcode: Success u: ComponentVector{Float64}(layer_1 = (weight = …, bias = …), layer_2 = (weight = …, bias = …), layer_3 = (weight = …, bias = …))  (trained parameter values truncated)
plot(lossrecord, xlabel="Iters", yscale=:log10, ylabel="Loss", lab=false)
Plot{Plots.GRBackend() n=1}

Plot the predicted solution of the PDE and compare it with the analytical solution to see the relative error.

xs, ys = [DomainSets.infimum(d.domain):dx/10:DomainSets.supremum(d.domain) for d in domains]
analytic_sol_func(x,y) = (sinpi(x)*sinpi(y))/(2pi^2)

phi = discretization.phi
u_predict = reshape([first(phi([x, y], res.u)) for x in xs for y in ys], (length(xs), length(ys)))
u_real = reshape([analytic_sol_func(x, y) for x in xs for y in ys], (length(xs), length(ys)))
diff_u = abs.(u_predict .- u_real)
201×201 Matrix{Float64}  (absolute errors, all on the order of 1e-3; full printout truncated)
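As a side check (a minimal standalone sketch, not part of the tutorial code), the analytic solution can be verified to satisfy the PDE by applying a central-difference Laplacian at an interior point:

```julia
# Analytic solution from the tutorial
analytic_sol_func(x, y) = sinpi(x) * sinpi(y) / (2 * pi^2)

# Five-point central-difference Laplacian
h = 1e-3
laplacian(f, x, y) =
    (f(x + h, y) - 2f(x, y) + f(x - h, y)) / h^2 +
    (f(x, y + h) - 2f(x, y) + f(x, y - h)) / h^2

# Residual of  u_xx + u_yy + sin(pi*x)*sin(pi*y)  should be ~0
residual = laplacian(analytic_sol_func, 0.3, 0.7) + sinpi(0.3) * sinpi(0.7)
println(abs(residual))  # tiny; only finite-difference truncation error remains
```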
p1 = plot(xs, ys, u_real, linetype=:contourf, title = "analytic");
p2 = plot(xs, ys, u_predict, linetype=:contourf, title = "predicted");
p3 = plot(xs, ys, diff_u, linetype=:contourf, title = "error");
plot(p1, p2, p3)
Plot{Plots.GRBackend() n=3}

This notebook was generated using Literate.jl.