2023-09-06 FD Verification#
Last time#
Discuss activity
Measuring errors
Stable discretizations
Today#
Solutions and matrix properties
Arbitrary order discretization
Method of manufactured solutions
Techniques for boundary conditions
Extra: Fourier analysis of stencils
Reading for next time#
using Plots
default(linewidth=3, legendfontsize=10)
using LinearAlgebra
diff1l(x, u) = x[2:end], (u[2:end] - u[1:end-1]) ./ (x[2:end] - x[1:end-1])
diff1r(x, u) = x[1:end-1], (u[2:end] - u[1:end-1]) ./ (x[2:end] - x[1:end-1])
diff1c(x, u) = x[2:end-1], (u[3:end] - u[1:end-2]) ./ (x[3:end] - x[1:end-2])
function refinement_error(f, fprime, d)
error = []
for n in grids
x = LinRange(-3, 3, n)
xx, yy = d(x, f.(x))
push!(error, norm(yy - fprime.(xx), 1)/n)
end
error
end
refinement_error (generic function with 1 method)
Consistency + Stability = Convergence#
Consistency#
When we apply the differential operator to the exact solution, we get a small residual.
The residual converges under grid refinement.
Hopefully fast as \(h \to 0\)
Stability#
There do not exist “bad” functions that also satisfy the equation.
This gets subtle for problems like incompressible flow.
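To make this slogan precise for a linear problem (a sketch added for reference, not verbatim from the lecture): write the discretization as \(L_h u_h = f_h\). Consistency means the truncation error \(\tau_h = L_h u_{\text{exact}} - f_h\) goes to zero as \(h \to 0\); stability means \(\lVert L_h^{-1} \rVert \le C\) with \(C\) independent of \(h\). Then
\[ \lVert u_h - u_{\text{exact}} \rVert = \lVert L_h^{-1} (f_h - L_h u_{\text{exact}}) \rVert \le C \lVert \tau_h \rVert \to 0 , \]
so consistency plus stability implies convergence.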
Second derivatives#
We can compute a second derivative by applying first derivatives twice.
function diff2a(x, u)
xx, yy = diff1c(x, u)
diff1c(xx, yy)
end
function diff2b(x, u)
xx, yy = diff1l(x, u)
diff1r(xx, yy)
end
diff2list = [diff2a, diff2b]
n = 10
x = LinRange(-3, 3, n)
u = - cos.(x);
fig = plot(cos, xlims=(-3, 3))
for d2 in diff2list
xx, yy = d2(x, u)
plot!(fig, xx, yy, marker=:circle, label=d2)
end
fig
How fast do these approximations converge?#
grids = 2 .^ (3:10)
hs = 1 ./ grids
function refinement_error2(f, f_xx, d2)
error = []
for n in grids
x = LinRange(-3, 3, n)
xx, yy = d2(x, f.(x))
push!(error, norm(yy - f_xx.(xx), Inf))
end
error
end
refinement_error2 (generic function with 1 method)
fig = plot(xlabel="h", xscale=:log10, ylabel="Error", yscale=:log10)
for d2 in diff2list
error = refinement_error2(x -> -cos(x), cos, d2)
plot!(fig, hs, error, marker=:circle, label=d2)
end
plot!(fig, hs, hs .^ 2, label="\$h^2\$")
Both methods are second order accurate.
The diff2b method is more accurate than diff2a (by a factor of 4); a quick numerical check follows below.
The diff2a method can’t compute derivatives at points adjacent to the boundary.
We don’t know yet whether either is stable.
Differentiation matrices#
All our diff* functions thus far have been linear in u, therefore they can be represented as matrices.
function diff1_mat(x)
n = length(x)
D = zeros(n, n)
h = x[2] - x[1]
D[1, 1:2] = [-1/h 1/h]
for i in 2:n-1
D[i, i-1:i+1] = [-1/2h 0 1/2h]
end
D[n, n-1:n] = [-1/h 1/h]
D
end
x = LinRange(-1, 1, 5)
diff1_mat(x)
5×5 Matrix{Float64}:
-2.0 2.0 0.0 0.0 0.0
-1.0 0.0 1.0 0.0 0.0
0.0 -1.0 0.0 1.0 0.0
0.0 0.0 -1.0 0.0 1.0
0.0 0.0 0.0 -2.0 2.0
x = LinRange(-3, 3, 20)
plot(x, diff1_mat(x) * sin.(x), marker=:circle)
plot!(cos)
How accurate is this derivative matrix?#
grids = 2 .^ (2:14); hs = 1 ./ grids
fig = plot(xscale=:log10, yscale=:log10, legend=:topleft)
error = refinement_error(sin, cos, (x, u) -> (x, diff1_mat(x) * u))
plot!(fig, hs, error, marker=:circle)
plot!(fig, hs, hs, label="\$h\$")
plot!(fig, hs, hs .^ 2, label="\$h^2\$")
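An added observation (not in the notes): the one-sided rows at the two boundary points are only first-order accurate, which the averaged norm used above mostly hides. Measuring in the max norm instead makes the first-order behavior visible; this reuses refinement_error2 with cos as the reference derivative.
err_inf = refinement_error2(sin, cos, (x, u) -> (x, diff1_mat(x) * u))  # max-norm error of the first derivative
plot(hs, err_inf, marker=:circle, label="max norm", xscale=:log10, yscale=:log10, legend=:topleft)
plot!(hs, hs, label="\$h\$")
plot!(hs, hs .^ 2, label="\$h^2\$")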
Can we study it as a matrix?#
function my_spy(A)
cmax = norm(vec(A), Inf)
s = max(1, ceil(120 / size(A, 1)))
spy(A, marker=(:square, s), c=:diverging_rainbow_bgymr_45_85_c67_n256, clims=(-cmax, cmax))
end
D = diff1_mat(x)
my_spy(D)
e = eigen(D)
#plot(x, real.(e.vectors[:,2]), ylim=(-1, 1))
scatter(real(e.values), imag(e.values), marker=:circle)
#cond(e.vectors)
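A small structural check (added, not from the notes): the centered rows make the interior block of D exactly skew-symmetric, so that block alone would have purely imaginary eigenvalues; the one-sided first and last rows break the skew-symmetry.
Dint = D[2:end-1, 2:end-1]
@show norm(Dint + Dint')   # exactly zero: the interior block is skew-symmetric
@show norm(D + D')         # nonzero, from the one-sided boundary rows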
Second derivative with Dirichlet boundary conditions#
The left endpoint in our example boundary value problem has a Dirichlet boundary condition,
\[ u(-1) = a . \]
We impose it by replacing the boundary row of the matrix with an identity row, so that the boundary equation reads \(u_0 = a\) (the code below does the same at both endpoints). The resulting matrix is not symmetric even if \(A\) is.
function laplacian_dirichlet(x)
n = length(x)
D = zeros(n, n)
h = x[2] - x[1]
D[1, 1] = 1
for i in 2:n-1
D[i, i-1:i+1] = (1/h^2) * [-1, 2, -1]
end
D[n, n] = 1
D
end
laplacian_dirichlet (generic function with 1 method)
Laplacian as a matrix#
L = laplacian_dirichlet(x)
my_spy(L)
@show cond(L[1:end, 1:end])
eigvals(L[2:end-1, 2:end-1])
#eigvals(L)
L[1:4, 1:4]
cond(L[1:end, 1:end]) = 186.66533419312083
4×4 Matrix{Float64}:
1.0 0.0 0.0 0.0
-10.0278 20.0556 -10.0278 0.0
0.0 -10.0278 20.0556 -10.0278
0.0 0.0 -10.0278 20.0556
Solutions#
L = laplacian_dirichlet(x)
f = zero.(x)
f[5] = 1
f[end] = 0;
plot(x, f)
u = L \ f
plot(x, u)
Discrete “Green’s functions”#
x = LinRange(-3, 3, 10)
L = laplacian_dirichlet(x)  # rebuild L to match this grid
plot(x, inv(L)[:, 4])
n = length(x)
Ln = copy(L)
Ln[n, n-1] = -1  # change the last row to u_n - u_{n-1} = f_n (a crude Neumann-type condition)
plot(x, inv(Ln)[:, 3])
Discrete eigenfunctions#
x = LinRange(-1, 1, 40)
L = laplacian_dirichlet(x)
Lambda, V = eigen(L)
plot(Lambda, marker=:circle)
@show Lambda[1:4]
plot(x, V[:, 1:4])
Lambda[1:4] = [1.0, 1.0, 2.466067163933257, 9.848275263867125]
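For reference (an addition, not in the original notes): on \((-1,1)\) with homogeneous Dirichlet conditions, the continuous operator \(-\partial_{xx}\) has eigenvalues \((k\pi/2)^2\). The nontrivial discrete eigenvalues approximate these, while the two unit eigenvalues come from the identity boundary rows.
exact = ((1:4) .* pi ./ 2) .^ 2    # (k*pi/2)^2 for k = 1:4
@show Lambda[3:6] ./ exact         # ratios close to 1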
Outlook on our method#
Pros#
Consistent
Stable
Second order accurate (we hope)
Cons#
Only second order accurate (at best)
Worse than second order on non-uniform grids
Worse than second order at Neumann boundaries
Boundary conditions break symmetry
Interpolation by Vandermonde matrices#
We can compute a polynomial
\[ p(x) = c_0 + c_1 x + c_2 x^2 + \dotsb \]
that assumes function values \(p(x_i) = u_i\) by solving a linear system with the Vandermonde matrix.
function vander(x, k=nothing)
if k === nothing
k = length(x)
end
V = ones(length(x), k)
for j = 2:k
V[:, j] = V[:, j-1] .* x
end
V
end
vander (generic function with 2 methods)
vander(LinRange(-1, 1, 5))
5×5 Matrix{Float64}:
1.0 -1.0 1.0 -1.0 1.0
1.0 -0.5 0.25 -0.125 0.0625
1.0 0.0 0.0 0.0 0.0
1.0 0.5 0.25 0.125 0.0625
1.0 1.0 1.0 1.0 1.0
Fitting a polynomial#
k = 4
x = LinRange(-2, 2, k)
u = sin.(x)
V = vander(x)
c = V \ u
scatter(x, u, label="\$u_i\$", legend=:topleft)
plot!(x -> (vander(x, k) * c)[1,1], label="\$p(x)\$")
plot!(sin, label=sin)
Differentiating#
We’re given the coefficients \(c = V^{-1} u\) of the polynomial
\[ p(x) = c_0 + c_1 x + c_2 x^2 + \dotsb . \]
To differentiate at a target point, we shift the nodes so the target is the origin; then \(p'(0) = c_1\), which is the second row of \(V^{-1}\) applied to \(u\). This is exactly what fdstencil1 below computes.
function fdstencil1(source, target)
"first derivative stencil from source to target"
x = source .- target
V = vander(x)
inv(V)[2, :]' # as a row vector
end
plot([z -> fdstencil1(x, z) * u, cos], xlims=(-3,3))
scatter!(x, 0*x, label="grid points")
Arbitrary order#
function fdstencil(source, target, k)
"kth derivative stencil from source to target"
x = source .- target
V = vander(x)
rhs = zero(x)'
rhs[k+1] = factorial(k)
rhs / V
end
fdstencil(x, 0.5, 2)
1×4 adjoint(::Vector{Float64}) with eltype Float64:
0.0703125 0.351563 -0.914062 0.492187
plot([z -> fdstencil(x, z, 2) * u,
z -> -sin(z)], xlim=(-3, 3))
scatter!(x, 0*x, label="grid points")
We didn’t call inv(V); what’s up?#
The right division rhs / V solves the transposed system: Julia factors \(V\) (LU with partial pivoting) and performs a triangular solve, so only the one row we need is computed and the explicit inverse is never formed, which is cheaper and numerically preferable.
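As a quick check (added here; xs, Vs, and row are throwaway names introduced for this cell), the row computed by right division matches the corresponding row of the explicit inverse:
xs = x .- 0.5                    # shifted nodes, as inside fdstencil(x, 0.5, 2)
Vs = vander(xs)
row = [0 0 factorial(2) 0.0]     # selects the second-derivative row, as in fdstencil
@show row / Vs ≈ row * inv(Vs)
@show row / Vs ≈ (Vs' \ row')'   # what the right division does under the hood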
Convergence order#
hs = 2 .^ -LinRange(-4, 10, 10)
function diff_error(u, du, h; n, k, z=0)
x = LinRange(-h, h, n) .+ .5
fdstencil(x, z, k) * u.(x) - du.(z)
end
errors = [diff_error(sin, t -> -sin(t), h, n=5, k=2, z=.5+0.1*h)
for h in hs]
plot(hs, abs.(errors), marker=:circle)
plot!(h -> h^3, label="\$h^?\$", xscale=:log10, yscale=:log10)
Observations#
When using \(n=3\) points, we fit a polynomial of degree 2 and have error \(O(h^3)\) for interpolation \(p(0)\).
Each derivative gives up one order of accuracy in general.
Centered differences on uniform grids can have extra cancellation (superconvergence); see the check below.
The Vandermonde matrix is notoriously ill-conditioned with many points \(n\). We recommend using a stable algorithm from Fornberg.
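To illustrate the superconvergence point (a check added here, reusing diff_error, hs, and errors from above): evaluating the same 5-point second-derivative stencil exactly at the symmetry point \(z = 0.5\) cancels the odd-order error terms and gains roughly one order.
errors_centered = [diff_error(sin, t -> -sin(t), h, n=5, k=2, z=0.5) for h in hs]
plot(hs, abs.(errors), marker=:circle, label="z = 0.5 + 0.1h", xscale=:log10, yscale=:log10)
plot!(hs, abs.(errors_centered), marker=:square, label="z = 0.5")
plot!(h -> h^3, label="\$h^3\$")
plot!(h -> h^4, label="\$h^4\$")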
High order discretization of the Laplacian#
The Poisson problem \(-u_{xx} = f\) with boundary conditions#
function poisson(x, spoints, forcing; left=(0, zero), right=(0, zero))
n = length(x)
L = zeros(n, n)
rhs = forcing.(x)
for i in 2:n-1
jleft = min(max(1, i-spoints÷2), n-spoints+1)
js = jleft : jleft + spoints - 1
L[i, js] = -fdstencil(x[js], x[i], 2)
end
L[1,1:spoints] = fdstencil(x[1:spoints], x[1], left[1])
L[n,n-spoints+1:n] = fdstencil(x[n-spoints+1:n], x[n], right[1])
rhs[1] = left[2](x[1])
rhs[n] = right[2](x[n])
L, rhs
end
poisson (generic function with 1 method)
L, b = poisson(LinRange(-1, 1, 6), 3, zero, left=(1, zero))
L
6×6 Matrix{Float64}:
-3.75 5.0 -1.25 0.0 0.0 0.0
-6.25 12.5 -6.25 0.0 0.0 0.0
0.0 -6.25 12.5 -6.25 0.0 0.0
0.0 0.0 -6.25 12.5 -6.25 0.0
0.0 0.0 0.0 -6.25 12.5 -6.25
0.0 0.0 0.0 0.0 -0.0 1.0
Method of manufactured solutions#
Problem: analytic solutions to PDEs are hard to find#
Let’s choose a smooth function with rich derivatives,
\[ u(x) = \tanh(x) . \]
Then \(u'(x) = \cosh^{-2}(x)\) and \(u''(x) = -2 \tanh(x) \cosh^{-2}(x)\), so the forcing for \(-u_{xx} = f\) is \(f(x) = 2 \tanh(x) \cosh^{-2}(x)\).
This works for nonlinear problems too.
x = LinRange(-2, 2, 10)
L, rhs = poisson(x, 3,
x -> 2 * tanh(x) / cosh(x)^2,
left=(0, tanh),
right=(1, x -> cosh(x)^-2))
u = L \ rhs
plot(x, u, marker=:circle, legend=:topleft)
plot!(tanh)
Convergence rate#
ns = 2 .^ (2:10)
hs = 1 ./ ns
function poisson_error(n; spoints=3)
x = LinRange(-2, 2, n)
L, rhs = poisson(x, spoints, x -> 2 * tanh(x) / cosh(x)^2,
left = (0, tanh),
right = (1, x -> cosh(x)^-2))
u = L \ rhs
norm(u - tanh.(x), Inf)
end
poisson_error (generic function with 1 method)
plot(hs, [poisson_error(n, spoints=3) for n in ns], marker=:circle)
plot!(h -> h^2, label="\$h^2\$", xscale=:log10, yscale=:log10)
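As an experiment (not from the notes), we can rerun the same study with a wider stencil; with spoints=5 one may hope for roughly fourth-order convergence, though the one-sided stencils near the boundary can degrade this. The names ns5 and hs5 are introduced here, and the grid needs at least spoints points.
ns5 = ns[ns .>= 8]               # need enough points for a 5-point stencil
hs5 = 1 ./ ns5
plot(hs, [poisson_error(n, spoints=3) for n in ns], marker=:circle, label="spoints=3")
plot!(hs5, [poisson_error(n, spoints=5) for n in ns5], marker=:square, label="spoints=5")
plot!(h -> h^2, label="\$h^2\$", xscale=:log10, yscale=:log10)
plot!(h -> h^4, label="\$h^4\$")   # compare the observed slopes against the reference lines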
Symmetry in boundary conditions: Dirichlet#
We have implemented Dirichlet conditions by modifying the first row of the matrix, replacing it with an identity row so that the first equation reads \(u_0 = f_0\).
This matrix is not symmetric even if \(A\) is.
We can eliminate \(u_0\) and create a reduced system for \(u_{1:}\).
Generalize: consider a \(2\times 2\) block system
\[\begin{split} \begin{bmatrix} I & 0 \\ A_{10} & A_{11} \end{bmatrix} \begin{bmatrix} u_0 \\ u_1 \end{bmatrix} = \begin{bmatrix} f_0 \\ f_1 \end{bmatrix} .\end{split}\]
We can rearrange this as a reduced problem for \(u_1\),
\[ A_{11} u_1 = f_1 - A_{10} f_0 , \]
which is symmetric if \(A_{11}\) is.
This is called “lifting” and is often done implicitly in the mathematics literature. It is convenient for linear solvers and eigenvalue solvers, but inconvenient for IO and postprocessing, as well as some nonlinear problems.
Convenient alternative: write
\[\begin{split} \begin{bmatrix} I & 0 \\ 0 & A_{11} \end{bmatrix} \begin{bmatrix} u_0 \\ u_1 \end{bmatrix} = \begin{bmatrix} f_0 \\ f_1 - A_{10} f_0 \end{bmatrix}, \end{split}\]
which is symmetric and decouples the degrees of freedom associated with the boundary. This method applies cleanly to nonlinear problems.
Optionally, scale the identity by some scalar related to the norm of \(A_{11}\).
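Below is a minimal sketch of this symmetric treatment applied to the laplacian_dirichlet matrix from above; the helper dirichlet_symmetrize! and its argument names are introduced here for illustration and are not part of the course code.
# Lift Dirichlet values to the right-hand side so the matrix stays symmetric.
# `inds` are the rows that hold Dirichlet (identity) equations.
function dirichlet_symmetrize!(L, rhs, inds)
    for i in inds
        g = rhs[i]                    # boundary value (row i is an identity row)
        rhs .-= L[:, i] .* g          # lift: f_1 <- f_1 - A_{10} f_0
        rhs[i] = g                    # keep the boundary equation u_i = g
        L[:, i] .= 0; L[i, :] .= 0    # decouple the boundary dof
        L[i, i] = 1
    end
    L, rhs
end

x = LinRange(-1, 1, 9)
L = laplacian_dirichlet(x)
rhs = zero.(x); rhs[1] = 1; rhs[end] = 2    # u(-1) = 1, u(1) = 2, zero forcing
Ls, rhss = dirichlet_symmetrize!(copy(L), copy(rhs), (1, length(x)))
@show issymmetric(Ls)
@show norm(Ls \ rhss - L \ rhs)             # same solution as the unsymmetric form
The lifted system has the same solution but is symmetric, so it is suitable for conjugate-gradient-type solvers and symmetric eigensolvers.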
Symmetry in boundary conditions: Neumann#
Consider FD discretization of the Neumann boundary condition
\[ \frac{du}{dx}\Big|_{x=1} = b . \]
Use a one-sided difference formula as in
\[ \frac{u_n - u_{n-1}}{h} = b . \]
This is an extra discretization choice that may reduce the order of accuracy compared to the interior discretization and lose symmetry.
Temporarily introduce a ghost value \(u_{n+1} = u(x_{n+1})\), where \(x_{n+1} = 1 + h\) (possibly more ghost points), and define it to be a reflection of the values from inside the domain. In the case \(b=0\), this reflection is \(u_{n+i} = u_{n-i}\). More generally,
\[ u_{n+i} = u_{n-i} + 2b(x_n - x_{n-i}) . \]
After this definition of ghost values, we apply the interior discretization at the boundary. For our reference equation, we would write
\[ \frac{-u_{n-1} + 2 u_n - u_{n+1}}{h^2} = f(x_n) , \]
which simplifies to
\[ \frac{u_n - u_{n-1}}{h^2} = \frac{f(x_n)}{2} + \frac{b}{h} \]
when the ghost value \(u_{n+1} = u_{n-1} + 2 b h\) is substituted.
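Here is a minimal sketch of this ghost-value treatment for the manufactured tanh problem above, on a uniform grid; the helper laplacian_neumann_right and its arguments are names introduced here for illustration, not course code.
# Dirichlet on the left, Neumann u'(right) = b via a reflected ghost value (uniform grid assumed)
function laplacian_neumann_right(x, forcing, ul, b)
    n = length(x); h = x[2] - x[1]
    L = zeros(n, n); rhs = forcing.(x)
    L[1, 1] = 1; rhs[1] = ul                   # Dirichlet value on the left
    for i in 2:n-1
        L[i, i-1:i+1] = (1 / h^2) * [-1, 2, -1]
    end
    L[n, n-1:n] = (1 / h^2) * [-1, 1]          # boundary row with the ghost value eliminated
    rhs[n] = forcing(x[n]) / 2 + b / h
    L, rhs
end

x = LinRange(-2, 2, 40)
L, rhs = laplacian_neumann_right(x, x -> 2 * tanh(x) / cosh(x)^2, tanh(-2), cosh(2)^-2)
@show norm(L \ rhs - tanh.(x), Inf)   # compare with the manufactured solution
Note that the ghost-eliminated boundary row keeps the matrix symmetric at the Neumann end: L[n, n-1] matches L[n-1, n].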
Fourier analysis of stencils#
Consider the plane waves \(\phi(x, \theta) = e^{i\theta x}\).
Sample \(\phi\) on a discrete grid \(x = \mathbb Z\) and apply the stencil operator
\[ S \phi(x, \theta) = s_{-1} \phi(x-1, \theta) + s_0 \phi(x, \theta) + s_1 \phi(x+1, \theta) = \left( s_{-1} e^{-i\theta} + s_0 + s_1 e^{i\theta} \right) \phi(x, \theta) . \]
With \(S = \begin{bmatrix} -1 & 2 & -1 \end{bmatrix}\), we get the symbol
\[ S \phi(x, \theta) = (2 - 2 \cos\theta) \phi(x, \theta) , \]
which we compare below with the symbol \(\theta^2\) of the continuous operator \(-\partial_{xx}\).
plot([theta -> 2 - 2*cos(theta),
theta -> theta^2],
xlims=(-pi, pi), ylim=(0, 5))
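As a follow-up sketch (not in the notes), we can compute the symbol of the 5-point second-derivative stencil from fdstencil on a unit grid and add it to the comparison; S5 and the helper symbol are names introduced here.
S5 = fdstencil(LinRange(-2, 2, 5), 0, 2)    # 5-point second-derivative stencil, unit spacing
symbol(S, theta) = real(sum(S[j] * exp(1im * theta * (j - 3)) for j in 1:5))
plot!(theta -> -symbol(S5, theta), -pi, pi, label="5-point")   # negated, since the stencil approximates +u''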