Lecture 12 Deformable Simulation 02 | The Implicit Integration Methods
# The Implicit Euler Integration
# Time Step
- In simulation: the time difference $h$ between two adjacent ticks on the simulation's temporal axis:
v[i] += h * acc
x[i] += h * v[i]
- In display: the time difference between two consecutive images shown on the screen; for 60 Hz applications this is $1/60\,\mathrm{s} \approx 16.7\,\mathrm{ms}$.
Sub-(time)-stepping: run several simulation steps per displayed frame, i.e. $h = \Delta t_{\text{display}} / N$ for $N$ substeps.

The smaller $N$, the larger $h$ (less accurate):
- For explicit integration: too large a time step may lead to explosion (as $h$ increases, the solution diverges);
- For implicit integration: usually converges for any $h$ => artifact: slower convergence.
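As a concrete illustration of sub-stepping with the explicit update above, here is a minimal runnable sketch (the particle data, counts, and variable names are illustrative, not the lecture's code):

```python
import numpy as np

frame_dt = 1.0 / 60.0               # display time step for a 60 Hz application
n_substeps = 10                     # h shrinks as the number of substeps grows
h = frame_dt / n_substeps

n = 16                              # a few particles falling under gravity
x = np.zeros((n, 2))
v = np.zeros((n, 2))
acc = np.array([0.0, -9.8])

for frame in range(60):             # one second of animation
    for _ in range(n_substeps):     # several simulation ticks per displayed frame
        v += h * acc                # same explicit update as above, with the smaller h
        x += h * v
    # render(x) would be called here, once per displayed frame
```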
# Numerical Recipes for Implicit Integrations
# The Baraff and Witkin Style Solution
One iteration of Newton's method, often referred to as semi-implicit integration:
$v_{n+1} = v_n + h M^{-1} f(x_{n+1})$ & then $x_{n+1} = x_n + h v_{n+1}$
Goal: solving $x_{n+1} = x_n + h v_n + h^2 M^{-1} f(x_{n+1})$ for $x_{n+1}$ (non-linear in $x_{n+1}$)
Assumption: $x_{n+1}$ is not too far away from $x_n$
Algorithm: non-linear -> linear ($\delta x = x_{n+1} - x_n$, the deviation between the two time steps, which is very small)
Let $\delta x = x_{n+1} - x_n$, then $f(x_{n+1}) \approx f(x_n) + \nabla_{x} f(x_n)\,\delta x$ (1st-order Taylor expansion, neglecting the higher-order terms)
Substitute this approximation:
- $\left[M - h^2 \nabla_x f(x_n)\right]\delta x = h M v_n + h^2 f(x_n)$ (matrix op: $(2n \times 2n)\,(2n \times 1) = (2n \times 1)$, i.e. of the form $Ax = b$)
- $A = M - h^2 \nabla_x f(x_n)$, $b = h M v_n + h^2 f(x_n)$; afterwards $x_{n+1} = x_n + \delta x$ and $v_{n+1} = \delta x / h$
This single linearized solve only finds the root of the linear approximation, whereas the fully implicit method's solution would be much closer to the exact root (the remaining error is the red segment in the lecture figure).
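A minimal dense-matrix sketch of this one-Newton-step update, assuming a generic force function `force(x)` and its Jacobian `force_jacobian(x)` are available (illustrative names, not from the lecture):

```python
import numpy as np

def semi_implicit_step(x, v, M, h, force, force_jacobian):
    """One Baraff-Witkin style step: a single Newton iteration of implicit Euler."""
    f = force(x)                       # f(x_n)
    K = force_jacobian(x)              # grad_x f(x_n)
    A = M - h**2 * K                   # system matrix
    b = h * (M @ v) + h**2 * f         # right-hand side
    dx = np.linalg.solve(A, b)         # solve A dx = b
    x_new = x + dx
    v_new = dx / h
    return x_new, v_new
```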
# Reformulating the Implicit Euler Problem
To reduce the red part (the remaining error), keep iterating; to do so, reformulate the root-finding problem as a minimization:
Integrating the non-linear root-finding problem over $x$ gives:
$$x_{n+1}=\operatorname{argmin}_{x}\left(\frac{1}{2}\left\|x-\left(x_{n}+h v_{n}\right)\right\|_{M}^{2}+h^{2} E(x)\right), \quad \text{given } f(x)=-\nabla_{x} E(x)$$
Note: (matrix norm) $\|y\|_{M}^{2} = y^{\mathrm{T}} M y$; (vector derivative) $\nabla_x \left(x^{\mathrm{T}}Ax\right) = (A + A^{\mathrm{T}})x$
# Minimization / Non-linear Root-finding:
Let $g(x) = \frac{1}{2}\left\|x-\left(x_{n}+h v_{n}\right)\right\|_{M}^{2}+h^{2} E(x)$ (the mass matrix is usually diagonal, so $M^{\mathrm{T}} = M$)
Then $\nabla_x g(x) = M\left(x-\left(x_{n}+h v_{n}\right)\right)+h^{2} \nabla_{x} E(x)$ ($\nabla_x E$, the derivative of the energy w.r.t. position, points in the direction of fastest energy increase; its negative is the force)
For nonsingular $M$: $x_{n+1} = \operatorname{argmin}_x g(x) \Rightarrow \nabla_x g(x_{n+1}) = 0$ (at an extremum the derivative of $g$ vanishes, so the expression above equals $0$)
(This stationary point is exactly the root of implicit Euler.)
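Spelled out with $f(x) = -\nabla_x E(x)$, the stationarity condition recovers the implicit Euler update:

$$\nabla_x g(x_{n+1}) = 0 \;\Longleftrightarrow\; M\left(x_{n+1}-x_{n}-h v_{n}\right)-h^{2} f(x_{n+1}) = 0 \;\Longleftrightarrow\; x_{n+1} = x_{n}+h v_{n}+h^{2} M^{-1} f(x_{n+1})$$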
# Convex Minimization of $g(x)$
The general descent method:

```python
def minimize_g():
    x = x_0
    while grad_g(x).norm() > EPSILON:
        dx = ...          # 1. determine a descent direction
        t = ...           # 2. line search: choose a step size t > 0
        x = x + t * dx    # 3. update
    return x
```
Determine a descent direction $\mathrm{d}x$:
Option 1: Gradient descent: $\mathrm{d}x = -\nabla_x g(x)$ (first order: uses only the local slope)
Option 2: Newton's method: $\mathrm{d}x = -\left(\nabla_x^2 g(x)\right)^{-1}\nabla_x g(x)$ (uses the second-order derivative as curvature and jumps to the valley of the local quadratic approximation, which in effect also decides the step); usually applied when the second-order system is practical to solve
Find the step size:
Line search: choose a step size $t$ given a descent direction $\mathrm{d}x$
Backtracking:
```python
def line_search():
    t = t_0
    # Backtracking: shrink t until the sufficient-decrease condition holds.
    # alpha tilts the comparison line between the tangent at x (alpha = 1)
    # and the horizontal line through g(x) (alpha = 0).
    while g(x) + alpha * t * grad_g(x).dot(dx) < g(x + t * dx):
        t = t * beta
    return t
```
with $\alpha \in (0, 0.5]$, $\beta \in (0, 1)$ (common choice: $\alpha = 0.03$ & $\beta = 0.5$)
Problem
But most deformable bodies have non-convex energies, which contain unstable equilibria.

# Steps
$g(x)$ contains two terms: a dynamics (inertia) term and an elastic (energy) term; a runnable sketch of the whole loop follows this list.
- Initial guess: $x = x_n$ or $x = x_n + h v_n$
- While loop: while not converged:
- Descent direction: $\mathrm{d}x = -\nabla_x g(x)$ or $\mathrm{d}x = -\left(\nabla_x^2 g(x)\right)^{-1} \nabla_x g(x)$
- Gradient: $\nabla_{x} g(x)=M\left(x-\left(x_{n}+h v_{n}\right)\right)+h^{2} \nabla_{x} E(x)$ (linear + non-linear; the elastic part is model-dependent)
- Hessian (matrix): $\nabla_x^2 g(x) = M + h^2\nabla^2_x E(x)$ (model-dependent)
- Line search: determine the step size $t$
- Update: $x \leftarrow x + t\,\mathrm{d}x$
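Putting these steps together, here is a minimal dense numpy sketch of the optimization-based implicit Euler step (gradient-descent variant with backtracking line search; `E` and `grad_E` are assumed model-dependent callbacks, and this is an illustrative sketch rather than the lecture's implementation):

```python
import numpy as np

def implicit_euler_step(x_n, v_n, M, h, E, grad_E,
                        alpha=0.03, beta=0.5, eps=1e-6, max_iters=100):
    """Minimize g(x) = 1/2 ||x - (x_n + h v_n)||_M^2 + h^2 E(x) by gradient descent."""
    y = x_n + h * v_n                          # the inertia target position
    g      = lambda x: 0.5 * (x - y) @ M @ (x - y) + h**2 * E(x)
    grad_g = lambda x: M @ (x - y) + h**2 * grad_E(x)

    x = y                                      # initial guess
    for _ in range(max_iters):
        grad = grad_g(x)
        if np.linalg.norm(grad) < eps:         # convergence test
            break
        dx = -grad                             # descent direction (gradient descent)
        t = 1.0                                # backtracking line search
        while g(x + t * dx) > g(x) + alpha * t * (grad @ dx):
            t *= beta
        x = x + t * dx                         # update
    x_next = x
    v_next = (x_next - x_n) / h
    return x_next, v_next
```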
# Definiteness-fixed Newton’s Method
For general cases:
In step 2 (the while loop), after computing the gradient and the Hessian, add another substep:
- Fix the Hessian to be positive definite:
# Definiteness-fix
Option 1: Global Regularization

Try to factorize; if it succeeds we are done, otherwise add a scaled identity so that $\tilde{H}$ tends toward the (positive definite) identity:

```python
H_tilde = H            # init
flag = False
reg = 1
while not flag:
    flag, L = factorize(H_tilde)    # e.g. attempt a Cholesky factorization
    if not flag:
        H_tilde = H + reg * I
        reg = reg * 10
```
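A runnable numpy version of this idea, using `np.linalg.cholesky` as the trial factorization (a sketch, not the lecture's implementation):

```python
import numpy as np

def fix_definiteness_global(H, reg0=1.0, growth=10.0):
    """Add reg * I to H until a Cholesky factorization succeeds."""
    H_tilde = H.copy()
    reg = reg0
    I = np.eye(H.shape[0])
    while True:
        try:
            L = np.linalg.cholesky(H_tilde)   # succeeds only if H_tilde is positive definite
            return H_tilde, L
        except np.linalg.LinAlgError:
            H_tilde = H + reg * I             # push the matrix toward the identity
            reg *= growth
```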
Option 2: Local Regularization (e.g., in a mass-spring system)
$$ \nabla_{x}^{2} E(x)=\left[\begin{array}{cccc} K_{1} & -K_{1} & & \\ -K_{1} & K_{1}+K_{2}+K_{3} & -K_{3} & -K_{2} \\ & -K_{3} & K_{3} & \\ & -K_{2} & & K_{2} \end{array}\right] $$
- Each spring's contribution has the form $\begin{bmatrix}1 & -1\\ -1 & 1\end{bmatrix}\otimes K_i$, and a positive semi-definite matrix $\otimes$ another positive semi-definite matrix is also positive semi-definite, so if every $K_i \succeq 0$ then $\nabla_x^2 E(x) \succeq 0$
- $ \Rightarrow \nabla_{x}^{2} g(x)=M+h^{2} \nabla_{x}^{2} E(x)>0$
This gives a sufficient condition: make every local stiffness block $K_i$ positive semi-definite:
- $K_i = U \Lambda U^{\mathrm{T}}$ (<- eigenvalue decomposition)
- $\tilde{\Lambda} = \max(\Lambda, 0)$ (<- clamp the negative eigenvalues)
- $\tilde{K}_i = U \tilde{\Lambda} U^{\mathrm{T}}$ (<- construct the p.s.d. projection)
After this definiteness fix, the Hessian is positive definite, so the Newton's-method direction $\mathrm{d}x = -\left(\nabla_x^2 g(x)\right)^{-1}\nabla_x g(x)$ is guaranteed to be a descent direction.
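A small numpy sketch of the per-block projection (an illustrative helper, not the lecture's code):

```python
import numpy as np

def project_psd(K):
    """Project a symmetric local Hessian block K onto the p.s.d. cone."""
    eigvals, U = np.linalg.eigh(K)               # eigenvalue decomposition: K = U diag(eigvals) U^T
    eigvals_clamped = np.maximum(eigvals, 0.0)   # clamp the negative eigenvalues
    return U @ np.diag(eigvals_clamped) @ U.T    # reconstruct the p.s.d. projection
```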

# Linear Solvers
# Linear Solvers for $Ax = b$
Direct solvers
Inversion: $x = A^{-1}b$ (high complexity for sparse and high-order matrices, whose inverses are generally dense)
Factorization (usually better especially for sparse matrices):
$$A=\begin{cases} LU & \text{if } A \text{ is a square matrix} \\ LDL^{\mathrm{T}} & \text{if } A=A^{\mathrm{T}} \\ LL^{\mathrm{T}} & \text{if } A=A^{\mathrm{T}} \text{ and } A \succ 0 \end{cases}$$
Iterative solvers
- Stationary iterative linear solvers: Jacobi / Gauss-Seidel / SOR / Multigrid
- Krylov subspace methods: Conjugate Gradient (CG) / biCG / CR / MinRes / GMRes
# Factorization
For $A = LL^{\mathrm{T}}$ (assume the definiteness fix has already been applied, so $A$ is positive definite):
Solving $Ax = b$ is equivalent to solving $LL^{\mathrm{T}}x = b$:
First solve $Ly = b$ (forward substitution)
Then solve $L^{\mathrm{T}}x = y$ (backward substitution)
For sparse matrices the complexity is not high, but the method is hard to parallelize (CPU backend only)
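A dense scipy sketch of this two-stage substitution (the example system is illustrative; `cholesky` and `solve_triangular` are from scipy.linalg):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

A = np.array([[4.0, 1.0], [1.0, 3.0]])       # small SPD example
b = np.array([1.0, 2.0])

L = cholesky(A, lower=True)                  # A = L L^T
y = solve_triangular(L, b, lower=True)       # forward substitution:  L y = b
x = solve_triangular(L.T, y, lower=False)    # backward substitution: L^T x = y
```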
Current APIs:
[SparseMatrixBuilder] = ti.linalg.SparseMatrixBuilder()
[SparseMatrix] = [SparseMatrixBuilder].build()
[SparseMatrixSolver] = ti.linalg.SparseSolver(solver_type, ordering)
[NumpyArray] = [SparseMatrixSolver].solve([Field])
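An end-to-end sketch of this API on a toy system (exact signatures, e.g. `ti.types.sparse_matrix_builder()` and whether `solve` takes a field or an ndarray, vary across Taichi versions, so treat this as an assumption-laden example):

```python
import taichi as ti

ti.init(arch=ti.cpu)                   # the sparse direct solver runs on the CPU backend

n = 4
K = ti.linalg.SparseMatrixBuilder(n, n, max_num_triplets=100)
b = ti.field(ti.f32, shape=n)

@ti.kernel
def fill(A: ti.types.sparse_matrix_builder(), rhs: ti.template()):
    for i in range(n):
        A[i, i] += 2.0                 # a simple SPD (diagonal) system just for illustration
        rhs[i] = 1.0

fill(K, b)
A = K.build()
solver = ti.linalg.SparseSolver(solver_type="LLT", ordering="AMD")
solver.analyze_pattern(A)              # symbolic factorization
solver.factorize(A)                    # numeric factorization
x = solver.solve(b)                    # numpy array holding the solution
```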
# Conjugate Gradient (CG) Method
Properties:
- Works for any symmetric positive definite matrix
- Guaranteed to converge within $n$ iterations (in exact arithmetic) for $A\in\mathbb{R}^{n\times n}$
- Works amazingly well if the condition number $\kappa(A)=\lambda_{\max}/\lambda_{\min}$ of $A$ is small (best case $\kappa = 1$, e.g. the identity matrix)
- Short code
Demo (in Python scope, i.e. plain Python control flow):
```python
def conjugate_gradient(A, b, x, i_max=1000, epsilon=1e-6):
    i = 0
    r = b - A @ x
    d = r
    delta_new = r.dot(r)
    delta_0 = delta_new
    while i < i_max and delta_new / delta_0 > epsilon**2:
        q = A @ d                        # matrix-vector multiplication -> expensive
        alpha = delta_new / d.dot(q)     # dot product -> expensive
        x = x + alpha * d
        r = b - A @ x                    # or incrementally: r = r - alpha * q
        delta_old = delta_new
        delta_new = r.dot(r)
        beta = delta_new / delta_old
        d = r + beta * d
        i = i + 1
    return x
```
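For instance, applying it to a small symmetric positive definite system (example values are illustrative):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x0 = np.zeros(2)

x = conjugate_gradient(A, b, x0)
print(np.allclose(A @ x, b))   # True: the CG solution satisfies A x = b
```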
# Accelerate the CG Method
Reduce the time of the sparse matrix-vector multiplication `q = A @ d`: compute it piece-wise (element by element, without assembling $A$) inside a `@ti.kernel`; the accumulation is a reduction, and Taichi enables thread-local storage automatically for this reduction problem (see Taichi TLS / CUDA Reduction Guide).
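For instance, a dot-product reduction written as a `@ti.kernel` (a sketch with illustrative field names); the same pattern applies to accumulating the piece-wise matrix-vector product:

```python
import taichi as ti
ti.init(arch=ti.gpu)

n = 1024
a = ti.field(ti.f32, shape=n)
c = ti.field(ti.f32, shape=n)

@ti.kernel
def dot(u: ti.template(), v: ti.template()) -> ti.f32:
    result = 0.0
    for i in u:
        result += u[i] * v[i]   # Taichi parallelizes this loop and uses TLS for the accumulation
    return result

a.fill(1.0)
c.fill(2.0)
print(dot(a, c))   # 2048.0
```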
Reduce the condition number of $A$ (preconditioning):
The error satisfies the upper bound $\left\|e_{i}\right\|_{A} \leq 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{i}\left\|e_{0}\right\|_{A}$, where $\kappa=\kappa(A)$ is the condition number.
Instead of solving $Ax=b$, solve $M^{-1}Ax = M^{-1}b$ with a preconditioner $M \approx A$ (as close as possible, while keeping $M^{-1}$ cheap to apply):
- Jacobi: $M = \operatorname{diag}(A)$
- Incomplete Cholesky: $M = \tilde{L}\tilde{L}^{\mathrm{T}}$, where $\tilde{L}$ is an incomplete (sparsity-preserving) Cholesky factor
- Multigrid
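A minimal sketch of the simplest option, Jacobi preconditioning, applied inside CG (a variant of the demo above, not the lecture's code):

```python
import numpy as np

def preconditioned_cg(A, b, x, i_max=1000, epsilon=1e-6):
    """CG with a Jacobi preconditioner M = diag(A)."""
    M_inv = 1.0 / np.diag(A)             # applying M^{-1} is just an element-wise scale
    r = b - A @ x
    s = M_inv * r                        # preconditioned residual
    d = s
    delta_new = r.dot(s)
    delta_0 = delta_new
    i = 0
    while i < i_max and delta_new / delta_0 > epsilon**2:
        q = A @ d
        alpha = delta_new / d.dot(q)
        x = x + alpha * d
        r = r - alpha * q
        s = M_inv * r
        delta_old = delta_new
        delta_new = r.dot(s)
        beta = delta_new / delta_old
        d = s + beta * d
        i = i + 1
    return x
```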