Lecture 12 Deformable Simulation 02 | The Implicit Integration Methods

10/24/2022, computer graphics

# The Implicit Euler Integration

# Time Step

  • In simulation: the time difference between adjacent ticks on the temporal axis of the simulation: $h_{\text{sim}}$
    • v[i] += h * acc
    • x[i] += h * v[i]
  • In display: the time difference between two consecutive images displayed on the screen: $h_{\text{disp}} = 1/60\ \mathrm{s}$ for 60 Hz applications

Sub-(time)-stepping with $n_{\text{sub}}$ sub-steps per frame: $h_{\text{sim}} = \frac{h_{\text{disp}}}{n_{\text{sub}}}$

image-20211124213141086

The smaller $n_{\text{sub}}$, the larger $h_{\text{sim}}$ (less accurate).

  • For explicit integration: an overly long time step may lead to explosion (as $h$ increases, the solution diverges);
  • For implicit integration: it usually converges for any $h$; the artifact is slower convergence.
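
A minimal sketch of how sub-stepping wraps the per-tick update (symplectic Euler here; the particle arrays, `n_sub`, and the gravity value are illustrative, not from any specific demo):

```python
import numpy as np

# Hypothetical particle state for illustration.
x = np.zeros((16, 2))          # positions
v = np.zeros((16, 2))          # velocities
acc = np.array([0.0, -9.8])    # constant acceleration (gravity)

h_disp = 1.0 / 60.0            # display time step (60 Hz)
n_sub = 10                     # number of sub-steps per displayed frame
h_sim = h_disp / n_sub         # simulation time step

def advance_one_frame():
    """Advance the simulation by one displayed frame using n_sub sub-steps."""
    for _ in range(n_sub):
        # The same per-tick update as above, just run n_sub times per frame.
        v[:] += h_sim * acc
        x[:] += h_sim * v
```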

# Numerical Recipes for Implicit Integrations

# The Baraff and Witkin Style Solution

One iteration of Newton's method, often referred to as semi-implicit.

  • $x_{n+1} = x_n + hv_{n+1}$ and $v_{n+1} = v_n + hM^{-1} f(x_{n+1})$, which together give $x_{n+1} = x_n + hv_n + h^2 M^{-1}f(x_{n+1})$

  • Goal: solve $x_{n+1} = x_n + hv_n + h^2 M^{-1}f(x_{n+1})$

  • Assumption: $x_{n+1}$ is not too far away from $x_n$

  • Algorithm: non-linear -> linear ($\delta x$, the deviation between the two time steps, is very small)

    • Let $\delta x = x_{n+1} - x_n$; then $f(x_{n+1}) \approx f(x_n) + \nabla_{x}f(x_n)\,\delta x$ (first-order Taylor expansion, neglecting the higher-order terms)

    • Substitute this approx:

      • $x_{n}+\delta x=x_{n}+h v_{n}+h^{2} M^{-1}\left(f\left(x_{n}\right)+\nabla_{x} f\left(x_{n}\right) \delta x\right)$
      • $\left(M-h^{2} \nabla_{x} f\left(x_{n}\right)\right) \delta x=h M v_{n}+h^{2} f\left(x_{n}\right)$ (a linear system of the form $Ax = b$; for $n$ particles in 2D the matrix is $2n \times 2n$ and the vectors are $2n \times 1$)
      • $x_{n+1} = x_n + \delta x$, $v_{n+1} = \delta x/h$

      This single linearized step only approximates the location of $x_{n+1}$; a fully implicit solve would land much closer to the exact root (the remaining error is the red segment in the figure below). A NumPy sketch of this one-step solve follows the figure.

      image-20211124231048617
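
A minimal NumPy sketch of this single linearized solve; `M`, `f`, and the force Jacobian `K` are assumed to be assembled elsewhere (model dependent), so the function signature is illustrative:

```python
import numpy as np

def baraff_witkin_step(x_n, v_n, M, f, K, h):
    """One semi-implicit (single Newton iteration) step.

    Solves (M - h^2 K) dx = h M v_n + h^2 f(x_n), then sets
    x_{n+1} = x_n + dx and v_{n+1} = dx / h.

    x_n, v_n : flattened position / velocity vectors, shape (dof,)
    M        : mass matrix, shape (dof, dof)
    f        : force vector f(x_n), shape (dof,)
    K        : force Jacobian grad_x f(x_n), shape (dof, dof)
    """
    A = M - h**2 * K
    b = h * (M @ v_n) + h**2 * f
    dx = np.linalg.solve(A, b)      # the "Ax = b" system from the notes
    return x_n + dx, dx / h         # x_{n+1}, v_{n+1}
```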

# Reformulating the Implicit Euler Problem

To reduce the red part (the remaining error), add further steps:

  • Integrating the nonlinear root-finding problem over $x$ gives:

    $$x_{n+1}=\operatorname{argmin}_{x}\left(\frac{1}{2}\left\|x-\left(x_{n}+h v_{n}\right)\right\|_{M}^{2}+h^{2} E(x)\right), \text { given } f(x)=-\nabla_{x} E(x)$$

    Note: (matrix norm) $\| x\| _A = \sqrt{x^{\mathrm{T}}Ax}$; (vector derivative) $\nabla_x (x^{\mathrm{T}}Ax) = (A +A^{\mathrm{T}})x$
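
As a quick check, combining the two identities above (with $M$ symmetric and $a = x_n + h v_n$):

$$\nabla_{x}\left(\frac{1}{2}\|x-a\|_{M}^{2}\right)=\nabla_{x}\left(\frac{1}{2}(x-a)^{\mathrm{T}} M(x-a)\right)=\frac{1}{2}\left(M+M^{\mathrm{T}}\right)(x-a)=M(x-a),$$

which is exactly the first term of the gradient used below.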

# Minimization / Non-linear Root-finding

  • Let $g(x) =\frac{1}{2}\left\|x-\left(x_{n}+h v_{n}\right)\right\|_{M}^{2}+h^{2} E(x)$ (the mass matrix is usually diagonal/lumped, so it equals its own transpose)

  • Then $\nabla_{x} g\left(x_{n+1}\right)=M\left(x_{n+1}-\left(x_{n}+h v_{n}\right)\right)-h^{2} f\left(x_{n+1}\right)$ (the gradient of the energy w.r.t. position is the direction of fastest energy increase; its negative is the force)

  • For nonsingular $M$ (at an extremum of $g$, the gradient of $g$ vanishes, so the expression above equals zero):

    $\nabla_{x} g\left(x_{n+1}\right)=0 \Leftrightarrow x_{n+1}=\left(x_{n}+h v_{n}\right)+h^{2} M^{-1} f\left(x_{n+1}\right)$ (exactly the root of the implicit Euler update)

# Convex Minimization: $\min g(x)$

The general descent method:

image-20211125172532811
```python
def minimize_g():
    x = x_0                            # initial guess
    while grad_g(x).norm() > EPSILON:  # not yet converged
        dx = ...                       # determine a descent direction
        t = ...                        # line search: choose a step size t > 0
        x = x + t * dx                 # update
    return x
```
  • Determine a descent direction $\mathrm{d}x$:

    • Opt 1: Gradient Descent: $\mathrm{d}x = -\nabla_x g(x)$ (use the intersection point with the x-axis)

      image-20211125172740172
    • Opt 2: Newton’s Method: $\mathrm{d} x=-\left(\nabla_{x}^{2} g(x)\right)^{-1} \nabla_{x} g(x)$ (use the second-order derivative as curvature: approximate $g$ by a local quadratic and jump toward its valley, which effectively also decides the step; usually used when the second-order derivative is available)

      image-20211125173055239
  • Find the step size:

    • Line search: choose a step size $t>0$ given a direction $\mathrm{d}x$ (a toy end-to-end example follows this block)

    • Backtracking:

```python
def line_search(x, dx, t_0=1.0, alpha=0.03, beta=0.5):
    t = t_0
    # Backtrack while the sufficient-decrease condition fails, i.e. while
    # g(x + t*dx) still lies above the line g(x) + alpha*t*grad_g(x).dot(dx)
    # (alpha tilts this test line between the tangent and the horizontal line alpha = 0).
    while g(x) + alpha * t * grad_g(x).dot(dx) < g(x + t * dx):
        t = t * beta
    return t
```
      
      image-20211125212312113

      $\alpha \in (0, 0.5)$, $\beta \in (0,1)$ (common choice: $\alpha = 0.03$ and $\beta = 0.5$)
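
A self-contained toy example that puts the descent loop and the backtracking line search together on a convex quadratic; the matrix, tolerance, and parameter values are illustrative choices, not from the lecture code:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD -> a convex quadratic g
b = np.array([1.0, -2.0])

def g(x):       return 0.5 * x @ A @ x - b @ x
def grad_g(x):  return A @ x - b

def line_search(x, dx, alpha=0.03, beta=0.5, t0=1.0):
    """Backtracking: shrink t until the sufficient-decrease condition holds."""
    t = t0
    while g(x + t * dx) > g(x) + alpha * t * grad_g(x) @ dx:
        t *= beta
    return t

x = np.zeros(2)
while np.linalg.norm(grad_g(x)) > 1e-8:
    dx = -grad_g(x)             # Option 1: gradient descent direction
    t = line_search(x, dx)
    x = x + t * dx

print(x, np.linalg.solve(A, b))  # both should match the minimizer A^{-1} b
```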

Problem

But most deformable bodies have non-convex energies, which admit unstable equilibria.

image-20211125213834479

# Steps

$g(x)$ contains two terms: a dynamics (inertia) term and an elastic term

$$g(x)=\frac{1}{2}\left\|x-\left(x_{n}+h v_{n}\right)\right\|_{M}^{2}+h^{2} E(x)$$

  1. Initial guess: $x = x_n$ or $x = x_n + hv_n$
  2. While loop: while not converged:
    • Descent direction: $\mathrm{d}x = -\nabla_x g(x)$ or $\mathrm{d}x = -(\nabla_x^2 g(x))^{-1} \nabla_x g(x)$
      • Gradient: $\nabla_{x} g(x)=M\left(x-\left(x_{n}+h v_{n}\right)\right)+h^{2} \nabla_{x} E(x)$ (linear + non-linear terms; model dependent)
      • Hessian (matrix): $\nabla_x^2 g(x) = M + h^2\nabla^2_x E(x)$ (model dependent)
    • Line search: determine the step size $t$
    • Update: $x = x + t\cdot \mathrm{d}x$ (a NumPy sketch of this loop follows the list)
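
A minimal NumPy sketch of these steps, assuming model-dependent callbacks `grad_E(x)` and `hess_E(x)` are supplied (hypothetical names); line search and the definiteness fix from the next section are omitted for brevity:

```python
import numpy as np

def implicit_euler_step(x_n, v_n, M, h, grad_E, hess_E, tol=1e-8, max_iters=50):
    """Minimize g(x) = 0.5 ||x - (x_n + h v_n)||_M^2 + h^2 E(x) with Newton steps."""
    y = x_n + h * v_n                  # target of the inertia term
    x = y.copy()                       # step 1: initial guess
    for _ in range(max_iters):         # step 2: while not converged
        grad_g = M @ (x - y) + h**2 * grad_E(x)     # gradient (model dependent)
        if np.linalg.norm(grad_g) < tol:
            break
        H = M + h**2 * hess_E(x)                    # Hessian (model dependent)
        dx = -np.linalg.solve(H, grad_g)            # Newton direction
        x = x + dx                                  # update, step size t = 1 here
    return x, (x - x_n) / h            # x_{n+1}, v_{n+1}
```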

# Definiteness-fixed Newton’s Method

For general cases:

In Step 2 (the while loop), after computing the gradient direction and the Hessian, add another sub-step:

  • Fix the Hessian to be positive definite: $\tilde{H} = \mathrm{fix}(H)$

# Definiteness-fix

$\tilde{H} = \mathrm{fix}(H)$

  • Opt 1: Global Regularization (a NumPy sketch of both options appears after this list)

    • Init: $\tilde{H} = H$, flag = False, reg = 1
    • while not flag:
      • flag, L = factorize($\tilde{H}$): try to factorize $\tilde{H} = LL^{\mathrm{T}}$ (if it succeeds, we are done; otherwise add a scaled identity so $\tilde{H}$ tends toward a definite matrix)
      • $\tilde{H} = H + \mathrm{reg} \cdot I$, reg = reg * 10
  • Opt 2: Local Regularization (in a mass-spring system)

    image-20211125232625721

    $$\nabla_{x}^{2} E(x)=\left[\begin{array}{cccc} K_{1} & -K_{1} & & \\ -K_{1} & K_{1}+K_{2}+K_{3} & -K_{3} & -K_{2} \\ & -K_{3} & K_{3} & \\ & -K_{2} & & K_{2} \end{array}\right]$$

    The corresponding per-spring blocks are

    $$\left[\begin{array}{cc} K_{1} & -K_{1} \\ -K_{1} & K_{1} \end{array}\right],\quad \left[\begin{array}{cc} K_{2} & -K_{2} \\ -K_{2} & K_{2} \end{array}\right],\quad \left[\begin{array}{cc} K_{3} & -K_{3} \\ -K_{3} & K_{3} \end{array}\right]$$

    • $\nabla_{x}^{2} g(x)=M+h^{2} \nabla_{x}^{2} E(x)=M+h^{2} \sum_{j=1}^{m} \nabla_{x}^{2} E_{j}(x)$

      • $K_{1} \geqslant 0 \Rightarrow\left[\begin{array}{cc} K_{1} & -K_{1} \\ -K_{1} & K_{1} \end{array}\right]=K_{1} \otimes\left[\begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array}\right] \geqslant 0$ (the Kronecker product of a positive semi-definite matrix with another positive semi-definite matrix is again positive semi-definite)
      • $\Rightarrow \nabla_{x}^{2} E(x)=\nabla_{x}^{2} E_{1}(x)+\nabla_{x}^{2} E_{2}(x)+\nabla_{x}^{2} E_{3}(x) \geqslant 0$
      • $\Rightarrow \nabla_{x}^{2} g(x)=M+h^{2} \nabla_{x}^{2} E(x)>0$

      So a sufficient condition is: $K_1 \geqslant 0,\ K_2 \geqslant 0,\ K_3 \geqslant 0$

    • $K_{1}=k_{1}\left(I-\frac{l_{1}}{\left\|x_{1}-x_{2}\right\|}\left(I-\frac{\left(x_{1}-x_{2}\right)\left(x_{1}-x_{2}\right)^{T}}{\left\|x_{1}-x_{2}\right\|^{2}}\right)\right) \in \mathbb{R}^{2 \times 2}$

      • $K_1 = Q \Lambda Q^{\mathrm{T}}$ (eigenvalue decomposition)
      • $\tilde \Lambda = \max(0,\Lambda)$ (clamp the negative eigenvalues)
      • $\tilde{K_1} = Q\tilde{\Lambda} Q^{\mathrm{T}}$ (construct the p.s.d. projection)
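
Hedged NumPy sketches of both fixes; `np.linalg.cholesky` and `np.linalg.eigh` stand in for whatever factorization and eigensolver a real implementation would use:

```python
import numpy as np

def fix_global(H, reg0=1.0):
    """Option 1: add reg * I until a Cholesky factorization succeeds."""
    H_tilde, reg = H.copy(), reg0
    while True:
        try:
            L = np.linalg.cholesky(H_tilde)   # succeeds only if H_tilde > 0
            return H_tilde, L
        except np.linalg.LinAlgError:
            H_tilde = H + reg * np.eye(H.shape[0])
            reg *= 10.0

def project_psd(K):
    """Option 2 (per element): clamp negative eigenvalues of a small block."""
    lam, Q = np.linalg.eigh(K)                # K = Q diag(lam) Q^T
    lam_clamped = np.maximum(lam, 0.0)        # drop the negative curvature
    return Q @ np.diag(lam_clamped) @ Q.T     # p.s.d. projection of K
```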

After this definiteness-fix, Newton's method proceeds as usual:

image-20211126011246122

# Linear Solvers

# Linear Solvers for Ax=bAx = b

  • Direct solvers

    • Inversion: $x = A^{-1} b$ (high complexity for sparse and high-order matrices; the inverse of a sparse matrix is generally dense)

    • Factorization (usually better especially for sparse matrices):

      $$A=\left\{\begin{array}{cl} L U & \text {, if } A \text { is a square matrix } \\ L D L^{\mathrm{T}} & \text {, if } A=A^{\mathrm{T}} \\ L L^{\mathrm{T}} & \text {, if } A=A^{\mathrm{T}} \text { and } A>0 \end{array}\right.$$

  • Iterative solvers

    • Stationary iterative linear solvers: Jacobi / Gauss-Seidel / SOR / Multigrid
    • Krylov subspace methods: Conjugate Gradient (CG) / biCG / CR / MinRes / GMRes

# Factorization

For $A = L L^{\mathrm{T}}$ (assuming $A$ has already been made positive definite, e.g. via the p.s.d. projection)

  • Solving $Ax = b$ is equivalent to solving $LL^{\mathrm{T}}x = b$

  • First solve $Ly = b$ (forward substitution)

  • Then solve $L^{\mathrm{T}} x = y$ (backward substitution); a small sketch of both substitutions follows below

    For sparse matrices the complexity is not high, but the substitutions are hard to parallelize (CPU backend only).
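
A small sketch of the two triangular solves, assuming a dense lower-triangular `L` for clarity (a sparse implementation follows the same structure but skips the zeros):

```python
import numpy as np

def solve_cholesky(L, b):
    """Solve L L^T x = b via forward then backward substitution."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in reversed(range(n)):            # backward: L^T x = y
        x[i] = (y[i] - L[i+1:, i] @ x[i+1:]) / L[i, i]
    return x
```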

Current Taichi APIs (a usage sketch follows this list):

  • [SparseMatrixBuilder] = ti.linalg.SparseMatrixBuilder()
  • [SparseMatrix] = [SparseMatrixBuilder].build()
  • [SparseMatrixSolver] = ti.linalg.SparseSolver(solver_type, ordering)
  • [NumpyArray] = [SparseMatrixSolver].solve([Field])
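
A usage sketch of these APIs on a tiny SPD system; exact argument names such as `max_num_triplets` and `solver_type` may vary between Taichi versions, so treat the details as assumptions:

```python
import taichi as ti
ti.init(arch=ti.cpu)            # the sparse direct solver runs on the CPU backend

n = 4
K_builder = ti.linalg.SparseMatrixBuilder(n, n, max_num_triplets=16)
b = ti.field(ti.f32, shape=n)

@ti.kernel
def fill(A: ti.types.sparse_matrix_builder()):
    for i in range(n):
        A[i, i] += 2.0          # assemble a simple SPD tridiagonal matrix
        if i + 1 < n:
            A[i, i + 1] += -1.0
            A[i + 1, i] += -1.0
        b[i] = 1.0

fill(K_builder)
A = K_builder.build()                               # [SparseMatrix]
solver = ti.linalg.SparseSolver(solver_type="LLT")  # Cholesky, since A = A^T > 0
solver.analyze_pattern(A)
solver.factorize(A)
x = solver.solve(b)                                 # returns a NumPy array
```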

# Conjugate Gradient (CG) Method

Properties:

  • Works for any symmetric positive definite matrix: $A = A^{\mathrm{T}},\ A >0$
  • Guaranteed to converge in at most $n$ iterations for $A\in\mathbb{R}^{n\times n}$ (in exact arithmetic)
  • Works amazingly well if the condition number $\kappa = \lambda_{\max} / \lambda_{\min}$ (ratio of extreme eigenvalues) of $A$ is small ($\kappa = 1$ for the identity matrix)
  • Short code

Demo: (Python scope - control flow)

```python
def conjugate_gradient(A, b, x, i_max=100, epsilon=1e-6):
    # i_max and epsilon (iteration cap, relative tolerance) are illustrative defaults.
    i = 0
    r = b - A @ x                      # initial residual
    d = r                              # initial search direction
    delta_new = r.dot(r)
    delta_0 = delta_new
    while i < i_max and delta_new / delta_0 > epsilon**2:
        q = A @ d                      # matrix-vector multiplication -> expensive
        alpha = delta_new / d.dot(q)   # dot product -> expensive
        x = x + alpha * d
        r = b - A @ x                  # or incrementally: r = r - alpha * q
        delta_old = delta_new
        delta_new = r.dot(r)
        beta = delta_new / delta_old
        d = r + beta * d
        i = i + 1
    return x
```
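
A quick sanity check of the routine above on a small SPD system (illustrative values):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])     # symmetric positive definite
b = np.array([1.0, 2.0])
x0 = np.zeros(2)

x = conjugate_gradient(A, b, x0)
print(x, np.linalg.solve(A, b))            # the two results should agree
```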

# Accelerate the CG Method

  • Reduce the cost of the sparse matrix-vector multiplication q = A @ d (piece-wise multiplication; a matrix-free sketch appears at the end of this section):

    $$A =M +h^{2} \sum_{j=1}^{m} \nabla_{x}^{2} E_{j}(x) \quad \Rightarrow \quad A d=M d+h^{2} \sum_{j=1}^{m} \nabla_{x}^{2} E_{j}(x)\, d$$

    Use @ti.kernel for this computation.

    (Taichi enables thread local storage automatically for this reduction problem) (Taichi TLS / CUDA Reduction Guide)

  • Reduce the condition number of $A$:

    The error has the upper bound

    $$\left\|e_{i}\right\|_{A} \leq 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{i}\left\|e_{0}\right\|_{A}$$

  • Preconditioning: instead of solving $Ax = b$, solve $M^{-1} Ax = M^{-1} b$ (choose the preconditioner $M\sim A$ as much as possible)

    • Jacobi: $M = \mathrm{diag}(A)$
    • Incomplete Cholesky: $M = \tilde{L}\tilde{L}^{\mathrm{T}}$
    • Multigrid
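
A matrix-free NumPy sketch of the `q = A @ d` product for the mass-spring case (the lecture does this inside a `@ti.kernel` so the loop parallelizes and the reductions use TLS); the data layout and helper name are illustrative:

```python
import numpy as np

def matrix_free_Ad(d, masses, springs, spring_hessians, h):
    """Compute A d = M d + h^2 * sum_j (grad^2 E_j) d without assembling A.

    d               : flattened search direction, shape (2 * n_particles,)
    masses          : per-particle masses, shape (n_particles,)
    springs         : list of (i, j) particle index pairs
    spring_hessians : per-spring 2x2 blocks K_j (already p.s.d.-projected)
    """
    q = np.repeat(masses, 2) * d                 # M d (lumped, diagonal mass)
    for (i, j), K in zip(springs, spring_hessians):
        di, dj = d[2*i:2*i+2], d[2*j:2*j+2]
        # The per-spring block [[K, -K], [-K, K]] applied to (di, dj):
        q[2*i:2*i+2] += h**2 * (K @ (di - dj))
        q[2*j:2*j+2] += h**2 * (K @ (dj - di))
    return q
```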