
SCULPTOR: Skeleton-Consistent Face Creation Using a Learned Parametric Generator


Tags: Model · Parametric · Skull · Face · Anatomically Consistent · Shape · Pose · Expression · Appearance · Trait
[Figure: comparison of SCULPTOR (ours) against prior parametric face models — Madsen et al. 2018, Gruber et al. 2020, Ichim et al. 2017, Li et al. 2020, Li et al. 2017]

4. Building LUCY

4.1. Data Acquisition and Original Usage

  • 72 individual subjects' head CT image pairs (pre- and post-surgery)
    • image spatial resolution: $0.48 \times 0.48 \times 1\ \text{mm}^3$
  • multi-view face appearance scans

4.2. Data Labeling

  • raw CT data – specialists segment it with thresholding and morphological operations → separated mandible volume, maxilla volume, and the facial outer surface
  • apply ICP[1] to align the multi-view scans to the facial soft tissue captured in CT
  • 29 skeleton and 15 face-surface semantic landmarks for model registration
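The ICP alignment step can be sketched as follows — a minimal, brute-force NumPy version of Besl & McKay's algorithm with illustrative array names, not the authors' implementation:

```python
# Minimal rigid ICP sketch (Besl & McKay 1992). `src`/`dst` are (N, 3)
# point arrays; names and iteration count are illustrative choices.
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, for clarity)
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=-1)
        matched = dst[dists.argmin(axis=1)]
        # best rigid fit to the current correspondences, then re-match
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

In the paper the target is the facial soft-tissue surface extracted from CT; here any reasonably close point cloud works.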

5. SCULPTOR Model

5.1. Model Formulation

  • $\mathcal{G}$ — geometry for both skeleton and face
  • $\mathcal{A}$ — face appearance
  • $\theta$ — pose parameters (PCA coefficient vector of the pose space)
  • $\beta$ — shape parameters
  • $\gamma$ — trait parameters
  • $\phi$ — expression parameters
  • $\alpha$ — appearance parameters
  • $LBS(\cdot)$ — Linear Blend Skinning (LBS) function
  • $\mathcal{W}$ — learned skinning weights (for LBS)
  • $\mathbf{T}_p$ — person-specific head mesh with variation over the general template $\overline{\mathbf{T}}$
  • $\overline{\mathbf{T}}$ — general head template (outer surface + mandible + maxilla)
  • $J_p$ — anatomical joint locations for the jaws
  • $\mathcal{J}$ — a sparse matrix that computes the joint locations from the personalized skull vertices, with shape $B_S$ and trait $B_D$ components (defined by experienced surgeons)
  • $B_*$ — blendshape components (shape $B_S$, trait $B_D$, pose $B_P$, expression $B_E$)

$$\mathcal{M}(\theta, \beta, \gamma, \phi, \alpha) = \{\mathcal{G}(\theta, \beta, \gamma, \phi),\ \mathcal{A}(\alpha)\}$$

$$\mathcal{G}(\theta, \beta, \gamma, \phi) = LBS(\mathcal{W}, J_p(\beta, \gamma), \mathbf{T}_p(\beta, \gamma, \theta, \phi))$$

$$J_p(\beta, \gamma) = \mathcal{J}\left(\overline{\mathbf{T}} + B_S(\beta; \mathcal{S}) + B_D(\gamma; \mathcal{D})\right)$$

$$\mathbf{T}_p(\beta, \gamma, \theta, \phi) = \overline{\mathbf{T}} + B_S(\beta; \mathcal{S}) + B_D(\gamma; \mathcal{D}) + B_P(\theta; \mathcal{P}) + B_E(\phi; \mathcal{E})$$

$$\overline{\mathbf{T}} = \{\overline{\mathbf{T}}_{mdb},\ \overline{\mathbf{T}}_{mxl},\ \overline{\mathbf{T}}_f\}$$

$$B_D(\gamma; \mathcal{D}) = \mathcal{D}\gamma$$
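The composition above can be illustrated with a toy NumPy sketch. All bases (`T_bar`, `S`, `D`, `P`, `E`) and the joint regressor `J` below are random stand-ins for the learned quantities, the sizes are made up, and the final LBS step is omitted:

```python
# Illustrative sketch of the SCULPTOR geometry composition (toy sizes).
# The bases and regressor are random stand-ins; in the real model they
# are learned from the LUCY and FaceScape datasets.
import numpy as np

V = 100                                  # number of mesh vertices (toy)
T_bar = np.random.rand(V, 3)             # general template
S = np.random.rand(10, V, 3)             # shape basis  (10 components)
D = np.random.rand(5, V, 3)              # trait basis  (5 components)
P = np.random.rand(4, V, 3)              # pose-corrective basis
E = np.random.rand(8, V, 3)              # expression basis
J = np.random.rand(2, V)                 # joint regressor (2 jaw joints)

def personalized_template(beta, gamma, theta, phi):
    """T_p = T_bar + B_S(beta) + B_D(gamma) + B_P(theta) + B_E(phi)."""
    return (T_bar
            + np.tensordot(beta, S, axes=1)
            + np.tensordot(gamma, D, axes=1)
            + np.tensordot(theta, P, axes=1)
            + np.tensordot(phi, E, axes=1))

def joint_locations(beta, gamma):
    """J_p: jaw joints regressed from the personalized skull shape."""
    skull = T_bar + np.tensordot(beta, S, axes=1) + np.tensordot(gamma, D, axes=1)
    return J @ skull                     # (2, 3) joint positions
```

The final geometry would then be `LBS(W, joint_locations(...), personalized_template(...))` with learned skinning weights `W`.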

5.2. Registration

Registration on skull

  1. the skull template $\overline{\mathbf{T}}_S = \{\overline{\mathbf{T}}_{mdb}, \overline{\mathbf{T}}_{mxl}\}$ and the CT skull $\mathbf{C}_S$ are roughly aligned using Procrustes rigid alignment on landmark correspondences

  2. use embedded deformation to recover skull details

    1. sample control nodes $x \in \mathcal{N}$ on the template surface with interval $\sigma$

    2. $$v' = \sum_{x \in \mathcal{N}} w(x, v)\, M_x v$$

      • $M_x$ — transformation of node $x$
      • $w(\cdot)$ — influence weight of node $x$ on vertex $v$ (a Radial Basis Function[2])
      • $CD(\cdot)$ — Chamfer Distance between two meshes
      • $CD_n(\cdot)$ — normal penalty: the angle between corresponding vertex normals

      $$E_{rskull} = E_d + \lambda_l E_{lmk} + \lambda_r E_{reg}$$

      $$E_d = \lambda_d\, CD(\overline{\mathbf{T}}_S', \mathbf{C}_S) + (1 - \lambda_d)\, CD_n(\overline{\mathbf{T}}_S', \mathbf{C}_S)$$
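The data term can be sketched as below — a minimal version assuming brute-force nearest-vertex matching and mean aggregation; the paper's exact Chamfer formulation and weighting may differ:

```python
# Sketch of the data term E_d: symmetric Chamfer distance plus a normal
# penalty between nearest-vertex pairs. A, B are (N, 3) vertex arrays;
# nA, nB are matching (N, 3) unit normals. Brute force, for clarity.
import numpy as np

def chamfer(A, B):
    """Symmetric Chamfer distance: mean nearest-neighbour gap, both ways."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def chamfer_normals(A, nA, B, nB):
    """Mean angle between normals of nearest-vertex pairs (one direction)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    nn = d.argmin(axis=1)
    cos = np.clip((nA * nB[nn]).sum(axis=1), -1.0, 1.0)
    return np.arccos(cos).mean()

def E_d(A, nA, B, nB, lam_d=0.8):
    """lam_d * CD + (1 - lam_d) * CD_n, as in the skull registration energy."""
    return lam_d * chamfer(A, B) + (1.0 - lam_d) * chamfer_normals(A, nA, B, nB)
```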

Registration on face

$$E_{rface} = E_d(\overline{\mathbf{T}}_f, \mathbf{C}_f) + \lambda_l E_{lmk} + \lambda_{lap} E_{lap}$$
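The Laplacian term $E_{lap}$ is not spelled out in these notes; a common uniform-Laplacian smoothness choice can be sketched as follows (the `edges` connectivity array and the exact weighting are assumptions, not the paper's definition):

```python
# Sketch of a uniform Laplacian smoothness term E_lap for face registration.
# V is an (n, 3) vertex array; `edges` is an (E, 2) array of vertex indices.
import numpy as np

def laplacian_energy(V, edges):
    """Sum over vertices of || v_i - mean(neighbours of v_i) ||^2."""
    n = len(V)
    deg = np.zeros(n)
    nb_sum = np.zeros_like(V)
    for i, j in edges:
        deg[i] += 1; deg[j] += 1
        nb_sum[i] += V[j]; nb_sum[j] += V[i]
    lap = V - nb_sum / np.maximum(deg, 1)[:, None]
    return (lap ** 2).sum()
```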

5.3. Parameter Learning

  • train model parameters — $\{\overline{\mathbf{T}}, \mathcal{S}, \mathcal{D}, \mathcal{W}, \mathcal{P}\}$

Learning on LUCY

  • train $\{\overline{\mathbf{T}}, \mathcal{S}, \mathcal{D}\}$
  • compute $\mathcal{D}$ by performing PCA on the vertex offsets between pre- and post-surgery data, $d_i = \mathbf{T}_{post}^i - \mathbf{T}_{pre}^i$, to model the trait component
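This PCA step can be sketched as follows; the subject count, mesh size, and number of kept components are toy values, and mean-centering the offsets before the decomposition is an assumption:

```python
# Sketch of building the trait basis D: PCA on pre-/post-surgery vertex
# offsets d_i = T_post^i - T_pre^i. Toy sizes; real data comes from LUCY.
import numpy as np

def trait_basis(T_pre, T_post, k=5):
    """T_pre, T_post: (n_subjects, V, 3) registered meshes -> (k, V*3) basis."""
    offsets = (T_post - T_pre).reshape(len(T_pre), -1)   # d_i, flattened
    offsets = offsets - offsets.mean(axis=0)             # center before PCA
    _, _, Vt = np.linalg.svd(offsets, full_matrices=False)
    return Vt[:k]                                        # top-k principal dirs
```

A trait offset for coefficients `gamma` is then just `gamma @ trait_basis(...)`, matching $B_D(\gamma; \mathcal{D}) = \mathcal{D}\gamma$.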

Learning on FaceScape

  • train $\{\mathcal{W}, \mathcal{P}\}$

Optimization Summary

  • alternately optimize the parameter groups on the two datasets

Notes


  1. P.J. Besl and Neil D. McKay. 1992. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence 14, 2 (1992), 239–256. https://doi.org/10.1109/34.121791 ↩︎

  2. Taehyun Rhee, J.P. Lewis, Ulrich Neumann, and Krishna Nayak. 2007. Soft-Tissue Deformation for In Vivo Volume Animation. In 15th Pacific Conference on Computer Graphics and Applications (PG'07). 435–438. https://doi.org/10.1109/PG.2007.46 ↩︎