Face Cloning for Animation
 

Background:

Generation of realistic-looking, animated human face models is one of the most interesting and challenging tasks in computer graphics today, even more so when life is to be breathed into digitized versions of real individuals. However, most existing modeling strategies are based on generic facial models that only depict the shape and features of an average human face. Since a hallmark of people's individuality is the range of variation in the shape of their faces, an animation that fails to reproduce this diversity deprives its characters of independent identities. Animating a scene realistically, or playing out a virtual interaction believably, requires reconstructing the face of a specific person, i.e. cloning a real person's face.
Existing Problems in Face Reconstruction from Range Data:

In the task of modeling the human faces of real individuals for animation, we are usually confronted with two conflicting goals: one is the requirement for accurate reproduction of the face shape; the other is the demand for an efficient representation that can be animated easily and quickly. The goal of face cloning calls for models that are based on real measurements of the structures of the human face. Current technology allows us to acquire the precise 3D geometry of a face easily using a range scanning device, and 3D models reconstructed automatically from range data can bear a very good resemblance to specific persons, especially if they are properly textured. In practice, though, a number of obstacles prevent the acquired geometry from being used directly to reconstruct animatable facial models:

  • absence of a functional structure for animation;
  • irregular and dense surface data that cannot be used for optimal animatable-model construction and real-time animation;
  • incomplete data due to projector/camera shadowing effects or poor reflective properties of the surface.
Our Method:

To address these problems, we propose a new Structure-Driven Adaptation (SDA) method for the efficient reconstruction of animated 3D faces of real human individuals. The technique is based on adapting a prototype generic facial model to the acquired surface data in an "outside-in" manner: deformation applied to the external skin layer is propagated, along with the subsequent transformations, to the muscles, with the final effect of morphing the underlying skull. The generic control model has a known topology and incorporates an anatomy-based layered structure hierarchy of physically-based skin, muscles, and skull. What is unique about our approach is that the layered representation is utilized not only to produce appropriate skin deformations during animation, but also to generate the model itself.

Geometry and texture information of the faces of real individuals is acquired using a laser range scanner. Starting with the interactive specification of a set of anthropometric landmarks on the generic control model and the scanned surface, a global alignment automatically adapts the position, size, and orientation of the generic control model to align it with the scan data, based on a series of measurements between a subset of landmarks. The physically-based face shape adaptation then fits the positions of all vertices of the generic control model to the scanned surface. The generic mesh is modeled as a dynamic deformable surface: deformation of the mesh results from the action of internal forces, which impose surface continuity constraints, and external forces, which attract the surface so that it fits the data.

We incorporate the effect of structural differences in muscles and skull, both to generate and to animate the model. SDA transfers the muscles to the new geometry of the skin surface. A set of skull feature points is then automatically generated from the new external structural layers by SDA. These feature points are used to deform the attached skull mesh representation using a volume morphing approach. With the adapted muscle and skull structures, the reconstructed model can be animated immediately to generate various expressions using the given muscle and jaw motion parameters.
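To give a flavor of the global alignment step described above, the following is a minimal sketch, assuming the specified landmarks on the generic model and on the scan have been put into one-to-one correspondence (a simplification of the measurement-based alignment). It recovers a least-squares similarity transform (scale, rotation, translation) with the closed-form solution of Umeyama (1991); all function and variable names are ours, not part of the original system.

```python
import numpy as np

def similarity_align(src, dst):
    """Closed-form least-squares similarity transform (Umeyama, 1991)
    mapping landmark set src (n x 3) onto dst (n x 3).
    Returns scale s, rotation R (3 x 3), and translation t (3,)."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / n                    # 3 x 3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                           # guard against a reflection
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / n             # variance of the source set
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Hypothetical usage: align four generic-model landmarks to scan landmarks,
# then apply the recovered transform to every generic-model vertex.
model_lm = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
scan_lm = 2.0 * model_lm + np.array([5.0, -1.0, 3.0])   # synthetic target
s, R, t = similarity_align(model_lm, scan_lm)
aligned = s * model_lm @ R.T + t                        # ~= scan_lm
```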

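The shape-adaptation stage can likewise be pictured as an explicit relaxation in which every control-mesh vertex is driven by a data-attraction (external) force and a smoothing (internal) force. The sketch below is a simplified stand-in for the actual dynamic deformable surface: a discrete umbrella operator approximates the continuity constraints, the nearest scan sample approximates the external attraction, and the force weights, iteration count, and names are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def adapt_mesh(verts, neighbors, scan_pts, alpha=0.5, beta=0.3, iters=100):
    """Relax control-mesh vertices toward the scan surface.
    verts:     (n, 3) vertex positions of the aligned generic mesh
    neighbors: list of 1-ring neighbor index lists, one per vertex
    scan_pts:  (m, 3) scanned surface samples"""
    tree = cKDTree(scan_pts)                     # for closest-point queries
    v = verts.copy()
    for _ in range(iters):
        _, idx = tree.query(v)                   # nearest scan sample per vertex
        external = scan_pts[idx] - v             # pulls vertices onto the data
        internal = np.array([v[nbrs].mean(axis=0) - v[i]       # umbrella operator
                             for i, nbrs in enumerate(neighbors)])
        v += alpha * external + beta * internal  # explicit integration step
    return v
```

Balancing alpha against beta trades fitting accuracy for mesh regularity; the actual system resolves this through the physics of the deformable surface rather than fixed weights.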
Animating this adapted low-resolution control mesh is computationally efficient, while the reconstruction of high-resolution surface detail on the animated control model is controlled separately. A scalar displacement map represents the detail of the high-resolution geometry, providing an efficient representation of the surface shape and allowing control over the level of detail. We develop an offset-envelope mapping method that automatically generates a displacement map by mapping the scan data onto the low-resolution control mesh. A hierarchical representation of the model is then constructed to approximate the scanned data set with increasing accuracy, refining the surface with a triangular mesh subdivision scheme together with resampling of the displacement map. This mechanism enables efficient and seamless animation of the high-resolution human face geometry through animation control over the adapted control model.
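The displacement-map idea can be sketched in a few lines: each sample on the low-resolution control surface stores a signed scalar offset to the scan data, measured along the local surface normal, and refinement re-applies resampled offsets to the subdivided vertices. This is a simplified approximation of the offset-envelope mapping (it uses the nearest scan sample rather than an envelope intersection), and all names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def scalar_displacements(base_verts, base_normals, scan_pts):
    """Signed scalar offset from each control-mesh sample to the scan data,
    measured along the unit surface normal at that sample."""
    tree = cKDTree(scan_pts)
    _, idx = tree.query(base_verts)                       # nearest scan sample
    offsets = scan_pts[idx] - base_verts                  # 3D offset vectors
    return np.einsum('ij,ij->i', offsets, base_normals)  # project onto normals

def apply_displacements(verts, normals, d):
    """Reconstruct detail on a (possibly subdivided) mesh by moving each
    vertex along its normal by the resampled scalar displacement d."""
    return verts + d[:, None] * normals
```

Because only a scalar per sample is stored, the map resamples cheaply at each subdivision level, which is what makes the level-of-detail control inexpensive.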

 
 
Last Updated:
2007-06-11