3D Craniofacial Reconstruction

In forensic pathology and archeology, it is often desirable to determine an individual's appearance prior to death for identification purposes. Analysis of the skull is the primary means of accomplishing this task, through reconstruction of facial features based on their known relation to skull features. Facial reconstruction in itself is not a means of positive identification: the likenesses created are publicized in the hope of generating leads that would ultimately confirm the identity of the individual in question.

Current methods of estimating an individual's appearance from his or her skull can be extremely time-consuming, and often result in only a single facial estimate. This single likeness is not necessarily the best estimate of the person's appearance, because of the difficulty of determining body fat content prior to death and of ascertaining the appearance of features such as the nose, lips and eyes. In fact, current success rates for identification resulting from facial reconstruction are on the order of 50%.

A software application that can produce a three-dimensional facial reconstruction of an individual would benefit law enforcement agencies by allowing faster, easier and more efficient generation of multiple representations of that individual. These varied likenesses would account for differences in facial appearance due to body fat content, and for facial features such as the eyes, nose and lips, which are difficult to reconstruct accurately from cranial information alone. The goal is to improve success rates for victim or suspect identification: the assumption is that one of the multiple likenesses will closely approximate the individual's actual appearance, facilitating recognition by family, friends or acquaintances.

A prototype application has been developed which makes use of current 3D graphics technology to produce a three-dimensional reconstruction of the head.

A 3D digitized model of the skull is read in by the application. Dowels that simulate tissue thickness are then interactively placed and oriented by the user.

Figure 1: Digitized Skull

Figure 2: Digitized Skull With Dowels
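
As a rough illustration of what each dowel needs to capture, the sketch below represents a dowel as an anchor point on the skull mesh, an outward orientation, and a tissue-depth length. The class and field names are hypothetical and are not taken from the prototype.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Dowel:
    """One tissue-depth marker placed interactively on the skull mesh
    (a hypothetical representation, not the prototype's actual data structure)."""
    anchor: np.ndarray     # point on the skull surface where the dowel is attached
    direction: np.ndarray  # unit vector giving the dowel's outward orientation
    length: float          # soft-tissue thickness at this landmark, in millimetres

    def tip(self) -> np.ndarray:
        """The point in space that the fitted facial surface should pass through."""
        return self.anchor + self.length * self.direction

# Example: a dowel placed at a cheekbone landmark (coordinates are illustrative).
direction = np.array([0.94, 0.20, 0.28])
cheek = Dowel(anchor=np.array([42.0, 18.5, 96.0]),
              direction=direction / np.linalg.norm(direction),
              length=7.5)
print(cheek.tip())
```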

A generic facial model, created using hierarchical B-splines, is placed over the skull model. This generic face is then fit to the dowels, resulting in the final facial estimate.

Figure 3: Generic Face Before Fitting

Figure 4: Final Facial Approximation
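
The prototype fits the hierarchical B-spline face to the dowels; the exact fitting procedure is not described here, so the sketch below substitutes a much simpler scheme with the same intent: each dowel tip acts as a target point, and nearby vertices of the generic face are pulled toward it with a Gaussian falloff. It reuses the hypothetical Dowel class from the earlier sketch.

```python
import numpy as np

def fit_face_to_dowels(face_vertices, dowels, radius=25.0):
    """Deform a generic face mesh so it passes close to every dowel tip.

    A simplified stand-in for the prototype's hierarchical B-spline fitting,
    not the actual method: face vertices are pulled toward each dowel tip
    with a weight that falls off with distance.

    face_vertices: (N, 3) array of generic face vertex positions.
    dowels: iterable of Dowel objects (see the earlier sketch).
    """
    fitted = face_vertices.astype(float)
    for dowel in dowels:
        target = dowel.tip()
        offsets = target - fitted                # (N, 3) vectors toward the tip
        dist = np.linalg.norm(offsets, axis=1)   # distance of each vertex to the tip
        weight = np.exp(-(dist / radius) ** 2)   # Gaussian falloff of influence
        fitted = fitted + weight[:, None] * offsets  # nearby vertices move most
    return fitted
```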

Because this is a prototype, many more dowels than the standard set are needed to produce a smooth fit. However, the addition of extra dowels will eventually be automated, so that dowel placement on the skull model will take no more time than dowel placement on an actual physical cast of the skull.
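
One plausible way to automate the extra dowels, sketched below under assumptions not stated in the text, is to place new dowels along the skull's surface normals and interpolate their lengths from the hand-placed landmark dowels by inverse-distance weighting. The function name and scheme are illustrative only, and the Dowel class is the hypothetical one from the earlier sketch.

```python
import numpy as np

def add_interpolated_dowels(skull_points, skull_normals, landmark_dowels, power=2.0):
    """Generate extra dowels at the given skull points, with lengths interpolated
    from the hand-placed landmark dowels by inverse-distance weighting.
    (An assumed scheme; the prototype's eventual automation may differ.)"""
    anchors = np.array([d.anchor for d in landmark_dowels])
    lengths = np.array([d.length for d in landmark_dowels])
    extra = []
    for point, normal in zip(skull_points, skull_normals):
        dist = np.linalg.norm(anchors - point, axis=1)
        weights = 1.0 / np.maximum(dist, 1e-6) ** power
        depth = float(np.sum(weights * lengths) / np.sum(weights))
        extra.append(Dowel(anchor=point,
                           direction=normal / np.linalg.norm(normal),
                           length=depth))
    return extra
```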

The facial surface also remains editable after fitting, so the artist can modify features such as the nose and lips. The lengths of the dowels can be updated automatically to reflect race, gender and body fat content.
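
Updating the dowel lengths in this way amounts to a lookup in a table of tissue depths. The sketch below uses a tiny hypothetical table with placeholder values (not real anthropometric measurements) keyed only by landmark and build; a real table would also be indexed by race and gender, as the text describes.

```python
# Hypothetical tissue-depth table in millimetres; the values are placeholders,
# not real anthropometric data.
TISSUE_DEPTH_MM = {
    "glabella":  {"slender": 4.0, "average": 5.5, "heavy": 7.5},
    "cheekbone": {"slender": 6.0, "average": 7.5, "heavy": 10.0},
    "chin":      {"slender": 7.0, "average": 9.0, "heavy": 12.0},
}

def update_dowel_lengths(dowels_by_landmark, build="average"):
    """Reset each named dowel's length from the table for the chosen build.
    dowels_by_landmark: dict mapping landmark name -> Dowel (earlier sketch)."""
    for name, dowel in dowels_by_landmark.items():
        if name in TISSUE_DEPTH_MM:
            dowel.length = TISSUE_DEPTH_MM[name][build]
```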

Katrina Archer, karcher@acm.org