Better faces in computer animation

This Seattle Times article discusses the problem of rendering realistic human faces in computer animation. The biggest obstacle facing computer animators is known as the "uncanny valley": as animation techniques get better, the tiny remaining errors become more noticeable, which paradoxically makes the face seem less realistic. I've always believed that the answer to the uncanny valley is to digitize the performance of real actors, then use that digitized performance in computer animation. (Some directors hope that computer animation will eliminate the need for prima donna actors, but I suspect that the most cost-effective technique will always involve actors in some form.)

Production companies have been doing full-body motion capture for several years by attaching a few dozen reflective dots to an actor, then using cameras and computers to track those dots. The tracked dots are used to build the digital motion model. But to get past the uncanny valley for digital faces, thousands of points would need to be tracked, and the current mainstream techniques aren't practical at that scale.
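
For context, the core geometric step in marker-based capture is triangulation: each reflective dot shows up as a 2D point in several calibrated cameras, and its 3D position is the point that best projects onto all of those views. Below is a minimal sketch of that step for two cameras using the standard linear (DLT) method; the camera matrices and the marker position are made-up toy values for illustration, not anything from the article or from any particular capture system.

```python
# Minimal sketch: recover a marker's 3D position from its 2D pixel
# coordinates in two calibrated cameras (linear triangulation / DLT).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Return the 3D point seen at pixel x1 by camera P1 and x2 by camera P2.

    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) pixel coordinates.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A (last row of V in the SVD).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # convert from homogeneous to Euclidean coordinates

# Two toy cameras: one at the origin, one shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known marker at (0.2, 0.1, 4.0) into both views, then recover it.
marker = np.array([0.2, 0.1, 4.0, 1.0])
u1 = P1 @ marker; u1 = u1[:2] / u1[2]
u2 = P2 @ marker; u2 = u2[:2] / u2[2]
print(triangulate(P1, P2, u1, u2))  # ~ [0.2, 0.1, 4.0]
```

With a few dozen markers this is cheap; the scaling problem mentioned above is getting thousands of reliably identified points per frame into a pipeline like this in the first place.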

The Seattle Times article reports on the technology of Mova, a Silicon Valley startup. Their solution begins by sponging green fluorescent paint onto a performer's face. They record the actor's performance with a set of cameras, then run software that exploits the imperfections in the sponged-on paint to build a detailed digital model of the face for later use in computer animation. Mova claims their system has sub-millimeter accuracy, which will be necessary to get past the uncanny valley. Their web site has a few movies and Flash presentations, but only a tiny bit of the final product. That little bit looks very good, so if Mova can deliver this kind of animation in quantity, they have an excellent future ahead of them.
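
The article doesn't describe Mova's algorithm, but the general idea behind this kind of markerless capture is that the random speckle in the sponged-on paint gives every small patch of skin a distinctive texture, so the same patch can be located again in the next frame (and in other camera views). Here is a hedged sketch of that patch-matching idea using plain normalized cross-correlation on synthetic data; it is a generic illustration of the concept, not Mova's actual method.

```python
# Sketch of speckle-based patch tracking: a textured patch from one frame is
# found in the next frame by normalized cross-correlation. Frames are synthetic.
import numpy as np

def find_patch(frame, patch):
    """Return (row, col) of the best normalized-cross-correlation match."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(frame.shape[0] - ph + 1):
        for c in range(frame.shape[1] - pw + 1):
            w = frame[r:r + ph, c:c + pw]
            w = (w - w.mean()) / (w.std() + 1e-9)
            score = (p * w).mean()
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(0)
frame0 = rng.random((60, 60))             # stand-in for a speckled face image
frame1 = np.roll(frame0, (2, 3), (0, 1))  # next frame: texture moved 2 px down, 3 right
patch = frame0[20:28, 20:28]              # a small speckle patch to follow
print(find_patch(frame1, patch))          # -> (22, 23)
```

Run over thousands of patches and many cameras per frame, something along these lines could in principle yield the dense, sub-millimeter surface tracking Mova is claiming, which is exactly what the sparse-dot approach can't deliver.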
