Model-Based Visual Communication

Example Results (MPEG-1 video clip of an example result)

These techniques have been applied to several video sequences depicting the head and shoulders of a single person. The motion of the head and face appearing in one video is tracked and used to create a second video, in which a computer-generated face follows the motions of the person in the first video. An example is illustrated in Fig. 2.


Figure 2.  Head tracking:  (a) five frames extracted from a video of M.T.;  (b) the same five frames from a video in which the computer-generated head image follows the motion of M.T.'s head;  (c) the video of (b) with the texture map removed;  (d) a computer-generated video in which S.K.'s head follows the motion of M.T.'s head.

In Fig. 2(a), five frames extracted from a video of M.T. are shown. The same five frames from the computer-generated video are displayed in Fig. 2(b). The fact that the video is computer generated is more apparent in Fig. 2(c), where texture mapping has been disabled. As mentioned in the introduction, a unique feature of this type of video coding is that the person can appear differently at the receiver (decoder) than at the transmitter (encoder). In fact, the computer-generated video appearing at the receiver is not required to look anything like the person at the transmitter. This feature is illustrated in Fig. 2(d), where a graphics model of S.K.'s head moves in synchronization with the video of M.T.
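The decoder-side idea behind this feature can be sketched as follows: only per-frame head-pose parameters are transmitted, and the receiver applies them to whatever 3-D head model it holds, so the same parameter stream can drive either M.T.'s or S.K.'s model. The function names, the simple rigid-motion (Euler angle plus translation) model, and the toy vertex data below are illustrative assumptions, not the actual system implementation.

```python
# Sketch: a model-based decoder applies received pose parameters to a head
# mesh. The rigid-motion model and all names here are illustrative only.
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Compose a 3x3 rotation from Euler angles (radians), Rz @ Ry @ Rx order."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def animate(vertices, pose):
    """Apply one frame's tracked pose to an (N, 3) array of mesh vertices."""
    R = rotation_matrix(*pose["angles"])
    return vertices @ R.T + pose["translation"]

# The same tracked parameter stream drives two different head models:
mt_head = np.array([[0.00, 0.10, 0.00], [0.10, -0.10, 0.05]])  # toy vertices
sk_head = np.array([[0.00, 0.12, 0.00], [0.09, -0.10, 0.04]])  # toy vertices
pose = {"angles": (0.2, 0.0, 0.0), "translation": np.array([0.0, 0.0, 0.01])}
frame_mt = animate(mt_head, pose)  # M.T.'s model in M.T.'s pose
frame_sk = animate(sk_head, pose)  # S.K.'s model in M.T.'s pose (Fig. 2(d))
```

Because only a handful of pose (and, in a full system, facial-expression) parameters cross the channel per frame, the bit rate is far below that of waveform video coding, and swapping the decoder's head model changes the rendered identity without touching the encoder.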