Extraction of Human Motion in a Video Sequence

This work uses a model-based approach to acquire the motion of a human body from a video sequence for multimedia applications. Compared with various motion-capture systems, using a model allows flexible and low-cost acquisition of human motion from an image sequence. We construct a standard 3D articulated human model with adjustable sizes, colors, and surface shapes to assist matching between the model and the figure in the image sequence. The model can be driven either automatically by a program or manually through a graphics interface. We first fit the model to several frames to obtain a personalized model of the target figure. The subsequent matching of the personalized model with the images starts from key frames. The motion between key frames is predicted and interpolated under a motion-smoothness assumption, and the resulting trajectories are verified by color-based correlation between the model and the images. Tests on various sequences show that a flexible and functional model yields stable extraction of human motion. The image sequence can be sports, performances, dances, or famous scenes recorded in old films. The obtained motion parameter sequence can be applied to any other model to realize a virtual actor.
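As a rough illustration of two of the steps described above (not the authors' code), the sketch below shows how intermediate joint angles could be interpolated between key frames under a smoothness assumption, and how a color-histogram correlation could score the agreement between a projected model patch and the corresponding image patch. The data layouts (joint-angle vectors per key frame, RGB patches) are assumptions made for this example.

```python
# Minimal sketch, assuming key-frame poses are stored as joint-angle vectors
# and color verification compares RGB histograms of model and image patches.
import numpy as np

def interpolate_poses(key_frames, key_poses, frame_ids):
    """Linearly interpolate joint-angle vectors between key frames
    (a simple stand-in for smoothness-based motion prediction)."""
    key_frames = np.asarray(key_frames, dtype=float)   # e.g. [0, 10, 20]
    key_poses = np.asarray(key_poses, dtype=float)     # shape (K, J): J joint angles per key frame
    return np.stack([np.interp(frame_ids, key_frames, key_poses[:, j])
                     for j in range(key_poses.shape[1])], axis=1)

def color_correlation(model_patch, image_patch, bins=8):
    """Score how well a projected model region matches the image region
    by correlating their normalized RGB histograms (result in [-1, 1])."""
    def normalized_hist(patch):
        h, _ = np.histogramdd(patch.reshape(-1, 3),
                              bins=(bins, bins, bins), range=[(0, 256)] * 3)
        h = h.ravel()
        h = h - h.mean()
        return h / (np.linalg.norm(h) + 1e-9)
    return float(np.dot(normalized_hist(model_patch), normalized_hist(image_patch)))

# Example: two key frames (0 and 10) with two joint angles each,
# evaluated at intermediate frames, plus a color check on random patches.
poses = interpolate_poses([0, 10], [[0.0, 0.2], [0.5, -0.1]], frame_ids=[2, 5, 8])
score = color_correlation(np.random.randint(0, 256, (32, 32, 3)),
                          np.random.randint(0, 256, (32, 32, 3)))
```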

Fitting the graphics human model to images using the developed interface.

Video clips (QuickTime) showing the original human motion in the videos and the motion of the graphics model driven by the extracted motion parameters.

A further example on a longer, one-minute image sequence.

My publications