Limbering up for quality animation

The Titanic rears high above the water. Hanging from the balustrade, terrified passengers look down into the icy waters as their fingers slip. Fortunately for the actors, this is where the computer graphics kick in, simulating the lifelike movements of limbs as bodies begin to drop into the sea.

The standard methods of computer modelling require human subjects to wear reflective patches. Cameras film the person, and image-analysis software tracks the movement of the patches, allowing the computer to build up a picture of how the person’s joints move. Originally developed for clinical diagnosis, the same technology is now regularly used by the entertainment industry to simulate people moving realistically in crowd scenes, stunt shots – or falling from sinking ships.

Working with Vicon Systems, a manufacturer of human motion capture equipment, Dr Ian Reid leads a team from the Department of Engineering Science at Oxford University. The group has developed a technique to recover the same joint motion information, but without using ungainly reflectors. “The use of markers makes the computer vision component of these systems quite straightforward, but the price paid is one of convenience and generality,” explains Dr Reid. “A system that did not require markers would be less intrusive and much more flexible. You might even be able to use already existing film footage as the data source for analysis.”

The secret of Dr Reid’s new analysis technique is a combination of search algorithms. The computer guesses the pose of the person in each image, then looks for supporting evidence.

“Standard” searches usually maintain just one hypothesis, which is then iteratively refined on the basis of the image data. Dr Reid’s methods, however, assess multiple hypotheses: the best are selected, altered slightly, then combined to form a new set of test poses. The process is repeated until one hypothesis (or a small number of them) dominates.
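In outline, this kind of multi-hypothesis search can be thought of as a select–perturb–recombine loop. The sketch below is illustrative only, not Dr Reid’s actual implementation: it assumes a pose is represented as a flat list of joint angles, and the scoring function (here a placeholder called score_against_image) stands in for whatever measure of image support a real tracker would use.

```python
import random

def track_pose(initial_guesses, score_against_image,
               n_iterations=20, keep_fraction=0.25, jitter=0.05):
    """Refine a set of candidate poses against a single image.

    initial_guesses      -- list of pose vectors (e.g. lists of joint angles)
    score_against_image  -- placeholder: returns how well a pose matches the image
    """
    hypotheses = [list(pose) for pose in initial_guesses]
    for _ in range(n_iterations):
        # Score every hypothesis against the image and keep only the best few.
        hypotheses.sort(key=score_against_image, reverse=True)
        survivors = hypotheses[: max(1, int(len(hypotheses) * keep_fraction))]

        # Rebuild the hypothesis set: keep the survivors, then generate new
        # candidates by blending pairs of survivors and perturbing them slightly.
        new_set = list(survivors)
        while len(new_set) < len(hypotheses):
            a, b = random.choice(survivors), random.choice(survivors)
            blended = [(x + y) / 2 + random.gauss(0, jitter) for x, y in zip(a, b)]
            new_set.append(blended)
        hypotheses = new_set

    # Return the dominant hypothesis.
    return max(hypotheses, key=score_against_image)

# Toy example: a scoring function that prefers poses close to some "true" pose.
true_pose = [0.1, -0.4, 0.8]
score = lambda pose: -sum((p - t) ** 2 for p, t in zip(pose, true_pose))
guesses = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
print(track_pose(guesses, score))
```

In a real tracker the scoring step is where most of the effort lies, for instance comparing the projected outline of a body model against edges and silhouettes in the frame, and the dominant pose from one frame would seed the hypotheses for the next.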

After analysing many movements, Dr Reid’s team has been able to track subjects walking, running, turning – and even doing a handstand.

A major drawback at the moment is that the computer has to be told the position and pose of the subject in the first frame. “Once the human model is set correctly we can track reasonably well, but there is currently no general, easy and reliable way – other than by hand – to work out these initial parameters. We are looking at ways to make our software smart enough to do this initialisation too. Our ultimate aim is for the computer to do all the analysis in real time.”

The research group has already begun to transfer this new technology to Oxford Metric Group, which owns Vicon. Dr Reid expects that the more flexible system will keep the company’s success in motion analysis – for Titanic stars and hospital staff – well and truly buoyant.