Cartoon Work Makes Graphics Researcher Smile

By Rob Knies, Managing Editor, Microsoft Research

Kids of all ages love cartoons. They appeal because of their vivid colors, their fluid motion, their whimsical world view, their idealized parallel universe. In a cartoon, a coyote that falls off a cliff and gets flattened by an anvil only hurts for a little while. In a confusing, occasionally painful world, the cartoon’s simple, forgiving existence provides comforting diversion. Who wouldn’t want to be Bugs Bunny?

Kun Zhou, a researcher and project lead for the Internet Graphics group based in Microsoft Research’s Asia lab, knows as well as anyone the fascination that animation can engender. But in Zhou’s case, that interest is leading to groundbreaking achievements in computer graphics.

Zhou’s Web site reveals an impressive list of publications and projects related to that field, among them a paper presented during SIGGRAPH 2005 entitled Large Mesh Deformation Using the Volumetric Graph Laplacian, co-written by former Microsoft Research Asia intern Jin Huang, John Snyder of Microsoft Research’s Redmond lab, fellow Asia-lab researcher Xinguo Liu, Hujun Bao of Zhejiang University, and Baining Guo and Harry Shum of Microsoft Research Asia.

In that paper, Zhou et al. describe a technique called the volumetric graph Laplacian, which reduces the unrealistic artifacts that previous 3D mesh-deformation methods produced.

“We present a novel technique for large deformations on 3D meshes,” Zhou says. In layman’s terms, that means making the movements of three-dimensional animations appear more real and lifelike.

“My research interests focus mainly on computer-graphics algorithms,” Zhou explains. “We are inventing technologies to generate beautiful and realistic pictures in real time for the film and gaming industries.

“One part of this research goal is realism, which takes a lot of manual work and computation time. The other part is automatic, real-time performance. The biggest challenge is how to balance these two objectives.”

Mesh deformation has been a valuable technique in cartoon modeling and animation. Many applications have been devised to help artists construct stylized body shapes and body movements. But there has been a lingering challenge with large deformations, such as those found with characters performing nonrigid, highly exaggerated movements.

Nonrigid, highly exaggerated movements? Sounds like Bart Simpson skateboarding through Springfield.

“The goal of our work,” Zhou says, “is to develop an interactive system to transfer the deformations of 2D cartoon characters to 3D objects. Previous deformation techniques often produce implausible results with unnatural volume changes and self-intersections. We need to remove these artifacts.”

Volumetric graph Laplacian, it appears, is a superior technique. But what does it mean?

“The volumetric graph Laplacian of a point in a three-dimensional space,” Zhou states, “is defined as the relative position with respect to its neighboring points. Mathematically, it is computed as the difference between the point and the weighted average of its neighboring points.”
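That definition can be sketched numerically. The snippet below is a toy illustration, not the paper's implementation: the function name is invented, and it assumes uniform neighbor weights, whereas the actual technique uses more careful weighting over a volumetric graph.

```python
import numpy as np

def laplacian_coordinates(points, neighbors):
    """For each point, return the difference between the point and the
    (uniformly weighted) average of its graph neighbors -- the
    'relative position' Zhou describes."""
    delta = np.empty_like(points)
    for i, nbrs in enumerate(neighbors):
        delta[i] = points[i] - points[nbrs].mean(axis=0)
    return delta

# Three collinear points: the middle point coincides with the average
# of its two neighbors, so its Laplacian coordinate is the zero vector.
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [2.0, 0.0, 0.0]])
nbrs = [[1], [0, 2], [1]]
print(laplacian_coordinates(pts, nbrs))
```

Preserving these per-point Laplacian coordinates during editing is what keeps local shape detail intact while the overall pose changes.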

Determining how the 3D mesh model reacts to various movements is key to creating lifelike animation. Techniques such as Poisson mesh editing, the method previously employed by Zhou and associates, resulted in unnatural folds when an item, say, a leg, was bent, and an unrealistic loss of volume when it was twisted.

Earlier techniques relied on manipulating surfaces to produce realistic movements. Preserving volume, by contrast, required providing a mesh framework for the entire interior of a model, a notoriously difficult undertaking.

“We realized,” Zhou says, “that previous surface-detail preservation is not enough for such large deformations.”

By analyzing the relative positions of a limited set of points within the mesh, Zhou was able to produce dramatically superior 3D movements.

“To use our system,” he says, “the user simply takes a 3D object and a 2D cartoon image sequence. The user specifies one or more 3D control curves on the object, and for each curve, a series of 2D curves in the cartoon image. Our algorithm will automatically transfer the 2D cartoon deformations to the 3D object.”
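The workflow Zhou describes can be loosely sketched in code. Everything below is a hypothetical simplification of one step, matching points between a 2D cartoon curve and a 3D control curve and applying the per-point offsets: real control curves lie on the mesh surface, projection planes are user-chosen, and the actual algorithm propagates the deformation through the whole volumetric graph rather than displacing only the curve points.

```python
import numpy as np

def resample(curve, n):
    """Resample a 2D polyline to n points, uniformly by arc length,
    so rest and deformed cartoon curves can be matched point-to-point."""
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    s = np.linspace(0.0, t[-1], n)
    out = np.empty((n, curve.shape[1]))
    for d in range(curve.shape[1]):
        out[:, d] = np.interp(s, t, curve[:, d])
    return out

def transfer_curve_deformation(ctrl3d, cartoon_rest, cartoon_frame):
    """Hypothetical sketch: displace each point of a 3D control curve,
    within its projection plane (assumed here to be the xy-plane), by
    the offset between the cartoon's rest curve and a deformed frame."""
    n = len(ctrl3d)
    offsets = resample(cartoon_frame, n) - resample(cartoon_rest, n)
    deformed = ctrl3d.copy()
    deformed[:, :2] += offsets  # apply 2D offsets in the projection plane
    return deformed
```

In the full system, the displaced control curves then drive the volumetric graph Laplacian solve, which deforms the rest of the mesh while preserving local detail and volume.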

A video makes the advances abundantly clear. Control curves are drawn along the spine and the legs of a 3D dinosaur skeleton. The model is rotated to define the projection plane for each control curve. Then a spinal curve and a leg curve are copied from a frame of a cartoon cat in the midst of an energetic kick. Those curves are applied to the dinosaur mesh, and voilà, the dinosaur begins to kick, in a fashion nearly identical to that of the animated feline.

Such techniques can be applied any number of ways. Not much later in the video, the same dinosaur is dancing. A model cat is walking. A lioness glides at full speed. An armadillo throws a baseball. All these movements come from tracing a few curves from a cartoon image and transferring them to a mesh model. It’s a classic example of computer-generated “life” imitating art, particularly so when you consider that Zhou and his collaborators got permission to use and reproduce some famous animation cels from Disney Feature Animation.

“When I showed that the motion of some classic 2D cartoon animations, such as the famous Goofy, was transferred to arbitrarily complex 3D objects, such as the Stanford armadillo, with 170,000 vertices, in a few minutes,” Zhou recalls, “my boss and colleagues were just shocked.”

To be clear, the project did not focus on deforming the 3D model into precisely the same pose as the cartoon’s.

“This is difficult,” Zhou explains, “because their shapes are so different and because cartoons are drawings that may not be reflective of the motion of 3D geometry. Instead, our goal is to transfer the quality of the cartoon’s motion to the 3D model.

“As the animations in the video show, we successfully obtain motions that are remarkably similar to the cartoon’s.”

Next up: refining the concept.

“Currently,” he says, “it’s still tedious for the user to specify corresponding curves between 2D cartoon characters and 3D objects. A more intuitive user interaction is point-based: Specifying corresponding points between 2D cartoon characters and 3D objects is much easier.”

Indeed, Zhou and many of the same colleagues presented a follow-up paper, Subspace Gradient Domain Mesh Deformation, at the SIGGRAPH 2006 conference.

But make no mistake about it: Zhou considers his hours of idling in front of televised cartoons to be time well spent.

“I am proud,” he says, “that we achieved these high-quality deformation-retargeting results for the first time in computer graphics.”
