Beauty in motion

Norman Badler

The key to building a better virtual human, says Norman Badler, is accurately re-creating human motion. After 30 years of trying to do just that, Badler thinks he’s getting close.

When Norman Badler was in graduate school, trying to settle on a topic for his dissertation, he became enamored of vision. Specifically, he couldn’t help but be impressed by the human ability not only to see, but also to process that visual information in real time.

“It struck me then that people are able to do something quite remarkable,” says Badler. “They can look at other people and at the same time describe what those people are doing.”
His dream was to give computers that same ability.

Thirty years later, computers can’t see quite yet. But the challenge of helping them do so sent Badler down another interesting career path: He creates virtual human beings.

“In order to [make computers see] I knew I would have to create synthetic input, using graphics and images of people, and then use the computer to analyze and describe them,” says Badler, a professor in the School of Engineering and Applied Science. “But it turns out that the problem of creating computer-based people and making them look and move like real people was itself hard enough that it became my entire career.”


Since his arrival here, Badler has seen his field of choice become increasingly crowded. But thanks to his work, Penn has remained one of the world’s leading universities for computer graphics research.

Under Badler’s guidance, Penn launched the Center for Human Modeling and Simulation in 1994 and, most recently, he was instrumental in the launch of the undergraduate program in Digital Media Design, where students are busy creating entire virtual worlds. Badler is director of both the Center and the DMD program.

He also remains committed to the work he began 30 years ago—creating virtual human beings that move exactly like real humans.
The goal? To eventually enable computers—and the virtual human beings living inside them—not only to see real humans and recognize what they’re doing, but also to interact with them in real time.

Q. When you began your career in the 1970s, at the start of the computer age, I imagine there wasn’t much work being done to create virtual humans.
A.
There was very little. I remember in 1976, there was a major computer graphics conference here at Penn. It was the major national conference, and there were only 330 attendees. By about 1981, that number had grown by an order of magnitude to over 3,000 people, and in a few more years, there were over 30,000. So Penn, my students, and I were able to get in literally on the ground floor of computer graphics research.

Q. What set Penn apart?
A.
We were always very interested in motion. That meant that when we had funding, we invested it in tools that allowed us to do animation. In the 1980s, most of the universities doing computer graphics were producing single images. We decided that we were not going to go the images route, but the animation route. In the long run, that paid off. Right now, we have a prominent position in the field of developing techniques for computer animation. You can accuse me of being single-minded, but that single-mindedness has left a trail of 48 Ph.D.’s. That legacy of students has been extremely successful. They’ve gone all over the place.

Q. Like where?
A.
In this room, for instance, we have posters from movies that our alumni have had something to do with. There are three posters in here and five more outside. We’ve had people whose successes have come not just in academia but also in the real work of making movies.

Q. What was the idea behind launching the Center for Human Modeling and Simulation?
A.
The computer graphics research lab was established in the 1970s, but after I finished my stint as department chair for computer science, I realized it needed a more recognized status—and the way to do that is to start a center. So it became a center in 1994, and its peak size was in 1996. That was because through the 1980s and 1990s we focused our modeling work on a particular piece of software called Jack. It was a vehicle for bringing money into the lab, and it was a real product we sent out. So a large part of the center’s energy went into software development, and in 1996, we actually spun that product out to a startup company.

Q. Now that Jack has been sold off, what is the center’s main focus?
A.
We’re still interested in describing motion and using those descriptions to control motion—not only having computers look at people and say what they’re doing, but also telling our virtual people what they ought to be doing. We can now actually sense people’s movement in real time, and we feed that information to the virtual characters so the virtual and real persons can interact. The real person will see the virtual person move, and the virtual person will see the real person move. We’re basically trying to build a level playing field between what the real person sees in the virtual person and what the virtual person can see in the real person.

Q. What’s the value of that?
A.
The reason one wants to do that, and the real application here, is training systems—to train people for emergency care, leadership, customer service, military training, police training, and other kinds of social interaction. All of those things require people to interact with people. In a live training exercise, where you’re training with an actor or a professional, the cost is in finding the actors. And if the training environment becomes dangerous for the actors or the participants, then a virtual environment becomes the environment of choice. But it has to be a virtual environment where people are interacting with people, and that requires real-time virtual people. That’s where our primary goal lies.

Q. What are the major challenges standing between you and that goal?
A.
One of the interesting challenges was actually identified 40 years ago by a [researcher], who called it “the uncanny valley.” He said that if you create a synthetic human and it doesn’t really look very human—maybe cartoonish—then people are very willing to accept it on its own terms. But as you make synthetic beings more and more human, you fall into this uncanny valley, where it just looks bizarre. What apparently happens is that the human perceptual system is so keyed in to looking at real people that if you get something that looks close but isn’t perfect, our systems just tell us everything that is wrong. These supposedly very realistic figures then actually become disturbing. Nobody has yet escaped the uncanny valley.

Q. So how do you avoid getting lost in that valley?
A.
We have decided that rather than setting out on a quest for total visual realism, which would keep us in that valley, we would try for motion realism. There are actually a lot of advantages to doing that. … The direction we’re pursuing is to try to get the movement right rather than the appearance right. We believe that’s a viable goal, and the work we’ve been doing there bears it out.

Originally published on October 7, 2004