“Physically-Based Modeling: Past, Present, and Future” Moderated by Demetri Terzopoulos and John Platt

    Transcript of the welcoming speech:
    My name is Demetri Terzopoulos, and my co-chair, John Platt, and I would like to welcome you to the panel on Physically-Based Modeling: Past, Present, and Future. I’ll start by introducing the panelists; the affiliations you see listed on the screen are somewhat out of date.

    I’m Program Leader of modeling and simulation at the Schlumberger Laboratory for Computer Science in Austin, Texas, and I was formerly at Schlumberger Palo Alto Research. I’ll speak on the subject of deformable models.

    John Platt, formerly of Cal Tech, is now Principal Scientist at Synaptics in San Jose, California. He will be concentrating on constraints and control.

    Alan Barr is Assistant Professor of computer science at Cal Tech. Last year he received the Computer Graphics Achievement Award. He’ll speak about teleological modeling.

    David Zeltzer is Associate Professor of computer graphics at the MIT Media Laboratory. He will be speaking on interactive micro worlds.

    Andrew Witkin, formerly of Schlumberger Palo Alto Research, is now Associate Professor of computer science at Carnegie Mellon University. He will speak about interactive dynamics.

    Last but not least, we have with us James Blinn, who of course needs no introduction. Formerly of JPL, he is now Associate Director of the Mathematics Project at Cal Tech. He says he’ll have several random comments to make against physically-based modeling.

    I was also asked by the SIGGRAPH organizers to remind the audience that audio and video tape recording of this panel is not permitted.

    Many of you are already familiar with physically-based modeling, so I will attempt only a very simple introduction to this, in my opinion, very exciting paradigm. Physically-based techniques facilitate the creation of models capable of automatically synthesizing complex shapes and realistic motions that were, until recently, attainable only by skilled animators, if at all. Physically-based modeling adds new levels of representation to graphics objects. In addition to geometry, physical quantities such as forces, torques, velocities, accelerations, kinetic and potential energies, and heat are used to control the creation and evolution of models. Simulated physical laws govern model behavior, and animators can guide their models using physically-based control systems. Physically-based models are responsive to one another and to the simulated physical worlds that they inhabit.
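
    To make the paradigm concrete, here is a minimal sketch of the loop underlying any physically-based model: accumulate the forces acting on the model's state, then integrate Newton's second law forward in time. The particular forces, parameter values, and integrator below are illustrative assumptions, not the systems the panelists describe.

        def simulate(position, velocity, mass=1.0, dt=0.01, steps=100):
            """Advance a single point mass under gravity and viscous drag."""
            gravity = (0.0, -9.8, 0.0)
            drag = 0.1  # viscous damping coefficient (assumed value)
            for _ in range(steps):
                # accumulate the forces acting on the model
                force = [mass * g - drag * v for g, v in zip(gravity, velocity)]
                # semi-implicit Euler: update velocity from force, then position from velocity
                velocity = [v + (f / mass) * dt for v, f in zip(velocity, force)]
                position = [p + v * dt for p, v in zip(position, velocity)]
            return position, velocity

        print(simulate(position=[0.0, 2.0, 0.0], velocity=[1.0, 0.0, 0.0]))

    Everything that follows in the talk, from cloth to faces, elaborates this basic loop with richer force models, materials, and constraints.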

    We will review some past accomplishments in physically-based modeling, look at what we are doing at present, and speculate about what may happen in the near future. The best way to get a feel for physically-based modeling is through animation, so we will be showing you lots of animation as we go along.

    I would like to talk about deformable models, which are physically-based models of nonrigid objects. I have worked on deformable models for graphics applications primarily with Kurt Fleischer and also with John Platt and Andy Witkin. Deformable models are based on the continuum mechanics of flexible materials. Using deformable models, we can model the shapes of flexible objects like cloth, plasticine, and skin, as well as their motions through space under the action of forces and subject to constraints.
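
    The deformable models in the talk are formulated with continuum mechanics, discretizing the elastic energy of the material. As a simplified stand-in for that formulation, the sketch below uses a mass-spring grid, which shows the same qualitative behavior: a cloth-like sheet pinned at two corners and falling under gravity. Grid size, stiffness, damping, and time step are assumed values, not those of the original work.

        import numpy as np

        n = 8                                    # n x n grid of point masses
        rest = 0.1                               # spring rest length (assumed)
        k, damping, mass, dt = 200.0, 0.02, 0.01, 0.005

        # lay the sheet out flat; velocities start at zero
        ix, iy = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        pos = np.stack([ix * rest, np.zeros((n, n)), iy * rest], axis=-1)
        vel = np.zeros_like(pos)

        # structural springs join horizontally and vertically adjacent particles
        springs = [((i, j), (i + 1, j)) for i in range(n - 1) for j in range(n)] + \
                  [((i, j), (i, j + 1)) for i in range(n) for j in range(n - 1)]
        pinned = {(0, 0), (0, n - 1)}            # two corners held fixed, like the curtain

        for _ in range(500):
            force = np.zeros_like(pos)
            force[..., 1] -= mass * 9.8          # gravity
            force -= damping * vel               # crude viscous damping
            for a, b in springs:
                d = pos[b] - pos[a]
                length = np.linalg.norm(d)
                f = k * (length - rest) * d / max(length, 1e-9)   # Hooke's law along the spring
                force[a] += f
                force[b] -= f
            vel += (force / mass) * dt           # semi-implicit Euler
            for p in pinned:
                vel[p] = 0.0                     # pinned particles stay put
            pos += vel * dt

    In the actual work the elastic forces come from discretized continuum energies rather than discrete springs, but the surrounding machinery of gravity, damping, constraints, and time integration is the same in spirit.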

    Please roll my Betacam tape. Here is an early example of deformable surfaces which are being dragged by invisible forces through an invisible viscous fluid. Next we see a carpet falling in gravity. It collides with two impenetrable geometric obstacles, a sphere and a cylinder, and must deform around them. The next clip shows another elastic model. It behaves like a cloth curtain that is suspended at the upper corners, then released.

    Here is a simulated physical world — a very simple world consisting of a room with walls and a floor. A spherical obstacle rests in the middle of the floor. You’re seeing the collision of an elastically deformable solid with the sphere. Of course, we’re also simulating gravity.
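
    One common way to make an obstacle like that sphere impenetrable is a penalty force that pushes any penetrating point of the deformable body back out along the surface normal. The sketch below shows the idea in isolation; the contact stiffness and sphere placement are assumptions, and the panelists' actual contact model may differ.

        import numpy as np

        center = np.array([0.0, 0.5, 0.0])       # sphere obstacle (assumed placement)
        radius = 0.5
        k_penalty = 1000.0                        # stiffness of the contact penalty spring (assumed)

        def collision_force(p):
            """Penalty force pushing a penetrating point back out of the sphere."""
            d = p - center
            dist = np.linalg.norm(d)
            depth = radius - dist
            if depth <= 0.0 or dist == 0.0:
                return np.zeros(3)                # no contact, no force
            return k_penalty * depth * (d / dist) # force along the outward normal

        print(collision_force(np.array([0.0, 0.9, 0.0])))   # inside the sphere: pushed out
        print(collision_force(np.array([0.0, 1.2, 0.0])))   # outside the sphere: zero force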

    We’ve developed inelastic models, such as the one you see here which behaves like plasticine. When the model collides with the sphere, there’s a permanent deformation. By changing a physical parameter, we obtain a fragile deformable model such as the one here. This deformable solid breaks into pieces when it hits the obstacle.
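
    A single material-parameter change really can switch between elastic, plasticine-like, and fragile behavior. A hedged sketch of the idea: track the strain of each elastic element, let strain beyond a yield threshold become a permanent change of rest length (plasticity), and let strain beyond a larger threshold break the element outright (fracture). The thresholds and the 0.5 absorption factor below are illustrative, not values from the original models.

        def material_update(length, rest, yield_strain=0.1, fracture_strain=0.5):
            """Update one elastic element; returns (new_rest_length, broken)."""
            strain = (length - rest) / rest
            if abs(strain) > fracture_strain:
                return rest, True                 # fragile: the element shatters
            if abs(strain) > yield_strain:
                rest += 0.5 * (length - rest)     # plasticine: deformation becomes permanent
            return rest, False

        print(material_update(1.15, 1.0))   # mild stretch: yields, rest length creeps upward
        print(material_update(1.80, 1.0))   # severe stretch: the element breaks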

    Deformable models can be computed efficiently in parallel. This massively parallel simulation of a solid shattering over a sphere was computed on a Connection Machine at Thinking Machines, with the help of Carl Feynman.
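
    The reason such simulations map well onto massively parallel hardware is that each element's force depends only on its local neighbors, so all elements can be processed at once. The NumPy sketch below computes every spring force in a single array operation, the same data parallelism a Connection Machine (or a modern GPU) exploits; the sizes and indices are random placeholders, and the original simulation used a continuum discretization rather than springs.

        import numpy as np

        pos = np.random.rand(1000, 3)                # particle positions (placeholder data)
        a = np.random.randint(0, 1000, size=4000)    # spring endpoint indices (placeholder)
        b = np.random.randint(0, 1000, size=4000)
        rest, k = 0.05, 100.0

        d = pos[b] - pos[a]
        length = np.linalg.norm(d, axis=1, keepdims=True)
        f = k * (length - rest) * d / np.maximum(length, 1e-9)   # every spring force at once

        force = np.zeros_like(pos)
        np.add.at(force, a, f)        # scatter-add each spring's force onto its endpoints
        np.add.at(force, b, -f)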

    Here is a cloth-like mesh capable of tearing. We’re applying shear forces to tear the mesh. The sound you’re hearing has been generated by an audio synthesizer which was programmed by Tony Crossley so that it may be driven by the physical simulation of the deformable model. Whenever a fiber breaks, the synthesizer makes a pop. Keep watching the cloth; we get pretty vicious with it.
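
    Tearing can be modeled much like the fracture above: each time step, any fiber stretched past a tear threshold is removed from the mesh, and each break can be reported to an external process such as the synthesizer that makes the pops. The sketch below is illustrative; the callback is a hypothetical hook, not the actual synthesizer interface.

        import numpy as np

        def step_tearing(springs, pos, rest, tear_strain=0.3,
                         on_break=lambda s: print("pop", s)):
            """Drop any fiber stretched past the tear threshold and report each break."""
            survivors = []
            for a, b in springs:
                length = np.linalg.norm(pos[b] - pos[a])
                if (length - rest) / rest > tear_strain:
                    on_break((a, b))              # fiber breaks: trigger a pop
                else:
                    survivors.append((a, b))      # fiber survives this step
            return survivors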

    Deformable models are obviously useful in computer graphics, but they are also useful for doing inverse graphics; that is to say, computer vision.

    For example, here we see an image of a garden-variety squash. Using a deformable tube model, we can reconstruct a three-dimensional model of the squash from its image, as shown. Once we have reconstructed the model from the image, we can rotate the model to view it from all sides. You can see, we have captured a fully three-dimensional model from that single, monocular image. That’s a basic goal of computer vision.
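
    The reconstruction works by letting image-derived forces pull a deformable model toward the object's outline while the model's own elasticity keeps it smooth and tube-like. As a greatly simplified, one-dimensional analogue of that energy minimization, the sketch below fits a smooth radius profile for a tube to noisy silhouette measurements by gradient descent; the synthetic data, weights, and step size are assumptions, not the actual vision technique.

        import numpy as np

        rng = np.random.default_rng(0)
        s = np.linspace(0.0, 1.0, 50)                       # position along the squash's axis
        measured = 0.3 + 0.2 * np.sin(np.pi * s) \
                   + 0.02 * rng.standard_normal(50)         # noisy silhouette radii (synthetic)

        r = np.full(50, 0.3)                                # initial radius profile of the tube model
        lam, step = 2.0, 0.02                               # smoothness weight and descent step (assumed)
        for _ in range(2000):
            lap = np.zeros_like(r)
            lap[1:-1] = r[2:] - 2.0 * r[1:-1] + r[:-2]      # discrete curvature of the profile
            grad = 2.0 * (r - measured) - 2.0 * lam * lap   # gradient of (data fit) + (smoothness)
            r -= step * grad                                # pull the model toward the image data

        print(float(np.max(np.abs(r - measured))))          # residual after fitting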

    Kurt Fleischer, Andy Witkin, Michael Kass, and I used this deformable-model-based vision technique to create an animation called Cooking with Kurt. We wanted to mix live video and physically-based animation in this production. You see Kurt entering a kitchen carrying three vegetables. We captured deformable squash models from a single video frame of the real squashes sitting on the table — this particular scene right here. Now the reconstructed models are being animated using physically-based techniques. The models behave like very primitive actors; they have simple control mechanisms in them that make them hop, maintain their balance, and follow choreographed paths. The collisions and other interactions that you see are computed automatically through the physical laws, and they look quite realistic. It’s difficult to do this sort of thing by hand, even if you’re a skilled animator.

    This second tape will show you some of the physically-based modeling we’re up to now at the Schlumberger Laboratory for Computer Science. Keith Waters and I are working on interactive deformable models. We’re now able to compute and render deformable models in real time on our Silicon Graphics Iris 240 GTX computer. For example, here is a simulation of a nonlinear membrane constrained at the four corners and released in a gravitational field. Watch it bounce and wiggle around.

    Here you’re seeing a physically-based model of flesh. It’s a three-dimensional lattice of masses and springs with muscles running through it. Again, this is computed and displayed in real time. You can see the muscles underneath displayed as red lines. They’re fixed in space at one end and attached to certain nodes of the lattice model at the other end. By contracting the muscles we can produce deformations in this slab of — whale blubber, if you will. We did this simulation as an initial step towards animating faces using deformable models as models of facial tissue. And of course, the muscle models make good facial muscles.
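
    One way to sketch such a muscle in code: it is anchored at a fixed point in space at one end and attached to a lattice node at the other, and contracting it adds a force that pulls the node toward the anchor, which the surrounding mass-spring flesh then propagates. The class below is a hedged illustration; the names, strengths, and attachment layout are assumptions, not the actual facial-tissue implementation.

        import numpy as np

        class Muscle:
            """A muscle fixed in space at one end and attached to one lattice node at the other."""
            def __init__(self, anchor, node_index, strength=5.0):
                self.anchor = np.asarray(anchor, dtype=float)
                self.node = node_index
                self.strength = strength

            def apply(self, pos, force, contraction):
                """Add a contraction-scaled pull on the attached node, directed toward the anchor."""
                d = self.anchor - pos[self.node]
                dist = np.linalg.norm(d)
                if dist > 1e-9:
                    force[self.node] += contraction * self.strength * d / dist

        # e.g. a hypothetical 'zygomatic' pulling one mouth-corner node (indices are made up):
        # zygomatic = Muscle(anchor=[1.0, 1.5, 0.2], node_index=(4, 7, 2), strength=8.0)
        # zygomatic.apply(pos, force, contraction=0.8)   # called each step before integration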

    The next clip will demonstrate real time, physically-based facial animation on our SGI computer. Here we see the lattice structure of the face. Let’s not display all of the internal nodes so that we can see the epidermis of the lattice more clearly. There. Now we’re contracting the zygomatic muscle attached to one edge of the mouth — now both zygomatics are contracting to create a smile. The muscles inside the face model are producing forces which deform the flesh to create facial expressions.

    Now the epidermis polygons are displayed with flat shading. Next we contract the brow muscles. Here the epidermis is being shaded smoothly. Finally, we relax the muscles and the face returns to normal.

    An important reason for applying the physically-based modeling approach to facial animation is realism. For instance, the facial tissue model automatically produces physically realistic phenomena such as the laugh lines around the mouth and the cheek bulges that you see here.

    Keith videotaped this animation off of our machine only last week. Our next step will be to develop control processes to coordinate the muscles so that the face model can create a wide range of expressions in response to simple commands. Keith’s prior work on facial animation, published at SIGGRAPH ’87, showed how one can go about doing this using muscle model processes. Beyond muscle control processes, we’re also interested in incorporating vocoder models — that is, physically-based speech coding and generation models, so that this face can talk to you.

    The tape will end soon, so I’ll release the podium to Dr. John Platt, who will talk about constraint methods and control. Thank you.

