“Everybody’s an Effect: Scalable Volumetric Crowds on Pixar’s Elemental” by Kanyuk, Ouellet, Taylan, Moon and Reeves

Interest Area:


    Production & Animation

Title:

    Everybody's an Effect: Scalable Volumetric Crowds on Pixar's Elemental

Session/Category Title:   Crowded House: Advances in Crowd Simulation


Presenter(s)/Author(s):

    Kanyuk, Ouellet, Taylan, Moon, and Reeves

Abstract:


    Crowd animation and rendering are challenging enough with hard-surface models, but the world of Pixar’s Elemental takes this to a new level by immersing the viewer in a teeming metropolis populated by sentient air, fire, and water in the form of volumetric characters. By building a new Houdini Engine character pipeline based on blended simulation caches and extending our proprietary crowd pipeline to approximate non-skeletal deformation with blendshapes, we were able to choreograph, deform, shade, and light an absurd number of voxels. The complex physical simulation and shading process used to create the hero look of our main characters, hexport [Coleman et al. 2020], took roughly 400 CPU hours per shot and limited us to only about 2.5 hero characters on screen per shot on average. Yet each shot on Elemental had an average of 162 additional volumetric crowd characters. Our challenge was therefore to create those 162 characters with visual fidelity as close as possible to that of the 2.5 hero characters, despite forgoing hexport. By building the solution as a Houdini Engine [SideFX 2023] procedural, with UsdSkel [Studios 2023] deformed meshes as input, we deferred the expensive computations until render time. However, given that some shots could contain as many as 30,000 volumetric characters, our solution had to execute in no more than a few seconds per character to be even feasible, if painful, at scale. Furthermore, I/O and storage limits meant the results could not be cached on disk and had to remain in memory at render time, constraining our memory footprint. Accordingly, our pipeline factored as much complexity as possible into pre-process stages and leaned heavily on level of detail, both for the inputs to the render-time procedural and in minimizing the resulting voxels.
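
    No source accompanies the talk abstract; the following Python sketch is only a rough, hypothetical illustration of the budgeting argument above: a distance-based level-of-detail rule picks a voxel size per crowd character, and the totals for a worst-case 30,000-character shot are checked against per-character time and in-memory voxel budgets. All function names, falloff constants, and per-voxel byte counts here are illustrative assumptions, not Pixar's actual Houdini Engine procedural.

    # Hypothetical sketch, not Pixar's procedural: a simple distance-based
    # level-of-detail rule for per-character voxel size, plus a budget check
    # for a worst-case 30,000-character shot (a few seconds per character,
    # all resulting voxels held in memory at render time).
    from dataclasses import dataclass
    import math

    @dataclass
    class CrowdCharacter:
        name: str
        distance_to_camera: float  # world units from the render camera
        bound_radius: float        # bounding-sphere radius of the character

    def voxel_size_for(char, hero_voxel_size=0.01, lod_falloff=0.05):
        # Coarsen voxels linearly with distance so distant characters
        # contribute far fewer voxels to the render-time volume.
        return hero_voxel_size * (1.0 + lod_falloff * char.distance_to_camera)

    def estimated_voxels(char, voxel_size):
        # Rough voxel count from the character's bounding-sphere volume.
        volume = (4.0 / 3.0) * math.pi * char.bound_radius ** 3
        return int(volume / voxel_size ** 3)

    def crowd_budget(chars, seconds_per_char=3.0, bytes_per_voxel=4):
        total_voxels = sum(estimated_voxels(c, voxel_size_for(c)) for c in chars)
        cpu_hours = len(chars) * seconds_per_char / 3600.0
        in_memory_gib = total_voxels * bytes_per_voxel / 2 ** 30
        return cpu_hours, in_memory_gib

    if __name__ == "__main__":
        # Worst case from the abstract: ~30,000 volumetric crowd characters.
        crowd = [CrowdCharacter(f"char_{i}", 10.0 + 0.05 * i, 1.0)
                 for i in range(30_000)]
        hours, gib = crowd_budget(crowd)
        print(f"~{hours:.0f} CPU hours of procedural cooking, "
              f"~{gib:.1f} GiB of voxels held in memory")

    Even with these made-up numbers, a 30,000-character shot costs on the order of tens of CPU hours of procedural cooking and gigabytes of resident voxels, which is why the abstract stresses both per-character execution speed and aggressive minimization of the resulting voxels.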

References:


    [1] Patrick Coleman, Laura Murphy, Markus Kranzler, and Max Gilbert. 2020. Making Souls: Methods and a Pipeline for Volumetric Characters. In ACM SIGGRAPH 2020 Talks (Virtual Event, USA) (SIGGRAPH ’20). Association for Computing Machinery, New York, NY, USA, Article 28, 2 pages. https://doi.org/10.1145/3388767.3407361
    [2] Robert L. Cook, John Halstead, Maxwell Planck, and David Ryu. 2007. Stochastic Simplification of Aggregate Detail. ACM Trans. Graph. 26, 3 (July 2007), 79–es. https://doi.org/10.1145/1276377.1276476
    [4] Paul Kanyuk, Patrick Coleman, and Jonah Laird. 2018. Mobilizing Mocap, Motion Blending, and Mayhem: Rig Interoperability for Crowd Simulation on Incredibles 2. In ACM SIGGRAPH 2018 Talks (Vancouver, British Columbia, Canada) (SIGGRAPH ’18). Association for Computing Machinery, New York, NY, USA, Article 51, 2 pages. https://doi.org/10.1145/3214745.3214803
    [5] Sasha Ouellet, Daniel Garcia, Stephen Gustafson, Matt Kuruc, Michael Lorenzen, George Nguyen, and Grace Gilbert. 2020. Rasterizing Volumes and Surfaces for Crowds on Soul. In ACM SIGGRAPH 2020 Talks (Virtual Event, USA) (SIGGRAPH ’20). Association for Computing Machinery, New York, NY, USA, Article 30, 1 page. https://doi.org/10.1145/3388767.3407374
