
  • ©Graham Sellers, J.M.P. van Waveren, Patrick Cozzi, Kevin Ring, Emil Persson, and Joel de Vahl


Entry Number: 23


    Rendering Massive Virtual Worlds

Course Organizer(s):



    In recent years, connectivity of systems has meant that online worlds with huge streaming data sets have become common and widely available. From applications such as map rendering and virtual globes to online gaming, users expect online content to be presented in a timely and seamless manner, and expect the volume and variety of offline content to match that which is available online. Generating, retrieving and displaying this content to users presents a number of considerable challenges. This course addresses some real-world solutions to the problems presented in the rendering of massive virtual worlds.

    Massive worlds present many daunting challenges. In our first talk, we introduce the two most prominent: data management and rendering artifacts. We look at handling massive datasets, such as real-world terrain, with out-of-core rendering, including parallelism, cache hierarchies on the client and in the cloud, and level-of-detail. Then we explore handling jitter artifacts that arise when vertex transforms are performed on large world coordinates with 32-bit floating-point precision on commodity GPUs, and z-fighting artifacts caused by insufficient depth-buffer precision over large near-to-far distances. This serves as an introduction to the following talk, World-Scale Terrain Rendering.
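The jitter problem, and the common "relative to center" (RTC) fix, can be seen directly in a few lines of Python. This is a minimal sketch of the arithmetic involved, not code from any of the engines discussed; the Earth-radius figure and the 1.23456789 m offset are illustrative values.

```python
import numpy as np

earth_radius = 6.378e6  # meters, approximate

# float32 has a 24-bit significand, so near Earth-radius magnitudes the spacing
# between adjacent representable values is half a meter: vertices effectively
# snap to a 0.5 m grid, which appears as jitter as the camera moves.
ulp = np.spacing(np.float32(earth_radius))  # 0.5

# RTC: subtract a per-tile center in double precision on the CPU, then upload
# the small offsets as float32. Offsets near zero retain sub-micrometer precision.
center = np.float64(earth_radius)
vertex = np.float64(earth_radius + 1.23456789)    # a vertex 1.23456789 m from center
offset32 = np.float32(vertex - center)            # subtract first, then narrow: precise
naive32 = np.float32(vertex) - np.float32(center) # narrow first: ~0.23 m of error

rtc_error = abs(float(offset32) - 1.23456789)
```

The naive path loses the fractional part entirely (the vertex rounds to the nearest half meter before the subtraction), while the RTC path keeps the offset accurate to well under a micrometer.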

    Rendering small- and medium-scale terrain is fairly straightforward these days, but rendering terrain for a detailed world the size of Earth is much more challenging. In our second talk, we discuss the design and implementation of a terrain engine that can render everything from a zoomed-out view of the entire globe down to a zoomed-in view where sub-meter details are visible. We discuss processing off-the-shelf terrain data for efficient streaming and rendering, an asynchronous load pipeline for bringing chunks of terrain through a cache hierarchy, efficiently culling chunks of terrain that are below the horizon, driving terrain level-of-detail selection based on an estimate of pixel error, and more. With these techniques, we are able to achieve excellent performance even in the constrained environment of WebGL and JavaScript running inside a web browser.
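Driving level-of-detail from an estimate of pixel error typically means projecting each chunk's geometric error (in meters) onto the screen and refining while the projected error exceeds a threshold. The sketch below shows one common formulation of that test; the 60° field of view, 1080-pixel viewport, 2-pixel threshold, and error-halving level scheme are illustrative assumptions, not values from the talk.

```python
import math

def screen_space_error(geometric_error, distance, fov_y, viewport_height):
    """Project a chunk's geometric error (meters) to an estimated on-screen
    error in pixels, for a perspective projection with vertical field of
    view fov_y (radians) and a viewport viewport_height pixels tall."""
    return (geometric_error * viewport_height) / (2.0 * distance * math.tan(fov_y / 2.0))

def select_lod(level_errors, distance, fov_y=math.radians(60),
               viewport_height=1080, max_sse=2.0):
    """Return the coarsest level whose projected error stays within max_sse
    pixels. level_errors[i] is the geometric error of level i, coarsest first;
    if no level is fine enough, fall back to the finest one available."""
    for level, err in enumerate(level_errors):
        if screen_space_error(err, distance, fov_y, viewport_height) <= max_sse:
            return level
    return len(level_errors) - 1

# A common convention: geometric error halves as each level doubles the detail.
errors = [256.0, 128.0, 64.0, 32.0, 16.0, 8.0, 4.0, 2.0, 1.0]
```

As the camera descends, `select_lod` walks smoothly toward finer levels: a chunk 100 km away is served by a coarse level, while one a kilometer away needs near-finest detail.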

    Once we have dealt with the topic of storing and streaming huge amounts of content, we must next contend with the production and generation of that content. The next talk goes through the production pipeline at Avalanche Studios and the issues encountered when filling Just Cause 2 with interesting content. It is a mix of horror stories, good practices, and lessons learned and applied to titles currently in production. Issues discussed include the reliability problems of the content tool-chain, the long turn-around times we had, undocumented and poorly understood data dependencies, and the problems that followed from these. We then cover how we have solved these problems in our current content pipeline. We also talk about our approach to authoring the landscape, vegetation and locations for our large game worlds, and how we maintain productivity without sacrificing variation.

    Many of the concepts discussed to this point address efficient generation, storage, retrieval and transmission of content. Recent advances in graphics hardware allow GPUs to assist in functions such as streaming texture data, managing sparse data sets and providing reasonable visual results in cases where not all of the data is available to render a scene. In the next talk, we take a deep-dive into AMD’s partially resident texture hardware, briefly cover sparse texture extensions for OpenGL and then explore some use cases for the hardware and software features, including some live demos.
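The core idea behind partially resident textures can be sketched without the hardware: a page table records which fixed-size tiles of each mip level are committed, and a sample that hits a non-resident page falls back to the next coarser mip that is resident, giving reasonable results while the missing data streams in. The following is a hypothetical CPU-side illustration of that lookup; the 128-texel page size and class interface are assumptions for the sketch, not AMD's hardware layout or the OpenGL sparse-texture extension API.

```python
PAGE = 128  # texels per page edge (an assumed tile size for this sketch)

class SparseTexture:
    def __init__(self, size, levels):
        self.size = size
        # resident[level] is the set of committed (page_x, page_y) tiles.
        self.resident = [set() for _ in range(levels)]

    def commit(self, level, page_x, page_y):
        """Mark one page of one mip level as resident (backed by memory)."""
        self.resident[level].add((page_x, page_y))

    def sample_level(self, u, v, level):
        """Return the finest resident level >= the requested one for texture
        coordinate (u, v), or None if not even the coarsest page is committed."""
        for lvl in range(level, len(self.resident)):
            level_size = max(1, self.size >> lvl)
            px = int(u * level_size) // PAGE
            py = int(v * level_size) // PAGE
            if (px, py) in self.resident[lvl]:
                return lvl
        return None

# Example: commit only the coarsest mip up front, then page in a fine tile.
tex = SparseTexture(1024, 4)
tex.commit(3, 0, 0)  # the single 128x128 page of the coarsest level
```

With only the coarsest level committed, every sample falls back to level 3; once the page covering a region is committed at level 0, samples there immediately resolve at full detail.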

    In our final presentation, we discuss the practical challenges of integrating support for hardware virtual texturing into a real-world game engine, idTech5, which powers RAGE and a number of other titles. We will describe cases where hardware virtual texturing ‘just worked’, and cases where more effort was required to integrate the technology into an existing engine, whilst maintaining support for software virtual texturing without loss of performance or features.

    We assume that course participants are familiar with modern graphics rendering techniques, data compression, cache hierarchies and graphics hardware acceleration. We will discuss in some detail virtual memory systems, culling techniques, level-of-detail selection and other related techniques.