“Time-Traveling VFX: AI-Driven De-Aging in Here” by Baillie and Plaete
Conference:
Type(s):
Title:
- Time-Traveling VFX: AI-Driven De-Aging in Here
Session/Category Title:
- Time-Traveling VFX: AI-Driven De-Aging in Here
Presenter(s)/Author(s):
- Kevin Baillie
- Jo Plaete
Abstract:
Join Production Supervisor Kevin Baillie and Metaphysic VFX Supervisor Jo Plaete for a deep dive into how artist-empowering artificial intelligence enabled unprecedented workflows on Robert Zemeckis’s new film, Here. Told from a single perspective that transcends time, Here follows characters played by Tom Hanks, Robin Wright, Paul Bettany, and Kelly Reilly across multiple decades of their lives. While these monumental age spans were crucial to the narrative, the production faced an immense technical challenge: how to preserve the actors’ performances and emotional realism while radically altering their appearances.

Baillie and Plaete will explain how the project began with a competitive screen test. Traditional 3D and motion-capture methods were initially deemed impractical for the required volume of shots and for the nuance demanded by the close-up facial performances. Metaphysic’s early proof of concept, transforming the 67-year-old Tom Hanks into his younger Big-era self, demonstrated that an AI-based approach could bridge massive age gaps without sacrificing the authenticity of the actors’ expressions. Once the filmmakers chose this route, the Metaphysic team rapidly scaled, bringing together AI engineers, data scientists, VFX artists, and compositors who refined the technology into a production-ready pipeline centered on the principle of empowering filmmakers and artists.

A central aspect of this pipeline was real-time on-set face swapping: a specialized server equipped with powerful GPUs received a direct feed from the main camera, processed it through neural networks, and returned a de-aged image to the director’s monitor. This setup gave Robert Zemeckis and the actors near-instant feedback, with only a few frames of delay, allowing them to adjust on the spot. Tom Hanks and Robin Wright rehearsed in front of a “Youth Mirror” system that let them see their younger faces in real time, helping them modulate posture, eye lines, and subtle expressions.
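The on-set loop described above can be sketched as a short, pipelined frame path: each captured frame is submitted for neural inference and emerges on the monitor a few frames later. The sketch below is purely illustrative and entirely hypothetical; the `deage_frame` stub and the `pipeline_depth` parameter are assumptions for demonstration, not Metaphysic’s proprietary system.

```python
from collections import deque

def deage_frame(frame):
    """Stub standing in for the neural face-swap inference step
    (hypothetical; the production model is proprietary)."""
    return f"deaged({frame})"

def realtime_monitor_feed(camera_frames, pipeline_depth=3):
    """Sketch of a pipelined on-set loop: each frame enters a short
    GPU pipeline and reaches the director's monitor roughly
    `pipeline_depth` frames behind the live action."""
    in_flight = deque()   # frames currently being processed
    monitor_output = []   # what the director's monitor displays
    for frame in camera_frames:
        in_flight.append(deage_frame(frame))       # submit to the "GPU"
        if len(in_flight) > pipeline_depth:
            monitor_output.append(in_flight.popleft())  # result ready
    while in_flight:                               # drain at the cut
        monitor_output.append(in_flight.popleft())
    return monitor_output

frames = [f"frame{i}" for i in range(5)]
print(realtime_monitor_feed(frames))
```

The fixed `pipeline_depth` models the “few frames of delay” the speakers describe: frames come back in order, just slightly behind real time, which is what makes on-the-spot adjustment by director and actors possible.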
Baillie will recount how the production team integrated these tools into daily shooting schedules, and Plaete will offer insights into the technical hurdles of achieving high-fidelity performance transfer on set, highlighting how the AI’s flexibility freed artists to iterate and refine with minimal technical friction.

The session also explores how the technology matured in post-production, where higher resolutions and additional refinements were required for final shots. Through “plate prep” and proprietary compositing workflows, the VFX team preserved the liveliness of each performance, even when rewinding several decades, while avoiding the “uncanny valley.” Attendees will learn how neural networks were trained on massive libraries of archival footage of Tom Hanks, Robin Wright, and other cast members, capturing the shifts in bone structure and skin quality across each stage of life.

Plaete will describe the “visual data science” approach his team used to tune these models, emphasizing that successful outputs demanded not just code but also a keen artistic and intuitive understanding of the actors’ faces, underscoring how human creativity remains paramount when wielding artist-empowering AI. Throughout the session, both speakers will emphasize that while neural networks are powerful, they function best as a tool for artists.