Christian Theobalt
Most Recent Affiliation(s):
- Max-Planck-Institut für Informatik, Saarland Informatics Campus, Professor of Computer Science
Location:
- Germany
Bio (from SIGGRAPH 2015):
Christian Theobalt is a Professor of Computer Science at the Max-Planck-Institute for Informatics and Saarland University in Saarbrücken, Germany. Most of his research deals with algorithmic problems on the boundary between computer vision and computer graphics, such as dynamic 3D scene reconstruction and marker-less motion capture, computer animation, appearance and reflectance modelling, machine learning for graphics and vision, new sensors for 3D acquisition, advanced video processing, and image- and physically-based rendering. He received the Otto Hahn Medal of the Max Planck Society in 2007, the EUROGRAPHICS Young Researcher Award in 2009, the German Pattern Recognition Award in 2012, and an ERC Starting Grant in 2013.
Jury Member:
Experience(s):
![XNect: real-time multi-person 3D motion capture with a single RGB camera](https://history.siggraph.org/wp-content/uploads/2022/07/2020-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Studio (SIGGRAPH Labs)]
XNect: real-time multi-person 3D motion capture with a single RGB camera
Organizer(s): [Mehta] [Sotnychenko] [Mueller] [Xu] [Elgharib] [Fua] [Seidel] [Rhodin] [Pons-Moll] [Theobalt]
[SIGGRAPH 2020]
Presentation(s):
![Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold](https://history.siggraph.org/wp-content/uploads/2024/02/2023-Tech-Papers-Pan_Drag-Your-GAN-Interactive-Point-based-Manipulation-on-the-Generative-Image-Manifold-150x150.jpg)
Type: [Technical Papers]
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold Presenter(s): [Pan] [Tewari] [Leimkühler] [Liu] [Meka] [Theobalt]
[SIGGRAPH 2023]
![EgoLocate: Real-time Motion Capture, Localization, and Mapping With Sparse Body-mounted Sensors](https://history.siggraph.org/wp-content/uploads/2024/02/2023-Tech-Papers-Yi_EgoLocate-Real-time-Motion-Capture-Localization-and-Mapping-with-Sparse-Body-mounted-Sensors-150x150.jpg)
Type: [Technical Papers]
EgoLocate: Real-time Motion Capture, Localization, and Mapping With Sparse Body-mounted Sensors Presenter(s): [Yi] [Zhou] [Habermann] [Golyanik] [Pan] [Theobalt] [Xu]
[SIGGRAPH 2023]
![Physics informed neural fields for smoke reconstruction with sparse data](https://history.siggraph.org/wp-content/uploads/2023/06/2022-Technical-Papers-Chu_-Physics-Informed-Neural-Fields-for-Smoke-Reconstruction-With-Sparse-Data-150x150.jpg)
Type: [Technical Papers]
Physics informed neural fields for smoke reconstruction with sparse data Presenter(s): [Chu] [Liu] [Zheng] [Franz] [Seidel] [Theobalt] [Zayer]
[SIGGRAPH 2022]
![Advances in Neural Rendering](https://history.siggraph.org/wp-content/uploads/2022/03/2021-1-Advances-in-Neural-Rendering-150x150.jpg)
Type: [Courses]
Advances in Neural Rendering Organizer(s): [Tewari]
Presenter(s): [Sitzmann] [Fried] [Thies] [Xu] [Tretschk] [Mildenhall] [Pandey] [Orts-Escolano] [Fanello] [Guo] [Wetzstein] [Zhu] [Theobalt] [Agrawala] [Zollhöfer]
Entry No.: [01]
[SIGGRAPH 2021]
![Learning meaningful controls for fluids](https://history.siggraph.org/wp-content/uploads/2023/06/2021-Technical-Papers-Chu_Learning-Meaningful-Controls-for-Fluids-150x150.jpg)
Type: [Technical Papers]
Learning meaningful controls for fluids Presenter(s): [Chu] [Thuerey] [Seidel] [Theobalt] [Zayer]
[SIGGRAPH 2021]
![Neural monocular 3D human motion capture with physical awareness](https://history.siggraph.org/wp-content/uploads/2023/06/2021-Technical-Papers-Shimada_Neural-Monocular-3D-Human-Motion-Capture-with-Physical-Awareness-150x150.jpg)
Type: [Technical Papers]
Neural monocular 3D human motion capture with physical awareness Presenter(s): [Shimada] [Golyanik] [Xu] [Perez] [Theobalt]
[SIGGRAPH 2021]
![PhotoApp: photorealistic appearance editing of head portraits](https://history.siggraph.org/wp-content/uploads/2023/06/2021-Technical-Papers-Tewari_PhotoApp-Photorealistic-Appearance-Editing-of-Head-Portraits-150x150.jpg)
Type: [Technical Papers]
PhotoApp: photorealistic appearance editing of head portraits Presenter(s): [R.] [Tewari] [Dib] [Weyrich] [Bickel] [Seidel] [Pfister] [Matusik] [Elgharib] [Theobalt]
[SIGGRAPH 2021]
![Real-time deep dynamic characters](https://history.siggraph.org/wp-content/uploads/2023/06/2021-Technical-Papers-Habermann_Real-time-Deep-Dynamic-Characters-150x150.jpg)
Type: [Technical Papers]
Real-time deep dynamic characters Presenter(s): [Habermann] [Liu] [Xu] [Zollhoefer] [Pons-Moll] [Theobalt]
[SIGGRAPH 2021]
![Text-Based Motion Synthesis with a Hierarchical Two-Stream RNN](https://history.siggraph.org/wp-content/uploads/2022/08/2021-Poster-30-Ghosh_Text-Based-Motion-Synthesis-01-150x150.jpg)
Type: [Posters]
Text-Based Motion Synthesis with a Hierarchical Two-Stream RNN Presenter(s): [Ghosh] [Cheema] [Oguz] [Theobalt] [Slusallek]
Entry No.: [30]
[SIGGRAPH 2021]
![Vid2Curve: simultaneous camera motion estimation and thin structure reconstruction from an RGB video](https://history.siggraph.org/wp-content/uploads/2024/04/2020-Posters-Wang_Vid2Curve_-Simultaneous-Camera-Motion-Estimation-and-Thin-Structure-Reconstruction-from-an-RGB-Video-01-150x150.jpg)
Type: [Posters]
Vid2Curve: simultaneous camera motion estimation and thin structure reconstruction from an RGB video Presenter(s): [Wang] [Liu] [Chen] [Chu] [Theobalt] [Wang]
[SIGGRAPH 2020]
![Vid2Curve: simultaneous camera motion estimation and thin structure reconstruction from an RGB video](https://history.siggraph.org/wp-content/uploads/2023/02/2020-Technical-Papers-Peng_Vid2Curve-150x150.jpg)
Type: [Technical Papers]
Vid2Curve: simultaneous camera motion estimation and thin structure reconstruction from an RGB video Presenter(s): [Wang] [Liu] [Chen] [Chu] [Theobalt] [Wang]
[SIGGRAPH 2020]
![XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera](https://history.siggraph.org/wp-content/uploads/2022/09/2020-Technical-Papers-Mehta_XNect-150x150.jpg)
Type: [Technical Papers]
XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera Presenter(s): [Mehta] [Sotnychenko] [Mueller] [Xu] [Elgharib] [Fua] [Seidel] [Rhodin] [Pons-Moll] [Theobalt]
[SIGGRAPH 2020]
![Deep reflectance fields: high-quality facial reflectance field inference from color gradient illumination](https://history.siggraph.org/wp-content/uploads/2023/01/2019-Technical-Papers-Meka_Deep-Reflectance-Fields-150x150.jpg)
Type: [Technical Papers]
Deep reflectance fields: high-quality facial reflectance field inference from color gradient illumination Presenter(s): [Meka] [Häne] [Pandey] [Zollhöfer] [Fanello] [Fyffe] [Kowdle] [Yu] [Busch] [Dourgarian] [Denny] [Bouaziz] [Lincoln] [Whalen] [Harvey] [Taylor] [Izadi] [Tagliasacchi] [Debevec] [Theobalt] [Valentin] [Rhemann]
[SIGGRAPH 2019]
![LiveCap: Real-Time Human Performance Capture From Monocular Video](https://history.siggraph.org/wp-content/uploads/2022/07/2019-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
LiveCap: Real-Time Human Performance Capture From Monocular Video Presenter(s): [Habermann] [Xu] [Zollhoefer] [Pons-Moll] [Theobalt]
[SIGGRAPH 2019]
![Neural Rendering and Reenactment of Human Actor Videos](https://history.siggraph.org/wp-content/uploads/2022/07/2019-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
Neural Rendering and Reenactment of Human Actor Videos Presenter(s): [Liu] [Xu] [Zollhoefer] [Kim] [Bernard] [Habermann] [Wang] [Theobalt]
[SIGGRAPH 2019]
![Real-time pose and shape reconstruction of two interacting hands with a single depth camera](https://history.siggraph.org/wp-content/uploads/2023/01/2019-Technical-Papers-Mueller_Real-time-Pose-and-Shape-Reconstruction-of-Two-Interacting-Hands-With-a-Single-Depth-Camera-150x150.jpg)
Type: [Technical Papers]
Real-time pose and shape reconstruction of two interacting hands with a single depth camera Presenter(s): [Mueller] [Davis] [Bernard] [Sotnychenko] [Verschoor] [Otaduy] [Casas] [Theobalt]
[SIGGRAPH 2019]
![Text-based editing of talking-head video](https://history.siggraph.org/wp-content/uploads/2023/01/2019-Technical-Papers-Fried_Text-based-Editing-of-Talking-head-Video-150x150.jpg)
Type: [Technical Papers]
Text-based editing of talking-head video Presenter(s): [Fried] [Tewari] [Zollhöfer] [Finkelstein] [Goldman] [Genova] [Jin] [Theobalt] [Agrawala]
[SIGGRAPH 2019]
![Deep video portraits](https://history.siggraph.org/wp-content/uploads/2022/07/2018-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
Deep video portraits Presenter(s): [Kim] [Garrido] [Tewari] [Xu] [Thies] [Niessner] [Perez] [Richardt] [Zollhöfer] [Theobalt]
Entry No.: [163]
[SIGGRAPH 2018]
![FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality](https://history.siggraph.org/wp-content/uploads/2022/07/2018-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
FaceVR: Real-Time Gaze-Aware Facial Reenactment in Virtual Reality Presenter(s): [Thies] [Zollhöfer] [Stamminger] [Theobalt] [Nießner]
[SIGGRAPH 2018]
![HeadOn: real-time reenactment of human portrait videos](https://history.siggraph.org/wp-content/uploads/2023/02/2018-Technical-Papers-Thies_HeadOn-Real-time-Reenactment-of-Human-Portrait-Videos-150x150.jpg)
Type: [Technical Papers]
HeadOn: real-time reenactment of human portrait videos Presenter(s): [Thies] [Zollhöfer] [Theobalt] [Stamminger] [Niessner]
Entry No.: [164]
[SIGGRAPH 2018]
![MonoPerfCap: Human Performance Capture From Monocular Video](https://history.siggraph.org/wp-content/uploads/2022/07/2018-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
MonoPerfCap: Human Performance Capture From Monocular Video Presenter(s): [Xu] [Chatterjee] [Zollhöfer] [Rhodin] [Mehta] [Seidel] [Theobalt]
[SIGGRAPH 2018]
![Opt: A Domain Specific Language for Non-Linear Least Squares Optimization in Graphics and Imaging](https://history.siggraph.org/wp-content/uploads/2022/07/2018-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
Opt: A Domain Specific Language for Non-Linear Least Squares Optimization in Graphics and Imaging Presenter(s): [DeVito] [Mara] [Zollhöfer] [Bernstein] [Ragan-Kelley] [Theobalt] [Hanrahan] [Fisher] [Niessner]
[SIGGRAPH 2018]
![BundleFusion: Real-Time Globally Consistent 3D Reconstruction Using On-the-Fly Surface Reintegration](https://history.siggraph.org/wp-content/uploads/2023/02/2017-Technical-Papers-Dai_BundleFusion-Real-Time-Globally-Consistent-3D-Reconstruction-Using-On-the-Fly-Surface-Reintegration-150x150.jpg)
Type: [Technical Papers]
BundleFusion: Real-Time Globally Consistent 3D Reconstruction Using On-the-Fly Surface Reintegration Presenter(s): [Dai] [Nießner] [Zollhöfer] [Izadi] [Theobalt]
[SIGGRAPH 2017]
![VNect: real-time 3D human pose estimation with a single RGB camera](https://history.siggraph.org/wp-content/uploads/2023/02/2017-Technical-Papers-Mehta_VNect_-Real-time-3D-Human-Pose-Estimation-with-a-Single-RGB-Camera-150x150.jpg)
Type: [Technical Papers]
VNect: real-time 3D human pose estimation with a single RGB camera Presenter(s): [Mehta] [Sridhar] [Sotnychenko] [Rhodin] [Shafiei] [Seidel] [Xu] [Casas] [Theobalt]
[SIGGRAPH 2017]
![Lightweight eye capture using a parametric model](https://history.siggraph.org/wp-content/uploads/2023/02/2016-Technical-Papers-Berard_Lightweight-Eye-Capture-Using-a-Parametric-Model-150x150.jpg)
Type: [Technical Papers]
Lightweight eye capture using a parametric model Presenter(s): [Garrido] [Zollhöfer] [Casas] [Valgaerts] [Varanasi] [Perez] [Theobalt]
[SIGGRAPH 2016]
![Live intrinsic video](https://history.siggraph.org/wp-content/uploads/2023/02/2016-Technical-Papers-Meka_Live-Intrinsic-Video-150x150.jpg)
Type: [Technical Papers]
Live intrinsic video Presenter(s): [Meka] [Zollhoefer] [Richardt] [Theobalt]
[SIGGRAPH 2016]
![Shading-based refinement on volumetric signed distance functions](https://history.siggraph.org/wp-content/uploads/2023/04/2015-Technical-Papers-Zollhofer_Shading-based-Refinement-on-Volumetric-Signed-Distance-Functions-150x150.jpg)
Type: [Technical Papers]
Shading-based refinement on volumetric signed distance functions Presenter(s): [Zollhöfer] [Dai] [Innmann] [Wu] [Stamminger] [Theobalt] [Nießner]
[SIGGRAPH 2015]
![User-Centric Computational Videography](https://history.siggraph.org/wp-content/uploads/2022/01/2015-24-User-Centric-Computational-Videography-150x150.jpg)
Type: [Courses]
User-Centric Computational Videography Organizer(s): [Richardt]
Presenter(s): [Richardt] [Tompkin] [Bai] [Theobalt]
Entry No.: [24]
[SIGGRAPH 2015]
![Real-time non-rigid reconstruction using an RGB-D camera](https://history.siggraph.org/wp-content/uploads/2023/02/2014-Technical-Papers-Zollhofer_Real-time-Non-rigid-Reconstruction-using-an-RGB-D-Camera-150x150.jpg)
Type: [Technical Papers]
Real-time non-rigid reconstruction using an RGB-D camera Presenter(s): [Zollhöfer] [Nießner] [Izadi] [Rehmann] [Zach] [Fisher] [Wu] [Fitzgibbon] [Loop] [Theobalt] [Stamminger]
[SIGGRAPH 2014]
![High detail marker based 3D reconstruction by enforcing multiview constraints](https://history.siggraph.org/wp-content/uploads/2024/03/2012-Posters-Neumann_High-detail-marker-based-3D-reconstruction-01-150x150.jpg)
Type: [Posters]
High detail marker based 3D reconstruction by enforcing multiview constraints Presenter(s): [Neumann] [Wacker] [Varanasi] [Theobalt] [Magnor]
[SIGGRAPH 2012]
![Videoscapes: exploring sparse, unstructured video collections](https://history.siggraph.org/wp-content/uploads/2023/03/2012-Technical-Papers-Tompkin_Videoscapes_-Exploring-Sparse-Unstructured-Video-Collections-150x150.jpg)
Type: [Technical Papers]
Videoscapes: exploring sparse, unstructured video collections Presenter(s): [Tompkin] [Kim] [Kautz] [Theobalt]
[SIGGRAPH 2012]
![Video-based characters: creating new human performances from a multi-view video database](https://history.siggraph.org/wp-content/uploads/2023/04/2011-Technical-Papers-Xu_Video-based-Characters-–-Creating-New-Human-Performances-from-a-Multi-view-Video-Database-150x150.jpg)
Type: [Technical Papers]
Video-based characters: creating new human performances from a multi-view video database Presenter(s): [Xu] [Liu] [Stoll] [Tompkin] [Bharaj] [Dai] [Seidel] [Kautz] [Theobalt]
[SIGGRAPH 2011]
![Performance capture from sparse multi-view video](https://history.siggraph.org/wp-content/uploads/2023/03/2008-Technical-Papers-Deaguiar_Performance-Capture-from-Sparse-Multi-view-Video-150x150.jpg)
Type: [Technical Papers]
Performance capture from sparse multi-view video Presenter(s): [de Aguiar] [Stoll] [Theobalt] [Ahmed] [Seidel] [Thrun]
[SIGGRAPH 2008]
![Eikonal rendering: efficient light transport in refractive objects](https://history.siggraph.org/wp-content/uploads/2023/05/2007-Technical-Papers-ihrke_Eikonal-Rendering-Efficient-Light-Transport-in-Refractive-Objects-150x150.jpg)
Type: [Technical Papers]
Eikonal rendering: efficient light transport in refractive objects Presenter(s): [Ihrke] [Ziegler] [Tevs] [Theobalt] [Magnor] [Seidel]
[SIGGRAPH 2007]
![GPU-based light wavefront simulation for real-time refractive object rendering](https://history.siggraph.org/wp-content/uploads/2024/05/2007-Talks-Ziegler_GPU-based-light-wavefront-simulation-for-real-time-refractive-object-rendering-150x150.jpg)
Type: [Talks (Sketches)]
GPU-based light wavefront simulation for real-time refractive object rendering Presenter(s): [Ziegler] [Theobalt] [Ihrke] [Magnor] [Tevs] [Seidel]
[SIGGRAPH 2007]
![Joint Motion and Reflectance Capture for Relightable 3D Video](https://history.siggraph.org/wp-content/uploads/2023/01/2005-Talks-Theobalt_Joint-Motion-and-Reflectance-Capture-for-Relightable-3D-Video-01-150x150.jpg)
Type: [Talks (Sketches)]
Joint Motion and Reflectance Capture for Relightable 3D Video Presenter(s): [Theobalt] [Ahmed] [de Aguiar] [Ziegler] [Lensch] [Magnor] [Seidel]
[SIGGRAPH 2005]
Role(s):
- Course Presenter
- Emerging Technologies Presenter
- Poster Presenter
- Studio (SIGGRAPH Lab) Presenter
- Talk (Sketch) Presenter
- Technical Paper Presenter
- Technical Papers Jury Member