Gordon Wetzstein
Most Recent Affiliation(s):
- Stanford University
Other / Past Affiliation(s):
- Bauhaus-University Weimar
- MIT Media Lab, Research Scientist
Bio:
SIGGRAPH 2020
Gordon Wetzstein is an Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. His research has widespread applications in next-generation imaging, display, wearable computing, and microscopy systems. Prior to joining Stanford, he was a Research Scientist in the Camera Culture Group at MIT. He received a Ph.D. in Computer Science from UBC. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Presidential Early Career Award for Scientists and Engineers, an SPIE Early Career Achievement Award, and several other awards.
SIGGRAPH 2014
Gordon is a Research Scientist in the Camera Culture Group at the MIT Media Lab. His research focuses on computational imaging and display systems as well as computational light transport. His research has been funded by DARPA, NSF, Samsung, and other grants from industry sponsors and research councils. In 2006, Gordon graduated with Honors from the Bauhaus-University Weimar in Germany, and he received a Ph.D. in Computer Science from the University of British Columbia in 2011. His doctoral dissertation focuses on computational light modulation for image acquisition and display and won the Alain Fournier Ph.D. Dissertation Annual Award. He organized the IEEE 2012 and 2013 International Workshops on Computational Cameras and Displays, founded displayblocks.org as a forum for sharing computational display design instructions with the DIY community, and presented a number of courses on Computational Displays and Computational Photography at ACM SIGGRAPH. Gordon won the best paper award for “Hand-Held Schlieren Photography with Light Field Probes” at ICCP 2011 and a Laval Virtual Award in 2005.
SIGGRAPH 2012
Gordon Wetzstein is a Postdoctoral Associate at the MIT Media Lab. His research interests include light field and high dynamic range displays, projector-camera systems, computational optics, computational photography, computer vision, computer graphics, and augmented reality. Gordon received a Diploma in Media System Science with Honors from the Bauhaus-University Weimar in 2006 and a Ph.D. in Computer Science from the University of British Columbia in 2011. His doctoral dissertation focuses on computational light modulation for image acquisition and display. He is co-chairing the first workshop on Computational Cameras and Displays at CVPR 2012, is serving on the General Submissions committee at SIGGRAPH 2012, has served on the program committees of IEEE ProCams 2007 and IEEE ISMAR 2010, won a Laval Virtual Award in 2005 for his work on projector-camera systems, and a best paper award for “Hand-Held Schlieren Photography with Light Field Probes” at the International Conference on Computational Photography in 2011, introducing light field probes as computational displays for computer vision and fluid mechanics applications.
Learning Category: Organizing Committee Chair:
Course Organizer:
- SIGGRAPH 2012, "Computational Displays"
- SIGGRAPH 2017, "Build Your Own VR Display: An Introduction to VR Display Systems for Hobbyists and Educators"
Experience Category: Jury Member:
Learning Category: Jury Member:
Award(s):
Experience(s):
![Neural Holographic Near-eye Displays for Virtual Reality](https://history.siggraph.org/wp-content/uploads/2024/01/2023-ETech-Choi_Neural-Holographic-Near-eye-Displays-for-Virtual-Reality-150x150.jpg)
Type: [E-Tech]
Neural Holographic Near-eye Displays for Virtual Reality
Organizer(s): [Choi] [Gopakumar] [Chao] [Lee] [Kim] [Wetzstein]
Entry No.: [12]
[SIGGRAPH 2023]
![Autofocals: Gaze-Contingent Eyeglasses for Presbyopes](https://history.siggraph.org/wp-content/uploads/2022/04/2018-ETech-03-Padmanaban_Autofocals-01-150x150.jpg)
Type: [E-Tech]
Autofocals: Gaze-Contingent Eyeglasses for Presbyopes
Organizer(s): [Padmanaban] [Konrad] [Wetzstein]
Entry No.: [03]
[SIGGRAPH 2018]
![Real-time Non-line-of-sight Imaging](https://history.siggraph.org/wp-content/uploads/2022/04/2018-ETech-15-OToole_Real-time-Non-Line-of-Sight-Imaging-01-150x150.jpg)
Type: [E-Tech]
Real-time Non-line-of-sight Imaging
Organizer(s): [O’Toole] [Lindell] [Wetzstein]
Entry No.: [15]
[SIGGRAPH 2018]
![Computational Focus-Tunable Near-eye Displays](https://history.siggraph.org/wp-content/uploads/2022/03/2016-ETech-03-Konrad_Computational-Focus-Tunable-Near-eye-Displays-01-150x150.jpg)
Type: [E-Tech]
Computational Focus-Tunable Near-eye Displays
Organizer(s): [Konrad] [Padmanaban] [Cooper] [Wetzstein]
Entry No.: [03]
[SIGGRAPH 2016]
![Doppler Time-of-Flight Imaging](https://history.siggraph.org/wp-content/uploads/2022/03/2015-ETech-09-Heide_Doppler-Time-of-Flight-Imaging-01-150x150.jpg)
Type: [E-Tech]
Doppler Time-of-Flight Imaging
Organizer(s): [Heide] [Wetzstein] [Hullin] [Heidrich]
Entry No.: [09]
[SIGGRAPH 2015]
![The Light Field Stereoscope](https://history.siggraph.org/wp-content/uploads/2022/03/2015-ETech-24-Huang_The-Light-Field-Stereoscope-01-150x150.jpg)
Type: [E-Tech]
The Light Field Stereoscope
Organizer(s): [Huang] [Luebke] [Wetzstein]
Entry No.: [24]
[SIGGRAPH 2015]
![A Compressive Light Field Projection System](https://history.siggraph.org/wp-content/uploads/2022/03/2014-ETech-02-Hirsch_A-Compressive-Light-Field-Projection-System-01-150x150.png)
Type: [E-Tech]
A Compressive Light Field Projection System
Organizer(s): [Hirsch] [Wetzstein] [Raskar]
Entry No.: [02]
[SIGGRAPH 2014]
Learning Category: Presentation(s):
![Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization](https://history.siggraph.org/wp-content/uploads/2024/02/2023-Tech-Papers-Lin_Single-Shot-Implicit-Morphable-Faces-with-Consistent-Texture-Parameterization-150x150.jpg)
Type: [Technical Papers]
Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization Presenter(s): [Lin] [Nagano] [Kautz] [Chan] [Guibas] [Wetzstein] [Khamis]
[SIGGRAPH 2023]
![Towards Attention–Aware Foveated Rendering](https://history.siggraph.org/wp-content/uploads/2024/02/2023-Tech-Papers-Krajancich_Towards-Attention–aware-Rendering-for-Virtual-and-Augmented-Reality-150x150.jpg)
Type: [Technical Papers]
Towards Attention-Aware Foveated Rendering Presenter(s): [Krajancich] [Kellnhofer] [Wetzstein]
[SIGGRAPH 2023]
![A perceptual model for eccentricity-dependent spatio-temporal flicker fusion and its applications to foveated graphics](https://history.siggraph.org/wp-content/uploads/2023/06/2021-Technical-Papers-Krjancich_A-Perceptual-Model-for-Eccentricity-dependent-Spatio-temporal-Flicker-Fusion-and-its-Applications-to-Foveated-Graphics-150x150.jpg)
Type: [Technical Papers]
A perceptual model for eccentricity-dependent spatio-temporal flicker fusion and its applications to foveated graphics Presenter(s): [Krajancich] [Kellnhofer] [Wetzstein]
[SIGGRAPH 2021]
![Acorn: adaptive coordinate networks for neural scene representation](https://history.siggraph.org/wp-content/uploads/2023/06/2021-Technical-Papers-Martel_acorn-Adaptive-Coordinate-Networks-for-Neural-Scene-Representation-150x150.jpg)
Type: [Technical Papers]
Acorn: adaptive coordinate networks for neural scene representation Presenter(s): [Martel] [Lindell] [Lin] [Chan] [Monteiro] [Wetzstein]
[SIGGRAPH 2021]
![Advances in Neural Rendering](https://history.siggraph.org/wp-content/uploads/2022/03/2021-1-Advances-in-Neural-Rendering-150x150.jpg)
Type: [Courses]
Advances in Neural Rendering Organizer(s): [Tewari]
Presenter(s): [Sitzmann] [Fried] [Thies] [Xu] [Tretschk] [Mildenhall] [Pandey] [Orts-Escolano] [Fanello] [Guo] [Wetzstein] [Zhu] [Theobalt] [Agrawala] [Zollhöfer]
Entry No.: [01]
[SIGGRAPH 2021]
![Deep Optics: Joint Design of Optics and Image Recovery Algorithms for Domain Specific Cameras](https://history.siggraph.org/wp-content/uploads/2022/03/2020-18-Deep-Optics-Joint-Design-of-Optics-and-Image-Recovery-Algorithms-for-Domain-Specific-Cameras-150x150.jpg)
Type: [Courses]
Deep Optics: Joint Design of Optics and Image Recovery Algorithms for Domain Specific Cameras Organizer(s): [Peng]
Presenter(s): [Veeraraghavan] [Heidrich] [Wetzstein]
Entry No.: [18]
[SIGGRAPH 2020]
![Gaze-Contingent Ocular Parallax Rendering for Virtual Reality](https://history.siggraph.org/wp-content/uploads/2022/07/2020-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
Gaze-Contingent Ocular Parallax Rendering for Virtual Reality Presenter(s): [Konrad] [Angelopoulos] [Wetzstein]
[SIGGRAPH 2020]
![Autofocals: evaluating gaze-contingent eyeglasses for presbyopes](https://history.siggraph.org/wp-content/uploads/2022/09/2019-Talks-Padmanaban_Autofocals-Evaluating-Gaze-Contingent-Eyeglasses-for-Presbyopes-01-150x150.jpg)
Type: [Talks (Sketches)]
Autofocals: evaluating gaze-contingent eyeglasses for presbyopes Presenter(s): [Padmanaban] [Konrad] [Wetzstein]
Entry No.: [55]
[SIGGRAPH 2019]
![Gaze-Contingent Ocular Parallax Rendering for Virtual Reality](https://history.siggraph.org/wp-content/uploads/2022/09/2019-Talks-Konrad_Gaze-Contingent-Ocular-Parallax-Rendering-for-Virtual-Reality-01-150x150.jpg)
Type: [Talks (Sketches)]
Gaze-Contingent Ocular Parallax Rendering for Virtual Reality Presenter(s): [Konrad] [Angelopoulos] [Wetzstein]
Entry No.: [56]
[SIGGRAPH 2019]
![Non-Line-of-Sight Imaging With Partial Occluders and Surface Normals](https://history.siggraph.org/wp-content/uploads/2022/07/2019-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
Non-Line-of-Sight Imaging With Partial Occluders and Surface Normals Presenter(s): [Heide] [O'Toole] [Zang] [Lindell] [Diamond] [Wetzstein]
[SIGGRAPH 2019]
![Wave-based non-line-of-sight imaging using fast f-k migration](https://history.siggraph.org/wp-content/uploads/2023/01/2019-Technical-Papers-Lindell_Wave-Based-Non-Line-of-Sight-Imaging-using-Fast-f−k-Migration-150x150.jpg)
Type: [Technical Papers]
Wave-based non-line-of-sight imaging using fast f-k migration Presenter(s): [Lindell] [Wetzstein] [O’Toole]
[SIGGRAPH 2019]
![Confocal Non-Line-of-Sight Imaging](https://history.siggraph.org/wp-content/uploads/2022/09/2018-Talks-Otoole_Confocal-Non-line-of-sight-Imaging-01-150x150.jpg)
Type: [Talks (Sketches)]
Confocal Non-Line-of-Sight Imaging Presenter(s): [O'Toole] [Lindell] [Wetzstein]
Entry No.: [01]
[SIGGRAPH 2018]
![End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging](https://history.siggraph.org/wp-content/uploads/2023/02/2018-Technical-Papers-Sitzmann_End-to-end-Optimization-of-Optics-and-Image-Processing-for-Achromatic-Extended-Depth-of-Field-and-Super-resolution-Imaging-150x150.jpg)
Type: [Technical Papers]
End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging Presenter(s): [Sitzmann] [Diamond] [Peng] [Dun] [Boyd] [Heidrich] [Heide] [Wetzstein]
Entry No.: [114]
[SIGGRAPH 2018]
![Single-photon 3D imaging with deep sensor fusion](https://history.siggraph.org/wp-content/uploads/2023/02/2018-Technical-Papers-Lindell_Single-Photon-3D-Imaging-with-Deep-Sensor-Fusion-150x150.jpg)
Type: [Technical Papers]
Single-photon 3D imaging with deep sensor fusion Presenter(s): [Lindell] [O’Toole] [Wetzstein]
Entry No.: [113]
[SIGGRAPH 2018]
![Accommodation-invariant computational near-eye displays](https://history.siggraph.org/wp-content/uploads/2023/02/2017-Technical-Papers-Konrad_Accommodation-invariant-Computational-Near-eye-Displays-1-150x150.jpg)
Type: [Technical Papers]
Accommodation-invariant computational near-eye displays Presenter(s): [Konrad] [Padmanaban] [Molner] [Cooper] [Wetzstein]
[SIGGRAPH 2017]
![Applications of Visual Perception to Virtual Reality Rendering](https://history.siggraph.org/wp-content/uploads/2022/02/2017-1-Applications-of-Visual-Perception-to-Virtual-Reality-Rendering-150x150.jpg)
Type: [Courses]
Applications of Visual Perception to Virtual Reality Rendering Organizer(s): [Patney]
Presenter(s): [Kim] [Zannoli] [Koulieris] [Wetzstein] [Steinicke]
Entry No.: [01]
[SIGGRAPH 2017]
![Movie editing and cognitive event segmentation in virtual reality video](https://history.siggraph.org/wp-content/uploads/2023/02/2017-Technical-Papers-Serrano_Movie-Editing-and-Cognitive-Event-Segmentation-in-Virtual-Reality-Video-150x150.jpg)
Type: [Technical Papers]
Movie editing and cognitive event segmentation in virtual reality video Presenter(s): [Serrano] [Sitzmann] [Ruiz-Borau] [Wetzstein] [Gutierrez] [Masia]
[SIGGRAPH 2017]
![Optimizing VR for all users through adaptive focus displays](https://history.siggraph.org/wp-content/uploads/2022/09/2017-Talks-Padmanaban-Optimizing-VR-for-All-Users-Through-Adaptive-Focus-Displays-01-150x150.jpg)
Type: [Talks (Sketches)]
Optimizing VR for all users through adaptive focus displays Presenter(s): [Padmanaban] [Konrad] [Cooper] [Wetzstein]
Entry No.: [77]
[SIGGRAPH 2017]
![Computational imaging with multi-camera time-of-flight systems](https://history.siggraph.org/wp-content/uploads/2023/02/2016-Technical-Papers-Shrestha_Computational-Imaging-with-Multi-Camera-Time-of-Flight-Systems-150x150.jpg)
Type: [Technical Papers]
Computational imaging with multi-camera time-of-flight systems Presenter(s): [Shrestha] [Heide] [Heidrich] [Wetzstein]
[SIGGRAPH 2016]
![ProxImaL: efficient image optimization using proximal algorithms](https://history.siggraph.org/wp-content/uploads/2023/02/2016-Technical-Papers-Kovalsky_Accelerated-Quadratic-Proxy-for-Geometric-Optimization-1-150x150.jpg)
Type: [Technical Papers]
ProxImaL: efficient image optimization using proximal algorithms Presenter(s): [Heide] [Diamond] [Nießner] [Ragan-Kelley] [Heidrich] [Wetzstein]
[SIGGRAPH 2016]
![Doppler time-of-flight imaging](https://history.siggraph.org/wp-content/uploads/2022/03/2015-ETech-09-Heide_Doppler-Time-of-Flight-Imaging-01-150x150.jpg)
Type: [Technical Papers]
Doppler time-of-flight imaging Presenter(s): [Heide] [Heidrich] [Wetzstein] [Hullin]
[SIGGRAPH 2015]
![The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues](https://history.siggraph.org/wp-content/uploads/2022/07/2015-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues Presenter(s): [Huang] [Chen] [Wetzstein]
[SIGGRAPH 2015]
![A compressive light field projection system](https://history.siggraph.org/wp-content/uploads/2023/02/2014-Technical-Papers-Hirsch_A-Compressive-Light-Field-Projection-System-150x150.jpg)
Type: [Technical Papers]
A compressive light field projection system Presenter(s): [Hirsch] [Wetzstein] [Raskar]
[SIGGRAPH 2014]
![Computational Cameras and Displays](https://history.siggraph.org/wp-content/uploads/2022/01/2014-9-Computational-Cameras-and-Displays-150x150.jpg)
Type: [Courses]
Computational Cameras and Displays Organizer(s): [O’Toole]
Presenter(s): [O’Toole] [Wetzstein]
Entry No.: [09]
[SIGGRAPH 2014]
![Eyeglasses-free display: towards correcting visual aberrations with computational light field displays](https://history.siggraph.org/wp-content/uploads/2023/02/2014-Technical-Papers-Huang_Eyeglasses-free-Display-Towards-Correcting-Visual-Aberrations-150x150.jpg)
Type: [Technical Papers]
Eyeglasses-free display: towards correcting visual aberrations with computational light field displays Presenter(s): [Huang] [Wetzstein] [Barsky] [Raskar]
[SIGGRAPH 2014]
![Focus 3D: Compressive Accommodation Display](https://history.siggraph.org/wp-content/uploads/2022/07/2014-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
Focus 3D: Compressive Accommodation Display Presenter(s): [Maimone] [Wetzstein] [Hirsch] [Lanman] [Raskar] [Fuchs]
[SIGGRAPH 2014]
![Adaptive image synthesis for compressive displays](https://history.siggraph.org/wp-content/uploads/2023/03/2013-Technical-Papers-Heide_Adaptive-Image-Synthesis-for-Compressive-Displays-150x150.jpg)
Type: [Technical Papers]
Adaptive image synthesis for compressive displays Presenter(s): [Heide] [Wetzstein] [Raskar] [Heidrich]
[SIGGRAPH 2013]
![Compressive light field photography using overcomplete dictionaries and optimized projections](https://history.siggraph.org/wp-content/uploads/2023/03/2013-Technical-Papers-Marwah_Compressive-Light-Field-Photography-using-Overcomplete-Dictionaries-and-Optimized-Projections-150x150.jpg)
Type: [Technical Papers]
Compressive light field photography using overcomplete dictionaries and optimized projections Presenter(s): [Marwah] [Wetzstein] [Bando] [Raskar]
[SIGGRAPH 2013]
![Computational Light Field Display for Correcting Visual Aberrations](https://history.siggraph.org/wp-content/uploads/2023/01/2013-Poster-29-Huang_Computational-Light-Field-Display-for-Correcting-Visual-Aberrations-01-150x150.jpg)
Type: [Posters]
Computational Light Field Display for Correcting Visual Aberrations Presenter(s): [Huang] [Wetzstein] [Barsky] [Raskar]
Entry No.: [29]
[SIGGRAPH 2013]
![Compressive light field photography](https://history.siggraph.org/wp-content/uploads/2024/03/2012-Posters-Marwah_Compressive-light-field-photography-01-150x150.jpg)
Type: [Posters]
Compressive light field photography Presenter(s): [Marwah] [Wetzstein] [Veeraraghavan] [Raskar]
[SIGGRAPH 2012]
![Compressive light field photography](https://history.siggraph.org/wp-content/uploads/2024/05/2012-Talks-Marwah_Compressive-Light-Field-Photography-150x150.jpg)
Type: [Talks (Sketches)]
Compressive light field photography Presenter(s): [Marwah] [Wetzstein] [Veeraraghavan] [Raskar]
[SIGGRAPH 2012]
![Computational cellphone microscopy](https://history.siggraph.org/wp-content/uploads/2024/03/2012-Posters-Arpa_Computational-cellphone-microscopy-01-150x150.jpg)
Type: [Posters]
Computational cellphone microscopy Presenter(s): [Arpa] [Wetzstein] [Lanman] [Raskar]
[SIGGRAPH 2012]
![Computational retinal imaging via binocular coupling and indirect illumination](https://history.siggraph.org/wp-content/uploads/2024/01/2012-Posters-Lawson_Computational-Retinal-Imaging-via-Binocular-Coupling-and-Indirect-Illumination-01-150x150.png)
Type: [Posters]
Computational retinal imaging via binocular coupling and indirect illumination Presenter(s): [Lawson] [Boggess] [Khullar] [Olwal] [Wetzstein] [Raskar]
[SIGGRAPH 2012]
![Computational retinal imaging via binocular coupling and indirect illumination](https://history.siggraph.org/wp-content/uploads/2024/05/2012-Talks-Lawson_Computational-Retinal-Imaging-via-Binocular-Coupling-and-Indirect-Illumination-150x150.jpg)
Type: [Talks (Sketches)]
Computational retinal imaging via binocular coupling and indirect illumination Presenter(s): [Lawson] [Boggess] [Khullar] [Olwal] [Wetzstein] [Raskar]
[SIGGRAPH 2012]
![Perceptually-optimized content remapping for automultiscopic displays](https://history.siggraph.org/wp-content/uploads/2024/01/2012-Posters-Masia_Perceptually-Optimized-Content-Remapping-for-Automultiscopic-Displays-01-150x150.png)
Type: [Posters]
Perceptually-optimized content remapping for automultiscopic displays Presenter(s): [Masia] [Wetzstein] [Aliaga] [Raskar] [Gutierrez]
[SIGGRAPH 2012]
![Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting](https://history.siggraph.org/wp-content/uploads/2023/03/2012-Technical-Papers-Wetzstein_Tensor-Displays_-Compressive-Light-Field-Synthesis-using-Multilayer-Displays-with-Directional-Backlighting-150x150.jpg)
Type: [Technical Papers]
Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting Presenter(s): [Wetzstein] [Lanman] [Hirsch] [Raskar]
[SIGGRAPH 2012]
![Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays](https://history.siggraph.org/wp-content/uploads/2023/04/2011-Technical-Papers-Wetzstein_Layered-3D_-Tomographic-Image-Synthesis-for-Attenuation-based-Light-Field-and-High-Dynamic-Range-Displays-150x150.jpg)
Type: [Technical Papers]
Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays Presenter(s): [Wetzstein] [Lanman] [Heidrich] [Raskar]
[SIGGRAPH 2011]
![Diffusion Coded Photography for Extended Depth of Field](https://history.siggraph.org/wp-content/uploads/2022/07/2010-SIGGRAPH-Image-Not-Available-150x150.jpg)
Type: [Technical Papers]
Diffusion Coded Photography for Extended Depth of Field Presenter(s): [Grosse] [Wetzstein] [Grundhöfer] [Bimber]
[SIGGRAPH 2010]
![Adaptive coded aperture projection](https://history.siggraph.org/wp-content/uploads/2024/07/2009-Talks-Grosse_Adaptive-Coded-Aperture-Projection-150x150.jpg)
Type: [Talks (Sketches)]
Adaptive coded aperture projection Presenter(s): [Grosse] [Wetzstein] [Bimber] [Grundhöfer]
[SIGGRAPH 2009]
![HECTOR - scripting-based VR system design](https://history.siggraph.org/wp-content/uploads/2024/01/2007-Poster-Wetzstein_HECTOR-–-Scripting-Based-VR-System-Design-02-150x150.jpg)
Type: [Posters]
HECTOR - scripting-based VR system design Presenter(s): [Wetzstein] [Gollner] [Beck] [Weiszig] [Derkau] [Springer] [Fröhlich]
[SIGGRAPH 2007]
![Radiometric compensation of global illumination effects with projector-camera systems](https://history.siggraph.org/wp-content/uploads/2024/05/2006-Poster-0164-Wetzstein_Radiometric-Compensation-of-Global-Illumination-Effects-with-Projector-Camera-Systems-01-150x150.jpg)
Type: [Posters]
Radiometric compensation of global illumination effects with projector-camera systems Presenter(s): [Wetzstein] [Bimber]
[SIGGRAPH 2006]
Learning Category: Moderator:
![Étendue Expansion in Holographic Near Eye Displays Through Sparse Eye-box Generation Using Lens Array Eyepiece](https://history.siggraph.org/wp-content/uploads/2024/02/2023-Tech-Papers-Chae_Etendue-Expansion-in-Holographic-Near-Eye-Displays-through-Sparse-Eye-box-Generation-Using-Lens-Array-Eyepiece-150x150.jpg)
Type: [Technical Papers]
Étendue Expansion in Holographic Near Eye Displays Through Sparse Eye-box Generation Using Lens Array Eyepiece Presenter(s): [Chae] [Bang] [Yoo] [Jeong]
[SIGGRAPH 2023]
![OpenMPD: A Low-level Presentation Engine for Multimodal Particle-based Displays](https://history.siggraph.org/wp-content/uploads/2024/02/2023_Technical-Paper_Montano-Murillo_OpenMPD-150x150.jpg)
Type: [Technical Papers]
OpenMPD: A Low-level Presentation Engine for Multimodal Particle-based Displays Presenter(s): [Montano-Murillo] [Hirayama] [Plasencia]
[SIGGRAPH 2023]
![Perceptual Visibility Model for Temporal Contrast Changes in Periphery](https://history.siggraph.org/wp-content/uploads/2024/02/2023_Technical-Paper_Tursun_Perceptual-Visibility-Model-150x150.jpg)
Type: [Technical Papers]
Perceptual Visibility Model for Temporal Contrast Changes in Periphery Presenter(s): [Tursun] [Didyk]
[SIGGRAPH 2023]
![Perspective-correct VR Passthrough Without Reprojection](https://history.siggraph.org/wp-content/uploads/2024/02/2023-Tech-Papers-Kuo_Perspective-Correct-VR-Passthrough-Without-Reprojection-150x150.jpg)
Type: [Technical Papers]
Perspective-correct VR Passthrough Without Reprojection Presenter(s): [Kuo] [Penner] [Moczydlowski] [Ching] [Lanman] [Matsuda]
[SIGGRAPH 2023]
![Split-Lohmann Multifocal Displays](https://history.siggraph.org/wp-content/uploads/2024/02/2023-Tech-Papers-Qin_Split-Lohmann-Multifocal-Displays-02-150x150.jpg)
Type: [Technical Papers]
Split-Lohmann Multifocal Displays Presenter(s): [Qin] [Chen] [O'Toole] [Sankaranarayanan]
[SIGGRAPH 2023]
![The Statistics of Eye Movements and Binocular Disparities in VR Gaming Headsets Should Drive Headset Design](https://history.siggraph.org/wp-content/uploads/2024/02/2023_Technical-Paper_Aizenman_The-Statistics-of-Eye-Movements-and-Binocular-Disparities-150x150.jpg)
Type: [Technical Papers]
The Statistics of Eye Movements and Binocular Disparities in VR Gaming Headsets Should Drive Headset Design Presenter(s): [Aizenman] [Koulieris] [Gibaldi] [Sehgal] [Levi] [Banks]
[SIGGRAPH 2023]
![3DTV at home: eulerian-lagrangian stereo-to-multiview conversion](https://history.siggraph.org/wp-content/uploads/2023/02/2017-Technical-Papers-Kellnhofer_3DTV-at-Home_-Eulerian-Lagrangian-Stereo-to-Multiview-Conversion-1-150x150.jpg)
Type: [Technical Papers]
3DTV at home: eulerian-lagrangian stereo-to-multiview conversion Presenter(s): [Kellnhofer] [Didyk] [Wang] [Sitthi-amorn] [Freeman] [Durand] [Matusik]
[SIGGRAPH 2017]
![Hiding of phase-based stereo disparity for ghost-free viewing without glasses](https://history.siggraph.org/wp-content/uploads/2023/02/2017-Technical-Papers-Fukiage_Hiding-of-Phase-Based-Stereo-Disparity-for-Ghost-Free-Viewing-Without-Glasses-150x150.jpg)
Type: [Technical Papers]
Hiding of phase-based stereo disparity for ghost-free viewing without glasses Presenter(s): [Fukiage] [Kawabe] [Nishida]
[SIGGRAPH 2017]
![Low-cost 360 stereo photography and video capture](https://history.siggraph.org/wp-content/uploads/2023/02/2017-Technical-Papers-Matzen_Low-cost-360-stereo-photography-and-video-capture-1-150x150.jpg)
Type: [Technical Papers]
Low-cost 360 stereo photography and video capture Presenter(s): [Matzen] [Cohen] [Evans] [Kopf] [Szeliski]
[SIGGRAPH 2017]
![Mixed-primary factorization for dual-frame computational displays](https://history.siggraph.org/wp-content/uploads/2023/02/2017-Technical-Papers-Huang_Mixed-primary-Factorization-for-Dual-frame-Computational-Displays-150x150.jpg)
Type: [Technical Papers]
Mixed-primary factorization for dual-frame computational displays Presenter(s): [Huang] [Pająk] [Kim] [Kautz] [Luebke]
[SIGGRAPH 2017]
![Compressive epsilon photography for post-capture control in digital imaging](https://history.siggraph.org/wp-content/uploads/2023/02/2014-Technical-Papers-Ito_Compressive-Epsilon-Photography-for-Post-Capture-Control-in-Digital-Imaging-150x150.jpg)
Type: [Technical Papers]
Compressive epsilon photography for post-capture control in digital imaging Presenter(s): [Ito] [Tambe] [Mitra] [Sankaranarayanan] [Veeraraghavan]
[SIGGRAPH 2014]
![Learning to be a depth camera for close-range human capture and interaction](https://history.siggraph.org/wp-content/uploads/2023/02/2014-Technical-Papers-Fanello_Learning-to-be-a-Depth-Camera-150x150.jpg)
Type: [Technical Papers]
Learning to be a depth camera for close-range human capture and interaction Presenter(s): [Fanello] [Keskin] [Izadi] [Kohli] [Shotton] [Criminisi] [Kim] [Sweeney] [Kang]
[SIGGRAPH 2014]
![Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources](https://history.siggraph.org/wp-content/uploads/2023/02/2014-Technical-Papers-Maimone_Pinlight-Displays-Wide-Field-of-View-Augmented-Reality-Eyeglasses-150x150.jpg)
Type: [Technical Papers]
Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources Presenter(s): [Maimone] [Lanman] [Rathinavel] [Keller] [Luebke] [Fuchs]
[SIGGRAPH 2014]
Role(s):
- Awardee
- Course Organizer
- Course Presenter
- Courses Organizing Committee Chair/Co-Chair
- Emerging Technologies Presenter
- Poster Presenter
- Talk (Sketch) Presenter
- Technical Paper Moderator
- Technical Paper Presenter
- Technical Paper Session Moderator
- Technical Papers Jury Member
- Unified Jury Member