“Handheld mobile photography in very low light” by Liba, Murthy, Tsai, Brooks, Xue, et al. …
Title:
- Handheld mobile photography in very low light
Session/Category Title: Photography in the Field
Presenter(s)/Author(s):
- Orly Liba
- Kiran Murthy
- Yun-Ta Tsai
- Tim Brooks
- Tianfan Xue
- Nikhil Karnad
- Qiurui He
- Jonathan T. Barron
- Dillon Sharlet
- Ryan Geiss
- Samuel Hasinoff
- Yael Pritch
- Marc Levoy
Abstract:
Taking photographs in low light using a mobile phone is challenging and rarely produces pleasing results. Aside from the physical limits imposed by read noise and photon shot noise, these cameras are typically handheld, have small apertures and sensors, use mass-produced analog electronics that cannot easily be cooled, and are commonly used to photograph subjects that move, like children and pets. In this paper we describe a system for capturing clean, sharp, colorful photographs in light as low as 0.3 lux, where human vision becomes monochromatic and indistinct. To permit handheld photography without flash illumination, we capture, align, and combine multiple frames. Our system employs “motion metering”, which uses an estimate of motion magnitudes (whether due to handshake or moving objects) to identify the number of frames and the per-frame exposure times that together minimize both noise and motion blur in a captured burst. We combine these frames using robust alignment and merging techniques that are specialized for high-noise imagery. To ensure accurate colors in such low light, we employ a learning-based auto white balancing algorithm. To prevent the photographs from looking like they were shot in daylight, we use tone mapping techniques inspired by illusionistic painting: increasing contrast, crushing shadows to black, and surrounding the scene with darkness. All of these processes are performed using the limited computational resources of a mobile device. Our system can be used by novice photographers to produce shareable pictures in a few seconds based on a single shutter press, even in environments so dim that humans cannot see clearly.
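The "motion metering" idea in the abstract is concrete enough to sketch: estimate how fast the scene (or the hand) is moving, then search for the frame count and per-frame exposure time that maximize merged image quality without exceeding a per-frame blur budget or the total capture time. The Python sketch below is illustrative only; the shot-plus-read noise model, the constants, and the function names are assumptions for exposition, not the paper's actual implementation.

import math

# Hypothetical constants; real values depend on the sensor and device.
READ_NOISE = 3.0        # read noise per frame, in electrons (assumed)
MAX_EXPOSURE = 1.0 / 3  # longest allowed per-frame exposure, seconds (assumed)
MAX_TOTAL_TIME = 6.0    # total capture budget for the burst, seconds (assumed)

def snr(n_frames: int, exposure_s: float, scene_flux: float) -> float:
    """Approximate SNR of the merged result under a shot+read noise model.

    Signal grows with total integration time; each frame adds one
    read-noise penalty, which is why fewer, longer frames are cleaner.
    """
    signal = scene_flux * exposure_s * n_frames
    noise = math.sqrt(signal + n_frames * READ_NOISE ** 2)
    return signal / noise

def blur_pixels(exposure_s: float, motion_px_per_s: float) -> float:
    """Expected motion blur in pixels for a single frame."""
    return motion_px_per_s * exposure_s

def motion_meter(motion_px_per_s: float, scene_flux: float,
                 max_blur_px: float = 1.5):
    """Grid-search the (N, t) pair that maximizes SNR while keeping
    per-frame blur below a threshold and staying inside the time budget."""
    best = None
    for n in range(1, 16):
        for t_ms in range(5, int(MAX_EXPOSURE * 1000) + 1, 5):
            t = t_ms / 1000.0
            if n * t > MAX_TOTAL_TIME:
                continue
            if blur_pixels(t, motion_px_per_s) > max_blur_px:
                continue
            quality = snr(n, t, scene_flux)
            if best is None or quality > best[0]:
                best = (quality, n, t)
    return best  # (snr, n_frames, exposure_s), or None if infeasible

# A near-static scene admits long exposures; a fast-moving subject
# forces short ones, trading noise for sharpness.
print(motion_meter(motion_px_per_s=5.0, scene_flux=40.0))
print(motion_meter(motion_px_per_s=120.0, scene_flux=40.0))

Under this toy model, a steady capture uses long exposures and the full frame budget, while fast motion pushes the search toward short exposures and more frames, mirroring the noise-versus-blur trade-off the abstract describes.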
References:
1. Miika Aittala and Frédo Durand. 2018. Burst image deblurring using permutation invariant convolutional neural networks. ECCV (2018).
2. Jonathan T. Barron. 2015. Convolutional Color Constancy. ICCV (2015).
3. Jonathan T. Barron and Yun-Ta Tsai. 2017. Fast Fourier Color Constancy. CVPR (2017).
4. Giacomo Boracchi and Alessandro Foi. 2012. Modeling the Performance of Image Restoration From Motion Blur. IEEE TIP (2012).
5. Tim Brooks, Ben Mildenhall, Tianfan Xue, Jiawen Chen, Dillon Sharlet, and Jonathan T. Barron. 2019. Unprocessing Images for Learned Raw Denoising. CVPR (2019).
6. Daniel J. Butler, Jonas Wulff, Garrett B. Stanley, and Michael J. Black. 2012. A naturalistic open source movie for optical flow evaluation. ECCV (2012).
7. Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. 2018. Learning to See in the Dark. CVPR (2018).
8. Dongliang Cheng, Dilip K. Prasad, and Michael S. Brown. 2014. Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution. JOSA A (2014).
9. Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. 2007. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE TIP (2007).
10. Mauricio Delbracio and Guillermo Sapiro. 2015. Hand-Held Video Deblurring Via Efficient Fourier Aggregation. IEEE TCI (2015).
11. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum Likelihood from Incomplete Data Via the EM Algorithm. Journal of the Royal Statistical Society, Series B (1977).
12. Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. 2015. FlowNet: Learning Optical Flow with Convolutional Networks. ICCV (2015).
13. Jana Ehmann, Lun-Cheng Chu, Sung-Fang Tsai, and Chia-Kai Liang. 2018. Real-Time Video Denoising on Mobile Phones. ICIP (2018).
14. Gabriel Eilertsen, Jonas Unger, and Rafal K. Mantiuk. 2016. Evaluation of tone mapping operators for HDR video. High Dynamic Range Video (2016).
15. Sina Farsiu, M. Dirk Robinson, Michael Elad, and Peyman Milanfar. 2004. Fast and robust multiframe super resolution. IEEE TIP (2004).
16. James A. Ferwerda, Sumanta N. Pattanaik, Peter Shirley, and Donald P. Greenberg. 1996. A model of visual adaptation for realistic image synthesis. Computer graphics and interactive techniques (1996).
17. Graham Finlayson and Roshanak Zakizadeh. 2014. Reproduction Angular Error: An Improved Performance Metric for Illuminant Estimation. BMVC (2014).
18. David H. Foster. 2011. Color constancy. Vision Research (2011).
19. Peter Vincent Gehler, Carsten Rother, Andrew Blake, Tom Minka, and Toby Sharp. 2008. Bayesian color constancy revisited. CVPR (2008).
20. GFXBench. 2019. http://gfxbench.com/. [Online; accessed 17-May-2019].
21. Arjan Gijsenij, Theo Gevers, and Marcel P. Lucassen. 2009. Perceptual analysis of distance measures for color constancy algorithms. JOSA A (2009).
22. Arjan Gijsenij, Theo Gevers, and Joost van de Weijer. 2011. Computational Color Constancy: Survey and Experiments. IEEE TIP (2011).
23. Clément Godard, Kevin Matzen, and Matt Uyttendaele. 2018. Deep Burst Denoising. ECCV (2018).
24. Google LLC. 2016a. Android Camera2 API, http://developer.android.com/reference/android/hardware/camera2/package-summary.html.
25. Google LLC. 2016b. HDR+ burst photography dataset, http://www.hdrplusdata.org.
26. Google LLC. 2019. Handheld Mobile Photography in Very Low Light webpage, https://google.github.io/night-sight/.
27. Yulia Gryaditskaya, Tania Pouli, Erik Reinhard, Karol Myszkowski, and Hans-Peter Seidel. 2015. Motion Aware Exposure Bracketing for HDR Video. Computer Graphics Forum (Proc. EGSR) (2015).
28. Samuel W. Hasinoff, Dillon Sharlet, Ryan Geiss, Andrew Adams, Jonathan T. Barron, Florian Kainz, Jiawen Chen, and Marc Levoy. 2016. Burst photography for high dynamic range and low-light imaging on mobile cameras. SIGGRAPH Asia (2016).
29. Hermann von Helmholtz. 1995. On the relation of optics to painting. Science and Culture: Popular and Philosophical Essays (1995).
30. Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017).
31. Michal Irani and Shmuel Peleg. 1991. Improving resolution by image registration. CVGIP: Graphical Models and Image Processing (1991).
32. David E. Jacobs, Orazio Gallo, Emily A. Cooper, Kari Pulli, and Marc Levoy. 2015. Simulating the Visual Experience of Very Bright and Very Dark Scenes. ACM Trans. Graph. (2015).
33. Alexandre Karpenko, David Jacobs, Jongmin Baek, and Marc Levoy. 2011. Digital video stabilization and rolling shutter correction using gyroscopes. Stanford Tech Report (2011).
34. Adam G. Kirk and James F. O’Brien. 2011. Perceptually based tone mapping for low-light conditions. ACM Trans. Graph. (2011).
35. Patrick Ledda, Alan Chalmers, Tom Troscianko, and Helge Seetzen. 2005. Evaluation of tone mapping operators using a high dynamic range display. SIGGRAPH (2005).
36. Marc Levoy. 2012. SynthCam, https://sites.google.com/site/marclevoy/.
37. Ce Liu. 2009. Beyond pixels: exploring new representations and applications for motion analysis. Ph.D. Dissertation. Massachusetts Institute of Technology.
38. Ziwei Liu, Lu Yuan, Xiaoou Tang, Matt Uyttendaele, and Jian Sun. 2014. Fast Burst Images Denoising. SIGGRAPH Asia (2014).
39. Bruce D. Lucas and Takeo Kanade. 1981. An Iterative Image Registration Technique with an Application to Stereo Vision. IJCAI (1981).
40. Lindsay MacDonald. 2006. Digital Heritage. Routledge.
41. Matteo Maggioni, Giacomo Boracchi, Alessandro Foi, and Karen Egiazarian. 2012. Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms. IEEE TIP (2012).
42. Tom Mertens, Jan Kautz, and Frank Van Reeth. 2007. Exposure Fusion. Pacific Graphics (2007).
43. Ben Mildenhall, Jonathan T. Barron, Jiawen Chen, Dillon Sharlet, Ren Ng, and Robert Carroll. 2018. Burst Denoising with Kernel Prediction Networks. CVPR (2018).
44. Junichi Nakamura. 2016. Image Sensors and Signal Processing for Digital Still Cameras. CRC Press.
45. Shahriar Negahdaripour and C.-H. Yu. 1993. A generalized brightness change model for computing optical flow. ICCV (1993).
46. Travis Portz, Li Zhang, and Hongrui Jiang. 2011. High-quality video denoising for motion-based exposure control. IEEE International Workshop on Mobile Vision (2011).
47. Paresh Rawat and Jyoti Singhai. 2011. Review of motion estimation and video stabilization techniques for hand held mobile video. Signal & Image Processing: An International Journal (SIPIJ) (2011).
48. Jerome Revaud, Philippe Weinzaepfel, Zaid Harchaoui, and Cordelia Schmid. 2015. EpicFlow: Edge-Preserving Interpolation of Correspondences for Optical Flow. CVPR (2015).
49. Erik Ringaby and Per-Erik Forssén. 2014. A virtual tripod for hand-held video stacking on smartphones. ICCP (2014).
50. Simon Schulz, Marcus Grimm, and Rolf-Rainer Grigat. 2007. Using brightness histogram to perform optimum auto exposure. WSEAS Transactions on Systems and Control (2007).
51. Jae Chul Shin, Hirohisa Yaguchi, and Satoshi Shioiri. 2004. Change of color appearance in photopic, mesopic and scotopic vision. Optical Review (2004).
52. Alvy Ray Smith. 1978. Color Gamut Transform Pairs. SIGGRAPH (1978).
53. Andrew Stockman and Lindsay T. Sharpe. 2006. Into the twilight zone: the complexities of mesopic vision and luminous efficiency. Ophthalmic and Physiological Optics (2006).
54. Ray Villard and Zoltan Levay. 2002. Creating Hubble’s Technicolor Universe. Sky and Telescope (2002).
55. Bartlomiej Wronski, Ignacio Garcia-Dorado, Manfred Ernst, Damien Kelly, Michael Krainin, Chia-Kai Liang, Marc Levoy, and Peyman Milanfar. 2019. Handheld Multi-Frame Super-Resolution. ACM Trans. Graph. (Proc. SIGGRAPH) (2019).
56. Karel Zuiderveld. 1994. Contrast limited adaptive histogram equalization. In Graphics Gems IV.