“Firefly: Illumination Drones for Interactive Visualization” by Stoppel, Erga and Bruckner
Interest Area:
- Research / Education
Title:
- Firefly: Illumination Drones for Interactive Visualization
Session/Category Title: IEEE TVCG Session on Advances in Data Visualization
Abstract:
Light specification in three-dimensional scenes is a complex problem, and several approaches have been presented that aim to automate this process. However, there are many scenarios where a static light setup is insufficient, as the scene content and camera position may change. Simultaneous manual control over the camera and light position imposes a high cognitive load on the user. To address this challenge, we introduce a novel approach for automatic scene illumination with Fireflies. Fireflies are intelligent virtual light drones that illuminate the scene by traveling on a closed path. The Firefly path automatically adapts to changes in the scene based on an outcome-oriented energy function. To achieve interactive performance, we employ a parallel rendering pipeline for the light path evaluations. We provide a catalog of energy functions for various application scenarios and discuss the applicability of our method on several examples.
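The core idea described in the abstract, evaluating candidate light positions along a closed path against an outcome-oriented energy function, can be illustrated with a minimal sketch. The circular path, the raised light height, and the diffuse-coverage objective below are illustrative assumptions, not the paper's actual energy functions or rendering pipeline:

```python
import math

def normalize(v):
    """Return v scaled to unit length (guarding against zero vectors)."""
    mag = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / mag for c in v)

def energy(light_pos, surface_samples):
    """Illustrative outcome-oriented energy: mean clamped Lambertian
    shading over the surface samples (higher is better)."""
    total = 0.0
    for point, normal in surface_samples:
        light_dir = normalize(tuple(l - p for l, p in zip(light_pos, point)))
        total += max(sum(n * d for n, d in zip(normal, light_dir)), 0.0)
    return total / len(surface_samples)

def best_position_on_path(surface_samples, center=(0.0, 0.0, 0.0),
                          radius=5.0, n_candidates=64):
    """Evaluate candidate light positions on a closed circular path
    around the scene and return the one maximizing the energy."""
    best_pos, best_e = None, -1.0
    for i in range(n_candidates):
        theta = 2.0 * math.pi * i / n_candidates
        pos = (center[0] + radius * math.cos(theta),
               center[1] + radius * math.sin(theta),
               center[2] + radius)  # light raised above the scene plane
        e = energy(pos, surface_samples)
        if e > best_e:
            best_pos, best_e = pos, e
    return best_pos, best_e
```

In the paper's setting this evaluation is performed in parallel for many path samples per frame, and the path itself (not just a single point) is adapted as the scene and camera change.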
References:
[1] Adam. https://unity3d.com/pages/adam. Accessed: 2018-03-17.
[2] U-anatomy; ufulio anatomy realistic. https://ufulio.wixsite.com/ufulioanatomy. Accessed: 2018-03-23.
[3] Unity game engine. https://unity3d.com. Accessed: 2018-03-23.
[4] A. A. Apodaca and L. Gritz. Advanced RenderMan: Creating CGI for Motion Pictures. Morgan Kaufmann, 1999.
[5] M. Aittala. Inverse lighting and photorealistic rendering for augmented reality. The Visual Computer, 26(6):669–678, 2010.
[6] M. Borga, A. Persson, R. Lenz, S. Lindholm, and G. Lathen. Automatic tuning of spatially varying transfer functions for blood vessel visualization. IEEE Transactions on Visualization and Computer Graphics, 18(12):2345–2354, 2012.
[7] D. Coffey, F. Korsakov, H. Hagh-Shenas, L. Thorson, A. Ellingson, D. Nuckley, and D. F. Keefe. Visualizing motion data in virtual reality: Understanding the roles of animation, interaction, and static presentation. Computer Graphics Forum, 31(3pt3):1215–1224, 2012.
[8] A. C. Costa, A. A. de Sousa, and F. N. Ferreira. Lighting design: A goal based approach using optimisation. In Proc. Eurographics Workshop on Rendering, pp. 317–328, 1999.
[9] C. de Melo and A. Paiva. Expression of emotions in virtual humans using lights, shadows, composition and filters. In Proc. Affective Computing and Intelligent Interaction, pp. 546–557, 2007.
[10] K. Doerschner, R. W. Fleming, O. Yilmaz, P. R. Schrater, B. Hartung, and D. Kersten. Visual motion and the perception of surface material. Current Biology, 21(23):2010–2016, 2011.
[11] M. S. El-Nasr and I. Horswill. Real-time lighting design for interactive narrative. In Proc. Virtual Storytelling. Using Virtual Reality Technologies for Storytelling, pp. 12–20, 2003.
[12] F. Hunter, S. Biver, and P. Fuqua. Light Science & Magic: An Introduction to Photographic Lighting. Focal Press, 2015.
[13] S. Freitag, B. Weyers, and T. W. Kuhlen. Automatic speed adjustment for travel through immersive virtual environments based on viewpoint quality. In Proc. 3DUI, pp. 67–70, 2016.
[14] M. E. Froese, M. Tory, G. W. Evans, and K. Shrikhande. Evaluation of static and dynamic visualization training approaches for users with different spatial abilities. IEEE Transactions on Visualization and Computer Graphics, 19(12):2810–2817, 2013.
[15] S. Gumhold. Maximum entropy light source placement. In Proc. IEEE Visualization, pp. 275–282, 2002.
[16] T. Günther, H. Theisel, and M. Gross. Decoupled opacity optimization for points, lines and surfaces. Computer Graphics Forum, 36(2):153–162, 2017.
[17] M. Halle and J. Meng. Lightkit: A lighting system for effective visualization. In Proc. IEEE Visualization, pp. 48–57, 2003.
[18] M. Haller, S. Drab, and W. Hartmann. A real-time shadow approach for an augmented reality application using shadow volumes. In Proc. ACM Symposium on Virtual Reality Software and Technology, pp. 56–65, 2003.
[19] N. Joubert, M. Roberts, A. Truong, F. Berthouzoz, and P. Hanrahan. An interactive tool for designing quadrotor camera shots. ACM Transactions on Graphics, 34(6):238:1–238:11, 2015.
[20] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. International Journal of Computer Vision, 1(4):321–331, 1988.
[21] B. E. Keiser and P. Z. Peebles. An automatic system for the control of multiple drone aircraft. IEEE Transactions on Aerospace and Electronic Systems, AES-5(3):515–524, 1969.
[22] M. Keramat and R. Kielbasa. Latin hypercube sampling monte carlo estimation of average quality index for integrated circuits. Analog Integrated Circuits and Signal Processing, 14(1):131–142, 1997.
[23] D. Kersten, P. Mamassian, and D. C. Knill. Moving cast shadows induce apparent motion in depth. Perception, 26(2):171–192, 1997.
[24] D. A. Kleffner and V. S. Ramachandran. On the perception of shape from shading. Perception & Psychophysics, 52(1):18–36, 1992.
[25] G. Klein and D. Murray. Compositing for small cameras. In Proc. IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 57–60, 2008.
[26] C. H. Lee, X. Hao, and A. Varshney. Light collages: lighting design for effective visualization. In Proc. IEEE Visualization, pp. 281–288, 2004.
[27] P. Mamassian and R. Goutcher. Prior knowledge on the illumination position. Cognition, 81(1):1–9, 2001.
[28] D. T. Nicholson, C. Chalk, W. R. J. Funnell, and S. J. Daniel. Can virtual
[29] I. K. Nikolos, K. P. Valavanis, N. C. Tsourveloudis, and A. N. Kostaras. Evolutionary algorithm based offline/online path planner for UAV navigation. IEEE Transactions on Systems, Man, and Cybernetics, 33(6):898–912, 2003.
[30] B. Okumura, M. Kanbara, and N. Yokoya. Augmented reality based on estimation of defocusing and motion blurring from captured images. In Proc. IEEE/ACM International Symposium on Mixed and Augmented Reality, pp. 219–225, 2006.
[31] V. S. Ramachandran. Perception of shape from shading. Nature, 331(6152):163–166, 1988.
[32] A. Richards. How to Set Up Photography Lighting for a Home Studio. CreateSpace, 2014.
[33] S. Ruder. An overview of gradient descent optimization algorithms. CoRR, abs/1609.04747, 2016.
[34] M. Ruiz, A. Bardera, I. Boada, I. Viola, M. Feixas, and M. Sbert. Automatic transfer functions based on informational divergence. IEEE Transactions on Visualization and Computer Graphics, 17(12):1932–1941, 2011.
[35] H. H. Schütt, F. Baier, and R. W. Fleming. Perception of light source distance from shading patterns. Journal of Vision, 16(3):9, 2016.
[36] R. Shacked and D. Lischinski. Automatic Lighting Design using a Perceptual Quality Metric. Computer Graphics Forum, 20(3):215–227, 2001.
[37] M. D. Shields and J. Zhang. The generalization of latin hypercube sampling. Reliability Engineering and System Safety, 148(1):96–108, 2016.
[38] M. Srikanth, K. Bala, and F. Durand. Computational rim illumination with aerial robots. In Proc. Computational Aesthetics, pp. 57–66, 2014.
[39] M. Stein. Large sample properties of simulations using latin hypercube sampling. Technometrics, 29(2):143–151, 1987.
[40] Y. Tani, K. Araki, T. Nagai, K. Koida, S. Nakauchi, and M. Kitazaki. Enhancement of glossiness perception by retinal-image motion: Additional effect of head-yoked motion parallax. PLOS ONE, 8(1):1–8, 2013.
[41] P. J. M. van Laarhoven and E. H. L. Aarts. Simulated annealing. Springer Netherlands, 1987.
[42] J. Wambecke, R. Vergne, G.-P. Bonneau, and J. Thollot. Automatic lighting design from photographic rules. In Proc. Eurographics Workshop on Intelligent Cinematography and Editing, pp. 1–8, 2016.
[43] L. Wang and A. E. Kaufman. Lighting system for visual perception enhancement in volume rendering. IEEE Transactions on Visualization and Computer Graphics, 19(1):67–80, 2013.
[44] P. Wisessing, J. Dingliana, and R. McDonnell. Perception of lighting and shading for animated virtual characters. In Proc. ACM Symposium on Applied Perception, pp. 25–29, 2016.
[45] J. Xie, Y. Zhou, W. Wu, and Z. Zhou. Automatic path planning for augmented virtual environment. In Proc. International Conference on Virtual Reality and Visualization, pp. 372–379, 2016.
[46] M. Yang, Z. Liu, and W. Li. A fast general extension algorithm of latin hypercube sampling. Journal of Statistical Computation and Simulation, 87(17):3398–3411, 2017.
[47] Y. Zhang and K.-L. Ma. Lighting design for globally illuminated volume rendering. IEEE Transactions on Visualization and Computer Graphics, 19(12):2946–2955, 2013.