“A Generative Model for Volume Rendering” by Berger, Li and Levine – ACM SIGGRAPH HISTORY ARCHIVES

Interest Area:


    Research / Education

Title:

    A Generative Model for Volume Rendering

Session/Category Title:   IEEE TVCG Session on Advances in Data Visualization


Presenter(s)/Author(s):

    Berger, Li, and Levine

Abstract:


    We present a technique to synthesize and analyze volume-rendered images using generative models. We use the Generative Adversarial Network (GAN) framework to compute a model from a large collection of volume renderings, conditioned on (1) viewpoint and (2) transfer functions for opacity and color. Our approach facilitates tasks for volume analysis that are challenging to achieve using existing rendering techniques such as ray casting or texture-based methods. We show how to guide the user in transfer function editing by quantifying the expected change in the output image. Additionally, the generative model transforms transfer functions into a view-invariant latent space specifically designed to synthesize volume-rendered images. We use this space directly for rendering, enabling the user to explore the space of volume-rendered images. As our model is independent of the choice of volume rendering process, we show how to analyze volume-rendered images produced under both direct and global illumination, for a variety of volume datasets.
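To make the conditioning described in the abstract concrete, the sketch below shows one plausible way to assemble a generator's conditioning vector from a viewpoint and from opacity/color transfer functions. This is an illustrative assumption, not the authors' implementation: the function names, the angle encoding, and the 256-sample transfer-function resolution are all hypothetical choices.

```python
import numpy as np

def encode_viewpoint(azimuth, elevation):
    # Encode angles by sine/cosine to avoid wrap-around discontinuities.
    return np.array([np.sin(azimuth), np.cos(azimuth),
                     np.sin(elevation), np.cos(elevation)])

def sample_transfer_function(control_points, n_samples=256):
    # control_points: (scalar_value, output) pairs, both in [0, 1].
    xs, ys = zip(*sorted(control_points))
    grid = np.linspace(0.0, 1.0, n_samples)
    # Piecewise-linear transfer function sampled on a regular grid.
    return np.interp(grid, xs, ys)

def make_condition_vector(azimuth, elevation, opacity_tf, color_tf):
    # Concatenate the viewpoint encoding with the sampled opacity and
    # per-channel color transfer functions into one conditioning vector.
    return np.concatenate(
        [encode_viewpoint(azimuth, elevation),
         sample_transfer_function(opacity_tf)]
        + [sample_transfer_function(c) for c in color_tf])

cond = make_condition_vector(
    azimuth=0.3, elevation=1.1,
    opacity_tf=[(0.0, 0.0), (0.5, 0.8), (1.0, 0.0)],
    color_tf=[[(0.0, 0.1), (1.0, 0.9)]] * 3)  # R, G, B channels
print(cond.shape)  # (4 + 256 + 3*256,) = (1028,)
```

A vector like this would be concatenated with a latent noise sample and fed to the generator; keeping the transfer-function portion separate from the viewpoint portion is what makes a view-invariant transfer-function latent space possible in principle.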

