“Neural Gaussian Scale-space Fields” – ACM SIGGRAPH HISTORY ARCHIVES

Abstract:


    We present a method to learn a fully continuous Gaussian scale-space from raw data. This allows efficient and flexible anisotropic filtering and can be used to create multiscale representations across a broad range of modalities and applications.
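The classical construction the paper generalizes can be sketched with discrete Gaussian filtering at a few fixed scales. The snippet below is a minimal illustration only (assuming SciPy is available): it uses axis-aligned per-axis standard deviations, a simplification of the arbitrarily oriented anisotropic covariances and fully continuous scale evaluation the presented method provides.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A toy 2D "image": a centered impulse, so each filtered
# result directly shows the Gaussian kernel at that scale.
signal = np.zeros((65, 65))
signal[32, 32] = 1.0

# A discrete Gaussian scale-space: the same signal at increasing
# scales. Anisotropy here is limited to different per-axis sigmas
# (sigma_y, sigma_x); the neural field instead admits arbitrary
# covariance matrices and continuous scale queries.
scales = [(1.0, 1.0), (2.0, 2.0), (4.0, 1.0)]
pyramid = [gaussian_filter(signal, sigma=s) for s in scales]

# Gaussian filtering preserves total mass while spreading it out,
# so the sum stays constant and the peak shrinks as scale grows.
for level in pyramid:
    print(round(level.sum(), 4), round(level.max(), 4))
```

Querying such a stack only at fixed, precomputed scales is exactly the limitation a learned, fully continuous scale-space field removes.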

