“Dataset and Metrics for Predicting Local Visible Differences” by Wolski, Giunchi, Ye, Didyk, Myszkowski, et al.

  • © Krzysztof Wolski, Daniele Giunchi, Nanyang Ye, Piotr Didyk, Karol Myszkowski, Radosław Mantiuk, Hans-Peter Seidel, Anthony Steed, and Rafał K. Mantiuk

Session Title:

    Perception & Haptics

Title:

    Dataset and Metrics for Predicting Local Visible Differences

Abstract:

    A large number of imaging and computer graphics applications require localized information on the visibility of image distortions. Existing image quality metrics are not suitable for this task, as they provide a single quality value per image. Existing visibility metrics produce visual difference maps and are specifically designed to detect just-noticeable distortions, but their predictions are often inaccurate. In this work, we argue that the key reason for this problem is the lack of large image collections with good coverage of the distortions that occur in different applications. To address the problem, we collect an extensive dataset of reference and distorted image pairs, together with user markings indicating whether distortions are visible or not. We propose a statistical model designed for the meaningful interpretation of such data, which is affected by visual search and the imprecision of manual marking. We use our dataset to train existing metrics and demonstrate that their performance significantly improves. We show that our dataset, with the proposed statistical model, can be used to train a new CNN-based metric that outperforms existing solutions. We demonstrate the utility of such a metric in visually lossless JPEG compression, super-resolution, and watermarking.
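As a purely illustrative sketch (not the paper's actual statistical model, which additionally accounts for visual search and marking imprecision), the per-pixel visibility ground truth described in the abstract can be thought of as an aggregation of several observers' binary marking maps into a probability-of-detection map. All function and variable names below are hypothetical:

```python
import numpy as np

def detection_probability_map(markings):
    """Aggregate per-observer binary marking maps (each H x W, values 0/1)
    into a per-pixel probability-of-detection map by simple averaging.
    The paper's statistical model is more sophisticated; this only
    illustrates the shape of the data."""
    stack = np.stack(markings).astype(float)  # (N_observers, H, W)
    return stack.mean(axis=0)                 # fraction of observers who marked each pixel

# Three hypothetical observers marking a 2x2 distorted region
obs = [
    np.array([[1, 0], [0, 0]]),
    np.array([[1, 1], [0, 0]]),
    np.array([[1, 0], [0, 0]]),
]
pmap = detection_probability_map(obs)
# pmap[0, 0] == 1.0 (all observers marked), pmap[1, 1] == 0.0 (none did)
```

A trained visibility metric would then regress such a map directly from a reference/distorted image pair.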
