“Painting style transfer for head portraits using convolutional neural networks” by Selim, Elgharib and Doyle

  • © Ahmed Selim, Mohamed Elgharib, and Linda Doyle


Session/Category Title:   FACES & PORTRAITS


Abstract:


    Head portraits are popular in traditional painting. Automating portrait painting is challenging because the human visual system is sensitive to the slightest irregularities in human faces, and applying generic painting techniques often deforms facial structures. Existing portrait painting techniques, on the other hand, are mainly designed for the graphite style and/or are based on image analogies, which require an example painting together with its original unpainted version; this limits their domain of applicability. We present a new technique for transferring the painting style of one head portrait onto another. Unlike previous work, our technique requires only the example painting and is not restricted to a specific style. We impose novel spatial constraints by locally transferring the color distributions of the example painting, which better captures the painting texture and maintains the integrity of facial structures. We generate a solution through convolutional neural networks, and we present an extension to video in which motion is exploited to reduce temporal inconsistencies and the shower-door effect. Our approach transfers the painting style while maintaining the identity of the input photograph; in addition, it significantly reduces facial deformations over the state of the art.
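
    A minimal sketch of the local color-transfer idea in the abstract is given below. It matches the per-channel mean and standard deviation of each window of the input photograph to the spatially corresponding window of the example painting, which conveys the intuition behind the spatial constraints. The uniform window grid, the simple mean/std statistics, and the function name are illustrative assumptions, not the authors' formulation, which imposes the constraint inside a CNN-based style-transfer objective guided by facial alignment.

        import numpy as np

        def local_color_transfer(content, example, grid=8, eps=1e-6):
            """Shift each window of `content` toward the color statistics of
            the corresponding window of `example`. Both are HxWx3 float arrays
            in [0, 1], assumed roughly aligned (e.g. via facial landmarks).
            Hypothetical helper for illustration, not the paper's method."""
            h, w, _ = content.shape
            out = content.copy()
            # Window boundaries of a uniform grid (an assumption of this sketch).
            ys = np.linspace(0, h, grid + 1, dtype=int)
            xs = np.linspace(0, w, grid + 1, dtype=int)
            for i in range(grid):
                for j in range(grid):
                    c = content[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                    e = example[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
                    # Match per-channel mean and standard deviation in the window.
                    mu_c, sd_c = c.mean(axis=(0, 1)), c.std(axis=(0, 1))
                    mu_e, sd_e = e.mean(axis=(0, 1)), e.std(axis=(0, 1))
                    out[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = \
                        (c - mu_c) * (sd_e / (sd_c + eps)) + mu_e
            return np.clip(out, 0.0, 1.0)

    Applied directly, this window-wise transfer would produce blocking artifacts at window boundaries; in the paper, the analogous local color constraint enters a convolutional-neural-network optimization, which yields a spatially smooth stylized result.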


