
“Two-stage sketch colorization”

  • SIGGRAPH Asia 2018 Technical Papers: Zhang, Two-stage sketch colorization

Conference:

    SIGGRAPH Asia 2018

Type(s):

    Technical Papers

Title:

    Two-stage sketch colorization

Session/Category Title:   Low-level imaging


Presenter(s)/Author(s):

    Lvmin Zhang, Chengze Li, Tien-Tsin Wong, Yi Ji, and Chunping Liu

Moderator(s):



Abstract:


    Sketch or line art colorization is a research field with significant market demand. Unlike photo colorization, which relies strongly on texture information, sketch colorization is more challenging because sketches may contain no texture at all; color, texture, and gradient must all be generated from the abstract sketch lines. In this paper, we propose a semi-automatic, learning-based framework to colorize sketches with proper color, texture, and gradient. Our framework consists of two stages. In the first, drafting stage, our model guesses color regions and splashes a rich variety of colors over the sketch to obtain a color draft. In the second, refinement stage, it detects unnatural colors and artifacts and tries to fix and refine the result. Compared with existing approaches, this two-stage design effectively divides the complex colorization task into two simpler subtasks with clearer goals, which eases learning and raises the quality of the colorization. Our model resolves artifacts such as watercolor blurring, color distortion, and dull textures. We build interactive software based on our model for evaluation; users can iteratively edit and refine the colorization. We evaluate our learning model and the interactive system through an extensive user study. The statistics show that our method outperforms state-of-the-art techniques and industrial applications in several aspects, including visual quality, degree of user control, user experience, and other metrics.
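
    The division of labor between the two stages can be pictured as two generators run back to back: the first maps sketch plus user hints to a color draft, the second maps sketch plus draft to a corrected image. The minimal PyTorch sketch below illustrates only this data flow; DraftNet and RefineNet are hypothetical stand-ins, and the paper's actual network architectures, hint encoding, and adversarial training are not reproduced here.

    import torch
    import torch.nn as nn

    class DraftNet(nn.Module):
        """Stage 1: guess color regions and splash colors over the sketch.
        Hypothetical stand-in for the paper's drafting generator."""
        def __init__(self):
            super().__init__()
            # 1 sketch channel + 3 user-hint channels in, 3 RGB channels out.
            self.net = nn.Sequential(
                nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )
        def forward(self, sketch, hints):
            return self.net(torch.cat([sketch, hints], dim=1))

    class RefineNet(nn.Module):
        """Stage 2: take the sketch and the color draft, detect unnatural
        colors and artifacts, and output a refined image."""
        def __init__(self):
            super().__init__()
            # 1 sketch channel + 3 draft channels in, 3 RGB channels out.
            self.net = nn.Sequential(
                nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )
        def forward(self, sketch, draft):
            return self.net(torch.cat([sketch, draft], dim=1))

    def colorize(sketch, hints, draft_net, refine_net):
        # Stage 1 produces a rough color draft; stage 2 refines it.
        draft = draft_net(sketch, hints)
        return refine_net(sketch, draft)

    # Example usage with dummy tensors (a batch of one 256x256 sketch).
    sketch = torch.rand(1, 1, 256, 256)
    hints = torch.zeros(1, 3, 256, 256)   # no user hints yet
    result = colorize(sketch, hints, DraftNet(), RefineNet())

    In an interactive loop like the one the abstract describes, the hints tensor would presumably be repopulated from the user's scribbles on each edit and the two stages re-run, which is what makes the iterative edit-and-refine workflow cheap.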

References:


    1. Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. 2017. Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks. CVPR (2017).
    2. Qifeng Chen and Vladlen Koltun. 2017. Photographic Image Synthesis with Cascaded Refinement Networks. ICCV (2017).
    3. Danbooru Community. 2018. Danbooru2017: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset. (2018).
    4. Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. 2017. Adversarial Feature Learning. ICLR (2017).
    5. L. Fei-Fei. 2010. ImageNet: Crowdsourcing, Benchmarking & Other Cool Things. CMU VASC Seminar (2010).
    6. Kevin Frans. 2017. Outline Colorization through Tandem Adversarial Networks. arXiv preprint (2017).
    7. Chie Furusawa, Kazuyuki Hiroshiba, Keisuke Ogaki, and Yuri Odagiri. 2017. Comicolorization. In SIGGRAPH Asia 2017 Technical Briefs.
    8. Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2016. Image Style Transfer Using Convolutional Neural Networks. In CVPR. 2414–2423.
    9. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Networks. NIPS (2014), 2672–2680.
    10. Christopher Hart. 2015. The Master Guide to Drawing Anime: How to Draw Original Characters from Simple Templates. Paperback.
    11. Paulina Hensman and Kiyoharu Aizawa. 2017. cGAN-based Manga Colorization Using a Single Training Image. arXiv preprint (2017).
    12. Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. 2016. Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification. ACM Transactions on Graphics 35, 4 (2016).
    13. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. Image-to-Image Translation with Conditional Adversarial Networks. CVPR (2017).
    14. Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. 2015. Spatial Transformer Networks. NIPS (2015).
    15. Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim. 2017. Learning to Discover Cross-Domain Relations with Generative Adversarial Networks. ICML (2017).
    16. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint (2014).
    17. Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. 2016. Learning Representations for Automatic Colorization. In ECCV. Springer, 577–593.
    18. Anat Levin, Dani Lischinski, and Yair Weiss. 2004. Colorization Using Optimization. ACM Transactions on Graphics 23, 3 (2004).
    19. Chengze Li, Xueting Liu, and Tien-Tsin Wong. 2017. Deep Extraction of Manga Structural Lines. ACM Transactions on Graphics 36, 4 (2017).
    20. Jing Liao, Yuan Yao, Lu Yuan, Gang Hua, and Sing Bing Kang. 2017. Visual Attribute Transfer through Deep Image Analogy. ACM Transactions on Graphics 36, 4 (2017), 1–15.
    21. Ming-Yu Liu, Thomas Breuel, and Jan Kautz. 2017a. Unsupervised Image-to-Image Translation Networks. NIPS (2017).
    22. Yifan Liu, Zengchang Qin, Zhenbo Luo, and Hua Wang. 2017b. Auto-painter: Cartoon Image Generation from Sketch by Using Conditional Generative Adversarial Networks. arXiv preprint (2017).
    23. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified Linear Units Improve Restricted Boltzmann Machines. ICML (2010), 807–814.
    24. Yingge Qu, Tien-Tsin Wong, and Pheng-Ann Heng. 2006. Manga Colorization. ACM Transactions on Graphics 25, 3 (July 2006), 1214–1220.
    25. Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, and James Hays. 2017. Scribbler: Controlling Deep Image Synthesis with Sketch and Color. CVPR (2017).
    26. Edgar Simo-Serra, Satoshi Iizuka, and Hiroshi Ishikawa. 2018a. Mastering Sketching: Adversarial Augmentation for Structured Prediction. ACM Transactions on Graphics 37, 1 (2018).
    27. Edgar Simo-Serra, Satoshi Iizuka, and Hiroshi Ishikawa. 2018b. Real-Time Data-Driven Interactive Rough Sketch Inking. ACM Transactions on Graphics (2018).
    28. Edgar Simo-Serra, Satoshi Iizuka, Kazuma Sasaki, and Hiroshi Ishikawa. 2016. Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup. ACM Transactions on Graphics 35, 4 (2016).
    29. Karen Simonyan and Andrew Zisserman. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint (2014).
    30. Daniel Sýkora, John Dingliana, and Steven Collins. 2009. LazyBrush: Flexible Painting Tool for Hand-drawn Cartoons. Computer Graphics Forum 28, 2 (2009).
    31. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going Deeper with Convolutions. CVPR (2015).
    32. TaiZan. 2016. PaintsChainer Tanpopo. Preferred Networks (2016).
    33. TaiZan. 2017a. PaintsChainer Canna. Preferred Networks (2017).
    34. TaiZan. 2017b. PaintsChainer Satsuki. Preferred Networks (2017).
    35. Chao Wang, Haiyong Zheng, Zhibin Yu, Ziqiang Zheng, Zhaorui Gu, and Bing Zheng. 2018b. Discriminative Region Proposal Adversarial Networks for High-Quality Image-to-Image Translation. ECCV (2018).
    36. Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. 2018a. High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs. CVPR (2018).
    37. Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. 2017. DualGAN: Unsupervised Dual Learning for Image-to-Image Translation. ICCV (2017).
    38. Lvmin Zhang, Yi Ji, and Xin Lin. 2017a. Style Transfer for Anime Sketches with Enhanced Residual U-net and Auxiliary Classifier GAN. ACPR (2017).
    39. Richard Zhang, Phillip Isola, and Alexei A. Efros. 2016. Colorful Image Colorization. In ECCV.
    40. Richard Zhang, Jun-Yan Zhu, Phillip Isola, Xinyang Geng, Angela S. Lin, Tianhe Yu, and Alexei A. Efros. 2017b. Real-Time User-Guided Image Colorization with Learned Deep Priors. ACM Transactions on Graphics 36, 4 (2017).
    41. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. ICCV (2017).


ACM Digital Library Publication:



Overview Page:



Submit a story:

If you would like to submit a story about this presentation, please contact us: historyarchives@siggraph.org