Research article

Multiscale style transfer based on a Laplacian pyramid for traditional Chinese painting

  • Received: 07 December 2022 Revised: 11 January 2023 Accepted: 06 February 2023 Published: 10 February 2023
  • Style transfer is adopted to synthesize appealing stylized images that preserve the structure of a content image but carry the pattern of a style image. Many recently proposed style transfer methods use only western oil paintings as style images to achieve image stylization. As a result, unnatural and messy artistic effects are produced in stylized images when these methods are used to directly transfer the patterns of traditional Chinese paintings, which are composed of plain colors and abstract objects. Moreover, most of them work only at the original image scale and thus ignore multiscale image information during training. In this paper, we present a novel and effective multiscale style transfer method based on Laplacian pyramid decomposition and reconstruction, which can transfer the unique patterns of Chinese paintings by learning different image features at different scales. In the first stage, the holistic patterns are transferred at low resolution by a Style Transfer Base Network. In the second stage, the details of the content and style are gradually enhanced at higher resolutions by a Detail Enhancement Network with an edge information selection (EIS) module. The effectiveness of our method is demonstrated through the generation of appealing high-quality stylization results and a comparison with several state-of-the-art style transfer methods. The datasets and code are available at https://github.com/toby-katakuri/LP_StyleTransferNet.

    Citation: Kunxiao Liu, Guowu Yuan, Hongyu Liu, Hao Wu. Multiscale style transfer based on a Laplacian pyramid for traditional Chinese painting[J]. Electronic Research Archive, 2023, 31(4): 1897-1921. doi: 10.3934/era.2023098
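The method rests on Laplacian pyramid decomposition and reconstruction: an image is split into a low-resolution base (where holistic patterns are transferred) plus per-level detail residuals (where edges and textures are enhanced), and the stylized levels are recombined. A minimal illustrative sketch of that decomposition and exact reconstruction is below; it uses 2x2 block averaging in place of the Gaussian filtering usually used for pyramids, and it is not the authors' networks:

```python
import numpy as np

def downsample(x):
    # Halve resolution by 2x2 block averaging (assumes even H and W).
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    # Double resolution by nearest-neighbor repetition.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    # Returns [detail_0, ..., detail_{levels-1}, low_res_base].
    pyr, cur = [], img
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down))  # detail residual at this scale
        cur = down
    pyr.append(cur)  # coarsest level: the low-resolution base
    return pyr

def reconstruct(pyr):
    # Invert the decomposition: upsample the base and add back each residual.
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = upsample(cur) + lap
    return cur
```

In the paper's two-stage scheme, stylization would replace the base level before reconstruction and refine the residuals level by level; with the fixed up/downsampling operators above, reconstruction of an unmodified pyramid recovers the input exactly.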







  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
