Research article

Attention distraction with gradient sharpening for multi-task adversarial attack


  • Received: 08 May 2023 Revised: 19 May 2023 Accepted: 07 June 2023 Published: 14 June 2023
  • The advancement of deep learning has brought significant improvements in various visual tasks. However, deep neural networks (DNNs) are vulnerable to well-designed adversarial examples, which can easily deceive DNNs by adding visually imperceptible perturbations to clean data. Prior research on adversarial attacks has mainly focused on single-task settings, i.e., generating adversarial examples to fool a network trained for one specific task. However, real-world artificial intelligence systems often need to solve multiple tasks simultaneously, and in such multi-task settings single-task adversarial attacks perform poorly on the unrelated tasks. To address this issue, the generation of multi-task adversarial examples should leverage knowledge that generalizes across tasks and reduce the influence of task-specific information during the generation process. In this study, we propose a multi-task adversarial attack method that generates adversarial examples from a multi-task learning network by applying attention distraction with gradient sharpening. Specifically, we first attack the attention heat maps, which contain more generalizable information than feature representations, by distracting the attention in the attack regions. Additionally, we adopt gradient-based adversarial example-generating schemes and propose to sharpen the gradients so that gradients carrying multi-task information, rather than only task-specific information, have a greater impact. Experimental results on the NYUD-V2 and PASCAL datasets demonstrate that the proposed method improves the generalization ability of adversarial examples across multiple tasks and achieves better attack performance.
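As background for the gradient-based adversarial example-generating schemes the abstract builds on, the fast gradient sign method (FGSM) perturbs an input along the sign of the loss gradient with respect to that input. A minimal sketch on a hand-coded logistic model; the weights, input, and step size `eps` are illustrative assumptions, not the paper's setup:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM attack on a logistic model p = sigmoid(w.x + b).

    For binary cross-entropy, the gradient of the loss w.r.t. the
    input is (p - y) * w, so the attack adds eps times the sign of
    that gradient to each input feature.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Example: a correctly classified point nudged toward misclassification.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1          # clean input, true label
x_adv = fgsm(x, y, w, b, eps=0.3)
```

The adversarial input lowers the model's confidence in the true class; iterating this step with a projection gives PGD-style attacks, which the multi-task setting then modifies via attention distraction and gradient sharpening.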
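The abstract does not spell out the attention-distraction objective, so the following is only one plausible, hypothetical formulation: penalize the attention mass that a heat map (e.g., Grad-CAM-style) places inside the attacked region, so that minimizing the measure "distracts" attention away from that region. The names `heatmap`, `mask`, and `distraction_loss` are illustrative assumptions:

```python
def distraction_loss(heatmap, mask):
    """Mean attention inside the attacked region.

    heatmap: 2-D list of attention values (e.g., normalized to [0, 1]).
    mask:    2-D list of the same shape with 1 inside the attack
             region and 0 elsewhere.
    Driving this value toward zero pushes attention out of the region.
    """
    num = sum(a * m for row_a, row_m in zip(heatmap, mask)
              for a, m in zip(row_a, row_m))
    den = sum(m for row in mask for m in row)
    return num / den if den else 0.0

# Attention concentrated in the masked left column yields a high loss.
loss = distraction_loss([[1.0, 0.0], [0.5, 0.0]], [[1, 0], [1, 0]])
```

Because attention maps are shared evidence across tasks, an objective of this shape degrades all tasks at once rather than only the one whose loss gradient generated the perturbation.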

    Citation: Bingyu Liu, Jiani Hu, Weihong Deng. Attention distraction with gradient sharpening for multi-task adversarial attack[J]. Mathematical Biosciences and Engineering, 2023, 20(8): 13562-13580. doi: 10.3934/mbe.2023605



  • © 2023 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)