Adversarial robustness via attention transfer.

Li, Z., Feng, C., Wu, M., Yu, H., Zheng, J. and Zhu, F., 2021. Adversarial robustness via attention transfer. Pattern Recognition Letters, 146 (June), 172-178.

Full text available as:

PDF: Adversarial Robustness via Attention Transfer.pdf (Accepted Version, 12MB)
Available under License Creative Commons Attribution Non-commercial No Derivatives.

DOI: 10.1016/j.patrec.2021.03.011

Abstract

Deep neural networks are known to be vulnerable to adversarial attacks. The empirical analysis in our study suggests that attacks tend to induce diverse network architectures to shift their attention to irrelevant regions. Motivated by this observation, we propose a regularization technique that enforces attention maps to be well aligned via a knowledge-transfer mechanism, thereby encouraging robustness. The resultant model exhibits unprecedented robustness, securing 63.81% adversarial accuracy on the CIFAR-10 dataset under PGD attacks, where the prior art achieves 51.59%. In addition, we go beyond performance to analytically investigate the proposed method as an effective defense. A significantly flattened loss landscape can be observed, demonstrating the promise of the proposed method for improving robustness and thus its deployment in security-sensitive settings.
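
As a concrete illustration of the approach described in the abstract, the following is a minimal sketch, not the authors' released code. It assumes activation-based attention maps in the style of Zagoruyko and Komodakis (channel-wise squared activations), an L2 penalty between normalized student and teacher maps, and a standard L-infinity PGD attack for generating adversarial inputs; all function names and hyperparameter values are illustrative assumptions, not taken from the paper.

# Minimal sketch of attention-alignment regularization plus PGD, under the
# assumptions stated above. Not the authors' implementation.
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Collapse a (B, C, H, W) feature map to a unit-norm (B, H*W) attention map."""
    a = feat.pow(2).mean(dim=1).flatten(1)   # channel-wise squared activations
    return F.normalize(a, p=2, dim=1)        # unit L2 norm per sample

def attention_transfer_loss(student_feats, teacher_feats):
    """L2 distance between attention maps at matched layers, averaged over the batch."""
    return sum(
        (attention_map(s) - attention_map(t)).pow(2).sum(dim=1).mean()
        for s, t in zip(student_feats, teacher_feats)
    )

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD: iterated signed-gradient ascent steps,
    projected back into the eps-ball around the clean input."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

A training step under these assumptions would combine the task loss on PGD-perturbed inputs with a weighted attention_transfer_loss between the attacked model's intermediate features and the detached features of a teacher run on clean inputs; the weighting coefficient is a tuning choice, not a value reported in the paper.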

Item Type: Article
ISSN: 0167-8655
Uncontrolled Keywords: Adversarial defense; Robustness; Representation learning; Visual attention; Transfer learning
Group: Faculty of Media & Communication
ID Code: 35804
Deposited By: Symplectic RT2
Deposited On: 20 Jul 2021 11:14
Last Modified: 21 Mar 2022 01:08
