Towards Adversarial Robustness via Feature Matching

Li, Z., Feng, C., Zheng, J., Wu, M. and Yu, H., 2020. Towards Adversarial Robustness via Feature Matching. IEEE Access, 8, pp. 88594-88603.

Full text available as:

PDF (OPEN ACCESS ARTICLE)
09089860.pdf - Published Version
Available under License Creative Commons Attribution.

1MB

DOI: 10.1109/ACCESS.2020.2993304

Abstract

Image classification systems are known to be vulnerable to adversarial attacks: imperceptible perturbations of the input that lead to grossly incorrect classifications. Adversarial training is one of the most effective defenses for improving the robustness of classifiers. In this work, we introduce an enhanced adversarial training approach. Motivated by humans' consistently accurate perception of their surroundings, we explore the artificial attention of deep neural networks in the context of adversarial classification. We begin with an empirical analysis of how the attention of an artificial system changes as the model undergoes adversarial attack, and observe that the class-specific attention is diverted, subsequently inducing wrong predictions. To address this, we propose a regularizer that encourages consistency between the artificial attention on a clean image and on its adversarial counterpart. Our method shows improved empirical robustness over the state-of-the-art, securing 55.74% adversarial accuracy on CIFAR-10 with a perturbation budget of 8/255 under challenging untargeted attacks in the white-box setting. Further evaluations on CIFAR-100 also show that our method yields a desirable boost in adversarial robustness for deep neural networks. Code and trained models are available at: https://github.com/lizhuorong/Towards-Adversarial-Robustness-via-Feature-matching.
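To make the idea concrete, below is a minimal PyTorch sketch of adversarial training with an attention-consistency regularizer in the spirit of the abstract. It is not the authors' implementation (their code is at the GitHub link above): the attention proxy (channel-wise mean of feature activations), the MSE penalty, and the weight `lam` are all illustrative assumptions.

```python
# Minimal sketch, NOT the authors' code; attention proxy, MSE penalty,
# and `lam` are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    # Toy CIFAR-scale classifier that also exposes an intermediate
    # feature map, from which the attention proxy is computed.
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        f = self.features(x)
        return self.head(f), f

def attention_map(features):
    # Spatial attention as the channel-wise mean of absolute activations,
    # L2-normalized per image -- a common proxy; the paper may define it
    # differently.
    a = features.abs().mean(dim=1)            # (B, H, W)
    return F.normalize(a.flatten(1), dim=1)   # (B, H*W)

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard L-infinity PGD with the 8/255 budget quoted in the abstract.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits, _ = model(x_adv)
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def train_step(model, opt, x, y, lam=1.0):
    # Adversarial cross-entropy plus a penalty keeping the attention on the
    # adversarial image close to the attention on the clean image.
    x_adv = pgd_attack(model, x, y)
    logits_adv, f_adv = model(x_adv)
    with torch.no_grad():                     # clean attention used as target
        _, f_clean = model(x)
    loss = F.cross_entropy(logits_adv, y) + lam * F.mse_loss(
        attention_map(f_adv), attention_map(f_clean))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Smoke test on a dummy CIFAR-10-sized batch.
model = SmallCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(train_step(model, opt, x, y))
```

One design choice worth noting in this sketch: the clean-image attention is computed without gradients and treated as a fixed target, so the regularizer pulls the adversarial attention toward the clean one rather than letting both drift; whether the paper stops gradients this way is an assumption.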

Item Type: Article
ISSN: 2169-3536
Additional Information: This work was supported in part by the National Natural Science Foundation of China under Grant 61602413, the Natural Science Foundation of Zhejiang Province under Grant LY19F030016, and the EU H2020 project AniAge under Grant 691215.
Uncontrolled Keywords: Bio-inspired explanations, deep learning, defense, adversarial attack, learning representations
Group: Faculty of Media & Communication
ID Code: 34221
Deposited By: Symplectic RT2
Deposited On: 29 Jun 2020 10:24
Last Modified: 14 Mar 2022 14:22
