
Uncertainty estimation-based adversarial attacks: a viable approach for graph neural networks.

Alarab, I. and Prakoonwit, S., 2023. Uncertainty estimation-based adversarial attacks: a viable approach for graph neural networks. Soft Computing. (In Press)

Full text available as:

PDF (open access article): s00500-023-08031-0.pdf - Published Version, 2 MB
Available under License Creative Commons Attribution.

DOI: 10.1007/s00500-023-08031-0

Abstract

Uncertainty estimation has received considerable attention in applied machine learning as a means of capturing model uncertainty. For instance, the Monte-Carlo dropout method (MC-dropout), an approximate Bayesian approach, has gained intensive attention for producing model uncertainty due to its simplicity and efficiency. However, MC-dropout has revealed shortcomings in capturing erroneous predictions that lie in the overlapping regions between classes. Such predictions underlie noisy data points that can neither be reduced by more training data nor detected by model uncertainty. On the other hand, Monte-Carlo based on adversarial attacks (MC-AA) perturbs the inputs using the adversarial-attack idea to capture model uncertainty. This method mitigates the shortcomings of the previous methods by capturing wrong labels in overlapping regions. Motivated by the fact that MC-AA has only been validated with standard neural networks, we apply it to various graph neural network models to obtain uncertainties on two public real-world graph datasets known as Elliptic and GitHub. First, we perform binary node classification; then we apply MC-AA and other recent uncertainty estimation methods to capture each model's uncertainty. Uncertainty evaluation metrics are computed to evaluate and compare the performance of model uncertainty. We highlight the efficacy of MC-AA in capturing uncertainties in graph neural networks, wherein MC-AA outperforms the other given methods.
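
The abstract describes MC-AA as collecting predictions under adversarial-attack-style input perturbations and using their disagreement as an uncertainty signal. The sketch below illustrates that general idea only; it is not the authors' exact formulation. It assumes an FGSM-like perturbation applied symmetrically at several illustrative magnitudes, a classifier `model` that maps node features to class logits (for a graph neural network, an `edge_index` argument would also be passed), and a hypothetical helper name `mc_aa_uncertainty`.

	# Minimal sketch of MC-AA-style uncertainty estimation (assumptions noted above).
	import torch
	import torch.nn.functional as F

	def mc_aa_uncertainty(model, x, epsilons=(0.0, 0.01, 0.02, 0.05)):
	    """Collect predictions under symmetric input perturbations and
	    return the predictive mean and per-sample entropy as uncertainty."""
	    model.eval()
	    x = x.clone().detach().requires_grad_(True)
	    logits = model(x)
	    # Use the model's own predictions as pseudo-labels to define the attack direction.
	    pseudo = logits.argmax(dim=-1)
	    loss = F.cross_entropy(logits, pseudo)
	    grad_sign = torch.autograd.grad(loss, x)[0].sign()

	    probs = []
	    with torch.no_grad():
	        for eps in epsilons:
	            for direction in (+1.0, -1.0):
	                # Perturb inputs in both adversarial directions at magnitude eps.
	                x_adv = x + direction * eps * grad_sign
	                probs.append(F.softmax(model(x_adv), dim=-1))
	    p = torch.stack(probs)          # [n_perturbations, n_samples, n_classes]
	    mean_p = p.mean(dim=0)
	    # Predictive entropy of the averaged distribution as the uncertainty score.
	    entropy = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=-1)
	    return mean_p, entropy

In practice, the per-sample uncertainty scores would be compared against the model's correct/incorrect predictions using uncertainty evaluation metrics, as done in the paper.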

Item Type: Article
ISSN: 1432-7643
Uncontrolled Keywords: Uncertainty estimation; Adversarial attack; Graph neural network
Group: Faculty of Science & Technology
ID Code: 38489
Deposited By: Symplectic RT2
Deposited On: 26 Apr 2023 14:52
Last Modified: 26 Apr 2023 14:52
