Paper:
Adversarial Attack on Graph Structured Data
Venue/Year: ICML 2018
Authors:
Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song
Why it is worth reading:
This paper studies adversarial attacks on GNNs. The attacked tasks are graph classification and node classification, with the attacker restricted to adding or deleting edges. The paper is worth a look because:
1. Adversarial attacks on neural networks over graph data have received little prior study;
2. Practical attack operations on graphs are discrete (e.g., adding or removing a friendship link). This differs from image adversarial examples, so some gradient-based methods cannot be applied directly;
3. The paper covers a broad range of settings (white-box attack, practical black-box attack, restricted black-box attack), comparing an RL algorithm, a genetic algorithm, and gradient-based algorithms on this problem. The RL attack uses Q-learning, with a GNN as the Q-network.
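To make the "discrete attack" point concrete, here is a minimal sketch (not the paper's method) of a label-only black-box attack that searches over single edge flips, the same action space the paper allows. The toy `predict` classifier and all names are our own illustrative assumptions, standing in for the GNN under attack.

```python
import itertools

def predict(edges, n):
    """Hypothetical target classifier: label 1 if the n-node graph
    contains a triangle, else 0. A stand-in for a trained GNN."""
    es = set(edges)
    for a, b, c in itertools.combinations(range(n), 3):
        if (a, b) in es and (b, c) in es and (a, c) in es:
            return 1
    return 0

def attack(edges, n):
    """Greedy black-box attack: try each single edge addition or
    deletion (the discrete actions) and return the first modified
    graph whose predicted label flips. Like the paper's practical
    black-box setting, it only queries prediction labels."""
    y0 = predict(edges, n)
    for e in itertools.combinations(range(n), 2):
        if e in edges:
            modified = [x for x in edges if x != e]  # delete edge
        else:
            modified = edges + [e]                   # add edge
        if predict(modified, n) != y0:
            return modified  # adversarial graph found
    return None

# Triangle 0-1-2 plus an isolated node 3: originally classified 1;
# deleting one triangle edge flips the prediction to 0.
g = [(0, 1), (1, 2), (0, 2)]
adv = attack(g, n=4)
```

The paper's RL attacker replaces this exhaustive single-flip search with a Q-learning policy that generalizes across graphs and can plan multi-step edge modifications under a budget.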
Abstract
Deep learning on graph structures has shown exciting results in various applications. However, few attentions have been paid to the robustness of such models, in contrast to numerous research work for image or text adversarial attack and defense. In this paper, we focus on the adversarial attacks that fool the model by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier. Also, variants of genetic algorithms and gradient methods are presented in the scenario where prediction confidence or gradients are available. We use both synthetic and real-world data to show that, a family of Graph Neural Network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show such attacks can be used to diagnose the learned classifiers.
Paper download link:
https://www.aminer.cn/archive/adversarial-attack-on-graph-structured-data/5b67b47917c44aac1c863824


