This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning, presenting a well-structured review of up-to-date techniques to assess the vulnerability of various types of deep learning models to adversarial examples. This tutorial will particularly highlight state-of-the-art techniques in adversarial attacks and robustness verification of deep neural networks (DNNs). We will also introduce some effective countermeasures to improve the robustness of deep learning models, with a particular focus on generalisable adversarial training. We aim to provide a comprehensive overall picture of this emerging direction and make the community aware of the urgency and importance of designing robust deep learning models in safety-critical data analytical applications, ultimately enabling end-users to trust deep learning classifiers. We will also summarise potential research directions concerning the adversarial robustness of deep learning, and its potential benefits for enabling accountable and trustworthy deep learning-based data analytical systems and applications.
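To give a flavour of the adversarial examples discussed in the tutorial, the sketch below shows the classic fast gradient sign method (FGSM) on a toy linear classifier. All weights and inputs here are hypothetical values chosen purely for illustration; the tutorial itself covers far more general attacks on DNNs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression classifier (hypothetical weights, for illustration only).
w = np.array([2.0, -3.0])
b = 0.0

def predict(x):
    """Return the predicted class (0 or 1) for input x."""
    return int(sigmoid(w @ x + b) >= 0.5)

# A clean input that the model classifies correctly as class 1.
x = np.array([0.5, 0.2])
y = 1

# FGSM: step the input in the direction of the sign of the loss gradient.
# For the logistic loss, d(loss)/dx = (sigmoid(w @ x + b) - y) * w.
grad_x = (sigmoid(w @ x + b) - y) * w
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

print(predict(x), predict(x_adv))  # the small perturbation flips the prediction
```

Even though each coordinate of the input moves by at most 0.3, the model's decision changes, which is exactly the kind of vulnerability that robustness verification and adversarial training aim to address.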
The content of the tutorial is planned as follows:
Nov 1, 2021 (CIKM'21 Tutorial Day)
Q&A Session 1: 11:00 – 11:30 (UTC, 1 Nov)
Q&A Session 2: 17:30 – 18:00 (UTC, 1 Nov)
Q&A Session 3: 21:00 – 21:30 (UTC, 1 Nov)
Please join us via the Zoom link: https://zoom.us/j/92927096967?pwd=TzY3M0pQZzFlT2llZWMyVVVybUswZz09
Our tutorial is also archived here:
Wenjie Ruan, Xinping Yi, and Xiaowei Huang. 2021. Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM '21). Association for Computing Machinery, New York, NY, USA, 4866–4869. DOI: https://dl.acm.org/doi/10.1145/3459637.3482029
Tutorial Videos
Part-1: Attacks (Tutorial Slides)