From the speaker about the webinar:
Adversarial attacks on neural network models (typically ones solving computer vision tasks) have been an active machine learning research topic for more than a decade. To mitigate the devastating effect of adversarial examples, methods of adversarial defense have been proposed, most of them empirical. That said, some defense algorithms provide theoretical guarantees under any type of attack; these constitute the direction of certified robustness.
In this talk, I will present not only the classical methods of certified defense and approaches to improving them, but also the essential problems of certified robustness and what can be considered possible solutions to these challenges.
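As a primer on the core technique behind several of the references below, here is a minimal sketch of certification via randomized smoothing in the spirit of [1]. The toy 2-D base classifier, the use of statsmodels' Clopper-Pearson bound, and all parameter values are illustrative assumptions for this sketch, not the speaker's implementation.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def base_classifier(x):
    # Toy deterministic base classifier on 2-D inputs (a hypothetical
    # stand-in for a neural network): class 1 iff x0 + x1 > 0.
    return int(x[0] + x[1] > 0.0)

def certify(x, sigma=0.5, n=10_000, alpha=0.001, seed=0):
    """Monte-Carlo certification of the smoothed classifier g at point x.

    g(x) = argmax_c P(base_classifier(x + eps) = c), eps ~ N(0, sigma^2 I).
    Returns (predicted_class, certified_l2_radius); radius 0.0 means abstain.
    """
    rng = np.random.default_rng(seed)
    # Vote of the base classifier under n Gaussian perturbations of x.
    noise = rng.normal(scale=sigma, size=(n, x.shape[0]))
    votes = np.bincount([base_classifier(x + eps) for eps in noise],
                        minlength=2)
    top = int(np.argmax(votes))
    # One-sided (1 - alpha) Clopper-Pearson lower bound on the top-class
    # probability p_A, as in the CERTIFY procedure of Cohen et al. [1].
    p_a_lower = proportion_confint(votes[top], n,
                                   alpha=2 * alpha, method="beta")[0]
    if p_a_lower <= 0.5:
        return top, 0.0  # cannot certify any radius: abstain
    # Certified l2 radius from [1]: R = sigma * Phi^{-1}(p_A_lower).
    return top, sigma * norm.ppf(p_a_lower)

if __name__ == "__main__":
    label, radius = certify(np.array([1.0, 0.5]))
    print(f"prediction: {label}, certified l2 radius: {radius:.3f}")
```

The guarantee is probabilistic: with confidence 1 - alpha, the smoothed classifier's prediction at x is constant under any l2 perturbation of norm less than the returned radius, whatever the attack.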
Additional Information:
[1] Cohen, Jeremy M., Elan Rosenfeld, and J. Zico Kolter. "Certified Adversarial Robustness via Randomized Smoothing."
[2] Kumar, A., et al. "Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness."
[3] Fischer, Marc, Maximilian Baader, and Martin Vechev. "Certification of Semantic Perturbations via Randomized Smoothing."
[4] Muravev, Nikita, and Aleksandr Petiushko. "Certified Robustness via Randomized Smoothing over Multiplicative Parameters."
[5] Pautov, Mikhail, et al. "CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks."
[6] https://sokcertifiedrobustness.github.io
* Aleksandr Petiushko's personal webpage: https://petiushko.info
** Aleksandr Petiushko's Google Scholar profile: https://scholar.google.com/citations?user=b8d5wS-QfscC
Attention! The event will be in English.
Video: https://youtu.be/N40xcQBIXDg
Slides: https://drive.google.com/file/d/1sDK9EfBUt7vBMKzFMueHR5q7ilviRFQ2/view?usp=sharing