The Conference for Machine Learning Innovation

Making Machine Learning Models Attack-Proof with Adversarial Robustness

Workshop
Info

It is easy to trick a classifier into making embarrassingly false predictions. When this is done systematically and intentionally, it is called an adversarial attack; the specific variant that manipulates inputs at inference time is known as an evasion attack. In this session, we will examine an evasion use case and briefly survey the other forms of attack. We will then explain two defense methods, spatial smoothing preprocessing and adversarial training, and finally demonstrate one robustness evaluation method and one certification method to ascertain that the model can withstand such attacks.
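To make these moving parts concrete, here is a minimal PyTorch sketch of the pipeline the abstract describes: an FGSM evasion attack, the two defenses (median-filter spatial smoothing and adversarial training), and a simple robustness evaluation loop. The toy SmallNet model, the random data, and all names are illustrative assumptions, not the session's actual material; certification methods (e.g., randomized smoothing) need more machinery than fits in a short sketch and are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Hypothetical stand-in classifier; any differentiable model works."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, 10)

    def forward(self, x):
        return self.fc(F.relu(self.conv(x)).flatten(1))

def fgsm_attack(model, x, y, eps=0.1):
    """Evasion attack (FGSM): one gradient-sign step that raises the loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def spatial_smoothing(x, k=3):
    """Defense 1: median-filter preprocessing squeezes the high-frequency
    perturbation out of the image before it reaches the model."""
    pad = k // 2
    xp = F.pad(x, (pad, pad, pad, pad), mode="reflect")
    patches = xp.unfold(2, k, 1).unfold(3, k, 1)   # (N, C, H, W, k, k)
    return patches.contiguous().flatten(-2).median(dim=-1).values

def adversarial_training_step(model, opt, x, y, eps=0.1):
    """Defense 2: adversarial training -- fit the model on perturbed
    inputs so its decision boundary hardens against the attack."""
    x_adv = fgsm_attack(model, x, y, eps)
    opt.zero_grad()                 # discard gradients left by the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

# Minimal robustness evaluation: accuracy on attacked inputs as eps grows,
# with and without the spatial-smoothing preprocessing defense.
model = SmallNet()
x, y = torch.rand(16, 1, 28, 28), torch.randint(0, 10, (16,))
for eps in (0.0, 0.05, 0.1, 0.2):
    x_adv = fgsm_attack(model, x, y, eps)
    acc_raw = (model(x_adv).argmax(1) == y).float().mean()
    acc_smooth = (model(spatial_smoothing(x_adv)).argmax(1) == y).float().mean()
    print(f"eps={eps:.2f}  raw acc={acc_raw:.2f}  smoothed acc={acc_smooth:.2f}")
```

On an untrained toy model the numbers are meaningless; the point of the sketch is the shape of the evaluation: sweep the attack strength, compare accuracy with and without each defense, and look for the epsilon at which the model's predictions collapse.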

