The Conference for Machine Learning Innovation

Making Machine Learning Models Attack-Proof with Adversarial Robustness

Workshop
Info

We can easily trick a classifier into making embarrassingly false predictions. When this is done systematically and intentionally, it is called an adversarial attack; this particular kind is known as an evasion attack. In this session, we will examine an evasion use case and briefly cover other forms of attack. We will then explain two defense methods, spatial smoothing preprocessing and adversarial training, and lastly demonstrate one robustness evaluation method and one certification method to ascertain that a model can withstand such attacks.
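
The abstract does not name the tooling used in the demos. The open-source Adversarial Robustness Toolbox (ART) implements every technique listed above, so the sketches that follow assume it; the dataset, model, and attack budget are illustrative stand-ins, not the session's actual use case. First, a minimal evasion attack with the Fast Gradient Method against a scikit-learn classifier:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Toy setup: 8x8 digit images with pixel values scaled to [0, 1].
X, y = load_digits(return_X_y=True)
X = X / 16.0
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Wrap a plain scikit-learn model so ART can attack it.
classifier = SklearnClassifier(model=SVC(C=1.0, kernel="rbf"), clip_values=(0.0, 1.0))
classifier.fit(x_train, y_train)

# Evasion attack: add a small, deliberate perturbation to each test input.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_adv = attack.generate(x=x_test)

clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y_test)
print(f"accuracy: {clean_acc:.2%} clean vs. {adv_acc:.2%} under attack")
```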
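
Both defenses mentioned above have ART counterparts. Continuing the sketch: spatial smoothing runs a median filter over each input before prediction, while adversarial training retrains the model on attack samples. Here `nn_classifier` is a hypothetical neural-network estimator (e.g. an `art.estimators.classification.PyTorchClassifier`), since ART's `AdversarialTrainer` expects an estimator that supports mini-batch fitting, which the SVC wrapper above does not:

```python
from art.attacks.evasion import FastGradientMethod
from art.defences.preprocessor import SpatialSmoothing
from art.defences.trainer import AdversarialTrainer

# Defense 1: spatial smoothing, a median filter over each image. It expects
# image tensors, so the flat 64-pixel digits are reshaped to 8x8 with one
# channel; window_size=3 is an assumed setting, not one from the session.
smoother = SpatialSmoothing(window_size=3, channels_first=False)
x_smoothed, _ = smoother(x_adv.reshape(-1, 8, 8, 1))
smoothed_preds = classifier.predict(x_smoothed.reshape(-1, 64))

# Defense 2: adversarial training, i.e. mixing attack samples into training.
# `nn_classifier` is a placeholder for a neural-network ART estimator.
trainer = AdversarialTrainer(
    nn_classifier,
    attacks=FastGradientMethod(estimator=nn_classifier, eps=0.2),
    ratio=0.5,  # fraction of each batch replaced by adversarial examples
)
trainer.fit(x_train, y_train, nb_epochs=10, batch_size=128)
```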
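
For robustness evaluation, one option ART provides is the empirical robustness score: the average minimal perturbation, relative to input size, that an attack needs before predictions flip (higher is better). Whether this is the metric the session demonstrates is an assumption, as are the search parameters:

```python
from art.metrics import empirical_robustness

# Empirical robustness under a minimal-perturbation FGSM search; eps_step
# and the eps upper bound are assumed values for the search grid.
score = empirical_robustness(
    classifier,
    x_test,
    attack_name="fgsm",
    attack_params={"eps_step": 0.05, "eps": 1.0},
)
print(f"empirical robustness (FGSM): {score:.4f}")
```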
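
The abstract likewise does not say which certification method is shown. A common choice is randomized smoothing (Cohen et al., 2019), which certifies an L2 radius around an input within which the smoothed model's prediction cannot change. ART ships wrappers for it, but to keep the certificate logic visible, here is a simplified, self-contained sketch using only NumPy and SciPy; the full algorithm uses separate sample sets to select the top class and to estimate its probability:

```python
import numpy as np
from scipy.stats import binomtest, norm

def certify_smoothed(predict_fn, x, sigma=0.25, n=1000, alpha=0.001):
    """Simplified randomized-smoothing certificate (Cohen et al., 2019).

    predict_fn maps a batch of inputs to integer class labels. Returns the
    smoothed prediction and a certified L2 radius, or (None, 0.0) when the
    procedure abstains. Note the certificate holds for the smoothed model,
    not the underlying base classifier.
    """
    rng = np.random.default_rng(0)
    noisy = x[None, ...] + rng.normal(0.0, sigma, size=(n,) + x.shape)
    labels = predict_fn(noisy)
    counts = np.bincount(labels)
    top = int(np.argmax(counts))
    # Clopper-Pearson lower confidence bound on P(f(x + noise) = top).
    p_lower = binomtest(int(counts[top]), n).proportion_ci(1 - alpha).low
    if p_lower <= 0.5:
        return None, 0.0  # cannot certify at this confidence level
    return top, sigma * norm.ppf(p_lower)

# Example with the classifier from the first sketch; any label-valued
# predictor works.
pred, radius = certify_smoothed(
    lambda batch: np.argmax(classifier.predict(batch), axis=1),
    x_test[0],
)
print(f"certified class {pred} within L2 radius {radius:.3f}")
```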

This session belongs to the Munich program. Take me to the current program of Munich, Singapore, or Berlin.
