The Conference for Machine Learning Innovation

Let’s Talk About Security Issues in Today’s ML Systems

Short Talk
Info

As machine learning (ML) based approaches continue to achieve strong results and their use becomes more widespread, it becomes increasingly important to examine their behavior in adversarial settings. Unfortunately, ML models have been shown to be vulnerable to so-called adversarial examples: inputs to ML models that are intentionally designed to cause them to malfunction. Despite ongoing research efforts, there is no reliable defense so far, meaning that today's state-of-the-art learning-based approaches remain vulnerable.
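
To make the idea of an adversarial example concrete, here is a minimal sketch (not part of the talk material) of the Fast Gradient Sign Method, a common test-time attack: the input is nudged in the direction that most increases the model's loss. The names model, x (an input batch with values in [0, 1]), y (true labels), and the budget epsilon are assumed placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Fast Gradient Sign Method: perturb the input in the direction that
    # increases the classification loss, bounded by an L-infinity budget epsilon.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values in the valid [0, 1] range

Even a small epsilon, imperceptible to a human, can be enough to flip the prediction of an otherwise accurate classifier.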

In this talk, we will take a look at things an ML practitioner should know when it comes to security issues in ML systems, with a focus on vulnerabilities at test time.

Take me to the current program of Munich, Singapore, or Berlin.
