The Conference for Machine Learning Innovation

Let’s Talk About Security Issues in Today’s ML Systems

Short Talk
Infos

As machine learning (ML) based approaches continue to achieve strong results and their use becomes more widespread, it becomes increasingly important to examine their behavior in adversarial settings. Unfortunately, ML models have been shown to be vulnerable to so-called adversarial examples: inputs that are intentionally designed to cause a model to malfunction. Despite ongoing research efforts, there is no reliable defense so far, meaning that today's state-of-the-art learning-based approaches remain vulnerable.
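To make the idea of an adversarial example concrete, here is a minimal sketch of one classic test-time attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression model. The model, weights, and epsilon value are all illustrative assumptions, not taken from the talk itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM for a logistic-regression model (illustrative sketch).

    Crafts x_adv = x + eps * sign(d loss / d x), where the loss is
    binary cross-entropy. For this model the input gradient has the
    closed form (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)   # predicted probability of class 1
    grad_x = (p - y) * w            # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy model and a point it classifies correctly before the attack
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0                             # true label

clean_pred = sigmoid(np.dot(w, x) + b) > 0.5       # correct: True
x_adv = fgsm(x, y, w, b, eps=0.9)
adv_pred = sigmoid(np.dot(w, x_adv) + b) > 0.5     # flipped: False
```

Even though the perturbation is bounded per coordinate by eps, it is aligned with the loss gradient, which is enough to flip the prediction; attacks on deep models follow the same recipe with gradients obtained by backpropagation.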

In this talk, we will take a look at things an ML practitioner should know when it comes to security issues in ML systems, with a focus on vulnerabilities at test time.

