The Conference for Machine Learning Innovation

Everything you need to know about Security Issues in today’s ML Systems

Shorttalk
Info
Tuesday, December 10, 2019
17:25 - 17:45
Room: Saal A

As machine learning (ML) based approaches continue to achieve great results and their use becomes more widespread, it becomes increasingly important to examine their behavior in adversarial settings. In this talk, we will look at everything an ML practitioner should know about security issues in ML systems. By the end of the talk, you will know what is and isn't possible, and what you should and shouldn't worry about. We will start with a general overview of security issues in ML systems (e.g., poisoning, evasion, and inversion attacks), and then focus on vulnerabilities at test time (adversarial examples). We will see what adversarial examples are, what negative consequences they can cause, and examine existing attacks on ML systems. We will cover attacks on ML-as-a-service platforms (Google Cloud, AWS), attacks on state-of-the-art face recognition systems, attacks on autonomous vehicles, attacks on voice assistants (Apple Siri, Google Now, and Amazon Echo), and more.
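To make the test-time (evasion) setting concrete, here is a minimal sketch of the classic Fast Gradient Sign Method for crafting an adversarial example. The toy linear classifier, its weights, and the `eps` budget are made-up values for illustration only; they are not drawn from the talk:

```python
import numpy as np

def fgsm_perturb(x, loss_grad, eps):
    """Fast Gradient Sign Method: move each input feature by eps
    in the direction that increases the model's loss,
    i.e. x_adv = x + eps * sign(dL/dx)."""
    return x + eps * np.sign(loss_grad)

# Toy linear classifier (illustrative): predict class 1 when w . x > 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.4])   # clean input

score = w @ x                    # 0.6 > 0 -> correctly classified as class 1
# With loss L = -score for the true class, the gradient dL/dx is -w.
eps = 0.3                        # per-feature perturbation budget
x_adv = fgsm_perturb(x, -w, eps)

adv_score = w @ x_adv            # score drops below 0 -> misclassified
print(score, adv_score)
```

Even though every feature moves by at most 0.3, the aligned per-feature nudges add up and flip the prediction; the same idea, with gradients obtained by backpropagation, is what drives attacks on image classifiers.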
