The Conference for Machine Learning Innovation

Everything you need to know about Security Issues in today’s ML Systems

Short Talk

As machine learning (ML) based approaches continue to achieve impressive results and see wider adoption, it becomes increasingly important to examine their behavior in adversarial settings. In this talk, we will look at everything an ML practitioner should know about security issues in ML systems. By the end of the talk, you will know what is and isn't possible, and what you should and shouldn't worry about. We will start with a general overview of security issues in ML systems (e.g., poisoning, evasion, and inversion attacks), and then focus on vulnerabilities at test time (adversarial examples). We will see what adversarial examples are, what negative consequences they can cause, and examine existing attacks on ML systems. We will cover attacks on ML-as-a-service offerings (Google Cloud, AWS), attacks on state-of-the-art face recognition systems, attacks on autonomous vehicles, attacks on voice assistants (Apple Siri, Google Now, and Amazon Echo), and more.
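To give a flavor of the test-time attacks mentioned above, the following is a minimal sketch (not part of the talk material) of the Fast Gradient Sign Method, one classic way to craft adversarial examples. The PyTorch model, data, and epsilon value are placeholder assumptions for illustration only.

```python
# Illustrative FGSM sketch: perturb an input within a small L-inf budget
# so that a classifier's loss increases and the prediction may flip.
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of x (hypothetical helper)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Toy classifier on random "images" -- stands in for a real trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.rand(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))
adversarial = fgsm_attack(model, images, labels)
print((adversarial - images).abs().max())  # perturbation stays within epsilon
```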
