Attacks on machine learning systems span a wide range of approaches and do not end with the notorious adversarial examples. Attacks can change the logic of the system (adversarial examples and adversarial reprogramming), extract data from AI systems (so-called membership inference or model extraction attacks) or, conversely, inject data into the system (poisoning, backdoor, and trojan attacks). Unfortunately, a silver bullet against these attacks has not been invented and is unlikely to be. Instead, we will show you how to approach the security assessment of AI algorithms correctly: which metrics to look at, which defensive approaches can be applied and where they are best applied, and how to get the maximum protection for a reasonable investment of resources.
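To make the "change the logic of the system" category concrete, here is a minimal, self-contained sketch of the idea behind a one-step adversarial perturbation (in the spirit of FGSM) against a toy linear classifier. The model, weights, and epsilon value are all hypothetical, chosen purely for illustration; real attacks target neural networks and compute the gradient via backpropagation.

```python
import numpy as np

# Hypothetical toy linear classifier: score = w . x; predict class 1 if score > 0.
w = np.array([0.8, -0.5, 0.3])   # fixed (illustrative) model weights
x = np.array([1.0, 0.2, 0.5])    # a benign input, classified as class 1

def score(x):
    return float(w @ x)

# One-step, L-infinity-bounded perturbation: for a linear model the loss
# gradient w.r.t. the input is proportional to w, so pushing the input in
# the direction -sign(w) (for a positive label) lowers the score fastest
# per unit of max-norm budget eps.
eps = 0.6                        # illustrative perturbation budget
x_adv = x - eps * np.sign(w)

print(score(x))      # positive: original prediction is class 1
print(score(x_adv))  # negative: the small perturbation flips the decision
```

The same principle scales to deep models: a perturbation that is tiny in each input dimension can still move the decision boundary, which is why adversarial robustness has to be measured, not assumed.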
* AI Security vs traditional Cybersecurity
* Who should care about AI Security: Industries
* Why should we care about AI Security: Threats, Initiatives, Research
* What is AI Security: AI Objects, Applications, ML tasks
* How to break AI: Different attacks
* When to protect AI: Approaches to protect AI
* A step-by-step AI Security project
* Where are we going?