Cracking Open the Black Box of Neural Networks

Keynote
Infos
Tuesday, June 19, 2018
09:00 - 09:45
Room: Cuvilliés

Through my YouTube channel on Machine Learning (https://www.youtube.com/c/ArxivInsights) I recently committed to staying on track with the latest developments in AI and Machine Learning and to finding the most interesting new trends that will greatly impact the future of the field. One of my current focuses is "How Neural Nets Learn". Here I engage with topics like feature visualisation and the various ways to fool neural networks into making obvious mistakes, known as 'adversarial attacks'. These topics are already crucial for many existing ML applications today, and they will become even more relevant in future domains like self-driving cars, AI-assisted medical care, drug discovery and many more. Many research groups are currently focused on tackling the 'black box problem': neural nets are largely uninterpretable, so there is no way of knowing why they make certain predictions. In this talk I want to shed some light on this black box of neural nets and on the future progress we can expect!
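To give a flavour of what an adversarial attack looks like in practice, here is a minimal sketch of the fast gradient sign method (FGSM) applied to a toy linear classifier in plain Python. The model weights, input, and epsilon value are illustrative assumptions, not taken from the talk; the point is only that a small, targeted perturbation of the input can flip the model's prediction.

```python
# Sketch of an adversarial attack (FGSM) on a toy logistic-regression
# classifier. All concrete numbers below are illustrative assumptions.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Probability that input x belongs to class 1 under a linear model.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y_true, eps):
    # For a linear model with cross-entropy loss, the gradient of the
    # loss w.r.t. the *input* x is (p - y_true) * w. FGSM perturbs each
    # input dimension by eps in the direction of the gradient's sign.
    p = predict(w, b, x)
    grad = [(p - y_true) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

w, b = [2.0, -1.5], 0.1            # a "trained" toy model (assumed)
x = [0.6, 0.2]                     # input correctly classified as class 1
print(predict(w, b, x) > 0.5)      # True: confident class-1 prediction
x_adv = fgsm(w, b, x, y_true=1.0, eps=0.5)
print(predict(w, b, x_adv) > 0.5)  # False: the perturbation flips it
```

On image classifiers the same idea applies per pixel, and a perturbation invisible to humans can change the predicted label — exactly the kind of "obvious mistake" the abstract refers to.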
