Machine Learning is often hyped, but how does it work? We will show you hands-on how to do data inspection, make predictions, build a simple recommender system, and more. Using realistic datasets and partially programmed code, we will familiarize you with machine learning concepts such as regression, classification, over-fitting, cross-validation and many more. This tutorial is accessible to anyone with basic Python knowledge who is eager to learn the core concepts of machine learning. We make use of an IPython/Jupyter Notebook running on a dedicated server, so nothing but a laptop with an internet connection is required to participate.
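Cross-validation, one of the concepts the tutorial covers, can be sketched in a few lines of plain Python. The toy dataset and the deliberately trivial "predict the training mean" model below are my own invented illustration, not material from the tutorial:

```python
# Minimal k-fold cross-validation sketch. The "model" simply predicts
# the mean of the training targets; the error metric is mean squared error.

def k_fold_splits(n, k):
    """Yield (train_indices, test_indices) pairs for k contiguous folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def cross_validate(y, k=3):
    """Average test MSE of a mean predictor across k folds."""
    errors = []
    for train, test in k_fold_splits(len(y), k):
        mean = sum(y[i] for i in train) / len(train)
        errors.append(sum((y[i] - mean) ** 2 for i in test) / len(test))
    return sum(errors) / len(errors)

targets = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(cross_validate(targets, k=3))  # → 6.25
```

The point of the sketch: each data point is used for testing exactly once, so the averaged error estimates how the model generalizes rather than how well it memorizes.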
In this workshop, we will dive into the world of deep learning. After introducing the basics of Google's deep learning library TensorFlow, we will continue with hands-on coding exercises.
You will learn how to create a neural network that is able to classify images of handwritten digits (MNIST) into their respective classes (image classification). Starting with a very basic, shallow network, we gradually add depth and introduce convolutional layers to improve the performance of our model. Finally, we will have a look at natural language processing with recurrent models. For this workshop we only require participants to have a basic knowledge of Python in order to work on the hands-on coding exercises. If you do not have any coding experience, you are still welcome to join, as all the solutions will be shared at the end of the workshop. For the workshop we will use free Google Cloud credits to spin up a virtual machine; to activate the credits, everybody should bring a valid Visa card (there will be absolutely no costs, and you can cancel your account directly after the workshop if desired).
If you have been following the news over the last couple of months, you will have noticed that people like Facebook founder Mark Zuckerberg and SpaceX and Tesla founder Elon Musk don't agree on the dangers of AI. Facebook brings stories of the promised land, while Elon Musk's OpenAI non-profit seems to bring a sense of impending doom. But how can these people have such wildly different opinions? And what is actually happening in the world of machine learning and AI that can lead to these opposing views?
In this talk I will dive deeper into where these views come from, what the most realistic scenarios are, and what the impact of machine learning and AI might be on our job market and society. Finally, I will give you my two cents on how to prepare for the future ahead.
You might have received an unwanted email at some point. We all have. According to some studies, between 80 and 90 percent of all email is spam. Those of us with accounts at established email providers can rely on their hosts' filters to keep their inboxes manageable. What if you're hosting your email on your own, though? Off-the-shelf open source solutions are there when you need them, but that's not where the fun is. Combining existing tools, building your own classifiers, and seeing them work in practice is far more exciting. Let me tell you a story, one with rabbits.
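Since the talk is about building your own spam classifiers, here is a minimal sketch of the classic approach, a Naive Bayes word-count classifier. The tiny training corpus and the word lists are invented for illustration; a real filter would train on thousands of messages:

```python
# Minimal Naive Bayes spam classifier sketch (toy corpus, illustration only).
import math
from collections import Counter

def train(docs):
    """docs: list of (word_list, label) pairs. Returns per-class word and doc counts."""
    counts = {"spam": Counter(), "ham": Counter()}
    labels = Counter()
    for words, label in docs:
        counts[label].update(words)
        labels[label] += 1
    return counts, labels

def classify(words, counts, labels):
    """Pick the class with the highest log-posterior under a bag-of-words model."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in counts:
        # Log prior + log likelihood with add-one (Laplace) smoothing.
        score = math.log(labels[label] / sum(labels.values()))
        total = sum(counts[label].values())
        for w in words:
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

corpus = [
    (["win", "money", "now"], "spam"),
    (["cheap", "money", "offer"], "spam"),
    (["meeting", "tomorrow", "agenda"], "ham"),
    (["lunch", "tomorrow"], "ham"),
]
counts, labels = train(corpus)
print(classify(["win", "cheap", "offer"], counts, labels))  # → spam
```

Laplace smoothing matters here: without the `+ 1`, a single unseen word would zero out a class's probability entirely.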
Many companies are starting to realize that the combination of IoT data and AI software is rapidly transforming business models everywhere. While countless AI startups are emerging in every sector and China is transitioning to become a global digital leader, many giant companies are facing a rigid test of natural selection: who will survive the AI boom? At ML6, one of our core focuses is leveraging IoT data for industrial optimization. In this talk we will give an overview of the enormous potential that recent breakthroughs in Reinforcement Learning can bring to a wide range of industrial control tasks. We will look at the entire production process, from energy consumption/generation to predictive maintenance to raw material usage and final product quality.
We describe in detail how a deep learning algorithm works and how it can be applied to financial time series analysis. Presenting our four-step procedure of auto-encoding, calibration, validation, and verification, we solve examples from index replication and portfolio creation. We also discuss some of the details to keep in mind when creating a machine-learning-based systematic trading strategy.
This session presents two business applications of Machine Learning in finance: Credit Risk Prediction and Online Payment Fraud Detection. Credit risk analysis is important to financial institutions that provide loans to businesses and individuals. Credit loans and finances carry the risk of default or delinquency. To understand the risk levels of credit users, credit providers normally collect vast amounts of information on borrowers. This session presents how statistical predictive analytic techniques can be used to analyze and determine the risk levels involved in credit, and to approve or reject credit applications accordingly.
Fraud detection is one of the earliest industrial applications of anomaly detection and machine learning. This session introduces best practices and design guidelines for building an online payment fraud detection mechanism in Azure Machine Learning.
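To give a flavour of anomaly detection in its simplest form, here is a sketch that flags transactions whose amount lies more than a chosen number of standard deviations from the mean. The amounts and threshold are invented; production fraud systems (including those in Azure Machine Learning) use far richer features than a single z-score:

```python
# Simplest anomaly-detection sketch: z-score flagging of transaction amounts.
import math

def z_score_outliers(values, threshold=3.0):
    """Return indices of values more than `threshold` std deviations from the mean."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * std]

# Seven small everyday payments and one suspiciously large one.
amounts = [12.5, 9.9, 11.2, 10.4, 10.8, 500.0, 9.5, 11.9]
print(z_score_outliers(amounts, threshold=2.0))  # → [5]
```

Even this crude rule illustrates the core idea behind fraud detection: model "normal" behaviour statistically, then flag what deviates from it.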
When Microsoft released its AI chatbot Tay on Twitter in March 2016, Tay was supposed to learn to chat as "an average American teenage girl". However, she quickly became sexist, racist, and anti-Semitic. Microsoft turned out to be overly naïve about the intentions of the users who would "train" Tay.
In designing and developing technology, there are always risks involved. Besides a design's hard impacts, in terms of safety and health, "soft" impacts, such as undesirable consequences, also deserve our attention. Today, logging off has become an illusion in our always-on world. Enormous data sets are generated every day. Algorithms not only have an important share in how we see the world, they also predict our future behavior and even influence our thoughts. Yet neither algorithms nor data sets are ever neutral; on the contrary, users' and developers' biases slip into them. And still we rely on them on a daily basis. How can we anticipate and reduce undesirable consequences and pitfalls?
Deep Learning is all the hype these days, beating another record almost every week. And it's not just for Google, Facebook, Microsoft & co. - it can work just fine with not-so-big data and moderate resources, too. Deep learning frameworks abound, and we will see more and more applications starting to integrate deep learning in one way or another. However, writing code for deep learning is not just coding - it really helps if you have a basic understanding of what's going on underneath.
In this session, I'll explain the indispensable bits of matrix algebra and calculus you should know, plus tips and tricks to get started with deep learning frameworks like Keras, PyTorch and TensorFlow.
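As a taste of the calculus involved, here is a sketch in plain Python (the toy function and numbers are my own, not the session's material): the analytic gradient of a single sigmoid neuron's squared loss, derived via the chain rule, checked against a numerical finite-difference estimate - the standard sanity check frameworks like the ones above perform internally:

```python
# Gradient of the squared loss of one sigmoid neuron, checked numerically.
# Toy illustration: L(w) = (sigmoid(w * x) - y)^2 for scalar w, x, y.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, y):
    return (sigmoid(w * x) - y) ** 2

def analytic_grad(w, x, y):
    # Chain rule: dL/dw = 2 * (s - y) * s * (1 - s) * x, where s = sigmoid(w * x)
    s = sigmoid(w * x)
    return 2 * (s - y) * s * (1 - s) * x

def numeric_grad(w, x, y, eps=1e-6):
    # Central finite difference: (L(w + eps) - L(w - eps)) / (2 * eps)
    return (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)

w, x, y = 0.7, 1.5, 1.0
print(abs(analytic_grad(w, x, y) - numeric_grad(w, x, y)) < 1e-6)  # → True
```

The same chain-rule decomposition, applied layer by layer over matrices instead of scalars, is exactly what backpropagation in Keras, PyTorch, and TensorFlow automates.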
Runtastic believes in a world where people live longer, healthier lives. We approach this goal by gaining insights from data, e.g. to make better training plans or to help motivate users. While gathering data is, in times of big data, no problem at all from a technical perspective, taming the data and gaining useful insights can be challenging. Microsoft's integration of R, both as a language and as a service, into their data platform came in very handy for us. Come to this talk to learn which services of Microsoft's data platform have benefited from this integration so far, and to hear how we made use of it at Runtastic.
Outfittery's mission is to provide relevant fashion to men. In the past it was our stylists who put together the best outfits for our customers, but about a year ago we started to rely more on intelligent algorithms to augment our human experts. This transition to becoming a data-driven company has left its marks on our IT landscape: In the beginning we just did simple A/B tests. Then we wanted to use more complex logic, so we added a generic data enrichment layer. Later we also provided easy configurability to steer processes. And this in turn enabled us to orchestrate our machine learning algorithms as self-contained Docker containers within a Kubernetes cluster. All in all it's a nice setup that we are pretty happy with. It took us some time to realise that we had actually built a delivery platform for just about any pure function that our data scientists come up with. We have only just started to use it that way; we put their R&D experiments directly into production... This talk will guide you through this journey, explain how this platform is built, and what we do with it.
Long Short-Term Memory (LSTM) is a Recurrent Neural Network (RNN) architecture that looks at a sequence and remembers values over long intervals. LSTMs have achieved state-of-the-art performance in many sequence classification problems. In this talk, I'll cover how to write an LSTM using TensorFlow's Python API for natural language understanding. This is going to be a code-heavy talk where I will implement the LSTM model and explain the math behind it step by step.
In short, it will cover:
- Understanding how the math behind the LSTM architecture works in the case of sequence classification
- Writing an LSTM model using TensorFlow for sentiment classification of variable length English language sentences
Building any AI system is hard, but building a real-time AI brings its own challenges. At Yedup and Clearpool we have been building a real-time AI system to trade against hundreds of securities simultaneously by extracting fundamental signals from real-time social media feeds. We've gone back to first principles to build out a technology stack from scratch to meet our specific needs.
This talk looks at how we've exploited algorithmic architecture to build a real-time AI system that delivers market-leading alpha. That's interesting in itself, but what's more interesting is looking at some of the challenges we've had to overcome in order to deliver a system that can not only trade hundreds of instruments simultaneously, but can also correlate the relationships across industries and sectors to extract yet more alpha.
These challenges apply to any similar class of problems (such as other real-time AI systems), and this talk explores a couple of the key ones, such as maintaining and recovering huge amounts of in-flight state to deliver fast, scalable, and robust AI systems.
From knowing what people do (events) using low-level sensor data from phones, Sentiance can find out why they are doing it (moments), when they'll do it again, and what type of people they are (segments). In this talk, I'll explain the full machine learning pipeline used to assess a user's driving behavior, including the key component of determining whether they were the driver or a passenger during a trip. We'll look into detecting transport mode, map-matching GPS fixes to a plausible traveling route, and fusing that data to arrive at the driver/passenger classifier, openly discussing all ML architectures used, from random forests to CNNs/RNNs. After that, I'll touch upon how moments and segments are derived. Expect a technical talk.
In many real-world scenarios, running inference of a trained model in a third-party cloud service is not desirable. Especially in an enterprise setting, the customer often wants more control over the server infrastructure. The project requirements may include a custom cloud or other traditional server infrastructure into which an ML solution is to be integrated. Java is still the most widespread platform for enterprise server systems, and adding a new language or framework to such projects is cumbersome and increases risk and cost. In this talk, the possibilities of running inference of trained TensorFlow models in a Java enterprise server environment are discussed, together with real-world examples of integration into popular server frameworks like Spring and Apache CXF. Different possibilities for deployment and version control of trained models are explored.
Deep Learning is these days often considered a "holy grail" when it comes to lending intelligence to algorithms, machines, and systems. While fully automatic and autonomous machine learning is on its way, present solutions require a software designer's and engineer's understanding of the underlying principles and possibilities. This talk focuses on the essentials of making Deep Learning work in a broad range of applications and settings. This includes often-encountered limitations such as limited knowledge of an optimal problem domain representation, limited availability of annotated data, or the need to better understand internal decision-making processes. Tricks and hacks presented include end-to-end learning from raw data with convolutional layers, coping with sparse data through generative adversarial topologies, active, reinforced, semi-supervised, and transfer learning, and ways to interpret a trained network. Hints are further given on the design of a suitable network topology and on tools in the field. It is left to the listener's creativity how to best exploit Deep Learning in the next thrilling real-life use case.
R is the first choice for data scientists for a good reason: besides accessing and transforming data and applying statistical methods and models to it, it offers a wide variety of possibilities to visualize data. As visual perception is the key to understanding data, this capability is crucial. This session will give you a broad overview of the available packages and the diagram types you can build with them on the one hand, and a deep dive into common visualizations and their possibilities on the other. Impress yourself and your peers with stunning visualizations which you could not achieve with other tools of Microsoft's BI stack.
Computer Vision enables computers to obtain a high-level understanding from images and videos by automatically extracting, analysing, and understanding useful information. With autonomous driving, visual failure detection, and scene understanding, computer vision is becoming one of the focus areas in artificial intelligence (AI) that enable computers to see and perceive like humans. This talk will help you understand the state of the art in computer vision using deep learning approaches. We will explain the differences and challenges in image classification, object recognition, and image segmentation. Enriched with some sample code and explanations of algorithms, we will go through the evolution of computer vision thanks to deep learning. Why is scene description so much easier than segmentation? Is a GPU always required? And finally - why is seeing not equal to understanding?
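In the spirit of the sample code the talk promises, here is the core operation behind convolutional approaches sketched in plain Python. The tiny "image" and the edge-detecting kernel are invented for illustration:

```python
# Valid 2D convolution (strictly speaking cross-correlation, as implemented
# in most deep learning frameworks): slide the kernel over the image and
# take the weighted sum at each position.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for r in range(out_h):
        for c in range(out_w):
            out[r][c] = sum(
                image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)
            )
    return out

# A vertical-edge detector applied to a tiny image with a dark-to-bright edge.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
print(conv2d(image, kernel))  # → [[0, 2, 0], [0, 2, 0]] - the edge column lights up
```

A convolutional network learns the kernel weights instead of hand-designing them, and stacks many such filters to build up from edges to textures to objects.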
During the development of a Deep Learning (DL) focused Android app, we were able to build up a lot of knowledge and experience. In this talk we will share our learnings from organizing the development of an app using DL, to save you some headaches during this process. We discuss whether a deep learning approach is the right way to go or whether a more traditional approach is sufficient to achieve the goal. The process and best practices we found useful to collect, organize, and label the required data for training our models will also be outlined. Finally, we will talk about how it all comes together to ship a production-ready and efficient Android app using the Google TensorFlow API.
Amazon AI services bring natural language understanding (NLU), automatic speech recognition (ASR), visual search and image recognition, text-to-speech (TTS), and machine learning (ML) technologies within the reach of every developer. In this session, you will be introduced to several Amazon AI services and learn through a number of examples how those deep learning services can be utilised in your application with a few simple API calls.
The keynote by Katleen Gabriels about ethics in computing and the "soft" impacts of programming decisions raised many questions and comments. Let's continue this important and fruitful conversation in an open panel - with Katleen and others.
Understanding AI from the user and business perspective: machine learning, smart algorithms, and autonomous products are starting to become realities of our daily life. But how do we need to design artificial intelligence systems so that they will be accepted, used, and socially integrated in our societies and future businesses? An essential part of this presentation is understanding why success in data-driven business is not about the data itself but about the situational relevance within the data, and how to architect smart ecosystems so that they will be beneficial for us as human beings. Insights into designing user-centric AI systems from the user's point of view will make you understand why it's not about technology but all about user centricity.
Machine Learning and related technologies are changing the world, and modern tools and frameworks are lowering the barrier of entry to this field for every developer. Let's discuss what we've learned at this conference, what still needs to be discussed, and what you'd expect from future ML conferences. It's all about feedback and exchange between the speakers and you, the audience!