There’s no doubt that a revolution is coming. As mixed-mode customer contact through Amazon Alexa and Google Assistant becomes ubiquitous and enterprises create their own voice apps and chatbots, how will they ensure brand identity and differentiation from their direct competitors?
Enterprises are coming to understand that these voice- and text-enabled services can be valuable new channels and a direct connection with the consumer. But will brand identity be diluted by these MetaBots? How do brands avoid disintermediation?
There’s an opportunity and a process for delivering extraordinary, branded, AI-driven customer experiences through these channels. In this keynote, we’ll discuss how.
AI offers great new opportunities. To become a successful player in AI-driven business, companies need to decide on the role of AI in their product strategy. In principle, this decision may range from “develop your own AI modules yourself” to “use ready-to-use, commercially available AI suites”.
Given the amount of time and resources that have already been invested in powerful AI suites, it seems hard to find an application that benefits from a special-purpose AI engine that a company develops for itself.
In this presentation, experiences and selected case studies of AI-driven business are presented, covering medical solutions, predictive analysis, services, and the automotive industry. For each case study, the effects of these decisions on the business case are discussed. In some cases, it is explained why an AI product development strategy turned out to be successful even though important technical goals were not achieved; in other cases, why a strategy did not succeed.
A summary concludes the presentation with recommendations for selecting a successful AI strategy.
Learn from successful customer examples of how to apply machine learning in practice. Hear how customers have automated their business processes using a design-led approach and intelligent services from SAP’s Leonardo Machine Learning portfolio.
With the total number of Alexa skills surpassing 50,000 worldwide, companies now face the challenge of making themselves heard in the voice universe. In this session, we take a closer look at how companies can combine the strengths of existing brands with a user-centric approach to conversational design in order to create voice experiences that stand out, no matter the size of the competition.
Machine learning in a web browser?! Yes, you read that correctly! Machine learning is typically regarded as the purview of traditional scientific computing environments such as Python and MATLAB. However, recent advances in web technologies are rapidly opening a new frontier for machine learning which extends beyond the desktop and into your browser.
In this session, I'll discuss the current state-of-the-art for machine learning in the browser and introduce libraries for high-performance linear algebra, statistical computing, and neural networks. I'll highlight how changes in web standards are facilitating the rise of these libraries and thus accelerating the adoption of browser-based machine learning. I'll follow with lessons learned while implementing machine learning algorithms for use in web applications, discussing common implementation mistakes, portability issues, and how to maximize performance. And finally, I'll conclude by offering insight into how you can take advantage of the next big machine learning revolution, all within your browser!
What about making your app smarter without any knowledge of AI? With pre-trained models and a few lines of code, machine learning APIs can automatically analyze your data. Moreover, AutoML techniques can now help you get even more specific insights tailored to your needs.
In this session, you’ll see how to transform or extract information from text, images, audio and video with ML APIs, and how to train a custom AutoML model. You’ll also be an active player in a live demo. Don't put your smartphone in airplane mode!
This talk explores the genres and types that attract the attention of professional voice developers, as well as the business models that have been tried, established, and brought to fruition. These include Amazon’s developer rewards program for gaming, highly engaging games with premium content, voice apps built for branding and marketing purposes, and assistants for board or video games.
One notable example we’ll be investigating is Sensible Object’s ‘When In Rome’. This is a board game that comes with its own Skill, in which Alexa introduces the rules, keeps track of scores, and moderates the game’s trivia questions. We’ll be looking at how well voice is integrated into this traditional medium, and if it has the potential to impact customers’ expectations towards tabletop games in general.
After this fine example of a voice-enabled game, we will see how much value Alexa & Co. can provide in voice-assisted games where voice serves as an optional modality. One such example from contemporary computer games is Destiny 2, where Alexa assumes the role of an in-game character and manages parts of the player’s inventory and clan communications.
In the past year, machine learning has made another great step forward and is about to become pervasive in enterprise software. Learn more about a wide spectrum of new capabilities intended to provide intelligent solutions for an even broader variety of business challenges.
Making smart chatbots that really understand what the user means can be quite time-consuming. A smart bot needs to be trained with an extensive set of expressions, and coming up with fifty or a hundred ways to express the same meaning can be hard, especially for people who are not used to it. In order to enhance the user experience for the clients of our chatbot platform, we are currently implementing text generation. Based on a few expressions provided by the user, we generate a number of similar expressions using the most innovative text generation techniques. Additionally, our text generation system learns on the fly! Its fine-tuning capabilities allow us to tailor the expression generation functionality in near-real time.
We compare approaches based on character- and word-level embeddings and explain their advantages and disadvantages. Finally, we discuss the importance of post-processing for filtering candidate expressions, using part-of-speech taggers and other NLP tools from popular toolkits and libraries such as spaCy, Gensim and NLTK.
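As a rough illustration of that filtering step, here is a toy Python sketch. It uses the standard library's difflib as a stand-in for the POS-tagger-based filters described above; the seed expression, candidates and thresholds are all made up for the example:

```python
from difflib import SequenceMatcher

def filter_candidates(seed, candidates, lo=0.5, hi=0.99):
    """Keep generated expressions that stay close to the seed's meaning
    but are not near-duplicates of it (a toy stand-in for NLP-based filters)."""
    kept = []
    for cand in candidates:
        ratio = SequenceMatcher(None, seed.lower(), cand.lower()).ratio()
        if lo <= ratio < hi:
            kept.append(cand)
    return kept

seed = "I want to book a flight"
candidates = [
    "I want to book a flight",        # exact duplicate: dropped
    "I would like to book a flight",  # close paraphrase: kept
    "What's the weather today",       # unrelated: dropped
]
print(filter_candidates(seed, candidates))
```

A real system would replace the character-level ratio with linguistic checks (matching POS patterns, named entities kept intact, and so on), but the shape of the pipeline is the same: generate many candidates, then filter.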
Data scientists and product owners have a lot of great ideas, but often these ideas lack the data needed to answer the questions at hand and to build a solution. Although an established company might already have a lot of data collected in its data warehouse, that data might not be suited to answering new problems. After all, why and where would we have saved all the images our customers have sent us over the years?
The cold start problem affects startups and established companies alike. Nonetheless, it is also a great opportunity to collect new data with your customer’s problem in focus. As an insurtech startup, we had to tackle this problem and started to use open data, such as weather and maps; data we find on the web; and collected data, such as logs and sensor data.
In this talk, we want to share our experience of building a data pipeline starting from “zero data” to a pipeline of open, found and collected data, which enables us to build data products that help our customers in their daily lives.
As user behaviour continues to shift to voice-based activities, brand managers are wondering, “Is voice relevant to my business?” To help answer that question, we’ll explore whether yours is a voice-first company or whether a “voice as a channel” strategy is better suited to reaching your customers.
In this session, we’ll share an evaluation companies need to undertake to decide the best path forward in voice. We’ll examine common business types that should be thinking about a voice-first strategy. For those, we will look at factors to help determine whether to develop an O&O customer-facing Voice Interface Application, factors when considering multimodal devices, and whether to develop for a single platform or multiple platforms.
For those better suited to a “voice as a channel” strategy, we’ll review how to optimize your digital experience for voice search, inclusion in the Google marketplace, and how to leverage existing data to inform future voice decisions. We will share “voice as a channel” strategy tactics to connect with consumers who are engaging with Voice Interface Applications including content development, partnership evaluation, distribution and audience engagement.
It has not been long since ML turned from a purely academic discipline into a technology that is actively being implemented in business. Because of that, it is common for specialists to encounter problems they did not have while doing research or participating in competitions (e.g. on Kaggle). At the same time, companies that want to introduce ML into their business processes often do not realize what is needed for that to happen, what difficulties might occur, and how to estimate the outcome of a project. We will look at two projects implemented in production: carotid artery examination and vehicle behavior analysis. Based on those examples, we will cover some of the problems you might encounter in your own projects, including data processing, algorithm selection, communication with the customer, and more.
We will go through the most significant challenges faced during development, how they were dealt with, and what the general options are. In your future projects you will probably encounter these and other problems, but this talk will make you a little more prepared for them, whether you are a developer or an entrepreneur.
In comparison to other voice assistants, Alexa already has a shopping function for the Amazon store. However, the Amazon store (and, as a matter of fact, the rest of the internet) is not prepared to support this function.
Content and search options must be transformed into "natural language" to fit users' needs and to make a great voice user interface possible.
Robert C. Mendez from Internet of Voice (Cologne) offers some Dos and Don'ts, as well as some hints as to what vendors can do on Amazon to make their content findable with Alexa.
The challenges of Machine Learning (ML) start with collecting training data. First, labeled data resources are scarce. Second, the increasing complexity and changing nature of industries, such as healthcare or cyber-security, require the constant knowledge and verification of subject-matter experts (SMEs). In the context of natural language, this knowledge comes in the form of text annotations, for instance entity or document labels.
Collaborations between data analytics/AI professionals and SMEs often fail, or are simply non-existent. This is partially due to the lack of annotation tools that could simplify communication and reduce the time required to label data.
Today we present tagtog, a collaborative text annotation tool that bridges this gap. In this session, we will walk through the web interface, showing how to manually create or validate text annotations. We will also show how SMEs can teach and deploy ML models at scale, simply by providing feedback in an easy-to-use interface. We will use these models to annotate text automatically and to demonstrate how this approach can create large amounts of training data in a time-efficient manner, adapted to the specific subject domain.
All the AI hype these days is around deep learning and the idea that machines will eventually get rid of us humans. However, the groundwork of AI, without which no self-driving car and no AlphaGo Zero would work, is search algorithms.
In this talk we will use a simple game of exiting a maze to illustrate the differences between depth-first and breadth-first search. We will also solve the mystery around A* and what makes it such a powerful approach.
In the next step we will look at how to compete against other players using adversarial search, like that used in the leading chess engine Stockfish. You will understand Monte Carlo Tree Search, which was used in Alpha(Go) Zero to beat the world's best Go players.
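To make the first distinction concrete, here is a minimal Python sketch of breadth-first search on a toy maze (the maze and function names are invented for illustration). Swapping the queue's `popleft` for `pop` turns it into depth-first search; A* additionally orders the frontier by cost plus a heuristic:

```python
from collections import deque

def bfs_exit(maze, start, goal):
    """Breadth-first search: explores the maze in rings around the start,
    so the first path that reaches the goal is a shortest one.
    maze is a list of strings; '#' marks walls."""
    rows, cols = len(maze), len(maze[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()   # .pop() here would make it DFS
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and maze[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no exit reachable

maze = [
    "#####",
    "#..E#",
    "#.#.#",
    "#S..#",
    "#####",
]
path = bfs_exit(maze, start=(3, 1), goal=(1, 3))
print(len(path))  # → 5 cells on a shortest path from S to E
```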
Recent viral articles on the internet revealed that the number of purchases made through smart speakers is lower than expected and that customers tend to avoid shopping via voice.
Does this mean that voice shopping is just a utopia? A promise that cannot be fulfilled?
During the talk we’ll find out the truth behind voice shopping. We’ll take a look at some success stories as well as harsh failures to discover common misconceptions and the secret of voice commerce. Attendees will better understand how to design and develop voice applications that really make sense for e-commerce, from strategy to user experience. We'll look past the utopia and discover the business sense in building for voice purchasing.
Machine Learning on Source Code (MLoSC) is an emerging and exciting domain of research which stands at the sweet spot between deep learning, natural language processing, social science and programming. We’ve accumulated petabytes of source code data that is open, yet there have been few attempts to fully leverage the knowledge that is sealed inside. This talk gives an introduction to the current trends in MLoSC and presents the tools and some of the applications: deep code suggestions, structural embeddings for fuzzy deduplication, gaining insights from your projects, and ML at the pull-request level to improve the daily life of developers.
Echo Buttons are the first gadgets of their kind, with a lot of potential for developers. They allow additional contextual user input for your skills, both physically and visually.
When developing for the Buttons, there are more things to consider on the technical and conceptual side than if you are developing for an Echo device alone.
Mario Johansson will show you his best practices, techniques and a few tips to consider when building for the Echo Buttons in this interactive session. Learn how to use the Gadget Controller and Game Engine APIs to build great skills for this revolutionary input device.
RADIX.ai | "How we built a Job Recommender SaaS with Deep Learning". In this talk we'll tell you about our experience building a job recommender software as a service for VDAB, the Flemish Public Employment Service.
Our deep neural net, called JobNet, "reads" job seeker résumés and job descriptions in multiple languages and learns to embed them in a common space. The resulting embeddings allow us to match job seekers and jobs in both directions.
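JobNet's internals aren't spelled out here, but once résumés and jobs live in a common embedding space, matching generally reduces to nearest-neighbour search, for example by cosine similarity. A toy Python sketch with made-up three-dimensional embeddings (real ones come from the trained network and have far more dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings in a shared space.
seekers = {"anna": [0.9, 0.1, 0.2], "ben": [0.1, 0.8, 0.3]}
jobs = {"data engineer": [0.85, 0.2, 0.1], "nurse": [0.05, 0.9, 0.2]}

def best_match(query_vec, catalog):
    """Return the catalog entry whose embedding is closest to the query."""
    return max(catalog, key=lambda name: cosine(query_vec, catalog[name]))

print(best_match(seekers["anna"], jobs))   # job for a seeker
print(best_match(jobs["nurse"], seekers))  # and seeker for a job: both directions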
To deliver the best recommendations at the scale of VDAB, we built JobNet using a modern ML stack based on Dask, Sklearn, and TensorFlow.
We'll also talk about how we deployed the solution in the cloud using a modern Continuous Integration pipeline based on CircleCI, Terraform, Docker, and AWS ECS.
Voice Games are one of the fastest growing categories in the Amazon Alexa Skill Store, and casual gaming is currently experiencing a revolution with voice assistants. This year even a board game was released that interacts with Amazon Alexa. Why “voice” and “games” are a perfect match, and how you can transform a game concept from a “mobile screen” to a “voice-first” experience, will be revealed in this talk.
Tim Kahle, one of the co-founders of 169 Labs and one of the three German Alexa Champions, is on the jury for the current Alexa Skills Kit Challenge with prizes worth a total of EUR 50,000. He and Dominik Meissner will talk about the agency’s latest voice game project (to be published in October 2018). They have now brought one of the most famous quiz games (which even has its own TV show) to Amazon Alexa.
In this talk Matthias will present an overview of the best banking voice applications for Alexa and Google Assistant. He will also look at the banking skill of Sparkasse Bremen in more detail and will pay special attention to topics like optimization for Echo Show and gamification.
Text classification can be very important in business. Some tasks involve a lot of repetitive, error-prone processes that could be automated. One of those is deciding which category a piece of unstructured text belongs to. One example of a list of categories on one side and unstructured text on the other is a script like Star Wars.
We will look at the characters of Star Wars in Python with a Jupyter Notebook. How can you analyze and visualize questions like "Which characters use similar words?" using popular open-source libraries? The talk will cover predicting which character most probably said a given line with different algorithms, from classical machine learning to neural networks using TensorFlow.
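As a taste of the classical end of that spectrum, a bag-of-words classifier fits in a few lines of plain Python. The quotes and character profiles below are a made-up toy; the talk itself works on real scripts with fuller algorithms:

```python
from collections import Counter

# Tiny hypothetical training set of (character, line) pairs.
lines = [
    ("yoda", "do or do not there is no try"),
    ("yoda", "fear is the path to the dark side"),
    ("han",  "never tell me the odds"),
    ("han",  "i have a bad feeling about this"),
]

# One word-frequency profile per character (a bag-of-words model).
profiles = {}
for character, text in lines:
    profiles.setdefault(character, Counter()).update(text.split())

def predict(text):
    """Score each character by how often they used the words of the line."""
    words = text.split()
    return max(profiles, key=lambda c: sum(profiles[c][w] for w in words))

print(predict("there is no fear"))  # → yoda
```

Real classifiers add TF-IDF weighting, smoothing and held-out evaluation, but the core idea of comparing word statistics per class is the same.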
Modern deep learning architectures are getting more and more computationally demanding which has started hurting hyperparameter tuning and experimentation speed. GPUs are getting stronger and cheaper, but vertical scaling is too slow to keep up with professional demand; we need to go horizontal, multi-GPU and multi-machine.
But what is distributed learning? Should it be used? How is it used? Data parallelism, model parallelism, federated learning, what, what, WHAT?
In this talk, I'll present bottlenecks that various distributed learning approaches solve, so you learn when to start looking at distributed learning if you encounter the presented hindrances. I'll also highlight the differences between different distributed learning technologies, e.g., TensorFlow Parameter Servers and Horovod.
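To give an intuition for one of those approaches, data parallelism, the core idea can be sketched with a toy linear model in plain Python: each "worker" computes gradients on its own shard of the batch, and the gradients are then averaged, which is what an all-reduce (as in Horovod) or a parameter server does at scale. Everything below is invented for illustration:

```python
def grad_mse(w, shard):
    """Gradient of mean squared error for the 1-parameter model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

data = [(x, 3.0 * x) for x in range(1, 9)]  # ground truth: w = 3
workers = [data[0:4], data[4:8]]            # two shards for two "workers"

w = 0.0
for _ in range(200):
    grads = [grad_mse(w, shard) for shard in workers]  # parallel in real life
    w -= 0.001 * sum(grads) / len(grads)               # averaged update

print(round(w, 2))  # → 3.0, the same result a single worker would reach
```

Model parallelism, by contrast, splits the model itself across devices rather than the data; the talk covers when each bottleneck calls for which approach.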
Building a voice application for Amazon Alexa requires a voice-first approach. But with the growing device family with displays, like the Echo Spot, the Echo Show, or the Fire TV, you are able to support your voice experience with photos, illustrations, or videos. This session concentrates on how to build a multimodal application with Amazon Alexa. We will have a closer look at best practices as well as some tools and techniques to help you create richer voice applications.
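In practice, a multimodal response keeps the speech mandatory and attaches the visual part conditionally. A trimmed-down Python sketch of such a response payload; the APL document here is reduced to a stub, so consult the Alexa documentation for the full schema:

```python
def build_response(speech, image_url=None):
    """Voice-first: speech is always present; a visual directive is added
    only when there is something to show (APL-style, heavily trimmed)."""
    response = {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }
    if image_url:
        response["response"]["directives"] = [{
            "type": "Alexa.Presentation.APL.RenderDocument",
            "document": {"type": "APL", "version": "1.4"},  # stub document
            "datasources": {"imageUrl": image_url},
        }]
    return response

voice_only = build_response("Here is your recipe.")
multimodal = build_response("Here is your recipe.",
                            "https://example.com/soup.png")
print(len(multimodal["response"].get("directives", [])))  # → 1
```

A real skill would also branch on the device's supported interfaces in the request, so voice-only devices never receive the directive.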
Machine learning enables customized conversations between man and machine that can result in buying decisions. Marketing experts should include artificial intelligence purposefully in their strategic thinking. It is important to build a bridge for the customer from the source of inspiration to your own content. Kathleen Jaedtke and Tina Nord explain how this can be achieved through the use of dialogue-oriented technologies.
Personalization improves customer experience and revenue. Though matrix-factorization-based recommender systems are still popular, the scientific community has invented newer techniques to address more intricate challenges such as diversity, evolving user tastes and cold start, to name a few.
With the exponential increase in the use of online shops, online music, video and image libraries, search engines and recommendation systems are the best ways to find what you are looking for (and sometimes, what you are NOT looking for).
I will share the practical implementation issues we faced while developing a recommendation system. I will also discuss the recent advances made in the field of recommendation systems using deep learning to address diversity, evolving user tastes, etc.
Attendees: beginners and those with intermediate skills in recommender systems
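For readers new to the baseline mentioned above, matrix factorization itself fits in a few lines: learn one latent vector per user and per item so their dot product approximates the observed ratings. A toy Python sketch with made-up data (real systems use far larger matrices and tuned hyperparameters):

```python
import random

random.seed(0)

# Toy user-item ratings; 0 marks an unknown entry to be predicted.
R = [
    [5, 4, 0],
    [4, 0, 1],
    [0, 1, 5],
]
k, lr, reg = 2, 0.02, 0.02  # latent factors, learning rate, regularization

users = [[random.random() for _ in range(k)] for _ in range(len(R))]
items = [[random.random() for _ in range(k)] for _ in range(len(R[0]))]

def pred(u, i):
    """Predicted rating: dot product of the two latent vectors."""
    return sum(users[u][f] * items[i][f] for f in range(k))

# Stochastic gradient descent on the observed entries only.
for _ in range(3000):
    for u, row in enumerate(R):
        for i, r in enumerate(row):
            if r == 0:
                continue
            err = r - pred(u, i)
            for f in range(k):
                pu, qi = users[u][f], items[i][f]
                users[u][f] += lr * (err * qi - reg * pu)
                items[i][f] += lr * (err * pu - reg * qi)

print(round(pred(0, 0), 1))  # close to the observed rating 5
```

The unknown entries (the zeros) now also get predictions, which is exactly what a recommender serves; the newer techniques in the talk address what this baseline misses.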
The most difficult aspect of creating a compelling voice app is the design. Because of the limited fidelity, the classic palette of UX tools goes out the window and new ones have to be invented. With three years of experience creating the richest and most engaging content on Alexa and Assistant, TsaTsaTzu will present audio techniques to deal with the problems of discoverability, complexity, and support that this platform presents. You will leave with actionable design tools that can be used to craft applications which, like ours, will engage users for hundreds of hours.
The dominant programming language for deep learning is Python. It has a wide variety of frameworks and data scientists love it due to its ecosystem and the workflows it allows. Yet when it comes to actually taking models to production, it is usually met with resistance, as in many enterprise environments Java is still king of the hill – and rightly so. It is the underpinning of big data infrastructure, provides better tooling for production monitoring and scales better to larger teams.
Deeplearning4J is both the name of a deep learning library for Java and the umbrella for a set of libraries aimed at the production usage of deep learning.
This session will take you along for the journey to create a Deeplearning4J-based model from scratch and take it into a production environment. The journey will start with the formulation of a problem that is then going to be solved during the live coding that follows. Each step and the reasoning for it will be comprehensively explained, such that you should be able to repeat the same process with your own data and problems.
Clustering is the most prominent example of unsupervised learning, a type of learning where it is not necessary to manually annotate each data point with an expected outcome. This is extremely useful, as getting well-annotated data is often the most complicated and time-consuming part of an ML project. Clustering can be used as a powerful data exploration and preprocessing technique, and also as a means in itself to solve an ML problem.
This talk will give an overview of clustering in general and of the properties that are useful when comparing clustering algorithms. It will present various clustering methods, explain how they work and what they can be used for. The talk will cover a lot of ground very quickly and may contain some basic maths. The content will be programming-language agnostic; algorithms will be described with free text, pseudocode and diagrams.
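As a concrete example of one such method, here is a minimal Python sketch of k-means (Lloyd's algorithm) on toy 2-D points. The talk itself stays language-agnostic; the points and starting centers below are invented for illustration:

```python
import math

def kmeans(points, centers, steps=10):
    """Lloyd's algorithm: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(steps):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [
            tuple(sum(c) / len(c) for c in zip(*cluster)) if cluster else center
            for cluster, center in zip(clusters, centers)
        ]
    return centers, clusters

# Two visually obvious groups of 2-D points.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centers, clusters = kmeans(points, centers=[(0, 0), (10, 10)])
print(sorted(len(c) for c in clusters))  # → [3, 3], one group per cluster
```

Note that k-means needs the number of clusters up front and favours round clusters of similar size, which is exactly the kind of property the talk compares across algorithms.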
Jan König is one of the founders of Jovo (http://www.jovo.tech), the first open-source framework that enables developers to build voice apps for both Amazon Alexa and Google Assistant. In this session, Jan will walk through the essentials of building for Alexa and Google, talk about important differences between the two platforms, and show practical examples of successful voice apps.
For consumers, commodities are great. Companies will compete for the lowest possible price because every product is almost the same. For the companies, on the other hand, this is a cut-throat business. Margins are low and differentiating yourself is nearly impossible. Building materials, vegetables, cars and even smartphones have become (near) commodities. But will AI ever become a commodity? And what are the hurdles we need to overcome to (not) get there?
Machine learning has given us amazing new technologies: image recognition, speech-to-text, machine translation, and more. And yet, having a robotic arm build a simple Lego house remains one of the most challenging problems in the field. Why is such a trivial task (which children learn without effort) so hard even for state-of-the-art machine learning techniques? In this talk I will give a general introduction to reinforcement learning, an overview of the most challenging problems in the field, and an outlook on what we can expect in the near future of intelligent robotics.
Did you ever want to defeat a computer game by only watching the screen? You can, using a software agent!
However, the challenge is this: given only the image pixels of the computer screen, the agent needs to figure out how to optimally play the game.
In this talk I will lead you in depth and step by step through the deep Q-learning algorithm, which uses neural networks to learn a Q policy that represents the optimal action given the current situation.
The talk will feature code samples in Python and put all the pieces together in a live demo showing an agent learning to master a game.
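As a warm-up, the Q-learning update itself can be shown without any neural network: on a toy corridor world, a plain table plays the role the deep network plays for screen pixels. The environment below is invented for illustration:

```python
import random

random.seed(1)

# A 1-D corridor: states 0..4, the goal (reward 1) is at state 4.
# Actions: 0 = left, 1 = right. A toy stand-in for screen pixels.
n_states, n_actions, goal = 5, 2, 4
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(s, a):
    """Move left or right inside the corridor; reward 1 on reaching the goal."""
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == goal else 0.0)

for _ in range(500):  # episodes
    s = 0
    while s != goal:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        a = random.randrange(n_actions) if random.random() < epsilon \
            else max(range(n_actions), key=lambda act: Q[s][act])
        s2, r = step(s, a)
        # The Q-learning update; deep Q-learning replaces the table
        # with a neural network trained toward the same target.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(range(n_actions), key=lambda act: Q[s][act])
          for s in range(n_states)]
print(policy[:4])  # → [1, 1, 1, 1]: move right toward the goal everywhere
```

Deep Q-learning keeps exactly this update rule but estimates Q(s, a) with a network fed by pixels, plus tricks like experience replay that the talk walks through.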
Make your devices smarter by embedding the Google Assistant into your own device using the Assistant SDK. You can build your own voice-driven interactions without requiring your users to have a dedicated voice assistant device (like Google Home). It is available for you to tinker with on Raspberry Pi devices, and it's easy to get started!
A hands-on approach to developing a Google Action with Dialogflow and Google protocol buffers. After a quick introduction to the tools, we are going to do a hands-on coding session to create a Google Action with a Python webhook. In the session you will learn how to create your own Google Action and still have access to all the powerful machine learning tools Python has to offer.
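As a preview of the webhook part, a Dialogflow fulfillment handler essentially maps the posted queryResult to a small JSON response. A minimal Python sketch; the intent and parameter names are made up for the example, and real requests carry many more fields:

```python
import json

def webhook_handler(request_body):
    """Minimal Dialogflow-style fulfillment: read the matched intent
    and a parameter, and answer with fulfillment text."""
    query = request_body.get("queryResult", {})
    intent = query.get("intent", {}).get("displayName", "unknown")
    name = query.get("parameters", {}).get("name", "there")
    return {"fulfillmentText": f"Hello {name}, you triggered '{intent}'."}

# A trimmed-down example of what Dialogflow posts to the webhook.
incoming = {
    "queryResult": {
        "intent": {"displayName": "greet"},
        "parameters": {"name": "Ada"},
    }
}
print(json.dumps(webhook_handler(incoming)))
```

In the workshop this handler sits behind an HTTP endpoint; Dialogflow's protocol-buffer messages arrive as exactly this kind of JSON.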
Learn how to create your own voice interfaces using the Google Actions platform. We'll look at the technologies involved, how to plan for a conversation, and then build a voice interaction together. With the rise of voice assistants, voice is becoming another surface area for users to interact with your product or service. We can now start to blend this new technology with our existing offerings to improve user experience, engagement, and satisfaction.
In this workshop, we'll learn about the Google Actions platform and how it works to provide you with all the tools you need to build your own conversational interfaces. Throughout the workshop, you'll also build your own Action and see how to extend it for deeper integration with your application. We'll also spend time looking at how to design a conversation interface, including thinking through the various phases of dialog and sketching out expected flows. Finally, we'll look at how to review and improve your Action by using the analytics and AI training tools available from Google.
Learn the technical fundamentals of building voice actions quickly, as well as the social and human considerations for their design.
Machine learning is often hyped, but how does it work? We will show you hands-on how you can do data inspection and prediction, build a simple recommender system, and more. Using realistic datasets and partially programmed code, we will make you accustomed to machine learning concepts such as regression, classification, over-fitting, cross-validation and many more. This tutorial is accessible to anyone with some basic Python knowledge who's eager to learn the core concepts of machine learning. We make use of an IPython/Jupyter Notebook running on a dedicated server, so nothing but a laptop with an internet connection is required to participate.
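One of the concepts mentioned, over-fitting, can be demonstrated in a few lines of plain Python: a 1-nearest-neighbour classifier memorizes its training data perfectly yet does worse on held-out data. The dataset below is synthetic and invented for the example:

```python
import random

random.seed(3)

# Toy data: one feature, two classes whose distributions overlap a little.
def sample(n):
    out = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(2.0 if label == 0 else 4.0, 1.0)
        out.append((x, label))
    return out

train, test = sample(80), sample(40)

def nn_predict(x, memory):
    """1-nearest-neighbour: copy the label of the closest training point."""
    return min(memory, key=lambda p: abs(p[0] - x))[1]

def accuracy(dataset, memory):
    return sum(nn_predict(x, memory) == y for x, y in dataset) / len(dataset)

print(accuracy(train, train))  # → 1.0: pure memorization, a textbook over-fit
print(round(accuracy(test, train), 2))  # held-out accuracy is lower
```

This gap between training and held-out accuracy is exactly what cross-validation, covered in the tutorial, is designed to measure.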
In this workshop, we will dive into the world of deep learning. After introducing the basics of Google’s deep learning library TensorFlow, we will continue with hands-on coding exercises. You will learn how to create a neural network that is able to classify images of handwritten digits (MNIST) into their respective classes (image classification). While starting with a very basic, shallow network, we will gradually add depth and introduce convolutional layers and regularization to improve the performance of our model.