With the emergence of deep neural networks, the question has arisen of how machine learning models can be made not only accurate but also explainable. In this article, you will learn what explainability means, which elements it consists of, and why we need expert knowledge to interpret machine learning results so that we avoid making the right decisions for the wrong reasons.
In modern software development, we’ve grown to expect that new software features and enhancements will simply appear incrementally, on any given day. This applies to consumer applications such as mobile, web, and desktop apps, as well as modern enterprise software. We’re no longer tolerant of big, disruptive software deployments. ThoughtWorks has been a pioneer in Continuous Delivery (CD), a set of principles and practices that improve the throughput of delivering software to production in a safe and reliable way.
Machine learning models can suffer from the “black box” problem: we don’t always know exactly how they arrive at their predictions, which can lead to unwanted consequences. In the following tutorial, Natalie Beyer will show you how to use the SHAP (SHapley Additive exPlanations) package in Python to get closer to explainable machine learning results.
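To give a flavor of what the tutorial covers: a typical SHAP workflow fits in a few lines of Python. The sketch below is illustrative only and not an excerpt from the tutorial; the dataset, model, and variable names are our own assumptions.

```python
# Illustrative SHAP sketch (not taken from the tutorial): explain a
# tree-based regression model with Shapley values.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Toy dataset and model, chosen only for demonstration purposes
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summary plot: how strongly each feature pushes predictions up or down
shap.summary_plot(shap_values, X)
```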
Although there are powerful and comprehensive machine learning solutions for the JVM with frameworks such as DL4J, it may still be necessary to use TensorFlow in practice. This can be the case, for example, if a particular algorithm exists only as a TensorFlow implementation and porting it to another framework would take too much effort. Although you interact with TensorFlow via a Python API, the underlying engine is written in C++. Using the TensorFlow Java wrapper library, you can train TensorFlow models and run inference from the JVM without having to rely on Python. Existing interfaces, data sources, and infrastructures can be integrated with TensorFlow without leaving the JVM.
Honey bee colony assessment is usually carried out by manually counting and classifying comb cells. Thiago da Silva Alves explains in this interview how deep learning can help to accomplish this time-consuming and error-prone task.
Generative Adversarial Networks (GANs) have recently sparked increasing interest, as they can generate images of faces that look convincingly real. What else are they capable of, what risks could they pose in the long run, and what do they have in common with the emerging internet of the 1990s? We interviewed ML Conference speaker Xander Steenbrugge.
.NET Core is not just bringing WPF and WinForms into the new open source implementation of .NET - Microsoft now also wants to make machine learning usable for everyone. That's why machine learning is making its way into .NET Core with the ML.NET framework. In this series of articles, we'll show you what ML.NET can do, what options developers have available, what the tooling and APIs look like, and what’s happening behind the scenes.
As machine learning technologies become more prevalent, the risk of attacks continues to rise. Which types of attacks on ML systems exist, how do they work, and which is the most dangerous? ML Conference speaker David Glavas answered our questions.
Machine learning (ML) allows applications to uncover hidden knowledge without explicitly programming what should be considered in the process of knowledge discovery. In this way, unstructured data can be analyzed, image and speech recognition can be improved, and well-informed decisions can be made. In this article, we will discuss in particular new trends and innovations surrounding Apache Kafka and machine learning.
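One pattern that comes up frequently in this context is embedding a trained model directly into a Kafka consumer so that events are scored as they arrive. The following sketch is a minimal illustration of that idea, not code from the article; the broker address, topic names, message layout, and model file are assumptions.

```python
# Minimal sketch of streaming inference with Kafka (illustrative only):
# consume events, score them with a pre-trained model, publish predictions.
import json
import joblib
from kafka import KafkaConsumer, KafkaProducer  # kafka-python client

# Broker address, topic names, and message layout are assumptions
consumer = KafkaConsumer(
    "input-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
model = joblib.load("model.joblib")  # any previously trained scikit-learn model

for message in consumer:
    event = message.value
    prediction = model.predict([event["features"]])[0]
    producer.send("predictions", {"id": event["id"], "prediction": float(prediction)})
```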
Can't see the forest for the trees anymore and urgently need fresh inspiration? Then ML Conference is the place to be. Connect with like-minded people and broaden your horizons while gaining deep insights and practical knowledge of the latest trends and technologies.