In recent years we have seen a lot of breakthroughs in AI. We now have deep learning algorithms beating the best of the best in games like chess and Go. In computer vision, these algorithms now recognize faces with the same accuracy as humans. Except they don't: they can do it for millions of faces, while humans struggle to recognize more than a few hundred people.
Anomalies - or outliers - are ubiquitous in data, whether they stem from measurement errors of sensors, unexpected events in the environment, or faulty behaviour of a machine. In many cases it makes sense to detect such anomalies in real time so that we can react immediately. The data streaming platform Apache Kafka and the Python library scikit-learn provide us with the necessary tools for this.
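A minimal sketch of what such a pipeline could look like: a scikit-learn IsolationForest trained on historical readings scores each message arriving from a Kafka topic. The topic name "sensor-readings", the JSON message layout, and the choice of IsolationForest are assumptions made here for illustration, not details taken from the article.

```python
# Sketch: score incoming sensor readings from Kafka with an IsolationForest.
# Topic name, message format and model choice are illustrative assumptions.
import json

import numpy as np
from kafka import KafkaConsumer
from sklearn.ensemble import IsolationForest

# Train on a batch of historical, mostly "normal" readings.
history = np.random.normal(loc=20.0, scale=1.5, size=(1000, 1))
model = IsolationForest(contamination=0.01, random_state=42).fit(history)

consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    reading = np.array([[message.value["temperature"]]])
    # predict() returns -1 for anomalies and 1 for normal points.
    if model.predict(reading)[0] == -1:
        print(f"Anomaly detected: {message.value}")
```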
Since February, the media have inundated us with diagrams and graphics on the spread of the coronavirus. The data comes from freely accessible sources and can be used by anyone. But how do you turn that source data into a data set from which you can build something visual, like a dashboard? With Python and modules like pandas, this is no magic trick.
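As a rough sketch of the kind of preparation involved: load the raw numbers with pandas, reshape them into a tidy table, and derive the daily new cases that most dashboards plot. The file name "covid_cases.csv" and its column layout (one row per country, one column per date) are hypothetical stand-ins for the freely available sources.

```python
# Sketch: turn cumulative case numbers into a dashboard-ready table.
# "covid_cases.csv" and its wide layout are hypothetical examples.
import pandas as pd

raw = pd.read_csv("covid_cases.csv")

# Reshape from wide (one column per date) to long (one row per country/date).
tidy = raw.melt(id_vars=["country"], var_name="date", value_name="cases")
tidy["date"] = pd.to_datetime(tidy["date"])

# Cumulative counts -> daily new cases, the figure most dashboards show.
tidy = tidy.sort_values(["country", "date"])
tidy["new_cases"] = tidy.groupby("country")["cases"].diff().fillna(0)

print(tidy.tail())
```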
With the emergence of deep neural networks, the question has arisen of how machine learning models can be made not only accurate but also explainable. In this article, you will learn what explainability means, which elements it consists of, and why we need expert knowledge when interpreting machine learning results so that we avoid making the right decisions for the wrong reasons.
In modern software development, we’ve grown to expect that new software features and enhancements will simply appear incrementally, on any given day. This applies to consumer applications such as mobile, web, and desktop apps, as well as modern enterprise software. We’re no longer tolerant of big, disruptive software deployments. ThoughtWorks has been a pioneer in Continuous Delivery (CD), a set of principles and practices that improve the throughput of delivering software to production in a safe and reliable way.
Machine learning algorithms can suffer from the “black box” problem: we don’t always know exactly how they arrive at their predictions. This can lead to unwanted consequences. In the following tutorial, Natalie Beyer will show you how to use the SHAP (SHapley Additive exPlanations) package in Python to get closer to explainable machine learning results.
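To give a flavour of what that looks like in practice, here is a minimal sketch of explaining a tree-based model with SHAP. The diabetes dataset and the random-forest regressor are placeholders chosen for illustration, not the data or model used in the tutorial; only the shap calls follow the package's documented API.

```python
# Sketch: compute and visualise SHAP values for a tree-based model.
# Dataset and model are illustrative placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes(as_frame=True)
X, y = data.data, data.target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summary plot: which features drive the predictions, and in which direction.
shap.summary_plot(shap_values, X)
```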
Although there are powerful and comprehensive machine learning solutions for the JVM with frameworks such as DL4J, it may be necessary to use TensorFlow in practice. This can, for example, be the case if a certain algorithm exists only in a TensorFlow implementation and the effort to port it to another framework is too high. Although you interact with TensorFlow via a Python API, the underlying engine is written in C++. Using the TensorFlow Java wrapper library, you can train TensorFlow models and run inference from the JVM without having to rely on Python. Existing interfaces, data sources, and infrastructure can be integrated with TensorFlow without leaving the JVM.
Honey bee colony assessment is usually carried out by manually counting and classifying comb cells. Thiago da Silva Alves explains in this interview how deep learning can help to accomplish this time-consuming and error-prone task.
Generative Adversarial Networks (GANs) have recently attracted growing interest, as they can generate images of faces that look convincingly real. What else are they capable of, what risks could they pose in the long run, and what do they have in common with the emerging internet of the 1990s? We interviewed ML Conference speaker Xander Steenbrugge.
.NET Core not only brings WPF and WinForms into the new open-source implementation of .NET; Microsoft now also wants to make machine learning usable for everyone. That's why machine learning is making its way into .NET Core with the ML.NET framework. In this series of articles, we'll show you what ML.NET can do, what options developers have, what the tooling and APIs look like, and what’s happening behind the scenes.