JAXenter: Machine learning is regarded by many as a kind of miracle; we train the machine with data until it can make decisions independently. How these decisions are made remains something of a mystery: because the process is no longer comprehensible, we end up with the “black box problem”. Does it have to be like that?
Ward Van Laer: The black box problem is a perception created by what is, for most people, an unintelligible jumble of machine learning models. But the decisions these models make are always based on the data we feed them. Will we be able to design completely transparent models without having to compromise on the complexity of the problems to be solved? In my opinion, the real question is what kind of explainability we really need to demystify the black box perception.
JAXenter: In your talk at the ML Conference you show how to develop transparent machine learning models. How does that work?
Ward Van Laer: I will demonstrate that explainability can be interpreted in multiple ways. Depending on the perspective from which we look at an AI system, explainable AI can mean different things.
We can look at explainability in a technical way, which means looking through the eyes of machine learning engineers, for example. In this case, transparent AI can help to spot dataset biases. However, this kind of technical explainability is neither interesting nor understandable for an end user. From that perspective, UX will play a crucial role in demystifying AI applications.
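To make the idea of “technical” explainability concrete, here is a minimal, generic sketch (not the speaker’s own tooling) of a permutation-importance check in Python with scikit-learn. Shuffling each feature and watching how much the score drops can reveal when a model leans heavily on a single feature, which is often the first hint of a dataset bias. The data and model here are purely illustrative.

```python
# Hypothetical sketch: permutation importance as a quick bias check.
# All names and data are invented for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# features with outsized importance are candidates for a closer look.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

A check like this is aimed squarely at the engineer; as the answer above notes, an end user would need a very different, UX-driven presentation of the same information.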
JAXenter: Why do you think transparency in ML is important?
Ward Van Laer: I believe we need more than just technical transparency, or as it is referred to at the moment, “explainable AI”. Instead of focusing on full transparency, we need to pinpoint the properties that lie at the foundation of trustworthy AI.
JAXenter: Can you give an example of how a good UX changes the acceptance of AI solutions?
Ward Van Laer: In one of our projects in the health care industry, we visualize links between classification results and the underlying dataset, which helps physicians understand why certain decisions were made.
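As an illustration of what such a link between a prediction and the dataset can look like, here is a minimal sketch assuming a nearest-neighbour lookup against the training set: for each classified sample, the most similar known cases are shown so a domain expert can inspect what the decision is grounded in. This is a generic example, not the project’s actual implementation; the dataset and model names are placeholders.

```python
# Hypothetical sketch: tracing a prediction back to similar training cases.
# Dataset and model choices are illustrative only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

scaler = StandardScaler().fit(X_train)
model = LogisticRegression(max_iter=5000).fit(scaler.transform(X_train), y_train)

# Index the training set so each new prediction can be traced back to the
# closest known cases.
nn = NearestNeighbors(n_neighbors=3).fit(scaler.transform(X_train))

sample = scaler.transform(X_test[:1])
pred = model.predict(sample)[0]
_, idx = nn.kneighbors(sample)
print(f"predicted class: {data.target_names[pred]}")
print("most similar training cases (index, label):")
for i in idx[0]:
    print(f"  {i}, {data.target_names[y_train[i]]}")
```

In a real application, the retrieved cases would of course be rendered in a domain-appropriate UI rather than printed to the console.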
To gain more insight into the possibilities, I can certainly encourage everyone to attend my talk at ML Conference 2019 😉
JAXenter: What is the core message of your session that every participant should take home?
Ward Van Laer: Creating a well-working machine learning model is only half of the work. Developing a well-thought-out user experience is the key to successful AI.
Please complete the following sentences:
The fascinating thing about Machine Learning for me is…
… that it will be able to help us solve many complex problems (e.g. health care).
Without Machine Learning, humanity could never…
… improve itself.
The biggest current challenge in machine learning is…
… making sure that an AI system is successful in the eyes of the user.
I advise everyone to get started with machine learning …
… to better understand what the real possibilities are.
Once the machines have taken power…
… hmm let’s hope we can explain how it happened! 😉
JAXenter: Thank you very much!