As people increasingly rely on automated AI systems, the complexity of those systems keeps growing, and with it the risk of creating black-box systems. The need for transparency is therefore growing as well.
Explainable AI (XAI) is an approach in which the actions and decisions of an artificial intelligence are made comprehensible to humans. Among other things, this makes decisions reproducible and makes it easier to identify errors and anomalies in the data that may have led to incorrect decisions.
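The core idea can be illustrated with a minimal sketch: instead of returning only a decision, a model also returns a human-readable trace of the rules that produced it. The loan scenario, feature names, and thresholds below are purely hypothetical examples, not taken from the lecture.

```python
def approve_loan(income, debt_ratio, years_employed):
    """Return (decision, explanation) instead of a bare decision.

    The explanation is a list of the rules that fired, so a human can
    reproduce the decision and spot faulty inputs or thresholds.
    """
    trace = []
    if income < 30_000:
        trace.append(f"income {income} < 30000: reject")
        return "reject", trace
    trace.append(f"income {income} >= 30000: ok")
    if debt_ratio > 0.4:
        trace.append(f"debt_ratio {debt_ratio} > 0.4: reject")
        return "reject", trace
    trace.append(f"debt_ratio {debt_ratio} <= 0.4: ok")
    if years_employed < 2:
        trace.append(f"years_employed {years_employed} < 2: reject")
        return "reject", trace
    trace.append(f"years_employed {years_employed} >= 2: ok")
    return "approve", trace

decision, explanation = approve_loan(50_000, 0.5, 3)
print(decision)           # the decision itself
for step in explanation:  # the trace that makes it comprehensible
    print(" -", step)
```

A black-box model would return only `"reject"`; the trace shows exactly which rule (here, the debt ratio) drove the outcome, which is what XAI aims for in far more complex models.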
In his lecture, Dr. Felix presents the mechanisms, opportunities, and limitations of XAI. Using the example of qualitative labeling – an AI method that combines decision and optimization algorithms with machine learning – he shows the possible effects of XAI. In this particular case, a new KPI-related view of the results generated by the AI system emerges, which helps to understand the behavior of the supposed AI black box.