In this workshop, we will work through a practical, non-toy example with realistic image data (no MNIST). We will train VGG-style networks and other standard architectures such as ResNet using TensorFlow 2, and log and evaluate our experiments with MLflow. We will then inspect saliency maps to understand which parts of the images our trained networks rely on to classify them: are we perhaps overfitting to spurious features? We will also examine confusion matrices and analyse which kinds of images are problematic and which are easy to identify, leading up to an active learning question: what additional data would we need to improve our training results?
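As a small taste of the error analysis we will do, here is a minimal sketch of building a confusion matrix by hand with NumPy (the labels and class count below are made-up placeholders, not workshop data; in the workshop we can just as well use `sklearn.metrics.confusion_matrix`):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Count predictions per (true, predicted) class pair.

    Rows index the true class, columns the predicted class, so
    off-diagonal entries reveal which classes get mixed up.
    """
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels for a 3-class problem
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(confusion_matrix(y_true, y_pred, 3))
# [[1 1 0]
#  [0 2 0]
#  [1 0 1]]
```

Rows with large off-diagonal mass point to the confusable classes for which collecting more labelled examples pays off most, which is exactly the active learning question posed above.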