With the advent of deep architectures, machine learning models have greatly improved the state of the art, but their interpretability has suffered. Neural networks are black boxes and are very hard to interpret. This is problematic not only in the financial and banking sectors, but also introduces model risk. There have been multiple instances where deep learning models were easily broken by small perturbations in the data, or where a model learned a very different feature from the one it was intended to learn. In this lecture, we will explore how to interpret complex models. Tools such as LIME will be covered, which help break down complex models in terms of human-interpretable explanations; a short usage sketch follows.
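As a first taste of what such a tool looks like in practice, the following is a minimal sketch of using LIME to explain a single prediction of a tabular classifier. The dataset, the random forest model, and the parameter choices here are illustrative stand-ins for brevity, not part of the lecture material.

```python
# A minimal sketch of LIME on tabular data, assuming the `lime` and
# scikit-learn packages are installed (pip install lime scikit-learn).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a "black-box" model (the random forest stands in for any complex model).
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Build a local, interpretable explanation for one prediction.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0],          # the instance whose prediction we want to explain
    model.predict_proba,   # the black-box prediction function
    num_features=4,
)

# Each pair is (human-readable feature condition, local weight).
print(explanation.as_list())
```

The key idea, which we will develop in detail, is that LIME perturbs the input around the chosen instance and fits a simple surrogate model locally, so the printed weights describe the black-box model's behavior only in that neighborhood, not globally.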