In this talk, we will cover how to model different natural language processing tasks. For common NLP tasks such as word-level or sentence-level classification, sentence generation, and question answering, training models from scratch with little domain-specific data is a challenge. The key solution is to start from a pre-trained model and apply transfer learning. BERT from Google and MT-DNN from Microsoft have set new state-of-the-art results on established benchmarks in recent years. Understanding how to use transfer learning and multi-task learning is key to building an effective model for a given task. In this talk, we will discuss models such as ULMFiT, GPT, and BERT, which are popular choices for transfer learning, and then analyze how multi-task learning can substantially improve results, along with different ways of doing multi-task learning.
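
As a rough illustration of the transfer-learning recipe the talk refers to, the sketch below fine-tunes a pre-trained BERT encoder on a small sentence-classification task using the Hugging Face `transformers` library. The model name, label set, and toy data here are illustrative assumptions, not something specified by the talk itself.

```python
# A minimal sketch of transfer learning for sentence classification,
# assuming the Hugging Face `transformers` library and PyTorch.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Start from a pre-trained BERT encoder and attach a fresh classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy labeled examples standing in for a small domain-specific dataset.
texts = ["the movie was great", "the movie was terrible"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One fine-tuning step: the pre-trained weights and the new head are
# both updated on the downstream task.
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
```

In practice, the same pattern scales to a full training loop over a real dataset; the point is that the encoder's pre-trained weights carry general language knowledge, so only a small amount of task-specific data is needed to adapt it.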