The impressive results of OpenAI's GPT-2 have rekindled interest in Natural Language Generation (NLG), a subfield of Natural Language Processing (NLP). But how does GPT-2 work, how is it trained, and how is its output interpreted to generate text? And if these new transformer-based neural models perform so well, why are rule-based NLG systems still the norm in commercial text generation applications?
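As a small preview of the "how is its output interpreted" question: at each step, GPT-2 produces a probability distribution over the next token, and a decoding strategy then picks from that distribution. The sketch below uses the Hugging Face transformers library to inspect the top next-token candidates for a short prompt; the library choice, prompt text, and top-5 cutoff are illustrative assumptions, not part of the talk itself.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Encode a prompt and run one forward pass through the model.
input_ids = tokenizer.encode("Natural language generation is", return_tensors="pt")
with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Softmax over the last position gives a distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]):>12}  {prob:.3f}")
```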
This talk will cover the basics of rule-based and ML-based NLG systems and their respective advantages and disadvantages. You will learn how machine learning systems like GPT-2 learn to generate text and what their strengths and weaknesses are. We will also look at the latest attempts to better control the output of systems like GPT-2, and at what is still needed for deep learning systems to take over one of the last bastions of symbolic, rule-based AI: natural language generation.
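To make "controlling the output" concrete, here is a minimal sketch of generating text from GPT-2 with the Hugging Face transformers API, assuming that library as tooling; the prompt and parameter values are illustrative. Temperature, top-k, and top-p (nucleus) sampling are the most common knobs for steering what the model produces.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The future of text generation", return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,      # sample from the distribution instead of greedy decoding
    temperature=0.8,     # <1 sharpens the distribution, >1 flattens it
    top_k=50,            # consider only the 50 most likely next tokens
    top_p=0.95,          # nucleus sampling: smallest token set with mass >= 0.95
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Raising the temperature or loosening top-k/top-p trades coherence for diversity, which is exactly the kind of coarse control whose limits the talk will discuss.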