In recent years, we have seen a whole pile of language models published, achieving better and better results on all kinds of standardized natural language processing (NLP) tasks. "Which transformer model from Hugging Face shall we use?" often seems to be the only relevant question in the early phases of an NLP project.
I am exaggerating, of course. Nevertheless, the enthusiasm for language models tends to overshadow valid, and sometimes even better, solutions to NLP problems. You don't need the power of BERT and co. for every project, and sometimes the drawbacks, such as memory consumption and runtime, even outweigh the benefits.
In this talk, I make the case for a more diverse toolbox when approaching NLP problems. I will illustrate my point with specific projects where we did, and did not, choose the right approach from the beginning, and share what I learned along the way.