Solr and Elasticsearch, the two technologies that form the current standard in open-source search, both rely on an algorithm whose origins date back to the 1970s for their relevance scoring: BM25. This keyword-based approach, however, cannot capture the complexity of natural language. In recent years, technologies have emerged that enable semantic indexing of language by vectorizing text. Are such approaches finding their way into the current development of Solr and Elasticsearch? How can AI, combined with vector similarity search, efficiently deliver more relevant search results than conventional methods? In which cases does their application pay off economically? To answer these and other questions, Daniel Wrigley will give an overview of the current state of the art, offer an outlook on the future possibilities of these new technologies, and show how search applications can get a boost with the help of AI.