More talks in the program:
Thursday, September 10, 2020
15:30 - 16:15
Interest in adopting and productionising personalisation, recommendation systems, dynamic pricing models, anomaly detection, and chatbots is high; but setting up a data platform that supports these initiatives efficiently is complicated, and the know-how lives in only a few heads. Someone might have tried to sell you an off-the-shelf solution, or you may have found yourself overwhelmed by the sheer number of technologies, concepts, and big data engines that promise limitless scalability and ready-to-use Machine Learning capabilities… The temptation may be to hire ML engineers and Data Scientists, but let’s face the facts: you must have an architecture and data to support them first.
I have built and worked on data architectures for companies of varying sizes, from a few thousand to billions of active users, using modern technologies such as Azure SQL Data Warehouse, Redshift, Presto, Hive, Spark, Airflow, and Kinesis Data Firehose. In my experience, integrating ML is just a small part of the journey, and the list of prerequisites for turning these capabilities into real results is long.
In this talk I will guide you through the solutions available for companies of all sizes, and the core requirements for building a data architecture that can support a meaningful ML portfolio. We will discuss some deep technical tricks for getting the most out of these tools, so you can build your first models, channel the results back for evaluation, and measure their performance.