The Conference for Machine Learning Innovation

Applying Machine Learning Online at Scale

Session

Applying machine learning in online applications requires solving the problem of model serving: evaluating a machine-learned model over some data point(s) in real time, while the user is waiting for a response. Solutions such as TensorFlow Serving address the case where the model only needs to be evaluated over a single data point per user request, but this is not sufficient for problems where many data points must be evaluated to make a decision, such as search and recommendation.

This talk will show that this is a bandwidth-constrained problem and outline an architectural solution in which computation is pushed down to the data shards in parallel. It will demonstrate how this solution can be put into use with Vespa.ai, an open-source engine, to achieve scalable serving of TensorFlow and ONNX models, and show benchmarks comparing performance and scalability to TensorFlow Serving.

Model serving with Vespa is used today in some of the world's largest recommender systems, such as serving personalized content on all Yahoo content pages and personalized ads in the world's third-largest ad network. These systems evaluate models over millions of data points per request, for hundreds of thousands of requests per second.
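
The bandwidth argument can be made concrete with a rough calculation. The sketch below is a minimal illustration using assumed numbers (the candidate count, feature size, shard count, and result size are not figures from the talk): it compares the bytes a single request must move across the network when candidate features are shipped to a central model server versus when the model is evaluated on the data shards and only each shard's top hits are returned.

```python
# Illustrative back-of-envelope comparison (all constants are assumptions,
# not figures from the talk): data moved for one request that must score
# one million candidate documents.

CANDIDATES = 1_000_000          # data points evaluated per request (assumed)
FEATURES_PER_CANDIDATE = 100    # model input floats per data point (assumed)
BYTES_PER_FLOAT = 4
SHARDS = 100                    # content nodes holding the data (assumed)
TOP_K = 10                      # hits each shard returns after local ranking

# Centralized serving (e.g. a stateless model server): every candidate's
# features must cross the network to reach the model.
central_bytes = CANDIDATES * FEATURES_PER_CANDIDATE * BYTES_PER_FLOAT

# Push-down serving: the model is evaluated on each shard, next to the
# data; only each shard's top hits (id + score) cross the network.
pushdown_bytes = SHARDS * TOP_K * (8 + 4)   # 8-byte id + 4-byte score

print(f"centralized: {central_bytes / 1e6:.0f} MB per request")
print(f"push-down:   {pushdown_bytes / 1e3:.1f} KB per request")
print(f"reduction:   {central_bytes / pushdown_bytes:,.0f}x")
```

Under these assumptions a single request shipping features to a central server moves about 400 MB, while the push-down architecture moves a few kilobytes of ids and scores; multiplied by hundreds of thousands of requests per second, this is why the centralized approach becomes network-bound long before it becomes compute-bound.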
