Tuesday, June 16, 2020

Building Scalable End-To-End Deep Learning Pipeline in the Cloud
Join on Hopin
Rustem Feyzkhanov
Instrumental, Machine Learning Engineer

Machine learning and deep learning have become essential for many companies, for both internal and external use. One of the main challenges in deployment is finding the right way to train and operationalize a model within the company. A serverless approach to deep learning provides a simple, scalable, affordable, and reliable architecture for this. My presentation will show how to achieve it within AWS infrastructure.

Serverless architecture changes the rules of the game: instead of thinking about cluster management, scalability, and query processing, you can focus entirely on training the model. The downside of this approach is that you have to keep certain limitations in mind and organize the training and deployment of your model accordingly.

I will show how to train and deploy TensorFlow models on serverless AWS infrastructure, and how you can easily use pretrained models for your tasks. AWS's Function-as-a-Service offering, Lambda, can achieve very significant results: 20-30k predictions per dollar (a completely pay-as-you-go model), 10k or more functions running in parallel, and easy integration with other AWS services. This lets you connect it to an API, a chatbot, a database, or a stream of events.
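To illustrate the deployment pattern the talk describes, here is a minimal sketch of a Lambda inference handler. The key idea is loading the model once at cold start, outside the handler, so warm invocations skip the loading cost. The `load_model` function and length-based "model" below are stand-ins of my own for illustration; in practice you would load a TensorFlow SavedModel bundled with the function package or fetched from S3.

```python
import json

def load_model():
    # Placeholder for something like tf.saved_model.load(MODEL_PATH).
    # Returns a trivial "model" that scores the length of the input text.
    return lambda text: {"score": len(text) / 100.0}

# Executed once per container, at cold start; reused across warm invocations.
model = load_model()

def handler(event, context):
    """AWS Lambda entry point: event carries the request payload
    (e.g. from API Gateway, a chatbot webhook, or a stream record)."""
    body = json.loads(event.get("body", "{}"))
    prediction = model(body.get("text", ""))
    return {
        "statusCode": 200,
        "body": json.dumps(prediction),
    }
```

A caller (for example, API Gateway) would invoke it with a JSON body such as `{"text": "hello"}` and receive the prediction back as JSON; the same handler shape works unchanged whether one request or ten thousand arrive in parallel.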

My talk will be beneficial for data scientists and machine learning engineers.