Building Scalable End-To-End Deep Learning Pipeline in the Cloud

Rustem Feyzkhanov
Instrumental, Machine Learning Engineer

Rustem Feyzkhanov is a machine learning engineer at Instrumental, where he builds analytical models for the manufacturing industry. He is passionate about serverless infrastructure (and AI deployments on it) and is the author of the course and book "Serverless Deep Learning with TensorFlow and AWS Lambda" and of "Practical Deep Learning on the Cloud".

Machine learning and deep learning have become essential for many companies, both for internal and external use. One of the main challenges in deploying them is finding the right way to train and operationalize a model within the company. A serverless approach to deep learning provides a simple, scalable, affordable, and reliable architecture for doing so. My presentation will show how to achieve this within the AWS infrastructure.

Serverless architecture changes the rules of the game: instead of thinking about cluster management, scalability, and query processing, you can focus entirely on training the model. The downside of this approach is that you have to keep certain limitations in mind and organize the training and deployment of your model in the right fashion.

I will show how to train and deploy TensorFlow models on serverless AWS infrastructure, and how you can easily use pretrained models for your tasks. AWS's Function-as-a-Service offering, Lambda, can achieve very significant results: 20,000-30,000 predictions per dollar on a completely pay-as-you-go basis, 10,000 or more functions running in parallel, and easy integration with other AWS services, so you can connect it to an API, a chatbot, a database, or a stream of events.
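As a minimal sketch of the inference pattern described above (not code from the talk): a Lambda function typically loads the model once at cold start, at module scope, so that warm invocations skip the expensive load. The handler name, event shape (API Gateway proxy style), and the `_predict` stub below are assumptions; in a real deployment the stub would be replaced by a TensorFlow model loaded from the deployment package or S3.

```python
import json

# In a real deployment, TensorFlow would be imported and the model loaded
# here, at module scope, once per cold start, e.g.:
#   import tensorflow as tf
#   MODEL = tf.keras.models.load_model("/tmp/model")
# The stub below stands in for that model so the handler logic is runnable.

def _predict(inputs):
    """Stand-in for MODEL.predict(); returns a dummy score per input row."""
    return [sum(row) / len(row) for row in inputs]

def handler(event, context):
    """Lambda entry point: parse the event body, run inference, return JSON.

    Keeping model loading outside the handler is what makes the
    pay-per-prediction economics work: warm invocations pay only for
    inference, not for reloading the model.
    """
    body = json.loads(event.get("body", "{}"))
    inputs = body.get("inputs", [])
    scores = _predict(inputs)
    return {
        "statusCode": 200,
        "body": json.dumps({"predictions": scores}),
    }
```

Behind API Gateway, the same handler serves an HTTP endpoint unchanged, which is how the function connects to an API, chatbot, or event stream.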

My talk will be beneficial for data scientists and machine learning engineers.