
OPEN TALK (AI): Building Scalable End-to-End Deep Learning Pipeline in the Cloud

AI DevWorld -- Main Stage

Rustem Feyzkhanov
Instrumental, Machine Learning Engineer

Rustem Feyzkhanov is a machine learning engineer at Instrumental, where he builds analytical models for the manufacturing industry, and an AWS Machine Learning Hero. Rustem is passionate about serverless infrastructure (and AI deployments on it) and is the author of the courses and books "Serverless Deep Learning with TensorFlow and AWS Lambda" and "Practical Deep Learning on the Cloud". He is also a main contributor to the open-source repository of serverless packages at https://github.com/ryfeus/lambda-packs.


Machine learning and deep learning have become essential for many companies, for both internal and external use. One of the main challenges in deploying them is finding the right way to train and operationalize models within the company. A serverless approach to deep learning provides a simple, scalable, affordable, and reliable architecture for doing so. My presentation will show how to build such a pipeline on AWS infrastructure.

Serverless architecture changes the rules of the game: instead of thinking about cluster management, scalability, and queue processing, you can focus entirely on training the model. The downside of this approach is that you have to keep certain limitations in mind and organize the training and deployment of your model accordingly.
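To make one such limitation concrete (an illustrative sketch, not the exact architecture from the talk): a Lambda deployment package is size-limited, so a common pattern is to keep the model artifacts in S3 and cache them in the function's /tmp space across warm invocations. The bucket name, prefix, and SavedModel layout below are assumptions.

```python
# Hypothetical sketch: working around Lambda's package-size limit by keeping the
# model in S3 and caching it in /tmp between warm invocations.
# The bucket, prefix, and SavedModel layout are assumptions for illustration.
import os
import boto3

MODEL_BUCKET = os.environ.get("MODEL_BUCKET", "my-models-bucket")    # assumed env var
MODEL_PREFIX = os.environ.get("MODEL_PREFIX", "resnet/saved_model")  # assumed layout
LOCAL_DIR = "/tmp/model"  # Lambda's writable scratch space

_s3 = boto3.client("s3")


def ensure_model_downloaded():
    """Download the SavedModel directory from S3 once per warm container."""
    if os.path.isdir(LOCAL_DIR):
        return LOCAL_DIR  # already cached by a previous invocation
    os.makedirs(LOCAL_DIR, exist_ok=True)
    paginator = _s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=MODEL_BUCKET, Prefix=MODEL_PREFIX):
        for obj in page.get("Contents", []):
            if obj["Key"].endswith("/"):
                continue  # skip folder placeholder objects
            rel = os.path.relpath(obj["Key"], MODEL_PREFIX)
            dest = os.path.join(LOCAL_DIR, rel)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            _s3.download_file(MODEL_BUCKET, obj["Key"], dest)
    return LOCAL_DIR
```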

I will show how to deploy training and inference pipelines for TensorFlow models on serverless AWS infrastructure.
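As a rough sketch of what the inference side of such a pipeline can look like (not the specific pipeline presented in the talk), here is a hypothetical Lambda handler that loads a TensorFlow SavedModel from local disk and returns predictions. The model path, event format, and input key are assumptions.

```python
# Hypothetical sketch of a Lambda inference handler for a TensorFlow SavedModel.
# The model path, event format, and input key are assumptions for illustration.
import json
import numpy as np
import tensorflow as tf

MODEL_DIR = "/tmp/model"  # e.g. downloaded from S3 during a cold start
_model = None             # cached across warm invocations of the same container


def _get_model():
    """Load the SavedModel once per container and reuse it afterwards."""
    global _model
    if _model is None:
        _model = tf.saved_model.load(MODEL_DIR)
    return _model


def handler(event, context):
    """Entry point, assuming an API Gateway proxy event with a JSON body."""
    body = json.loads(event.get("body", "{}"))
    inputs = np.asarray(body["instances"], dtype=np.float32)  # assumed input key
    infer = _get_model().signatures["serving_default"]
    outputs = infer(tf.constant(inputs))
    # Convert tensors to plain lists so the response is JSON-serializable.
    predictions = {name: t.numpy().tolist() for name, t in outputs.items()}
    return {"statusCode": 200, "body": json.dumps({"predictions": predictions})}
```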

My talk will be beneficial for machine learning engineers and data scientists.