Artificial Intelligence Innovation

Tuesday, June 16, 2020

How ScreamingBox Builds and Manages Remote Teams and Global Projects
Dave Erickson
ScreamingBox, CEO

ScreamingBox is a remote company, with staff and developers spread throughout the world. We have seven years of experience building and managing global remote teams.


Getting Your AI/ML Workloads Into the Kubeflow
Elvira Dzhuraeva
Cisco, Technical Product Engineer AI/ML

As someone once said, the story of enterprise machine learning is three weeks to develop the model and over a year to deploy it. Putting ML into production is not a straightforward process and, what's more, the actual ML capability is just a tiny cog in the entire AI/ML engine. Along with the technical challenges, it's vital that enterprises avoid creating silos between data scientists and operations engineers (e.g. SREs) if they're to break the cycle of enterprise ML.
What's required are new platforms that promote collaboration: environments that deliver a set of core applications to efficiently develop, build, train, and deploy models. One of those is Kubeflow, an AI/ML lifecycle management platform for Kubernetes. Its capabilities make it easy to train and tune models and deploy ML workloads anywhere.
This session will cover key customer and user pain points before looking at how the core features of Kubeflow 1.0 address those challenges. To finish, we'll take a peek into the future to consider possible enhancements, as well as where the opportunities for increased community participation lie.
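To give a flavor of the deployment side, serving a trained model with the KFServing component bundled with Kubeflow 1.0 can be as small as a single manifest. This sketch is illustrative only; the model name and storage URI are placeholders:

```yaml
# Minimal KFServing InferenceService (as shipped with Kubeflow 1.0).
# The storageUri bucket and model name are hypothetical placeholders.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: my-model
spec:
  default:
    predictor:
      tensorflow:
        storageUri: "gs://example-bucket/models/my-model"
```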

PRO TALK: Getting Your AI/ML Workloads Into the Kubeflow
Elvira Dzhuraeva
Cisco, Technical Product Engineer AI/ML

As someone once said, the story of enterprise machine learning is three weeks to develop the model and over a year to deploy it. Putting ML into production is not a straightforward process and, what's more, the actual ML capability is just a tiny cog in the entire AI/ML engine. Along with the technical challenges, it's vital that enterprises avoid creating silos between data scientists and operations engineers (e.g. SREs) if they're to break the cycle of enterprise ML.
What's required are new platforms that promote collaboration: environments that deliver a set of core applications to efficiently develop, build, train, and deploy models. One of those is Kubeflow, an AI/ML lifecycle management platform for Kubernetes. Its capabilities make it easy to train and tune models and deploy ML workloads anywhere.
This session will cover key customer and user pain points before looking at how the core features of Kubeflow 1.0 address those challenges. To finish, we'll take a peek into the future to consider possible enhancements, as well as where the opportunities for increased community participation lie.

From Zero to Deeplearning With Scala
Fabio Tiriticco
Fabway, Software Engineer

Do you want to get started with deep learning using Scala? This talk introduces AI / deep learning from scratch and guides you through a unique image classification use case, for which both the dataset and code are provided. At the end of this talk, you will have a basic understanding of the mechanics behind deep learning and a solid starting point for your own experiments.

We will build a neural network using Scala and deeplearning4j, train it, and then run it on a Raspberry Pi to classify images taken with its camera. The goal is to reliably detect whether an image contains a plane. If so, the picture is tweeted at @PlanesOnBridge. Akka Streams is the engine that ties it all together.

This project will give us a chance to observe AI bias and briefly touch upon controversial aspects of AI.

Powerful Graph Algorithm Use Cases for the Data Scientist
Dr. Victor Lee
TigerGraph, Head of Product Strategy & Developer Relations

Graph algorithms such as PageRank, community detection, and similarity match have moved from the classroom to the toolkits of both data scientists and business analysts. Organizations are gaining actionable insights and supercharging their AI by interconnecting and analyzing their data. Users don't need to be computer scientists or programmers to derive meaningful benefits. Increasingly, graph databases come with graph algorithm libraries. Users only need to understand, first, what each type of algorithm is designed to tell them and, second, what makes one algorithm different from another.

This presentation will systematically describe and illustrate five categories of graph algorithms. We will also dive into how each of these algorithms has been used -- individually, in combination, and for ML feature extraction -- to answer real business challenges in key verticals such as banking, financial services, healthcare, pharmaceuticals, internet, telecom, and eCommerce. We will also discuss the computational requirements of these algorithms, to help attendees evaluate and select the right platform.
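As a concrete taste of one algorithm category the talk covers, here is a self-contained PageRank via power iteration on a toy directed graph — no graph database required, just standard Python:

```python
# PageRank by power iteration over adjacency lists.
def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Every node gets the teleportation share up front.
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, out_links in graph.items():
            if out_links:
                share = damping * rank[n] / len(out_links)
                for m in out_links:
                    new_rank[m] += share
            else:
                # Dangling node: spread its rank evenly over all nodes.
                for m in nodes:
                    new_rank[m] += damping * rank[n] / len(nodes)
        rank = new_rank
    return rank

# "c" is linked to by both "a" and "b", so it ends up ranked highest.
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

Graph algorithm libraries in graph databases run the same logic at scale, but the mechanics fit in a few lines.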

Building a Scalable End-to-End Deep Learning Pipeline in the Cloud
Rustem Feyzkhanov
Instrumental, Machine Learning Engineer

Machine and deep learning have become essential for many companies, for both internal and external use. One of the main issues with deployment is finding the right way to train and operationalize the model within the company. A serverless approach to deep learning provides a simple, scalable, affordable, and reliable architecture for this. My presentation will show how to do so within AWS infrastructure.

Serverless architecture changes the rules of the game: instead of thinking about cluster management, scalability, and query processing, you can focus entirely on training the model. The downside of this approach is that you have to keep certain limitations in mind and organize the training and deployment of your model in the right fashion.

I will show how to train and deploy TensorFlow models on serverless AWS infrastructure, and also how you can easily use pretrained models for your tasks. AWS's Function-as-a-Service offering, Lambda, can achieve very significant results: 20-30k predictions per dollar (on a completely pay-as-you-go model), 10k or more functions running in parallel, and easy integration with other AWS services, which lets you connect it to an API, a chatbot, a database, or a stream of events.

My talk will be beneficial for data scientists and machine learning engineers.

Real-Time Analytics on Computer Vision Data
Dhruba Borthakur
Rockset, CTO
Tushar Dadlani
Standard Cognition, Computer Vision Engineering Manager

Walk into a store, grab the items you want, and walk out without having to interact with a cashier or even use a self-checkout system. That's the no-hassle shopping experience showcasing the AI-powered checkout pioneered by Standard Cognition. The company uses computer vision to remove the need for checkout lines of any sort in physical retail locations. Streaming vision data is converted into a stream of metadata, and the need of the hour is the ability to do continuous analytics on this metadata. The variety and velocity of this metadata stream are very high, requiring special-purpose tools geared for real-time analytics. Application developers need to be able to prototype rapidly on this metadata so that they can try out different analytical models quickly.

This talk describes how Standard Cognition uses Rockset for rapid prototyping of application models on vision data. Specifically, we first discuss the challenges associated with analyzing vision data, why a traditional database was insufficient for our needs, and why we chose a real-time database to address the following:

* The three flavors of velocity of vision data, and the policies of keeping high-frequency (~500 Hz) data on the store premises, immediately processing low-frequency (~5 Hz) data in the cloud, and streaming medium-frequency (~50 Hz) data for real-time analytics.
* Why and how the schema of the generated metadata changes from day to day, which means the analytical tools we use need to handle very frequent schema changes. These changes are typically the addition of new columns, columns with mixed types, complex objects inside a column, etc.
* How we created an application-developer REST API platform by encapsulating complex analytical SQL queries within Query Lambdas. This allows our application developers to rapidly iterate and build data-powered applications on production data sets.

We share with you the workflow we have created for analytical processing of vision data, the salient features of that workflow, and its uniqueness compared to traditional data processing systems.
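For readers unfamiliar with Query Lambdas, the developer-facing call reduces to a single parameterized REST request against a named, versioned query. The sketch below only constructs the request, without sending it; the URL pattern, payload shape, and all names here are illustrative assumptions rather than Standard Cognition's actual setup — consult Rockset's API reference for the exact contract:

```python
import json
from urllib.request import Request

# Build (but do not send) a hypothetical Query Lambda execution request.
def build_query_lambda_request(api_key, workspace, name, version, parameters):
    url = (f"https://api.rs2.usw2.rockset.com/v1/orgs/self/ws/{workspace}"
           f"/lambdas/{name}/versions/{version}")
    # Parameters are passed by name, so the SQL itself stays server-side.
    body = {"parameters": [{"name": k, "type": "string", "value": v}
                           for k, v in parameters.items()]}
    return Request(url, data=json.dumps(body).encode(),
                   headers={"Authorization": f"ApiKey {api_key}",
                            "Content-Type": "application/json"},
                   method="POST")

req = build_query_lambda_request("API_KEY", "vision", "events_per_camera",
                                 "1", {"camera_id": "cam-42"})
```

The point of the encapsulation is that application developers iterate against a stable endpoint while the analytical SQL behind it evolves independently.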

Abusing Your CI/CD: Running Abstract Machine Learning Frameworks Inside GitHub Actions
Jon Peck
GitHub, Technical Advocate & Software Developer

We all love the conventional uses of CI/CD platforms, from automating unit tests to multi-cloud service deployment. But most CI/CD tools are abstract code execution engines, meaning that we can also leverage them for non-deployment-related tasks. In this session, we'll explore how GitHub Actions can be used to train a machine learning model, then run predictions in response to file commits, enabling an untrained end user to predict the value of their home by simply editing a text file. As a bonus, we'll leverage Apple's Core ML framework, which normally runs only in a macOS or iOS environment, without ever requiring the developer to lay their hands on an Apple device.
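A workflow along the lines described might look like the following. This is a hypothetical sketch, not the session's actual code; the trigger path, runner choice, and script name are placeholders:

```yaml
# Run a prediction whenever the input text file changes.
name: predict-on-commit
on:
  push:
    paths:
      - "inputs/house.txt"   # the file an end user edits
jobs:
  predict:
    runs-on: macos-latest    # Core ML needs an Apple toolchain; Actions provides macOS runners
    steps:
      - uses: actions/checkout@v2
      - name: Run prediction
        run: python scripts/predict.py inputs/house.txt
```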

Bixby: An Open Ecosystem for AI Assistant Developers
Adam Cheyer
Samsung, VP R&D

Bixby is Samsung's conversational assistant, resident on hundreds of millions of devices, ranging from smartphones and tablets to refrigerators, TVs, watches, and more. Bixby offers the most advanced tools and platform available in the space and has been designed from the ground up to support a more equitable marketplace for third-party developers. In this technical session, learn how to quickly build a Bixby capsule for your content and services, and how to offer your service to consumers everywhere in the Bixby Marketplace.

Wednesday, June 17, 2020

ML/AI Service Mesh Made Easy With API Management
Rakesh Talanki
Google, Principal Architect
Kaz Sato
Google, Staff Developer Advocate

The digital transformation of the next decade will be empowered by what we call the "ML/AI Service Mesh." Although many companies now generate features from raw data and extract business insights with ML models, the challenge has been sharing these valuable assets for internal and external consumption at scale. In most ML/AI projects, each project or department in an enterprise is siloed: building features from raw data, training ML models, extracting embeddings, building prediction microservices, and using them internally. There is no standardized way to share these valuable assets and microservices with cross-functional groups and divisions.

API management is the missing link for building the service mesh quickly. By introducing a standardized, established way of securing services and enabling service discovery and observability, operations teams don't have to spend many resources on exposing assets to enable the ML/AI Service Mesh across the enterprise. This approach will democratize ML assets for faster, scalable, enterprise-wide consumption.

Solution: AI Platform + Apigee Edge
In this session, we will take an ML model built in Cloud Machine Learning Engine and look at how to consume it from both an internal and an external consumer's perspective. We will use Apigee's API management solution to expose the models. We will also touch upon how to build an "ML/AI Service Mesh" in which enterprises can build a collection of microservices that expose these features.

The demo will provide:
- Serving predictions with scalability, performance, and availability in mind
- Authentication and authorization services depending on who the user is
- Managing the life cycle of API keys
- Granting access to your ML APIs with an approval process
- Rolling out new model versions as models are updated
- Self-service consumption using Portal without any DevOps involved
- Monitoring and analyzing API analytics
- Monetizing the ML Models
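To make the consumption side of such a setup concrete, a client would call the Apigee-proxied prediction endpoint with its API key, never touching the model service directly. The sketch below only constructs the request; the proxy host, path, and header name are placeholders, not the demo's actual configuration:

```python
import json
from urllib.request import Request

# Build (but do not send) a request to a hypothetical API-managed ML endpoint.
def build_prediction_request(api_key, instances):
    # Placeholder Apigee proxy URL; the real host and base path come from
    # however the API proxy in front of the model is configured.
    url = "https://org-env.apigee.net/ml/v1/predict"
    return Request(url, data=json.dumps({"instances": instances}).encode(),
                   headers={"x-apikey": api_key,   # key checked by the gateway, not the model
                            "Content-Type": "application/json"},
                   method="POST")

req = build_prediction_request("CONSUMER_KEY", [[5.1, 3.5, 1.4, 0.2]])
```

The design point is that key validation, quotas, and analytics all happen at the gateway, so the model-serving backend stays unchanged as consumers come and go.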