AI & ML in the Cloud
Tuesday, September 14, 2021
The Space Shuttle was the most advanced machine ever designed. It was a triumph and a marvel of the modern world.
And on January 28, 1986, the shuttle Challenger disintegrated seconds after launch. This session will discuss how and why the disaster occurred and what lessons modern DevOps and Site Reliability Engineers should learn from it.
The Challenger disaster was not only a failure of the technology, but a failure of the engineering and management culture at NASA. While engineers were aware of problems in the technology stack, there was not enough awareness of the risks those problems actually posed to the spacecraft. Management had shifted the burden of proof from “prove that it’s safe to launch” to “prove that it’s unsafe to launch”.
This session will present the risk analysis (or lack thereof) of the Shuttle program and draw parallels to modern software development. In the end, launching a shuttle is an extremely complex deployment to the cloud… and above it.
Do you want to detect the license plate of a car? Or whether people are wearing their masks? Nowadays, these are typical examples of object detection and image classification, which are easy in theory, but what about the actual deployment? There are many options for doing that, from the edge to the cloud. Let me show you the simplest one and compare the platforms along the way.
I will use cloud-managed Cisco Meraki IP cameras together with SaaS computer vision platforms, for example from AWS, Azure, and GCP, to showcase the simple deployment, possible integrations with APIs and MQTT, the whole architecture, and the actual outcomes.
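To give a flavor of what consuming such a SaaS computer vision API looks like, here is a minimal sketch of parsing a detection response. The dict shape mirrors the output of AWS Rekognition's DetectLabels call; the sample labels, confidences, and threshold are illustrative, not results from the session.

```python
# Filter object-detection labels from a cloud CV response.
# The response shape mirrors AWS Rekognition's DetectLabels output;
# the sample data and threshold are illustrative.

def confident_labels(response, min_confidence=90.0):
    """Return the names of labels whose confidence meets the threshold."""
    return [
        label["Name"]
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]

# A mocked response, shaped like what the CV service would return
# for a street-facing camera snapshot.
sample_response = {
    "Labels": [
        {"Name": "Car", "Confidence": 98.7},
        {"Name": "License Plate", "Confidence": 94.2},
        {"Name": "Bicycle", "Confidence": 61.3},
    ]
}

print(confident_labels(sample_response))  # ['Car', 'License Plate']
```

In a real deployment, a result like this would typically be published to an MQTT topic so downstream systems can react to it.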
Join us as Pau Labarta Bajo, a Data Scientist and ML Engineer with over eight years of experience, shows us how to break multi-million-dollar computer vision models using adversarial examples.
Computer vision models based on neural networks have become so good in the last 10 years that they nowadays serve as the “eyes” behind many mission-critical systems, like self-driving cars, automatic video surveillance, or face recognition systems in airports. What you probably do not know is that there are easy methods to fool them, forcing them to produce wrong predictions. These methods are theoretically simple and computationally feasible, and they open the door to potentially critical security issues.
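One such method is the fast gradient sign method (FGSM): nudge each input feature by a small amount in the direction of the loss gradient's sign. The sketch below demonstrates the mechanics on a hand-rolled logistic classifier rather than a deep network; the weights, inputs, and epsilon are made-up illustrative numbers, not material from the talk.

```python
import math

# Toy demonstration of the fast gradient sign method (FGSM) on a
# two-feature logistic "classifier". Real attacks target deep nets,
# but the mechanics are identical: perturb the input along the sign
# of the loss gradient. All numbers here are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    # For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    # Step each feature by eps in the direction of the gradient's sign.
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -3.0], 0.0
x, y = [0.5, 0.1], 1          # input correctly classified as class 1

x_adv = fgsm(w, b, x, y, eps=0.5)
print(predict(w, b, x))       # ~0.67 -> class 1 (correct)
print(predict(w, b, x_adv))   # ~0.14 -> class 0 (fooled)
```

The same small-perturbation idea, applied pixel-wise with a much smaller epsilon, produces images that look unchanged to humans yet flip a neural network's prediction.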
Wednesday, September 15, 2021
Many modern video games are constantly evolving post-release. New maps, game modes, and game balancing adjustments are rolled out, often on a weekly basis. This continuous iteration to improve player engagement and satisfaction requires data-driven decision making based on events and telemetry captured during gameplay, and from community forums and discussions.
In this session, you will learn how OpenShift Streams for Apache Kafka and Kafka Streams can be used to analyze real-time events and telemetry reported by a game server, using a practical example that encourages audience participation. Specifically, you’ll learn how to:
Provision Kafka clusters on OpenShift Streams for Apache Kafka.
Develop a Java application that uses Kafka Streams and Quarkus to process event data.
Deploy the application locally or on OpenShift, and connect it to your OpenShift Streams for Apache Kafka cluster.
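The heart of such a stream-processing application is a stateful aggregation over the event stream. As a broker-free preview, the sketch below mimics in plain Python what a Kafka Streams `groupByKey().count()`-style topology maintains; the event names and fields are invented for illustration, and the session builds the real application in Java with Quarkus and Kafka Streams.

```python
from collections import Counter

# Broker-free sketch of the stateful aggregation a Kafka Streams app
# would maintain over game-server telemetry. Event names and fields
# are invented for illustration.

def aggregate(events):
    """Group events by record key and count event types per key,
    mimicking a grouped-and-counted KTable."""
    table = {}
    for event in events:
        key = event["player"]          # plays the role of the Kafka record key
        counts = table.setdefault(key, Counter())
        counts[event["type"]] += 1     # running per-type tally
    return table

# A few telemetry records as a game server might emit them.
stream = [
    {"player": "p1", "type": "match_start"},
    {"player": "p2", "type": "match_start"},
    {"player": "p1", "type": "elimination"},
    {"player": "p1", "type": "elimination"},
]

print(aggregate(stream)["p1"]["elimination"])  # 2
```

In the real topology, this state lives in a KTable and is continuously updated as new records arrive, rather than being computed over a finished list.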
As businesses continue to expose information through APIs, API programming has grown significantly and the number of available API endpoints has increased by leaps and bounds. An integration consuming a set of such APIs needs to adhere to the SLAs agreed with the customers. In the era of cloud integration platforms, making sure that your cloud-based integration performs up to scratch is not simple.
Integration-based development increases the risk of performance mistakes in the code compared to traditional programs that do not depend on external services. Since developers combine multiple services or APIs with unknown performance characteristics, these mistakes usually go unnoticed during development. With the help of artificial intelligence (AI), integrated development environments (IDEs) can shoulder the burden of helping engineers write performant code.
In this session, I will talk about how to use both AI and theoretical performance models to provide accurate performance forecasts for API integrations. I will demonstrate how this approach can be useful for inexperienced developers to write performant code.
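The simplest theoretical model of this kind composes the latencies of the individual API calls: sequential calls add up, while fan-out calls cost as much as the slowest branch. The sketch below illustrates the idea; the latency figures and the shape of the integration are made-up examples, not measurements from the session.

```python
# Back-of-the-envelope latency model for an API integration:
# calls made in sequence add up, calls made in parallel cost as
# much as the slowest one. Latencies (in ms) are illustrative.

def sequential(latencies):
    """Total latency of calls executed one after another."""
    return sum(latencies)

def parallel(latencies):
    """Total latency of calls executed concurrently."""
    return max(latencies)

# A hypothetical integration: call an auth API, fan out to two
# downstream APIs in parallel, then call a billing API.
estimate = sequential([
    120,                   # auth API
    parallel([80, 200]),   # two downstream calls in parallel
    150,                   # billing API
])
print(estimate)  # 470 (ms)
```

A forecast like this can be compared against the SLA budget before the integration ever runs, which is exactly where an AI-assisted IDE could flag a slow composition early.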
Microsoft's CEO Satya Nadella has said: "Human language is the new UI layer, bots are like new applications". As more and more bots become popular in homes and enterprises, the demand for custom bots is increasing at a rapid pace. In the post-COVID-19 world, we are seeing a high uptick in self-service chat-bots.
However, according to the latest study by Gartner, more than eighty percent of chat-bot projects failed in 2019.
In this session, we will cover how to successfully roll out chat-bots in the enterprise space.
We will talk about the factors that contribute to the failure of chat-bot implementations, how we can learn from those failures and avoid them, and how to create enterprise-grade chat-bots using the latest offerings in the Microsoft conversational AI space.
You will learn:
Common factors that contribute to the failure of chat-bot implementations
How to use the latest offerings in the Microsoft conversational AI space to create enterprise-grade chat-bots
Best practices for chat-bot implementations
Transformer-based models have been dominant in the NLP landscape due to their state-of-the-art performance on a wide variety of benchmarks and tasks. However, deploying such large models at scale can be quite difficult and costly. Learn about the techniques that we've used at Stream to overcome these challenges and moderate real-time chat messages efficiently on relatively inexpensive hardware. While this talk will focus on BERT and its offshoots, many of these techniques can also be applied to other models.
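One family of techniques for cutting inference cost is post-training quantization: storing float32 weights as int8 and a shared scale factor. The sketch below shows symmetric per-tensor quantization from scratch to convey the idea; it is not the speaker's implementation, and production work would typically rely on a framework's tooling (for example, dynamic quantization in PyTorch).

```python
# From-scratch sketch of symmetric per-tensor int8 quantization,
# one technique for shrinking transformer inference cost.
# The weight values are illustrative.

def quantize(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 form."""
    return [qi * scale for qi in q]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize(weights)
approx = dequantize(q, scale)

# Each value now fits in 8 bits instead of 32, and the
# reconstruction error is bounded by scale / 2.
print(q)  # [42, -127, 5, 90]
print(max(abs(w - a) for w, a in zip(weights, approx)) <= scale / 2)  # True
```

The 4x memory saving also translates into faster matrix multiplications on hardware with int8 support, which is how large models end up viable on inexpensive machines.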