Artificial Intelligence Dev Conference
Monday, February 7, 2022
This is the story of using Virtual Assistants like Alexa, Google Assistant, or Bixby alongside Voice and Video AI on mobile and web devices for good!
The building of Project Enabled Play - a platform built in .NET that enables users to turn their voice into a gaming controller on any platform they have access to. Come learn about scaling applications in .NET to over a dozen different platforms and channels while building for accessibility to level the playing field. Gain an understanding of voice and conversational platforms, real-time communication technology, and best practices for sharing code and going from PoC to product.
Take your shot at landing a win in Call of Duty, Fall Guys, Minecraft, and more using your voice, then leave with a working knowledge of other ways you can use .NET and the tools you're familiar with.
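At its core, a voice-driven controller reduces to mapping recognized phrases to input events. A minimal Python sketch of that idea (the session itself uses .NET, and the phrases and action names below are hypothetical, not Project Enabled Play's actual mappings):

```python
from typing import Optional

# Hypothetical phrase-to-action table; a real system would feed the action
# into a virtual gamepad driver on the target platform.
PHRASE_TO_ACTION = {
    "jump": "BUTTON_A",
    "shoot": "TRIGGER_RIGHT",
    "reload": "BUTTON_X",
    "move forward": "STICK_LEFT_UP",
}

def phrase_to_action(transcript: str) -> Optional[str]:
    """Return the controller action for a recognized phrase, if any."""
    return PHRASE_TO_ACTION.get(transcript.strip().lower())
```

The interesting engineering is everything around this table: speech recognition latency, per-game profiles, and shipping the same core to a dozen platforms.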
PRO WORKSHOP: Develop, Deploy and Govern AI: Building a Hugging Face End-to-End Sentiment AI Solution
Learn how to develop, fine-tune, and deploy an end-to-end AI application. The workshop focuses on an NLP solution architecture that incorporates the latest advancements in NLP from Hugging Face, along with optimized TensorFlow and PyTorch containers from Intel and NVIDIA, in a robust automated pipeline capable of accounting for data drift and model drift while providing inference APIs that support an interactive application and real-time Kafka inferencing on live Twitter streams. Using cnvrg.io Metacloud, every aspect of this hybrid solution can be developed collaboratively in a single control plane, which manages execution across all of your on-premise investments while also allowing you to dynamically leverage cloud-based compute resources.
This workshop will give you an end-to-end example that will help you solve your next NLP problem, along with strategies for maintaining your model in production.
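Accounting for data drift means quantifying how far live inputs have wandered from the training distribution. One common signal is the Population Stability Index (PSI); here is a hedged plain-Python sketch (the workshop's cnvrg.io pipeline may well use different statistics):

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live
    sample of a numeric feature. PSI < 0.1 is commonly read as "no
    significant drift"; above ~0.25, investigate."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def bin_fractions(sample):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in sample)
        n = len(sample)
        # a small epsilon keeps the log finite for empty bins
        return [max(counts.get(b, 0) / n, 1e-6) for b in range(bins)]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In a pipeline, this runs per feature on each batch of live traffic, and a sustained high PSI triggers retraining.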
We can easily trick a classifier into making embarrassingly false predictions. When this is done systematically and intentionally, it is called an adversarial attack; this particular kind is known as an evasion attack. In this session, we will examine an evasion use case and briefly explain other forms of attack. Then we will explain two defense methods: spatial smoothing preprocessing and adversarial training. Lastly, we will demonstrate one robustness evaluation method and one certification method to ascertain that the model can withstand such attacks.
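Spatial smoothing, the first defense mentioned, filters the input before classification so that high-frequency adversarial noise is washed out. A didactic median-filter sketch (real defenses use library implementations, e.g. from an adversarial-robustness toolkit, and tune the window size against clean accuracy):

```python
def median_smooth(img, k=3):
    """Spatial smoothing defense: replace each pixel with the median of
    its k x k neighborhood, suppressing isolated adversarial spikes."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neigh = [img[ii][jj]
                     for ii in range(max(0, i - r), min(h, i + r + 1))
                     for jj in range(max(0, j - r), min(w, j + r + 1))]
            neigh.sort()
            out[i][j] = neigh[len(neigh) // 2]
    return out
```

A single perturbed pixel is outvoted by its neighborhood, which is exactly why this blunts pixel-level evasion perturbations.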
Tuesday, February 8, 2022
Conversation Intelligence (CI) APIs enable developers to build applications that go beyond basic speech-to-text, creating a new array of sophisticated AI-driven experiences and functionalities. Basic speech recognition is designed to recognize or respond to explicit words and phrases, while conversation intelligence is capable of contextual comprehension of human conversations, allowing it to extract key insights, identify user intent, surface actionable insights, detect sentiment, and more.
Conversation intelligence has given rise to a new generation of AI-driven applications and platforms across verticals such as revenue intelligence, telehealth, call centers and customer support, collaboration and productivity platforms, and more.
Need to harness the power of AI but not a data scientist? No problem. In this presentation, we’ll show you how to consume prebuilt and custom AI models even if you don’t have data science expertise. We’ll also introduce Oracle’s take on machine learning and AI—and how we’ve rearchitected the AI experience to be more streamlined, efficient, and developer-friendly. Come ready to see demos that span capabilities such as language understanding, computer vision, speech, and many more. No data science background required!
Computer vision models based on neural networks have become so good in the last 10 years that they now serve as the “eyes” behind many mission-critical systems, like self-driving cars, automatic video surveillance, and face recognition systems in airports. What you probably do not know is that there are easy methods to fool them, forcing them to produce wrong predictions. These methods are theoretically simple and computationally feasible, and they open the door to potentially critical security issues.
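One widely known such method is the Fast Gradient Sign Method (FGSM): nudge every input feature a small step in the direction that increases the model's loss. A hedged toy sketch against a logistic-regression "model" (real attacks target deep networks and compute the gradient with an autodiff framework):

```python
import math

def predict(x, w, b):
    """P(class = 1) under a logistic-regression model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, w, b, y, eps=0.1):
    """Fast Gradient Sign Method: step each feature by eps in the
    direction that increases the cross-entropy loss for true label y."""
    p = predict(x, w, b)
    grad = [(p - y) * wi for wi in w]       # d(loss)/dx_i for logistic loss
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

Even this linear toy shows the mechanism: a perturbation too small to matter to a human reliably pushes the model's confidence the wrong way.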
Machine Learning is disrupting nearly every industry. AI-First is the new mantra for many, but how do you architect, develop and manage intelligent, real-time applications incorporating Machine Learning that can scale to handle modern workloads driven by Cloud and Edge?
In this talk, you will learn the most critical elements for building machine learning applications that can scale to any throughput or volume of interaction, up to one billion events per second or more. Also covered will be how to integrate these applications with MLOps frameworks and how to design for zero-downtime architectures as well as global-scale, multi-region deployments.
Large language models are growing increasingly capable of producing text that is indistinguishable from human-written text. Bad actors can leverage these text generation capabilities to quickly and easily generate spam or bypass spam filters. Learn about the different techniques Stream is exploring to catch machine-generated text and strengthen its spam detection capabilities.
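Stream's actual techniques are not detailed in the abstract, but one cheap, illustrative signal is lexical repetitiveness: generated spam often loops over the same phrases, so the share of distinct word n-grams drops. A toy sketch (the threshold here is an arbitrary assumption, not a production value):

```python
def distinct_ngram_ratio(text, n=2):
    """Fraction of word n-grams in the text that are unique.
    Highly repetitive (often generated) spam scores low."""
    words = text.lower().split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 1.0
    return len(set(grams)) / len(grams)

def looks_generated(text, threshold=0.5):
    """Toy flag: mark text whose bigram diversity falls below a cutoff."""
    return distinct_ngram_ratio(text) < threshold
```

A real detector would combine many such features, or fine-tune a classifier on known machine-generated samples; this single heuristic is easy to evade.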
Almost all AI problems worth solving are made difficult by the challenge of a “long tail,” where critical data is sparse. In this talk, Scale AI’s Head of Nucleus, Russell Kaplan, will discuss why performance on the long tail is make-or-break for AI systems, proactive strategies for identifying long-tail scenarios, and how machine learning practitioners can target their experiments to “tame” the long tail, achieve strong performance on rare edge cases, and improve model performance.
Cricket, a game of bat and ball, is one of the most popular games in the world and is played in varied formats. It's a game of numbers, with each match generating a plethora of data about the players and the match itself. This data is used by analysts and data scientists to uncover meaningful insights and forecast match outcomes and player performance. In this session, I'll be performing some analytics and prediction on cricket data using Microsoft's ML.NET framework and C#.
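The session works in ML.NET and C#; as a language-neutral taste of the numbers involved, here is a tiny Python sketch of two classic batting metrics (the career totals used to exercise it are made up):

```python
def batting_stats(runs, balls, dismissals):
    """Two classic cricket batting metrics from career totals."""
    # Batting average: runs scored per dismissal (undefined if never out)
    average = runs / dismissals if dismissals else float("inf")
    # Strike rate: runs scored per 100 balls faced
    strike_rate = 100.0 * runs / balls
    return average, strike_rate
```

Features like these, computed per player and per format, are exactly the kind of inputs an ML.NET regression or classification pipeline would consume.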
Wednesday, February 9, 2022
From your bank account to your email, APIs power your digital experiences. According to a survey by RapidAPI, 96% of developers are using more APIs in 2021 than in 2020. Are you planning on building internal APIs to be used just within your organization or external APIs for your mobile app? Whatever your API goals are, you need an API gateway to handle incoming traffic. API gateway deployments are a leading use case for NGINX Plus, our commercially supported offering based on NGINX Open Source.
In this webinar we cover what APIs and API gateways are, and how to configure and secure your API gateways using NGINX Plus and NGINX App Protect. We demo how to deploy NGINX Plus as an API gateway, and show how to secure the API gateway by using encrypted JSON Web Tokens (JWE) and importing an OpenAPI spec to NGINX App Protect. In the Q&A we answer your questions about deploying and securing your API gateway.
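As a rough illustration of the deployment pattern, an API gateway in NGINX Plus boils down to a server block that validates tokens and proxies to a backend. A minimal hedged sketch using the `auth_jwt` module; the hostnames, ports, and key file below are placeholders, not values from the webinar:

```nginx
# Hypothetical upstream pool for the API backend
upstream api_backend {
    server 10.0.0.10:8080;
}

server {
    listen 443 ssl;
    server_name api.example.com;

    location /api/ {
        auth_jwt          "API realm";         # NGINX Plus JWT validation
        auth_jwt_key_file /etc/nginx/api.jwk;  # keys used to verify tokens
        proxy_pass        http://api_backend;
    }
}
```

The webinar layers NGINX App Protect and an imported OpenAPI spec on top of a base like this to get schema-aware request filtering.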
OPEN TALK: Fake Your Data: Mimicking Production to Maximize Testing, Shorten Sprints, and Release 5x Faster
Raise your hand if you’ve ever written a script or built a tool to generate test data for your staging environment. Keep your hand up if it was fun. And easy. And still works. If your hand (and shoulders and morale) fell, rest assured you’re not alone. Now for the good news: help is here.
With the increasing complexity of today’s data ecosystems and the expanding reach of privacy regulations, generating useful, safe test data has become more difficult and riskier than ever. An effective test data solution must work across a variety of database types and de-identify production data in a way that ensures privacy. Challenging? Yes. Attainable? That, too.
Technologies now exist that integrate directly into your data ecosystem to create test data that looks, acts, and behaves just like your production data. By hydrating QA and staging with useful, safe, fake data, dev teams are upleveling testing, catching bugs faster, and shortening their development cycles by as much as 60%. Data mimicking sets a new standard of quality test data generation that combines the best aspects of anonymization, synthesis, and subsetting.
Explore these technologies in a live demo and discover how to use them to:
- Maintain consistency in your test data across tables and across databases
- Subset your data from PB down to GB without breaking referential integrity
- Achieve mathematical guarantees of data privacy
- Increase your team’s efficiency by 50%
- Realize 5x more releases per day
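The referential-integrity point above can be shown in miniature: sample parent rows first, then keep only child rows whose foreign keys still resolve, so nothing in the subset dangles. A toy sketch (the table and column names are hypothetical, and a real subsetter walks an arbitrary graph of tables):

```python
def subset_with_integrity(parents, children, keep_fraction=0.1, fk="parent_id"):
    """Toy subsetter: take a slice of parent rows, then keep only child
    rows whose foreign key points at a kept parent."""
    keep_n = max(1, int(len(parents) * keep_fraction))
    kept_parents = parents[:keep_n]  # deterministic "sample" for the sketch
    kept_ids = {p["id"] for p in kept_parents}
    kept_children = [c for c in children if c[fk] in kept_ids]
    return kept_parents, kept_children
```

Production tools do this across many tables and database engines at once, which is where subsetting from petabytes down to gigabytes gets hard.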
As data drives new and evolving IoT opportunities across all segments of the market, the developer's role in using existing tools to create new Edge AI solutions becomes increasingly important. However, Edge AI can be a complex design and development process, as it requires selecting the right sensors, hardware, and deep learning frameworks, and deciding how to deploy for each unique use case.
By democratizing access to AI and simplifying development, organizations can enable their developers to quickly experiment with different algorithms, processors and optimization techniques or prototype and customize without having to spend weeks obtaining and setting up development boards. In this session, Bill will discuss how organizations can achieve this and empower their developers to build innovative Edge AI solutions – solutions that will improve lives and transform industries.
Ryan McMichael, Sr. Manager of Sensors and Systems Engineering for Advanced Hardware, walks through the various sensors available for autonomous driving and evaluates the pros and cons of each for enabling the optimal field of vision for autonomous vehicles.
PRO TALK (CloudWorld): How an AI-Driven Approach Reduces Cloud Cost and Makes Your Kubernetes Infrastructure Autonomous
Measuring and controlling costs in cloud environments is often complex. But it does not need to be. In this session, we will discuss how an AI-driven approach renders your cloud-native applications on Kubernetes fully autonomous and rightsizes your cluster's cloud compute resources at sub-minute intervals. We will walk through an experiment that deploys an application and applies autonomous techniques that fiercely control and optimize the cluster.
We will discuss how to control and optimize the cost of your AWS EKS, Google GKE, and Azure AKS applications in minutes. Instantly. You will learn about powerful yet simple strategies to rightsize your clusters: automated scaling of your nodes and pods up and down to zero, smart selection of VM shapes, and automated use of spot instances.
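The core rightsizing idea can be boiled down to a usage-percentile rule: set a pod's resource request near a high percentile of observed usage, plus headroom. A toy sketch (autonomous systems like the one discussed use far richer signals, and the percentile and headroom values here are assumptions):

```python
def recommend_request(samples_mcpu, percentile=95, headroom=1.2):
    """Sketch of usage-based rightsizing: recommend a pod CPU request
    (in millicores) from observed usage samples, rather than a guess."""
    s = sorted(samples_mcpu)
    idx = min(len(s) - 1, int(len(s) * percentile / 100))
    return int(s[idx] * headroom)
```

Running this continuously per workload, and feeding the result back into requests and node pool sizing, is what makes sub-minute rightsizing possible.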