AI & ML Dev Conference

Wednesday, February 17, 2021

- PST
PRO WORKSHOP: Deep Understanding Applied to Open Domain Conversations
Join on Hopin
Surbhi Rathore
Symbl.ai, CEO and co-founder

Now more than ever, with the whole world communicating on digital channels, the dangers of disconnected data silos and lost knowledge are greater than ever. In this session, we will talk about applying deep understanding to capture this open-domain conversation data so that builders can shape product strategy early in the lifecycle of their business and future-proof growth. The session also covers several aspects of supervised and unsupervised learning that can be used as stepping stones to this application, and how combining the two techniques brings the best of both worlds together for human-like comprehension of conversations.
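As a purely illustrative sketch of the "combine supervised and unsupervised learning" idea for conversation data, the example below clusters utterances to surface topics (unsupervised) and classifies them into known intents (supervised). The data, labels, and model choices are invented for the example; this is not Symbl.ai's system.

```python
# Illustrative only: topic discovery (unsupervised) + intent classification
# (supervised) over a handful of invented conversation utterances.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

utterances = [
    "Can you send over the updated pricing sheet?",
    "Pricing looks too high for our budget this quarter.",
    "Let's schedule a follow-up call next Tuesday.",
    "I'll send a calendar invite for Tuesday afternoon.",
]
intent_labels = ["action_item", "objection", "action_item", "action_item"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(utterances)

# Unsupervised: discover rough topic groupings with no labels at all.
topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised: learn known intents from labeled examples.
intent_model = LogisticRegression(max_iter=1000).fit(X, intent_labels)

for text, topic in zip(utterances, topics):
    intent = intent_model.predict(vectorizer.transform([text]))[0]
    print(f"topic={topic} intent={intent} :: {text}")
```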

- PST
PRO WORKSHOP: Large Graph Neural Network Learning with Kubernetes Spark
Join on Hopin
Jintao Zhang
Square, Software Engineer Machine Learning

Graph neural network (GNN) learning on very large graphs has gained great popularity recently, as critical business insights are hidden in huge knowledge graphs with billions of edges, such as social networks and sales transactions. Graph node embedding (e.g., Node2Vec) and inductive graph representation learning (e.g., GraphSAGE) have been widely used for fraud detection, cross-sell recommendation, and more.

The technical challenges mainly come from scalability and cost-effectiveness. We have developed a highly scalable and reliable Python library for graph neural networks, based on Spark and PyTorch, under the Fugue project (https://github.com/fugue-project). Benchmark tests have shown that it can handle graphs with billions of edges and hundreds of millions of nodes within a few hours. The library can easily support Kubernetes Spark with the help of Fugue, and hence delivers a highly cost-effective solution in a flexible and uniform framework.
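To make the inductive representation learning the abstract references (GraphSAGE) concrete, here is a toy, single-machine sketch of a mean-aggregation layer in plain PyTorch. It only illustrates the idea and is not the Spark-based Fugue library described in the talk; the class name and tensor shapes are invented for the example.

```python
# Toy GraphSAGE-style mean aggregation in plain PyTorch (illustration only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGEMeanLayer(nn.Module):
    """One GraphSAGE layer with a mean aggregator."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # Separate weights for the node's own features and its neighbors' mean.
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   [num_nodes, in_dim] node features
        # adj: [num_nodes, num_nodes] binary adjacency matrix (dense for brevity)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)   # avoid divide-by-zero
        neigh_mean = adj @ x / deg                        # mean of neighbor features
        h = self.w_self(x) + self.w_neigh(neigh_mean)
        return F.normalize(F.relu(h), p=2, dim=1)         # L2-normalized embeddings

# Toy usage: 4 nodes, 8-dim features, 16-dim embeddings.
x = torch.randn(4, 8)
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float32)
embeddings = SAGEMeanLayer(8, 16)(x, adj)
print(embeddings.shape)  # torch.Size([4, 16])
```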

- PST
PRO WORKSHOP: Making Apps Listen and React with LUIS (Language Understanding Intelligent Service)
Join on Hopin
Sam Nasr
NIS Technologies, Sr. Software Engineer

Language Understanding Intelligent Service (LUIS) is part of Azure's Cognitive Services. It's built on the interactive machine learning and language understanding research from Microsoft Research. LUIS provides the capability to understand a person’s natural language and respond with actions specified by application code. In this session, we'll examine how this powerful feature can be integrated into applications, offering a more natural interaction with a device.
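As a minimal sketch of one way to integrate LUIS into an app, the example below calls the LUIS v3 prediction REST endpoint and routes the top-scoring intent to application code. The endpoint, app ID, key, and intent names are placeholders that depend on your own LUIS resource and app.

```python
# Minimal sketch: query the LUIS v3 prediction endpoint and act on the result.
import requests

LUIS_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
APP_ID = "<your-luis-app-id>"                                          # placeholder
PREDICTION_KEY = "<your-prediction-key>"                               # placeholder

def get_top_intent(utterance: str):
    """Send a user utterance to LUIS and return (top intent, entities)."""
    url = f"{LUIS_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
    resp = requests.get(
        url,
        headers={"Ocp-Apim-Subscription-Key": PREDICTION_KEY},
        params={"query": utterance},
    )
    resp.raise_for_status()
    prediction = resp.json()["prediction"]
    return prediction["topIntent"], prediction.get("entities", {})

# Route the recognized intent to application code.
intent, entities = get_top_intent("Turn off the lights in the kitchen")
if intent == "TurnOff":            # intent names depend on your LUIS app
    print("Turning off:", entities)
else:
    print("Unhandled intent:", intent)
```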

Thursday, February 18, 2021

- PST
PRO SESSION: Operationalising Responsible AI
Join on Hopin
Devangana Khokhar
ThoughtWorks, Lead Data Scientist
Vanya Seth
Thoughtworks India, Head of Technology

In a system that promotes responsibility, designers and architects often face two important questions: how to design mathematical and statistical models with concrete goals for fairness and inclusion, and how to architect the system that facilitates them. Devangana Khokhar and Vanya Seth will highlight the latter to help you learn the paradigm of evolutionary architectures for building and operationalising responsible architectures. We will outline the aspects you need to keep in mind while architecting responsible-first systems. The systems engineering realm often talks in terms of “-ilities” that are important for the designed system. These are also referred to as “nonfunctional requirements,” which should be tracked and accounted for at each step in the process. They serve as guard rails for making sensible decisions.

When it comes to architecting fair, accountable, and transparent (FAT) AI systems, you must consider a multitude of dimensions when determining whether a system is responsible or not. Some of these are:
- Auditability of the data transformation pipeline: how the data is handled and treated at each step, whether there is clear visibility into the expected state of the data, and whether logs are present in each module of the data pipeline.
- Monitoring of the crucial metrics: whether the logs are available centrally, whether there is clear visibility into the correctness of the pipeline modules, whether the data can be interpreted in human-readable form, whether you can measure the quality of the data at each step, whether there is an analytical dashboard capable of ingesting the audit logs, whether you can drill down in the dashboard to do root-cause analysis and identify anomalies, and whether you can create reports about the precision and accuracy of the insights and the intermediate data states.
- Feedback loops: whether you can inject biases and anomalies into the data to test the resilience of the model and the system as a whole, how to engage users in uncovering instances of “responsible” system failure and handle them effectively and efficiently, and how to ensure that the feedback loops themselves aren’t biased.

The most important question to answer is how to operationalise this. Evolutionary architecture talks about a fitness function: if you want the system to hold a tenet close and always ensure its fulfilment, automate it. Codifying these tenets into tests that give feedback is the right way to think about this. Every time a change is made to the model, the system should provide feedback on its responsibility, and the tests should start to fail as the system digresses from the set threshold. There’s often a notion that algorithms are a black box and hard to explain. However, it’s important to acknowledge that even white-box algorithms need explanation and need to be accountable.
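As a hedged illustration of codifying one such tenet into an automated fitness function, the sketch below is a test that fails when a model's demographic parity gap exceeds a threshold. The metric choice, toy predictions, and 0.05 threshold are illustrative assumptions, not the speakers' implementation.

```python
# Illustrative fitness-function test: fails if the fairness gap exceeds a threshold.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rate between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def test_model_meets_fairness_threshold():
    # In a real pipeline these would come from the model under test and a
    # held-out evaluation set carrying a protected-attribute column.
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    assert demographic_parity_gap(y_pred, group) <= 0.05, (
        "Fairness fitness function failed: model digressed from the set threshold"
    )
```

Run as part of the delivery pipeline (for example with pytest), the test gives immediate feedback whenever a model change pushes the system past the agreed threshold.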

- PST
PRO SESSION: Serving Machine Learning Models at Scale
Join on Hopin
Manoj Agarwal
Salesforce, Distributed Systems Architect

Serving machine learning models is a scalability challenge at many companies. Most applications require only a small number of machine learning models (often <100) to serve predictions. On the other hand, cloud platforms that support model serving, though they can host hundreds of thousands of models, provision separate hardware for different customers. Salesforce faces a challenge that very few companies deal with: it needs to run hundreds of thousands of models on shared infrastructure, serving multiple tenants, for cost effectiveness.

In this talk, we will explain how Salesforce hosts hundreds of thousands of models on a multi-tenant infrastructure to support low-latency predictions.
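As a concrete illustration of one common pattern for this kind of multi-tenant serving (not Salesforce's actual system), here is a minimal sketch of an in-memory LRU model cache that lets many tenants share the same serving fleet. The class, loader callable, and tenant/model identifiers are hypothetical.

```python
# Illustrative pattern: keep only the hottest models in memory behind an LRU
# cache and load the rest on demand from a model store.
from collections import OrderedDict

class ModelCache:
    """LRU cache that holds at most `capacity` loaded models in memory."""
    def __init__(self, capacity: int, loader):
        self.capacity = capacity
        self.loader = loader          # callable: model_id -> loaded model
        self._cache = OrderedDict()   # model_id -> model, in recency order

    def get(self, model_id: str):
        if model_id in self._cache:
            self._cache.move_to_end(model_id)     # mark as recently used
            return self._cache[model_id]
        model = self.loader(model_id)             # e.g. fetch from a blob store
        self._cache[model_id] = model
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)       # evict least recently used
        return model

def predict(cache: ModelCache, tenant_id: str, model_id: str, features):
    # Models for all tenants share the same serving fleet; the tenant only
    # selects which model is looked up, it never gets dedicated hardware.
    model = cache.get(f"{tenant_id}/{model_id}")
    return model.predict(features)
```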

- PST
PRO SESSION: Bringing Responsible AI to the Enterprise
Join on Hopin
Conor Jensen
Dataiku, Director of AI Consulting

Conor Jensen, Director of AI Consulting at Dataiku, works at the intersection of abstract data models and real people. Having worked with clients of all sizes and across a multitude of sectors, Conor has developed a deep understanding of the challenges companies face when incorporating responsible AI into their business model. Thanks to his work with Global 2000 companies (including Morgan Stanley, Comcast, and GE Aviation), all of whom are working to bring data out of silos, Conor is well equipped to speak to explainable AI and the importance of transparency in AI workflows and models, and how that translates to responsible applications of AI in the enterprise.


One key aspect of responsible AI is that it supports the democratization of data. Particular organizational structures, such as those that silo data into an isolated department, have unintended negative consequences, while the most successful enterprise AI applications are human-centered systems that support collaboration, agility, and access. The best way to maximize the value extracted from data is to encourage a culture of collaboration around data, and to allow people throughout a company to clean, model, and analyze data regardless of whether they have the word “data” in their job title. Conor's session will explore how companies can build their own guidelines around ethical, explainable AI, and offer attendees best practices for responsibly applying AI in the enterprise to organize and use data. Using these best practices, companies across industries can optimize their data management strategies to drive business value.

- PST
KEYNOTE: Citrix – Your Chip Implant Is Ready, Are You?
Join on Hopin
PJ Hough
Citrix, Executive Vice President and Chief Product Officer

As AI and machine learning advance, human augmentation becomes less a dystopian idea than a utilitarian one. There is definite trepidation about the increasing role of machines in the workforce: many fear for their livelihoods or imagine robotic overlords and other dystopian visions of what a robot/AI future would look like. Yet to remain competitive, some workers in the future may choose to augment themselves with under-the-skin chips, taking digital performance enhancement to previously unimaginable levels.

While some roles may disappear, others will emerge, bringing employment to more people. These workers would need to be protected under government regulations, but the protections themselves could not be so stringent as to nullify the perceived benefits to the enterprise. This pathway might also lead to higher burnout, with workers never being able to fully “clock out.”

- PST
PRO SESSION: Local Fulfillment for the Smart Home
Join on Hopin
Toni Klopfenstein
Google, Developer Advocate

The Local Home SDK enhances Google smart home integrations by enabling command execution to happen directly on Google Home speakers and Nest displays. This reduces latency and enhances reliability of the commands users send through the Google Assistant.

- PST
OPEN TALK: Move Faster and Break Fewer Things with Observability + AI
Join on Hopin
Richard Whitehead
Moogsoft, Chief Evangelist

A key challenge when working with software is that it’s invisible. It does not inherently lend itself to the universal DevOps goal of “Telemetry Everywhere.” While engineers consciously code their product to emit metrics, logs and traces that allow them to observe the invisible, traditional monitoring methods fall short of generating meaningful data about incidents, leaving teams with excess toil when things break. This talk will explore the relationship between observability and SDLC practices which allow AI to lead the Ops side of DevOps, so developers and SREs can move faster, innovate more and operate less.

Attendees will learn:
- How introducing visibility and control over incidents earlier in the development cycle can reduce toil.
- How to leverage Service Level Objectives (SLOs), error budgets and the ‘wisdom of production’ to improve the Ops part of DevOps.
- Methods for using AI-driven observability to turn every incident into a learning opportunity.

Discover how AI-driven observability methods help improve practices from Site Reliability Engineering to Continuous Integration and Deployment, and support the transition from project-centric to product-centric ways of working.
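To make the SLO and error-budget arithmetic mentioned above concrete, here is a small sketch; the 99.9% target, 30-day window, and observed downtime are illustrative numbers.

```python
# Error-budget arithmetic for a 99.9% availability SLO over a 30-day window.
SLO = 0.999                      # availability target
WINDOW_MINUTES = 30 * 24 * 60    # 30-day rolling window

error_budget = (1 - SLO) * WINDOW_MINUTES        # total allowed "bad" minutes
downtime_so_far = 18                             # observed bad minutes (example)
remaining = error_budget - downtime_so_far

print(f"Error budget:  {error_budget:.1f} min")                 # 43.2 min
print(f"Remaining:     {remaining:.1f} min")                    # 25.2 min
print(f"Budget burned: {downtime_so_far / error_budget:.0%}")   # ~42%
```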

- PST
PRO SESSION: AI Readiness: Utilizing Machine Learning to Scale Analytics across Big Data
Join on Hopin
Wolf Ruzicka
EastBanc Technologies, Chairman
Victor Shilo
EastBanc Technologies, CTO

The synergy of Big Data and Artificial Intelligence techniques like Machine Learning holds amazing promise for businesses and organizations across the globe. This huge stockpile of data, when properly harnessed, can give valuable insights and business analytics for the sector or industry to which the data set belongs.

Victor Shilo, Director of Engineering, and Wolf Ruzicka, IT Innovator and Chairman of EastBanc Technologies, take the stage to walk attendees through a remarkable case study that lays out a strategy for building debt collection prediction models for a large Swedish financial services company’s customer base. Through this project, they were able to identify ways to aid its customers in achieving financial health, pulling data from across eleven European countries.

Join this session to learn how they utilized the Minimal Viable Process (MVP) methodology to incrementally scale analytics, what data prediction models they are testing, how they plan to tackle creating a machine learning umbrella model that collects and analyzes data despite different languages and currencies, and some of the expected challenges.
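As a purely hypothetical illustration of the kind of baseline such a project might start from (not the speakers' actual models), the sketch below normalizes amounts across currencies before fitting a simple repayment classifier; the features, exchange rates, and data are invented.

```python
# Hypothetical baseline: currency normalization + a simple repayment classifier.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FX_TO_EUR = {"SEK": 0.095, "EUR": 1.0, "PLN": 0.22}   # illustrative rates

df = pd.DataFrame({
    "debt_amount":  [12000, 800, 4300, 150, 9800, 620],
    "currency":     ["SEK", "EUR", "PLN", "EUR", "SEK", "PLN"],
    "days_overdue": [120, 10, 45, 5, 210, 30],
    "repaid":       [0, 1, 1, 1, 0, 1],                # target
})

# Normalize monetary features into a single currency before modeling.
df["debt_amount_eur"] = df["debt_amount"] * df["currency"].map(FX_TO_EUR)

X = df[["debt_amount_eur", "days_overdue"]]
y = df["repaid"]
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1])   # predicted repayment probabilities
```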

- PST
OPEN TALK: Releases: The Last Frontier of Standardization
Join on Hopin
Ravi Lachhmaan
Harness, Evangelist

As software engineers, we strive to better our craft and leave a lasting mark on the organizations we work for. Throughout our careers, we balance two types of knowledge: business domain and technical stack. The combination of the two is our bread and butter.

No matter whether you work for a bank or an app that is revolutionizing wine delivery for pets, as an engineer you tend to get better at developing features. Design patterns and approaches learned on one project transfer to others, while new challenges add to your skill set. Ironically, what does not transfer easily between projects is the process of deploying and releasing the software you work so hard to build. For most organizations, deployments and releases are team-centric because applications are unique, but Continuous Delivery is changing that.

Learn in this session how modern Continuous Delivery approaches are ushering in standardization in one of the last, and sometimes scary, frontiers for software engineers: your releases. Core to Continuous Delivery is making strides in engineering efficiency. With advancements in AI/ML in your CI/CD pipelines, even the most snowflake-like deployments can benefit from standardization.

- PST
PRO SESSION: Delivering Visual Perception and Holographic Content with AI
Join on Hopin
Taylor Scott
IKIN, Founder and Chief Technology Officer

AI and ML are frequently utilized to optimize processing, adding efficiency and improving the performance of applications. Approaching the use of AI and ML from a different perspective can dramatically change the way image processing and display deliver visual data, particularly volumetric data, to the eyes of users.
Holograms have been around for a long time, but the ability to efficiently produce, transmit and display interactive holographic images has historically placed insurmountable demands on processing engines, preventing practical consumer-level applications from becoming a reality.
Rather than trying to produce and ship the complete volumetric data package, AI and ML can be used to train cores to understand how the human brain needs to receive images for volume perception and to preselect the data a user’s retinas need, dramatically reducing transmission bandwidth and display processing demands. Such threaded volumetric processing capabilities can be utilized by developers to add differentiating holographic capabilities and features to applications.
Putting these capabilities into a developer’s toolkit can facilitate the incorporation of volumetric imagery that can be displayed through advanced depth field solutions which entice users, promote loyalty and add exponential value, thereby creating new avenues for monetization.
The speaker will highlight design tools available in standard development platforms which facilitate the incorporation of 3-D and holographic content into applications. In addition, the speaker will demonstrate ways users can be empowered to create and manipulate volumetric content on mobile devices, further expanding the application scope.

- PST
PRO SESSION: Cultivating Engineer Team Resilience in the World of AI
Join on Hopin
Akshay Sharma
doc.ai, Chief Technology Officer

An engineering team that works on AI products and technologies faces unique challenges: AI products come with more ambiguity, more unknowns, and more uncertainty in their definitions. Building a high-performing engineering team in the era of ML and AI requires leaders to think differently and to empower key team members to factor in failures and allow for concurrent experimentation with ideas and technologies. Leaders need to build “resilience” not just at the infrastructure layer, but also at the management layer, allowing the team to take measured bets. In this session, doc.ai CTO Akshay Sharma will dive into what that management layer needs to look like: simple reporting structures, promoting fungibility and collaboration across teams, setting goals and even encouraging failure, to ensure engineering teams keep up with evolving product cycles and technologies while also fostering a scalable culture.

- PST
PRO SESSION: Making the Case for People First Automation
Join on Hopin
Sagi Eliyahu
Tonkean, Co-Founder & CEO

How can we use AI to enhance workers’ performance, not replace them? The employee experience of 2030 will look nothing like that of 2010, and it's up to us to decide if that's a good thing or a bad thing. If we allow automation and AI to be used solely to expand business profits, it may lead to hiring fewer workers, which ultimately spells trouble for the economy. Or, we could use automation and AI to create technology that takes away the most tedious parts of our jobs, leaving us with more time to do thoughtful and meaningful work.


Operations in particular has historically been neglected and under-resourced as a business function. Ops teams themselves lack resources specific to the mission-critical work they do. That is changing, and changing fast. These ops teams should be the ones deciding how AI gets implemented throughout the business. Otherwise, it’ll lead to operational debt where teams are using a messy web of tools and systems that don’t interact.

In this session, Sagi will make the case for people-first automation: how to reshape the work of employees so they can deliver more value to their companies, how to empower operations professionals with adaptive platforms to complete business processes, and how to help companies decipher what should and shouldn’t be automated.

- PST
PRO SESSION: DevSec AI Ops
Join on Hopin
Myra Haubrich
Salesforce, Lead Site Reliability Engineer

We aspire to immutability, automation, and resilience in infrastructure. This session will describe how Dev+Sec+AI Ops, a modern approach to operations, can achieve this aspiration.

- PST
OPEN TALK: Commit to the Cause, Push for Change: Contributing to Call for Code Open Source Projects
Join on Hopin
Daniel Krook
IBM, Chief Technology Officer for the Call for Code Global Initiative
Andres Meira
Grillo, Founder & CEO
Lakshyana K.C.
Build Change, Technology Consultant

Call for Code is a multi-year program that calls on developers to create practical, effective, and high-quality open source applications, built on one or more IBM Cloud services (for example, web, mobile, data, analytics, AI, IoT, or weather) or Red Hat platforms (including OpenShift), that can have an immediate and lasting impact on humanitarian issues. In this session, you'll learn more about the solutions built to tackle natural hazards, climate change, and the pandemic. What sets Call for Code apart from other technology-for-good competitions is the commitment to deploy the winning solutions with the IBM Service Corps and to help teams build sustainable open source communities through The Linux Foundation. Join us to hear about the most recent winning projects, get an update on previous years' progress, and learn how to contribute to two projects directly from the developers.

- PST
PRO SESSION: Leverage Power of Machine Learning with ONNX
Join on Hopin
Ron Dagdag
Spacee, Lead Software Engineer

Have you ever wanted to make your apps “smarter”? This session will cover what every ML/AI developer should know about the Open Neural Network Exchange (ONNX): why it’s important and how it can reduce friction in incorporating machine learning models into your apps. We will show how to train models using the framework of your choice, save or convert them into ONNX, and deploy to cloud and edge using a high-performance runtime.
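As a minimal sketch of that workflow, the example below exports a toy PyTorch model to ONNX and runs it with ONNX Runtime; the model architecture and file name are placeholders standing in for your own trained model.

```python
# Train/load in your framework of choice, export to ONNX, serve with ONNX Runtime.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# 1. A toy PyTorch model standing in for "the framework of your choice".
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# 2. Export to the ONNX interchange format.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["logits"])

# 3. Run the same model with ONNX Runtime (cloud or edge).
session = ort.InferenceSession("model.onnx")
outputs = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
print(outputs[0].shape)   # (1, 2)
```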

Friday, February 19, 2021

- PST
OPEN TALK: Agentless AI-Powered Cloud Threat Detection and Response
Join on Hopin
Arun Raman
Blue Hexagon, VP of Cloud
James Wenzel
AWS, Sr Solutions Architect
Saumitra Das
Blue Hexagon, CTO


In this session, Blue Hexagon and AWS present AI-powered, cloud-native security for near real-time threat detection and response, deep visibility into cloud configuration and workloads, and compliance with industry standards. Delivered agentless and managed as code, this technology greatly reduces the burden of deploying and managing an effective security posture against adversaries, even as DevOps teams build and deploy business workloads at an agile pace.

- PST
OPEN TALK: AI — Used Right — will Accelerate Cloud Development by 100X
Join on Hopin
Gonçalo Gaiolas
OutSystems, VP of Product

AI is at the peak of its hype cycle. Too often, ‘AI-capable’ refers to marketing claims rather than practical value. For this reason, developers tend to be skeptical about AI-driven development. Slapdash application of AI ends up diminishing developers’ creativity and effectiveness.

When implemented in inventive, unique ways, AI dramatically improves the productivity of developers and opens up new opportunities for creativity – especially when applied to cloud app development. Furthermore, beyond the initial development process, AI has the potential to completely transform the entire application lifecycle. Pairing AI with visual, model-driven development enables guidance to be both more powerful and less obtrusive and can compress CI/CD pipelines into days or even hours, instead of weeks.

Come join us as we discuss the three most fundamental design decisions involved in integrating AI into an application platform, share our experience analyzing models based on tens of millions of application graphs and flows, and explore the implications for improving your cloud development productivity by 100x.