DeveloperWeek PRO Stage A
Wednesday, February 17, 2021
We have powerful new instruments to leverage in current web apps and provide a richer experience to our users. With the help of modern web APIs, it is possible to design completely new functionality and explore unique technological combinations, changing the way we develop and interact with web apps. We can even use hardware devices directly from the browser! Let's explore some of the most exciting APIs and see how we can combine them to unlock new scenarios and give superpowers to our web apps.
Have you ever tried to hammer a nail with a pair of pliers? While you may succeed eventually, the process is inefficient and frustrating because you’re using the wrong tool. The same holds true for developers who try to work with application performance management (APM) solutions to monitor mobile and web applications. Because these solutions are designed for DevOps and infrastructure teams to monitor backend systems and performance, they don’t provide the insights developers need into release stability, errors, and how these are impacting the customer experience.
Then there are application stability management (ASM) solutions, which are built specifically for engineering organizations. ASM provides actionable insights into how stable the application is, where bugs exist, and how to improve the end user experience. James Smith, CEO of Bugsnag, will explain the differences between the two solutions and outline the benefits organizations can achieve when APM and ASM are provided to the right teams.
How nice would it be to be able to remember everyone’s name? What if you could just walk into a room and know everyone’s Twitter handle? Kubernetes is a great tool that is increasingly used for deploying applications, but it can also be used in the context of machine learning. In this talk, the speaker will demonstrate how to use Node.js, a touch of machine learning, and a sprinkle of Kubernetes to recognize people in a crowd.
With a demo inspired by the Black Mirror series, attendees will learn how to use openly available tools to do face recognition with Node.js and how to create and deploy microservices in a Kubernetes cluster.
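The talk itself uses Node.js, but the core matching step is language-agnostic: a face is encoded as an embedding vector, and recognition is just a nearest-neighbor search against known embeddings. A minimal sketch in Python (the names, two-dimensional vectors, and 0.6 threshold are illustrative stand-ins; real face encodings are typically 128-dimensional):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, known, threshold=0.6):
    """Return the name of the closest known face, or None if nothing
    is within the distance threshold."""
    best_name, best_dist = None, threshold
    for name, ref in known.items():
        d = euclidean(embedding, ref)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# Tiny made-up "database" of reference embeddings.
known = {"ada": [0.1, 0.9], "linus": [0.8, 0.2]}
print(identify([0.12, 0.88], known))  # closest to "ada"
print(identify([2.0, 2.0], known))    # no match within threshold: None
```

In a production setup this lookup would live behind a microservice endpoint, with the embedding step handled by a pretrained face-encoding model.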
While using mobile applications, users intuitively expect fully featured gesture controls such as zooming and panning full screened images. As developers, we have many choices on how to enable these features in our mobile applications. This talk presents a functional programming approach in Typescript for handling gesture events in a mobile application developed with NativeScript. Join a Web Systems Engineer with a mathematics background to learn how to leverage group theory, a field of mathematics, to deliver the features your users want.
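The group-theoretic idea behind the talk can be sketched independently of NativeScript: zoom-and-pan transforms form a group under composition, with an identity and an inverse for every gesture. A minimal illustration in Python (the class and field names are invented for the example; the talk's own code is TypeScript):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transform:
    """A zoom-and-pan transform: p -> scale * p + (tx, ty)."""
    scale: float = 1.0
    tx: float = 0.0
    ty: float = 0.0

    def compose(self, other):
        """Group operation: apply `other` first, then `self`."""
        return Transform(self.scale * other.scale,
                         self.scale * other.tx + self.tx,
                         self.scale * other.ty + self.ty)

    def inverse(self):
        """Group inverse: the transform that undoes this gesture."""
        return Transform(1 / self.scale,
                         -self.tx / self.scale,
                         -self.ty / self.scale)

    def apply(self, x, y):
        return (self.scale * x + self.tx, self.scale * y + self.ty)

zoom = Transform(scale=2.0)
pan = Transform(tx=10.0, ty=5.0)
gesture = pan.compose(zoom)                # zoom, then pan
print(gesture.apply(1.0, 1.0))             # (12.0, 7.0)
print(gesture.compose(gesture.inverse()))  # composes back to the identity
```

Because gestures compose associatively and invert cleanly, incremental touch events can be folded into one running transform, which is what makes the functional approach attractive.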
Graph neural network (GNN) learning on very large graphs has gained great popularity recently, as critical business insights are hidden in huge knowledge graphs with billions of edges, such as social networks and sales transactions. Graph node embedding (e.g., Node2Vec) and inductive graph representation learning (e.g., GraphSAGE) have been widely used for fraud detection and cross-sell recommendation.
The technical challenges mainly come from scalability and cost effectiveness. We have developed a highly scalable and reliable Python library based on Spark and PyTorch for graph neural networks under the Fugue project (https://github.com/fugue-project). Benchmark tests have shown that it can handle graphs with billions of edges and hundreds of millions of nodes within a few hours. The library can easily support Spark on Kubernetes with the help of Fugue, and hence delivers a highly cost-effective solution in a flexible and uniform framework.
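To give a feel for what Node2Vec-style embedding involves: the first stage samples truncated random walks over the graph, which then serve as the "sentences" a skip-gram model trains on. A pure-Python sketch of that sampling stage only (the library above distributes this over Spark; the toy graph here is made up):

```python
import random

def random_walks(adj, walk_length=5, walks_per_node=2, seed=42):
    """Sample truncated random walks from an adjacency-list graph.

    Returns one list of node ids per walk; this corpus is what a
    skip-gram model consumes to learn node embeddings."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break  # dead end: stop the walk early
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"], "d": []}
for w in random_walks(graph)[:3]:
    print(w)
```

At billions of edges, the whole point of the Spark-based library is to shard exactly this kind of per-node work across a cluster rather than a single process.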
Language Understanding Intelligent Service (LUIS) is part of Azure's Cognitive Services. It's built on the interactive machine learning and language understanding research from Microsoft Research. LUIS provides the capability to understand a person’s natural language and respond with actions specified by application code. In this session we'll examine how this powerful feature can be integrated into applications, offering a more natural interaction with a device.
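The shape of what such a service returns is an utterance mapped to a top intent plus extracted entities. The following is a toy keyword-based stand-in, not the LUIS API: the real service learns intents from labeled examples, and the intent names, keywords, and device list here are invented purely to illustrate the request/response shape application code consumes:

```python
# Illustrative stand-in for a language-understanding prediction.
INTENT_KEYWORDS = {
    "TurnOn": ["turn on", "switch on"],
    "TurnOff": ["turn off", "switch off"],
}
DEVICES = ["lights", "heater", "tv"]

def predict(utterance):
    """Map an utterance to {topIntent, entities}, like an NLU response."""
    text = utterance.lower()
    intent = next((name for name, keys in INTENT_KEYWORDS.items()
                   if any(k in text for k in keys)), "None")
    entities = [d for d in DEVICES if d in text]
    return {"topIntent": intent, "entities": entities}

print(predict("Please turn on the lights"))
# {'topIntent': 'TurnOn', 'entities': ['lights']}
```

Application code then dispatches on `topIntent` and fills in parameters from `entities`, which is the integration pattern the session walks through.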
Thursday, February 18, 2021
In a system that promotes responsibility, designers and architects often face two important questions: how to design mathematical and statistical models with concrete goals for fairness and inclusion, and how to architect the system that facilitates them. Devangana Khokhar and I will highlight the latter, helping you learn the paradigm of evolutionary architectures to build and operationalise responsible architectures. We will outline the aspects you need to keep in mind while architecting responsible-first systems. The systems engineering realm often talks in terms of “-ilities” important for the designed system. These are also referred to as “nonfunctional requirements,” which should be tracked and accounted for at each step in the process. They serve as guard rails for making sensible decisions. When it comes to architecting fair, accountable, and transparent (FAT) AI systems,
you must consider a multitude of inner dimensions when determining whether a system is responsible or not. Some of these are:
- Auditability of the data transformation pipeline: how the data is handled and treated at each step, whether there’s clear visibility into the expected state of the data, and whether logs are present in each of the modules in the data pipeline.
- Monitoring of the crucial metrics: whether the logs are available centrally, whether there’s clear visibility into the correctness of the pipeline modules, whether the data can be interpreted in human-readable form, whether you can measure the quality of the data at each step, whether there’s an analytical dashboard capable of ingesting the audit logs, whether you can drill down in the dashboard to do root cause analysis and identify anomalies, and whether you can create reports about the precision and accuracy of the insights and the intermediate data states.
- Feedback loops: whether you can inject biases and anomalies into the data to test the resilience of the model and the system as a whole, how to engage users in uncovering instances of “responsible” system failure and handle them effectively and efficiently, and how to ensure that the feedback loops themselves aren’t biased.
The most important question to answer is how to operationalise this. Evolutionary architecture talks about a fitness function. If you want a system to hold these tenets close and always ensure their fulfilment, automate them. Codifying these tenets into tests that give feedback is the right way to think about this. Every time a change is made to the model, the system should provide feedback on its responsibility, and the tests should start to fail as the system digresses from the set threshold. There’s often a notion that algorithms are a black box and hard to explain. However, it’s important to acknowledge that even white-box algorithms need explanation and need to be accountable.
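A fitness function of this kind can be an ordinary test in the build pipeline. As a minimal sketch (the metric, the 0.25 threshold, and the sample data are all invented for illustration; a real system would compute the metric on a held-out evaluation set after every model change):

```python
def demographic_parity_gap(predictions, groups):
    """Absolute gap in positive-prediction rate between groups
    (0.0 means all groups receive positive predictions equally often)."""
    rates = {}
    for pred, group in zip(predictions, groups):
        pos, total = rates.get(group, (0, 0))
        rates[group] = (pos + pred, total + 1)
    ratios = [pos / total for pos, total in rates.values()]
    return max(ratios) - min(ratios)

def test_fairness_fitness():
    # Runs on every model change; fails when the model digresses
    # from the agreed-upon threshold.
    preds  = [1, 0, 1, 1, 0, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    assert demographic_parity_gap(preds, groups) <= 0.25

test_fairness_fitness()
```

Wiring such a test into CI is exactly the "codify the tenet" move: a regression in fairness breaks the build the same way a regression in accuracy would.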
Serving machine learning models is a scalability challenge at many companies. Most applications require a small number of machine learning models (often fewer than 100) to serve predictions. Cloud platforms that support model serving, on the other hand, can host hundreds of thousands of models, but they provision separate hardware for different customers. Salesforce has a unique challenge that very few companies deal with: it needs to run hundreds of thousands of models sharing the underlying infrastructure across multiple tenants for cost effectiveness.
In this talk, we will explain how Salesforce hosts hundreds of thousands of models on a multi-tenant infrastructure to support low-latency predictions.
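One common ingredient of multi-tenant serving at this scale (a hypothetical sketch, not Salesforce's actual design) is keeping only actively used models in memory on each node and evicting the coldest ones, so many tenants can share the same hardware. A minimal least-recently-used model cache in Python; the model ids and loader are made up:

```python
from collections import OrderedDict

class ModelCache:
    """LRU cache of loaded models: hot models stay resident,
    cold models are evicted and reloaded from storage on demand."""
    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader          # loads a model from storage by id
        self.cache = OrderedDict()    # insertion order == recency order

    def get(self, model_id):
        if model_id in self.cache:
            self.cache.move_to_end(model_id)    # mark as recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict the coldest model
            self.cache[model_id] = self.loader(model_id)
        return self.cache[model_id]

cache = ModelCache(capacity=2, loader=lambda mid: f"model-{mid}")
cache.get("tenantA/churn")
cache.get("tenantB/fraud")
cache.get("tenantA/churn")   # refreshes tenantA/churn
cache.get("tenantC/upsell")  # evicts tenantB/fraud, the least recently used
print(list(cache.cache))     # ['tenantA/churn', 'tenantC/upsell']
```

The trade-off is a cold-start latency penalty on eviction, which is why capacity planning and per-tenant traffic shaping matter in such systems.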
Conor Jensen, Director of AI Consulting at Dataiku, works at the intersection of abstract data models and real people. Having worked with clients of all sizes and across a multitude of sectors, Conor has developed a deep understanding of the challenges companies face when incorporating responsible AI into their business model. Thanks to his work with Global 2000 companies (including Morgan Stanley, Comcast, and GE Aviation), all of whom are working to bring data out of silos, Conor is well equipped to speak to explainable AI and the importance of transparency in AI workflows and models, and how that translates to responsible applications of AI in the enterprise.
One key aspect of responsible AI is that it supports the democratization of data. Particular organizational structures, such as those that silo data into an isolated department, have unintended negative consequences, while the most successful enterprise AI applications are human-centered systems that support collaboration, agility, and access. The best way to maximize the value extracted from data is to encourage a culture of collaboration around data, and to allow people throughout a company to clean, model, and analyze data regardless of whether they have the word “data” in their job title. Conor's session will explore how companies can build their own guidelines around ethical, explainable AI, and offer attendees best practices on how to responsibly apply AI in the enterprise to organize and use data. Using these best practices, companies across industries can optimize their data management strategies to drive business value.
The Local Home SDK enhances Google smart home integrations by enabling command execution to happen directly on Google Home speakers and Nest displays. This reduces latency and enhances reliability of the commands users send through the Google Assistant.
The synergy of Big Data and Artificial Intelligence techniques like Machine Learning holds amazing promise for businesses and organizations across the globe. This huge stockpile of data, when properly harnessed, can yield valuable insights and business analytics for the sector or industry to which the data set belongs.
Victor Shilo, Director of Engineering, and Wolf Ruzicka, IT Innovator and Chairman of EastBanc Technologies, take the stage to walk attendees through a remarkable case study that lays out a strategy for building debt collection prediction models for a large Swedish financial services company’s customer base. Through this project, they were able to identify ways to help the company's customers achieve financial health by pulling data from across eleven European countries.
Join this session to learn how they used the Minimal Viable Process (MVP) methodology to incrementally scale analytics, what data prediction models they are testing, how they plan to tackle creating a machine learning umbrella model that collects and analyzes data despite different languages and currencies, and some of the challenges they expect along the way.
AI and ML are frequently utilized to optimize processing, adding efficiency and improving the performance of applications. Approaching AI and ML from a different perspective can dramatically change the way image processing and display deliver visual data to the eyes of users, particularly volumetric data.
Holograms have been around for a long time, but the ability to efficiently produce, transmit and display interactive holographic images has historically placed insurmountable demands on processing engines, preventing the potential to make practical consumer-level applications a reality.
Rather than trying to produce and ship the complete volumetric data package, AI and ML can be used to train cores to understand how the human brain needs to receive images for volume perception and preselect the necessary data needed by a user’s retinas, dramatically reducing the necessary transmission bandwidth and display processing demands. Such threaded volumetric processing capabilities can be utilized by developers to add differentiating holographic capabilities and features to applications.
Putting these capabilities into a developer’s toolkit can facilitate the incorporation of volumetric imagery that can be displayed through advanced depth field solutions which entice users, promote loyalty and add exponential value, thereby creating new avenues for monetization.
The speaker will highlight design tools available in standard development platforms which facilitate the incorporation of 3-D and holographic content into applications. In addition, the speaker will demonstrate ways users can be empowered to create and manipulate volumetric content on mobile devices, further expanding the application scope.
An engineering team that works on AI products and technologies faces unique challenges. AI products carry more ambiguity, more unknowns, and more uncertainty in their definitions. Building a high-performing engineering team in the era of ML and AI requires leaders to think differently, and to empower key team members to factor in failures and allow for concurrent experimentation with ideas and technologies. Leaders need to build “resilience” not just at the infrastructure layer, but also at the management layer, allowing the team to take measured bets. In this session, doc.ai CTO Akshay Sharma will dive into what that management layer needs to look like: simple reporting structures, promoting fungibility and collaboration across teams, setting goals, and even encouraging failure, to ensure engineering teams keep up with evolving product cycles and technologies while also fostering a scalable culture.
How can we use AI to enhance workers’ performance, not replace them? The employee experience of 2030 will look nothing like that of 2010, and it's up to us to decide if that's a good thing or a bad thing. If we allow automation and AI to be used solely to expand business profits, it may lead to hiring fewer workers, which ultimately spells trouble for the economy. Or, we could use automation and AI to create technology that takes away the most tedious parts of our jobs, leaving us with more time to do thoughtful and meaningful work.
Operations in particular has historically been neglected and under-resourced as a business function, and ops teams themselves lack resources specific to the mission-critical work they do. That is changing, and changing fast. These ops teams should be the ones deciding how AI gets implemented throughout the business. Otherwise, the result is operational debt, where teams rely on a messy web of tools and systems that don’t interact.
In this session, Sagi will make the case for people-first automation; explain how to reshape the work of employees so they can deliver more value to their companies; show how to empower operations professionals with adaptive platforms to complete business processes; and help companies decipher what should and shouldn’t be automated.
Have you ever wanted to make your apps “smarter”? This session will cover what every ML/AI developer should know about the Open Neural Network Exchange (ONNX): why it’s important and how it can reduce friction when incorporating machine learning models into your apps. We will show how to train models using the framework of your choice, save or convert models into ONNX, and deploy them to cloud and edge using a high-performance runtime.
Friday, February 19, 2021
In this talk, we’ll cover the latest developments in strong authentication for mobile web developers, with a focus on Apple’s latest commitment to supporting the Web Authentication API (WebAuthn) with Touch ID/Face ID and hardware security keys over USB-C, NFC, and Lightning. iOS, iPadOS, and mobile web developers can now create native or web apps with strong passwordless authentication using cross-platform authenticators (security keys) or platform-specific authenticators such as Touch ID or Face ID when available. You’ll walk away knowing what is available, from tools to hardware; learn how the Web Authentication flow works (with demos); and hear real use cases from developers and their end users’ experiences. Let’s make mobile authentication strong and simple!
Do you remember Microsoft FrontPage? The concept of static web pages is back! With some of the tools and concepts developed in recent years, we can take advantage of the features that static web pages offer and build web apps with better performance, more security, easier scaling, and lower costs.
“Jamstack” is a term that is becoming more popular lately. There are more and more tools, services and frameworks that help us to develop web applications with this approach. We will talk about this architecture and what advantages it brings us compared to “Server Side Rendered” applications. We will also talk about different types of tools that we have at our disposal to create web applications based mainly on pre-rendered content.
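The core Jamstack idea, stripped to its essentials, is that content is rendered to static HTML at build time, so the server only ships files. A minimal build-step sketch in Python (real Jamstack tooling is typically JavaScript-based; the template, posts, and `site/` output directory here are made up for illustration):

```python
from pathlib import Path
from string import Template

# Page template and "content source" for the build step.
PAGE = Template("<html><body><h1>$title</h1><p>$body</p></body></html>")
posts = [
    {"slug": "hello", "title": "Hello", "body": "First post."},
    {"slug": "jamstack", "title": "Jamstack", "body": "Pre-rendered!"},
]

# Build: render every post to a static HTML file, once, ahead of time.
out = Path("site")
out.mkdir(exist_ok=True)
for post in posts:
    (out / f"{post['slug']}.html").write_text(
        PAGE.substitute(title=post["title"], body=post["body"]))

print(sorted(p.name for p in out.iterdir()))
```

Everything a server-side-rendered app computes per request happens here once at deploy time, which is where the performance, security, and cost advantages come from.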
At Box, the majority of our application data resides in a horizontally sharded MySQL infrastructure, made up of hundreds of shards and thousands of servers. We've built a distributed relational data service whose goal is to provide developers with a uniform, language-agnostic, and performant way to interact with our application data at the scale of millions of requests per second. In this session, you will learn about some of the strategies that we employ in our distributed relational data service to protect our MySQL infrastructure, including rate limiting in a low-latency, high-throughput environment and QoS enforcement to protect our primary databases from load and our replicas from lag.
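Rate limiting in such a tier is often built on a token bucket: each client (or shard) gets tokens refilled at a fixed rate up to a burst capacity, and a request is admitted only if a token is available. A minimal sketch (this is the generic algorithm, not Box's implementation; the fake clock makes the demo deterministic):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # start full
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Admit one request if a token is available."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Deterministic demo with a fake clock instead of wall time.
t = [0.0]
bucket = TokenBucket(rate=1, capacity=2, clock=lambda: t[0])
print([bucket.allow() for _ in range(3)])  # [True, True, False]
t[0] = 1.0                                 # one second passes: one token refills
print(bucket.allow())                      # True
```

In a low-latency service the check is O(1) per request with no background refill thread, which is what makes this scheme practical at millions of requests per second.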
PRO SESSION: Do Not Download Your PDF: A Story of Digital Document Usability and Security in Your Application
The use of digital documents within an app touches virtually every industry and use case. Have you ever looked into incorporating documents into your application? There’s a lot to consider. And what about digital security? When it comes to the document lifecycle within an app, there are several things to think about:
- The in-app experience when working with multiple documents
- Integrating a viewer inside of the app beyond any built-in viewers
- Providing consistent behaviour across multiple browsers
- Providing customized UI for annotating PDFs, images, MS Office documents and videos
- Improving your search across multiple documents beyond just title and metadata
What if I said you could take it step by step? What if I said you could stop halfway and still gain a lot?