Tuesday, October 27, 2020
Machine learning and artificial intelligence continue to become ever more central to every aspect of our lives and the pace of adoption is only continuing to accelerate. AI should be a force for good and has already delivered innumerable benefits. However, as AI starts to decide everything from whether we get a home loan to whether our resume is considered by a company, it is critical to ensure that these decisions are fair, equitable and explainable. Unfortunately, it is becoming increasingly clear that, much like humans, AI can be biased, and there have been many very public incidents where projects had to be abandoned due to catastrophic biases.
In this presentation, we start by considering the ramifications of bias, discuss how fairness is defined, and consider regulated domains and protected classes. We continue by highlighting how bias can be introduced into AI solutions, with significant focus on NLP, where models trained on large public data corpora can assume many of the explicit and implicit biases that are unfortunately present in humankind’s communications. We subsequently discuss how this bias can be measured, tracked and even minimized. We present best practices for ensuring that bias doesn’t creep into models over time, discuss open-source toolkits and highlight how explainability can be used to perform real-time checks on predictions.
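As a minimal illustration of how such bias can be measured, here is a sketch of one common fairness metric, demographic parity. The function name, groups, and decisions are illustrative and not taken from the talk:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: array of 0/1 model decisions
    group:  array of group labels ("A" or "B")
    A value near 0 suggests the model grants positive outcomes
    at similar rates to both groups.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

# Hypothetical loan decisions: group A approved 3/4, group B approved 1/4
decisions = [1, 1, 1, 0, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A real-time check like this can run alongside explainability tooling on every batch of predictions.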
Today, data is being generated from devices and containers living at the edge of networks, clouds and data centers. We need to run business logic, analytics and deep learning at the edge before we start our real-time streaming flows. Fortunately, using the all-Apache MM FLaNK stack we can do this with ease! Streaming AI-powered analytics from the edge to the data center is now a simple use case. With MiNiFi we can ingest the data, perform data checks and cleansing, run machine learning and deep learning models, and route our data in real time to Apache NiFi and/or Apache Kafka for further transformations and processing. Apache Flink will provide our advanced streaming capabilities, fed in real time via Apache Kafka topics. Apache MXNet models will run both at the edge and in our data centers via Apache NiFi and MiNiFi. Our final data will be stored in Apache Kudu via Apache NiFi for final SQL analytics.
Predicting the future has always been a fascinating topic. Now we have AI tools and techniques that can help us do it better than ever before. In this session, we'll cover the fundamentals of solving time-series problems with AI, and show how it can be done with popular data science tools such as Pandas, TensorFlow, and the Google Cloud AI Platform.
We'll start with how to visualize, transform, and split time-series data for use in an ML model. We'll also discuss both statistical and machine learning techniques for predictive analytics. Finally, we'll show how to train a demand forecasting model in the cloud and make predictions with it. Attendees can access Jupyter notebooks after the session to review the material in more detail.
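As a hedged sketch of the transform-and-split step described above, here is one common way to window a series for supervised learning. The synthetic series, column names, and window size are my own, not the session's materials:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly demand series (the session uses its own dataset)
idx = pd.date_range("2020-01-01", periods=240, freq="H")
demand = pd.Series(100 + 10 * np.sin(np.arange(240) / 24 * 2 * np.pi), index=idx)

def make_windows(series, window=24, horizon=1):
    """Turn a series into (features, target) pairs: the previous
    `window` values predict the value `horizon` steps ahead."""
    values = series.to_numpy()
    X, y = [], []
    for i in range(len(values) - window - horizon + 1):
        X.append(values[i : i + window])
        y.append(values[i + window + horizon - 1])
    return np.array(X), np.array(y)

X, y = make_windows(demand)

# Split chronologically -- never shuffle time-series data,
# or the model will "see the future" during training.
split = int(len(X) * 0.8)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
print(X_train.shape, X_test.shape)
```

Arrays shaped this way feed directly into a TensorFlow dense or recurrent model.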
Wednesday, October 28, 2020
In our software-driven world, business success depends on engineering organizations. But manual data gathering, meeting cycles and ad hoc reports weigh down engineering teams’ ability to build collaborative solutions, and disjointed and overlapping data silos prevent engineering teams from productive collaboration.
This session will outline the problems engineering teams face in creating streamlined workflows, and the AI and machine learning solutions that enable engineering organizations to produce accurate insights on team performance and potential. Jeff will address how machine learning and applied AI leverage data science to transform engineering operations from the bottom up, allowing engineers to focus on what they do best: building software.
Nowadays “Artificial Intelligence” is everywhere! And rightly so: it enables us to do really cool things, things we couldn’t even imagine doing just a decade ago. In fact, it sometimes just feels like magic. The ‘magic’ behind it is often powered by “Machine Learning”. But even “AI” has its limitations.
I’ll show examples where “AI” and ML have failed (sometimes with horrible consequences) and will explain why failures are unavoidable in ML but also mention what we can do to reduce them in the future.
Furthermore, I’ll showcase how current AI implementations discriminate against minorities and how that in some cases even leads to a higher risk of death for those groups.
I’ll cover the bias that humans introduce and I’ll explain how poor choice of data makes our world even more unjust than it already is.
The takeaway for the audience: AI can fail and sometimes it has horrible consequences. Why is AI so hard to “do right”? How can we make AI better?
Strategically using AI in business operations comes with an inherent ethical responsibility, which means a multifaceted approach is needed to address it. Ashutosh will explain how using career data from a large enough data set, equal-parity algorithms, and audit and monitoring processes creates a transparent system that is independent of bias due to race, gender, ethnicity, age, and other characteristics. Breaking down each of these steps, Ashu will share how combining all of these levers allows candidates to move through the hiring process efficiently and accurately while significantly reducing the potential for bias. Lastly, Ashu will share how an AI-based hiring process can help enterprises hire for potential, increase diversity, and even contribute to flattening the unemployment curve at scale, today and in the future.
Edge computing and AI can suffer from trust and interoperability issues. Edge computing often creates environments that are more difficult to control, so trust depends on the rightfulness of the data used to feed or leverage an AI model. Trust is also the result of being able to validate that an AI model has not been tampered with while used at the edge, and the same holds for the result received after computation. Hybrid and heterogeneous infrastructures are common in edge environments, so it is key to have platforms managing interoperability in order to allow existing infrastructures to interact with one another while maintaining security and privacy compliance. Blockchain technologies will be presented, through the iExec stack, as solutions to address the issues of trust and interoperability in AI and edge computing. Use cases integrating Nvidia GPUs for smart sensors will be showcased in order to illustrate concrete implementations.
Building a tech stack in today’s world means constantly making decisions about whether to automate or abstract challenges, but the goals are always the same – simplicity, security and speed. As organizations embrace myriad technologies, such as Kubernetes, to abstract away DevOps challenges, they also increase the need for automation to help them manage increasingly complex processes across platforms. In this session, Kong’s VP of Product Reza Shafii will explore how organizations can use automation to reduce friction in adopting new platforms, eliminate repetitive, error-prone tasks and increase the overall effectiveness of their development teams.
An automated web application testing tool generates a huge number of screenshots, and only rarely do they contain errors such as screen collapse.
Manual verification used to be the only way to find these errors, until I developed automatic web screen error detection by applying machine learning (ML), specifically “graph” technology. The unique aspect is that I applied ML to detect errors in large and small areas separately, then merged the results to classify error screen images; this technique is called ensemble learning. Notably, graph technology is best at capturing an image's structural features in terms of its semantics.
I applied the following machine learning technologies, Random Forest and Semantic Graph Convolutional Networks, to detect functional and non-functional errors from screen images.
The applied frameworks are TensorFlow-Keras, scikit-learn and NetworkX.
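A simplified sketch of the ensemble idea described above: two Random Forests trained separately on large-area and small-area features, with their probabilities merged. The synthetic feature arrays stand in for real screenshot descriptors, and the graph-convolutional part of the speaker's pipeline is omitted:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for features extracted from screenshots:
# coarse features describe large-area layout, fine features small regions.
n = 400
coarse = rng.normal(size=(n, 16))
fine = rng.normal(size=(n, 64))
labels = (coarse[:, 0] + fine[:, 0] > 0).astype(int)  # 1 = collapsed screen

clf_coarse = RandomForestClassifier(random_state=0).fit(coarse[:300], labels[:300])
clf_fine = RandomForestClassifier(random_state=0).fit(fine[:300], labels[:300])

# Ensemble step: average the two classifiers' probabilities,
# then threshold to classify the held-out screenshots.
proba = (clf_coarse.predict_proba(coarse[300:])[:, 1]
         + clf_fine.predict_proba(fine[300:])[:, 1]) / 2
pred = (proba > 0.5).astype(int)
print("accuracy:", (pred == labels[300:]).mean())
```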
Conversational AI systems suffer from two forms of decay: concept drift (when interpretation of data changes) and data drift (when the underlying distributions of the data change). These forms of decay cause static AI models to degrade, often within days of creation.
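One hedged way to sketch a data-drift check like the one implied above is a two-sample Kolmogorov-Smirnov test between training-time and live feature distributions. The feature, threshold, and data here are illustrative, not Directly's actual pipeline:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical message-length feature: training-time vs. live traffic.
train_lengths = rng.normal(loc=40, scale=10, size=1000)
live_lengths = rng.normal(loc=55, scale=10, size=1000)  # distribution shifted

# KS test compares the two empirical distributions; a tiny p-value
# means the live data no longer looks like the training data.
stat, p_value = ks_2samp(train_lengths, live_lengths)
drifted = p_value < 0.01  # the alert threshold is a judgment call
print(f"KS statistic={stat:.3f}, drift detected: {drifted}")
```

A check like this, run per feature on a schedule, is one trigger for retraining a model before decay becomes visible to users.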
Using a combination of state-of-the-art NLP transfer learning tasks, a modern data pipeline (using Databricks), and a network of experts completing distributed gamified data labeling tasks, Directly is able to provide a more effective and powerful end-to-end machine learning and conversation automation solution than systems that train static models and then expect performance to stay steady over time.
This talk will dive into the specific mechanics required to create and maintain a living, breathing AI ecosystem, including lessons learned by creating a global network of experts and the pitfalls of training/hosting/versioning high-performance dynamic AI.
Both technical and non-technical attendees are highly encouraged to participate in this talk. We will have deep dives into AI code/theory that will always be backed by an underlying real business use-case and performance metrics.
Machine and deep learning have become essential for many companies, for both internal and external use. One of the main issues with deployment is finding the right way to train and operationalize the model within the company. A serverless approach to deep learning provides a simple, scalable, affordable and reliable architecture for this. My presentation will show how to do so within AWS infrastructure.
Serverless architecture changes the rules of the game: instead of thinking about cluster management, scalability, and queue processing, you can focus entirely on training the model. The downside of this approach is that you have to keep certain limitations in mind and organize the training and deployment of your model in the right fashion.
I will show how to deploy training and inference pipelines for TensorFlow models on serverless AWS infrastructure.
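As a minimal sketch of the serverless inference pattern, here is the shape of an AWS Lambda handler. The predictor is a stub so the sketch is self-contained; in a real deployment it would be a TensorFlow model loaded from S3 or the deployment package, and the event shape depends on your API Gateway setup:

```python
import json

# Cache the model at module level: the first ("cold") invocation pays the
# load cost, and subsequent "warm" invocations of the same container reuse it.
_MODEL = None

def get_model():
    global _MODEL
    if _MODEL is None:
        # Stand-in for e.g. tf.keras.models.load_model(...)
        _MODEL = lambda features: sum(features) / len(features)
    return _MODEL

def handler(event, context):
    """Lambda entry point: expects {"features": [...]} in the request body."""
    body = json.loads(event["body"])
    prediction = get_model()(body["features"])
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}

# Local smoke test of the handler
resp = handler({"body": json.dumps({"features": [1.0, 2.0, 3.0]})}, None)
print(resp)  # {'statusCode': 200, 'body': '{"prediction": 2.0}'}
```

Keeping model loading out of the handler body is the main trick for staying inside Lambda's latency budget.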
My talk will be beneficial for machine learning engineers and data scientists.
Understandability is the most important concept in software that most companies today aren’t tracking. Systems should be built and presented in ways that make it easy for engineers to comprehend them; the more understandable a system is, the easier it will be for engineers to change it in a predictable and safe manner. But with the rise of complex systems, it’s become all too common that we don’t understand our own code once we deploy it.
To deal with system complexity, developers are spending too much time firefighting and fixing bugs. In recent surveys, most devs say they spend at least a day per week troubleshooting issues with their code (sometimes, it can be a couple of days up to a full week trying to fix an elusive bug). This is hurting developer productivity and business results. It also creates a tough choice between flying slow or flying blind; as developers, we are too often making decisions without data in order to maintain velocity.
In this talk, I’ll highlight the importance of Understandability and how it has a huge impact on our day-to-day work. I’ll also discuss how it relates to popular concepts such as complexity, observability, and readability. Finally, I’ll share some tools and techniques to manage and optimize Understandability.
Applying AI to healthcare presents a great opportunity: better predictions on who is more likely to develop diabetes, back pain, and other chronic diseases, and better predictions on which patients will require hospital re-admission. The benefits lie not only in saving money but also in improving patient health. In this talk, we will discuss our technology solution and the challenges we faced building AI/ML solutions in this domain:
* We built a data ingestion and extraction process using Apache Beam and Google Cloud DataFlow. We will describe our obstacles around joining and normalizing disparate patient datasets and our heuristics to solve this problem. We will also talk about performance and scalability obstacles and our solutions.
* We built model training and serving pipelines using Kubeflow (TensorFlow on Kubernetes and Istio). We will talk about how we built a HIPAA/SOC2 compliant infrastructure with these technologies and our experience using Katib for model tuning.
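The actual pipeline uses Apache Beam on Cloud Dataflow; as a hedged stand-in, here is the kind of normalize-then-join heuristic described above, sketched with pandas on hypothetical patient records (all column names and values are invented):

```python
import pandas as pd

# Two hypothetical patient datasets with inconsistent identifiers.
claims = pd.DataFrame({
    "patient_id": ["A-001", "a-002 ", "A-003"],
    "dob": ["1980-01-05", "1975-06-30", "1990-12-12"],
    "claim_cost": [1200.0, 300.0, 850.0],
})
labs = pd.DataFrame({
    "patient_id": ["a-001", "A-002", "A-004"],
    "dob": ["1980-01-05", "1975-06-30", "1988-03-03"],
    "hba1c": [6.1, 7.4, 5.6],
})

def normalize(df):
    """Canonicalize identifiers before joining across sources."""
    out = df.copy()
    out["patient_id"] = out["patient_id"].str.strip().str.upper()
    out["dob"] = pd.to_datetime(out["dob"])
    return out

# Heuristic: join on normalized id AND date of birth
# to reduce false matches across sources.
merged = normalize(claims).merge(normalize(labs), on=["patient_id", "dob"], how="inner")
print(merged[["patient_id", "claim_cost", "hba1c"]])
```

In Beam the same logic becomes per-element transforms followed by a CoGroupByKey on the normalized key.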
Join us as we share a new offering by Microsoft that utilizes AI/ML concepts to bring a new, more efficient way to test desktop applications with each new Windows update.
We take algorithms for granted and assume that they are unbiased and neutral. An algorithm by definition, according to Merriam-Webster, is a “set of rules a machine [... specifically a computer] follows to achieve a particular goal.” As these rules are designed by humans, they can contain flaws that can lead to biases. A few outliers are expected, but when it leads to bias against certain groups, it can be problematic. Let's take an example - a few years back, Amazon attempted to create a hiring algorithm to efficiently select candidates. However, due to the nature of the historical data used to train the algorithm, it was biased against female applicants. The goal of this talk is to educate on algorithmic bias, present case studies to highlight the adverse effects of algorithmic bias, and address prevention strategies.
Amplify.ai, the world leader in Conversational AI, is the first and only enterprise-class AI-driven omni-channel messaging platform, with over 500M users and over 9B interactions across all three pillars of digital marketing: web, social media and search.
When the COVID-19 pandemic hit, our team turned our attention to helping governments, health agencies and NGOs deliver critical information to millions of people around the world, and to engaging them in automated conversations to measure the most pressing issues. We also partnered with key players like Facebook, Google and Zendesk to deliver these services at hyperscale. A great example of this work can be seen at MyGov.in, where, in collaboration with the Government of India and the Ministry of Health, we deployed AI-powered digital assistants on their website, Facebook, Facebook Messenger, and Google Search and Maps. These systems, which have engaged more than 10 million people, provide real-time, critical information about COVID-19 and connect citizens with more than 11 thousand food bank and shelter locations across India.
Additionally, we’ve helped commercial customers automate customer care interactions, dramatically reducing the burden on human agents by as much as 60%. This has been especially helpful in enabling social distancing in call centers that typically house agents in tight quarters.
As COVID-19 disrupts the retail industry worldwide, retailers like Walmart and Amazon are investing heavily in AI and in automating retail processes.
Daisy Intelligence Founder & CEO, Gary Saarenvirta, will discuss how AI, automation and shock-proofing merchant processes are key to not only surviving, but thriving in a post-pandemic world.
Chatbots have come a long way in the last few years, from remote process automation, server orchestration, account provisioning and customer agents to managing your schedule. One key area where chatbots are slowly penetrating, and will become key components, is the enterprise. There are various challenges when it comes to building an enterprise chatbot, and in this talk the speaker will share the journey of an enterprise chatbot, along with how to build one that actually works.
Transfer learning enables leveraging knowledge acquired from related data to improve performance on a target task. It has become a new paradigm in NLP, where it has achieved new state-of-the-art results on many tasks.
In this session we'll learn about the different types of transfer learning, the architecture of these pre-trained language models, and how different transfer learning techniques can be used to solve various NLP tasks. In addition, we'll show a variety of applications that leverage transfer learning and pre-trained language models.
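A minimal sketch of the feature-based flavor of transfer learning discussed in the session: a frozen "pretrained" encoder feeding a small trainable task head. The encoder here is a random stand-in with a keyword-driven shift so the example runs anywhere; a real pipeline would use e.g. BERT sentence embeddings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained encoder mapping text to a fixed vector.
# (In practice: a real sentence encoder such as BERT, kept frozen.)
def encode(texts):
    return np.array([rng.normal(size=32) + (2.0 if "great" in t else -2.0)
                     for t in texts])

train_texts = ["great product", "great service", "terrible support", "awful delay"]
train_labels = [1, 1, 0, 0]

# Feature-based transfer learning: the encoder stays frozen and only a
# small task-specific head is trained on the (tiny) target dataset.
head = LogisticRegression().fit(encode(train_texts), train_labels)
print(head.predict(encode(["great experience", "awful product"])))
```

The alternative flavor, fine-tuning, updates the encoder's own weights as well; the session compares both.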
In this session, let's discuss some important concepts of Azure AI in customer interaction services. We'll look deeply at Azure Bot Services, which includes Azure QnA Maker and the LUIS platform, and how to implement solutions using the Azure AI services, with a deep dive into complete chatbot services integrated with the SDK and other social media networks.
Serving machine learning models is a scalability challenge at many companies. Most applications require a small number of machine learning models (often fewer than 100) to serve predictions. Cloud platforms that support model serving, on the other hand, may support hundreds of thousands of models, but they provision separate hardware for different customers. Salesforce has a unique challenge that only very few companies deal with: it needs to run hundreds of thousands of models sharing the underlying infrastructure across multiple tenants for cost effectiveness. In this talk we will explain how Salesforce hosts hundreds of thousands of models on a multi-tenant infrastructure to support low-latency predictions.
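Salesforce's internal architecture isn't detailed here, but one common ingredient of multi-tenant serving can be sketched: keep only recently used tenant models in memory behind an LRU cache and load the rest on demand. The class, capacity, and stub loader are illustrative:

```python
from collections import OrderedDict

class ModelCache:
    """Keep at most `capacity` tenant models in memory; evict the least
    recently used. Loading is stubbed -- in production it would pull
    serialized models from a shared store."""
    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader
        self._cache = OrderedDict()

    def get(self, tenant_id):
        if tenant_id in self._cache:
            self._cache.move_to_end(tenant_id)   # mark as recently used
        else:
            if len(self._cache) >= self.capacity:
                self._cache.popitem(last=False)  # evict LRU entry
            self._cache[tenant_id] = self.loader(tenant_id)
        return self._cache[tenant_id]

# Stub loader: a "model" is just a string keyed by tenant id.
cache = ModelCache(capacity=2, loader=lambda t: f"model-for-{t}")
cache.get("acme"); cache.get("globex"); cache.get("initech")  # evicts "acme"
print(list(cache._cache))  # ['globex', 'initech']
```

The real latency work is in sizing the cache so hot tenants rarely miss.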
Thursday, October 29, 2020
As a C-Level executive, you understand that a hypothesis-driven market strategy is key to the success of your business. Increasingly, however, the IT Department is taking over, using data analysis to furnish “answers” and creating an environment where hypotheses ending in questions are “defeats.” You know that data explains the past while your domain knowledge and experience informs the future. If this success/failure climate takes hold and persists at your company, your business will fail. IT is analyzing the past, but they are mistaking it for the future. For the holistic health and future of your business, you must reclaim your hypothesis-driven methodology. You must use your experience to reconnoiter analyst perspective, remind all of company credo, and recalibrate to encourage hypothesis-driven analysis.
Breakthroughs in artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) have helped customers and call agents alike get more done in less time. These technologies draw on multiple data sources to anticipate customer and company needs, handle interactions on their own where possible, and provide in-call support where needed.
The future of AI in the contact center is one where software tools make humans more efficient and allow the customers to have natural conversations with a bot via voice, webchat, social messaging app or other channels, handling requests, retrieving information and delivering answers to frequently asked questions. In short, creating the ultimate customer experience.
During this session, Tony Hung, senior software engineer at Vonage will discuss how enterprises with limited machine learning expertise can leverage communications APIs to unlock simple, secure and flexible solutions to deploy AI in their contact centers, elevating issues to experienced agents when needed to ensure personalized, emotive CX. He will draw on his experience to explain how enterprises can automate their agent-based live chats and streamline their support channels and operations, while offering a personalized human-like interaction. Most importantly, he will discuss how to find the right balance between seamless, intelligent self-service and efficient human intervention using integrated AI-driven communications - applications, APIs and the best of both.
OPEN TALK (AI): Abusing Your CI/CD: Running Abstract Machine Learning Frameworks Inside Github Actions
We all love the conventional uses of CI/CD platforms, from automating unit tests to multi-cloud service deployment. But most CI/CD tools are abstract code execution engines, meaning that we can also leverage them to do non-deployment-related tasks. In this session, we'll explore how GitHub Actions can be used to train a machine learning model, then run predictions in response to file commits, enabling an untrained end-user to predict the value of their home by simply editing a text file. As a bonus, we'll leverage Apple's CoreML framework, which normally only runs in an OSX or iOS environment, without ever requiring the developer to lay their hands on an Apple device.
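The workflow YAML is omitted here; as a hedged sketch, the Python step such an Action might run could look like the following. The file format, features, and training data are my own, and the talk itself uses Apple's CoreML rather than scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Tiny bundled training set: (square feet, bedrooms) -> price.
# A real Action would check a proper dataset into the repo.
X = np.array([[1000, 2], [1500, 3], [2000, 3], [2500, 4]])
y = np.array([200_000, 290_000, 360_000, 450_000])

model = LinearRegression().fit(X, y)

def predict_from_text(line):
    """Parse 'sqft,bedrooms' as committed by the end user in a text file."""
    sqft, beds = (float(v) for v in line.split(","))
    return float(model.predict([[sqft, beds]])[0])

# In the workflow this line would come from the file edited in the commit,
# and the Action would post the result back as a commit comment.
print(f"${predict_from_text('1800,3'):,.0f}")
```

The surrounding workflow is conventional: trigger on `push`, check out the repo, run this script, report the output.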
Personalization is a game-changing goal for an organization. It can boost revenue as well as increase customer satisfaction. In this talk, I’ll show you real-life technology use cases of AI for implementing personalization, and share tips and tricks for using AI effectively in this context. Join this session to learn the core AI models that can help in building awesome personalized experiences for your customers.
Determining the best and most suitable machine learning model for a given data science problem isn't an easy task, and it can be rather challenging at times. It is like benchmarking sports cars created by different racing teams!
This presentation will show an easily extensible framework that implements several machine learning models for supervised, unsupervised and semi-supervised learning, to execute and/or compare models. Additionally, the talk will introduce the open-source Python scikit-learn toolkit through several machine learning models, along with the open-source Python Hydra package from Facebook, and how they have been used in the framework.
The framework is extensible, generic, portable and easy to use.
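A minimal sketch of the compare-models idea with scikit-learn (in the real framework the model registry would be populated from a Hydra config; the models and dataset here are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# The registry a config file (e.g. Hydra YAML) would populate.
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm": SVC(),
}

# Score every registered model with the same cross-validation protocol,
# then rank them -- the core of any model-comparison framework.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:>14}: {score:.3f}")
```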
In this talk, we will discuss machine learning practices in software testing stages in detail, with a case study. This is an important study since, nowadays, researchers are exploring the adaptation of machine learning algorithms to testing processes to reduce manual effort and improve quality.
We start with a quick view of the machine learning types. Then, we list AI applications in testing from these perspectives: test definition, implementation, execution, maintenance and grouping, and bug handling. What’s more, we not only present existing AI applications but also what can be done in the future. Finally, we summarize the application areas with algorithms and discuss the advantages and potential risks of AI applications in software testing.
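As a hedged illustration of one application listed above, grouping similar bug reports, here is a tiny TF-IDF plus k-means sketch (the reports and cluster count are invented for the example):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "login page crashes on submit",
    "login page crashes after submit click",
    "dashboard loads very slowly",
    "dashboard loads slowly for mobile users",
]

# Vectorize report text, then cluster: reports about the same defect
# share vocabulary and end up in the same group.
vecs = TfidfVectorizer().fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vecs)
print(labels)
```

In practice the cluster count would be chosen from the data, and duplicates within a cluster routed to the same triage owner.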
NLP is a key component in many data science systems that must understand or reason about text. This hands-on tutorial uses the open-source Spark NLP library to explore advanced NLP in Python. Spark NLP provides state-of-the-art accuracy, speed, and scalability for language understanding by delivering production-grade implementations of some of the most recent research in applied deep learning. It's the most widely used NLP library in the enterprise today. You'll edit and extend a set of executable Python notebooks by implementing these common NLP tasks: named entity recognition, sentiment analysis, spell checking and correction, document classification, and multilingual and multi-domain support. The discussion of each NLP task includes the latest advances in deep learning used to tackle it, including the prebuilt use of BERT embeddings within Spark NLP, using tuned embeddings, and 'post-BERT' research results like XLNet, ALBERT, and RoBERTa. Spark NLP builds on the Apache Spark and TensorFlow ecosystems, and as such it's the only open-source NLP library that can natively scale to use any Spark cluster, as well as take advantage of the latest processors from Intel and Nvidia. You'll run the notebooks locally on your laptop, but we'll explain and show a complete case study and benchmarks on how to scale an NLP pipeline for both training and inference.
Customer service is evolving from a reactive to a proactive approach to retain customers and build loyalty. I would like to share how to build an AI-driven customer service platform, drawing on my experience at PayPal.
I will share a framework to assess your current landscape and provide a step-by-step journey to reach the highest level of transformational impact for your customers at scale using AI.