
PRO SESSION: Operationalising Responsible AI

DeveloperWeek PRO Stage A

Devangana Khokhar
ThoughtWorks, Lead Data Scientist

Devangana Khokhar is a data scientist and strategist with years of experience building intelligent systems for clients across domains and geographies, and a research background in theoretical computer science, information retrieval, and social network analysis. Her interests include data-driven intelligence, data in the humanitarian sector, and data ethics and responsibility. Devangana previously led the India chapter of DataKind. She frequently consults for and guides nonprofit organizations and social enterprises on the value of data literacy and holds workshops and boot camps on the topic. She is the author of Gephi Cookbook, a beginner's guide to network science. Devangana currently works as a Lead Data Scientist at ThoughtWorks.

Vanya Seth
Thoughtworks India, Head of Technology

Vanya Seth is an experienced lead consultant with a demonstrated history in the information technology and services industry, skilled in platforms, delivery infrastructure, and cloud-native applications. A passionate technologist with a knack for solving complex problems, she has 10+ years of experience building cloud-native applications designed for scale, and she works with clients across domains and markets, guiding them on building evolutionary architectures. As Head of Technology at Thoughtworks India, she helps formulate technology strategy for clients and consults with them on aspects such as scalability and security. Having worked with product firms, she also has a strong product background, and she brings extensive experience of working with open source communities.


In a system that promotes responsibility, designers and architects often face two important questions: how to design mathematical and statistical models with concrete goals for fairness and inclusion, and how to architect the system that facilitates them. Devangana Khokhar and I will focus on the latter, helping you learn the paradigm of evolutionary architecture to build and operationalise responsible systems. We will outline the aspects you need to keep in mind while architecting responsible-first systems.

The systems engineering realm often talks in terms of the "-ilities" that matter for a designed system. These are also referred to as "nonfunctional requirements," and they should be tracked and accounted for at each step in the process; they serve as guardrails for making sensible decisions. When it comes to architecting fair, accountable, and transparent (FAT) AI systems, you must consider a multitude of dimensions when determining whether a system is responsible. Some of these are:

- Auditability of the data transformation pipeline: how the data is handled and treated at each step, whether there is clear visibility into the expected state of the data, and whether logs are present in each module of the data pipeline (a minimal sketch appears after this list).
- Monitoring of crucial metrics: whether the logs are available centrally, whether there is clear visibility into the correctness of the pipeline modules, whether the data can be interpreted in human-readable form, whether you can measure the quality of the data at each step, whether there is an analytical dashboard capable of ingesting the audit logs, whether you can drill down in the dashboard to do root-cause analysis and identify anomalies, and whether you can create reports on the precision and accuracy of the insights and the intermediate data states.
- Feedback loops: whether you can inject biases and anomalies into the data to test the resilience of the model and the system as a whole, how to engage users in uncovering instances of "responsible" system failure and handle them effectively and efficiently, and how to ensure that the feedback loops themselves are not biased.
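To ground the auditability dimension, here is a minimal sketch of a pipeline step instrumented with audit logging and a schema check. It assumes a pandas-based pipeline; the step name, expected columns, and quality checks are illustrative assumptions for this example, not prescriptions from the session.

```python
import logging

import pandas as pd

# In a real pipeline these audit records would be shipped to a central
# store so an analytical dashboard can ingest them.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("pipeline.audit")


def audited_step(name, expected_columns):
    """Wrap a pipeline step so every run leaves an audit trail:
    row counts in and out, per-column null rates, and a schema check
    against the expected state of the data."""
    def decorator(step):
        def wrapper(df: pd.DataFrame) -> pd.DataFrame:
            audit_log.info("%s: in_rows=%d", name, len(df))
            result = step(df)
            missing = set(expected_columns) - set(result.columns)
            if missing:
                raise ValueError(f"{name}: missing columns {missing}")
            null_rate = result[list(expected_columns)].isna().mean().to_dict()
            audit_log.info("%s: out_rows=%d null_rate=%s",
                           name, len(result), null_rate)
            return result
        return wrapper
    return decorator


# Hypothetical step: drop rows missing the fields downstream models need.
@audited_step("drop_incomplete_applications", expected_columns=("age", "income"))
def drop_incomplete(df: pd.DataFrame) -> pd.DataFrame:
    return df.dropna(subset=["age", "income"])


if __name__ == "__main__":
    df = pd.DataFrame({"age": [34, None, 29], "income": [52000, 48000, None]})
    clean = drop_incomplete(df)  # logs in_rows=3, out_rows=1, null rates
```

Because every step emits the same structured record, the central log becomes the raw material for the monitoring dashboard and root-cause drill-downs described above.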

The most important question to answer is how to operationalise this. Evolutionary architecture offers an answer in the form of fitness functions: if you want a system to hold a tenet close and always ensure its fulfilment, automate it. Codifying these tenets into tests that give feedback is the right way to think about this. Every time the model changes, the system should provide feedback on its responsibility, and the tests should start to fail as the system digresses from the set threshold. There is often a notion that algorithms are black boxes that are hard to explain; however, it is important to acknowledge that even white-box algorithms need explanation and need to be accountable.
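As one hedged illustration of such a codified tenet, the sketch below frames a fairness threshold as an automated test. The metric (demographic parity difference), the 0.10 threshold, and the load_validation_predictions helper are all assumptions made for this example.

```python
import numpy as np


def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two
    groups encoded in `sensitive` (0 or 1)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())


# Illustrative threshold; in practice the team derives this from its
# fairness goals and revisits it as the system evolves.
THRESHOLD = 0.10


def load_validation_predictions():
    # Hypothetical helper: a real pipeline would pull the latest model's
    # predictions and the sensitive attribute from a validation set.
    y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])
    sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    return y_pred, sensitive


def test_model_stays_within_fairness_threshold():
    # Fitness function run on every model change: it fails as soon as
    # the system digresses from the agreed threshold.
    y_pred, sensitive = load_validation_predictions()
    assert demographic_parity_difference(y_pred, sensitive) <= THRESHOLD
```

Wired into continuous integration, a test like this turns responsibility from a review-time conversation into a failing build the moment the model drifts past the agreed limit.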