Thursday, February 18, 2021

- PST
PRO SESSION: Operationalising Responsible AI
Devangana Khokhar
ThoughtWorks, Lead Data Scientist
Vanya Seth
Thoughtworks India, Head of Technology

In a system that promotes responsibility, designers and architects face two important questions: how to design mathematical and statistical models with concrete goals for fairness and inclusion, and how to architect the system that facilitates them. Devangana Khokhar and I will focus on the latter to help you learn the paradigm of evolutionary architectures for building and operationalising responsible architectures, and we will outline the aspects you need to keep in mind while architecting responsible-first systems. The systems engineering realm often talks in terms of the “-ilities” that matter for the designed system. These are also referred to as “nonfunctional requirements,” which should be tracked and accounted for at each step in the process; they serve as guard rails for making sensible decisions.

When it comes to architecting fair, accountable, and transparent (FAT) AI systems, you must consider a multitude of dimensions when determining whether a system is responsible. Some of these are:

- Auditability of the data transformation pipeline: how the data is handled and treated at each step, whether there is clear visibility into the expected state of the data, and whether logs are present in each of the modules in the data pipeline (see the sketch after this list).
- Monitoring of the crucial metrics: whether the logs are available centrally, whether there is clear visibility into the correctness of the pipeline modules, whether the data can be interpreted in human-readable form, whether you can measure the quality of the data at each step, whether there is an analytical dashboard capable of ingesting the audit logs, whether you can drill down in the dashboard to do root cause analysis and identify anomalies, and whether you can create reports about the precision and accuracy of the insights and the intermediate data states.
- Feedback loops: whether you can inject biases and anomalies into the data to test the resilience of the model and the system as a whole, how to engage users in uncovering instances of “responsible” system failure and handle them effectively and efficiently, and how to ensure that the feedback loops themselves aren’t biased.
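To make the auditability and monitoring points concrete, here is a minimal sketch of what one audited pipeline step could look like. It assumes a pandas-based pipeline; the function names (check_quality, clean_missing_values), the column names, and the chosen quality metrics are illustrative assumptions, not tooling prescribed by the session.

```python
import json
import logging

import pandas as pd

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("pipeline.audit")


def check_quality(df: pd.DataFrame, step: str, required_columns: list) -> dict:
    """Compute simple, human-readable quality metrics for the current data state."""
    metrics = {
        "step": step,
        "rows": int(len(df)),
        "missing_value_ratio": float(df.isna().mean().mean()),
        "missing_columns": [c for c in required_columns if c not in df.columns],
    }
    # Emit the data state as a structured audit record that a central
    # dashboard can later ingest for drill-down and root cause analysis.
    audit_log.info(json.dumps(metrics))
    return metrics


def clean_missing_values(df: pd.DataFrame) -> pd.DataFrame:
    """One transformation step, audited on the way in and on the way out."""
    check_quality(df, step="clean_missing_values:input", required_columns=["age", "income"])
    cleaned = df.dropna(subset=["age", "income"])
    check_quality(cleaned, step="clean_missing_values:output", required_columns=["age", "income"])
    return cleaned
```

Because every step emits the same kind of structured record, the audit logs can be collected centrally and turned into the quality reports and dashboards described above.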

The most important question to answer is how to operationalise all of this. Evolutionary architecture gives us the notion of a fitness function: if you want the system to hold a tenet close and always ensure its fulfilment, automate it. Codifying these tenets into tests that give feedback is the right way to think about this. Every time a change is made to the model, the system should provide feedback on its responsibility, and the tests should start to fail as the system diverges from the set threshold. There is often a notion that algorithms are a black box and hard to explain; however, it is important to acknowledge that even white-box algorithms need explanation and need to be accountable.
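As a hedged illustration of such a fitness function, the pytest-style sketch below checks a single fairness metric, demographic parity difference, against a fixed threshold. The metric choice, the threshold value, and the stubbed predictions are assumptions made for illustration rather than the session’s actual tooling.

```python
import numpy as np

# An assumed, agreed-upon guard rail; the real threshold is a product decision.
DEMOGRAPHIC_PARITY_THRESHOLD = 0.1


def demographic_parity_difference(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Largest gap in positive-prediction rates across sensitive groups."""
    rates = [float(y_pred[sensitive == group].mean()) for group in np.unique(sensitive)]
    return max(rates) - min(rates)


def test_model_stays_within_fairness_threshold():
    # In a real pipeline these would be the latest model's validation
    # predictions and the corresponding sensitive attribute; stubbed here.
    y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])
    sensitive = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
    gap = demographic_parity_difference(y_pred, sensitive)
    # The build fails as soon as the model diverges from the set threshold,
    # giving the automated feedback that evolutionary architecture asks for.
    assert gap <= DEMOGRAPHIC_PARITY_THRESHOLD, (
        f"Demographic parity gap {gap:.2f} exceeds {DEMOGRAPHIC_PARITY_THRESHOLD}"
    )
```

Run alongside the usual functional tests in continuous integration, a suite of such checks turns the responsibility tenets into feedback that arrives with every change to the model.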