04:00 - 06:00 PM (IST)

Pre-Boot Camp

by Abhijeet Sharma

An overview of Machine Learning, covering Supervised, Unsupervised, and Reinforcement Learning, along with a broad discussion of some common ML techniques.
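As a minimal taste of the supervised-learning setting covered in the boot camp, the sketch below fits a classifier to labelled examples and checks it on held-out data. The dataset and model choice are illustrative assumptions, not part of the session material.

```python
# Minimal supervised-learning sketch: learn from labelled pairs (X, y)
# and estimate generalisation on held-out examples.
# Dataset and model are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                        # features and labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # fit on labelled data
print("held-out accuracy:", clf.score(X_te, y_te))       # generalisation estimate
```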

04:00 - 05:30 PM (IST)

Approaches towards explaining model predictions in NLP

by Anirban Laha

With the advent of deep learning, a lot of progress has been made on algorithms that solve a plethora of practical problems in Natural Language Processing. These algorithms incorporate highly complex networks, which are becoming increasingly difficult to explain theoretically. The difficulty is exacerbated even further by the recent trend toward extremely large networks trained on datasets on the order of billions of examples (for example, GPT-3). Models have been shown to pick up human biases and spurious patterns from the data, and can produce offensive results (as with Microsoft's Tay chatbot or Amazon's recruiting tool). Thus, it has become necessary to understand the workings of these networks in order to establish trust and to ensure fairness and safety before they are deployed in large production environments. This understanding can also reveal shortcomings, which in turn can lead to better algorithm development. In recent years, various approaches have been proposed to explain model predictions in a network-agnostic way or with limited assumptions about the network. This talk will focus on these approaches in the context of NLP, starting with motivating applications, touching upon the basic paradigms of explainability, following up with a discussion of influential approaches, and laying out research gaps and current trends.
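To give a concrete flavour of the network-agnostic paradigms the talk surveys, the sketch below scores word importance by occlusion: remove each token in turn and measure the drop in the model's predicted probability. The `predict_proba` callable is a hypothetical stand-in for any black-box text classifier, not a specific method from the talk.

```python
# Occlusion-based word importance: a simple network-agnostic explanation.
# `predict_proba` is a hypothetical black-box classifier that returns the
# probability of the predicted class for a piece of text.
def occlusion_importance(text, predict_proba):
    tokens = text.split()
    base = predict_proba(text)                           # score on the full input
    scores = []
    for i in range(len(tokens)):
        ablated = " ".join(tokens[:i] + tokens[i + 1:])  # drop token i
        scores.append((tokens[i], base - predict_proba(ablated)))
    return scores  # a large drop means the token supports the prediction

# Usage with a toy "classifier" that keys on the word "great":
toy = lambda t: 0.9 if "great" in t else 0.2
print(occlusion_importance("the movie was great", toy))
```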

06:00 - 07:30 PM (IST)

Explainability in Healthcare: From application oriented approaches to ethics of deployment

by Shalmali Joshi

Machine learning-based tools are increasingly considered valuable for providing clinical care. XAI is often used to increase trust in, and adoption of, such tools in healthcare applications. In this talk, I will discuss an approach to Explainability in Machine Learning grounded in applications. I will demonstrate that understanding users' needs can guide the technical development of novel explanation methods. I will provide an overview of explainability techniques developed in the context of healthcare. Next, I will demonstrate why explanations can be misleading and can miscalibrate user trust, especially in high-stakes applications like healthcare, and why they should be validated with care. Finally, I will talk about how the development of good XAI techniques for any application domain should be guided by the ethics of that domain, and how that, in turn, sets a high and domain-specific standard for validating explainability methods themselves.

04:00 - 05:30 PM (IST)

Explainable AI: An Introduction and Overview

by Vineeth N Balasubramanian

The last decade has seen rapid strides in Artificial Intelligence (AI), moving from fantasy to a reality that is part of each of our lives, embedded in various technologies. A catalyst of this rapid uptake has been the enormous success of deep learning methods in addressing problems across domains including computer vision, natural language processing, and speech understanding. However, as AI makes its way into risk-sensitive and safety-critical applications such as healthcare, aerospace, and finance, it is essential for AI models not only to make predictions but also to be able to explain them. This talk will introduce the audience to this increasingly important area of explainable AI (XAI), and present an overview of existing methods in XAI -- focusing on methods used in tandem with deep neural network models.
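As a taste of the methods used in tandem with deep networks, the sketch below computes a vanilla gradient saliency map: the gradient of the predicted class score with respect to the input highlights the most influential input dimensions. The tiny model here is an illustrative placeholder, not a method endorsed by the talk.

```python
# Vanilla gradient saliency for a differentiable classifier (PyTorch).
# The small network is an illustrative placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

x = torch.randn(1, 10, requires_grad=True)   # input to be explained
score = model(x)[0].max()                    # score of the predicted class
score.backward()                             # d(score) / d(input)

saliency = x.grad.abs().squeeze()            # per-feature influence
print(saliency)
```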

06:00 - 07:30 PM (IST)

Explainability in Sequential Decision-Making Problems

by Tathagata Chakraborti

The world of Explainable AI is rapidly expanding in scope from classification tasks to more complex decision-making processes where AI algorithms play an outsized role. Arguably, such settings bring out more challenging problems in XAI since they involve interactions with the end-user rather than explanations for purposes of debugging. In this talk, I will give a whirlwind tour of the many domains where such explainability issues manifest and the role of mental modeling as an underlying theme in designing the explainability of AI algorithms in all those domains.

06:00 - 09:00 PM (IST)

Explainable AI: From Correlation to Causation

by Karthikeyan Shanmugam and Amit Dhurandhar

As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. In this tutorial, we first provide a broad overview of the field in terms of types of methods, each of which may be appropriate in different scenarios. The overview will cover different types of interpretable methods, including directly interpretable models, prototype generation, and local explanations. In the second part of the tutorial, we will introduce the notion of contrastive explanations. We will first explore bi-factual contrastive explanations and discuss some methods that provide such explanations. Then, we will explore contrastive explanations that are counterfactual in nature. We will define the notion of counterfactuals from the “actual causality” perspective and specify what assumptions are needed to estimate them within a Pearlian framework. We will discuss recent works that provide a principled way to reason about counterfactuals using a combination of causal assumptions and data-driven methods, and we will close with open problems relating to counterfactual explanations.
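To make the counterfactual notion concrete, the sketch below searches for a minimal perturbation of an input that flips a differentiable classifier's decision, trading off the prediction change against proximity to the original input. This is a common generic formulation; the model, target, and hyper-parameters are illustrative assumptions, not the tutorial's specific algorithms.

```python
# Gradient-based counterfactual search: find x' close to x whose predicted
# class becomes `target`. Model and hyper-parameters are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

x = torch.randn(1, 5)                        # original instance
target = torch.tensor([1])                   # desired counterfactual class

x_cf = x.clone().requires_grad_(True)
opt = torch.optim.Adam([x_cf], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x_cf), target) \
         + 0.1 * (x_cf - x).norm(p=1)        # sparsity: stay close to x
    loss.backward()
    opt.step()

print("counterfactual class:", model(x_cf).argmax(dim=1).item())
```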

04:00 - 05:30 PM (IST)

Towards Interpretable AI: Visualization of learned neural models

by Harish Guruprasad

Models trained from data using machine learning are becoming ubiquitous: from tagging your friends in a Facebook photo to auto-approving an insurance claim, and beyond. While they do remarkably well on average, they sometimes have quirks that cannot be understood and can potentially hurt an individual. In particular, with the ascendance of neural networks and deep learning, a model often performs millions of multiplications and additions to make a single prediction, making it practically impossible to directly interpret the reasoning behind such decisions. This has led to explosive growth in a rich new field known variously as interpretable or explainable AI, whose broad goal is to aid humans in understanding AI models and individual model decisions. In this talk, we will see a broad overview of the analytical and visualization techniques that enable this field.
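As one example of the visualization techniques surveyed, the sketch below performs activation maximization: gradient ascent on the input to synthesize a pattern that most excites a chosen unit, a classic way to see what a network has learned. The network and the chosen unit are illustrative placeholders.

```python
# Activation maximization: synthesise an input that maximally activates a
# chosen output unit. Network and unit index are illustrative placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
net.eval()
unit = 3                                     # output unit to visualize

x = torch.zeros(1, 20, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    act = net(x)[0, unit]                    # activation of the chosen unit
    (-act + 0.01 * x.norm()).backward()      # maximize activation, keep x small
    opt.step()

print("preferred input pattern:", x.detach().squeeze())
```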

06:00 - 07:30 PM (IST)

One Explanation Does Not Fit All: A hands-on session on AI Explainability 360 Toolkit

by Vijay Arya

As machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. Moreover, these stakeholders, whether they be government regulators, affected citizens, domain experts, or developers, present different requirements for explanations. To address these needs, we introduce AI Explainability 360, an open-source software toolkit featuring eight diverse state-of-the-art explainability methods, two evaluation metrics, and an extensible software architecture that organizes these methods according to their place in the AI modeling pipeline. Additionally, we have implemented several enhancements to bring research innovations closer to consumers of explanations, ranging from simplified, more accessible versions of algorithms to guidance material that helps users navigate the space of explanations, along with tutorials and an interactive web demo that introduce AI explainability to practitioners. Overall, the toolkit can help improve the transparency of machine learning models and provides a platform for integrating new explainability techniques as they are developed. Code for the hands-on session can be found in this GitHub repository.
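For a flavour of the local explanation methods such toolkits organize, the sketch below fits a LIME-style local linear surrogate to a black box around one instance. Note this is a generic scikit-learn illustration, not the AI Explainability 360 API itself; `black_box` is an illustrative stand-in.

```python
# LIME-style local surrogate: explain one prediction of a black-box model
# by fitting a proximity-weighted linear model on perturbed neighbours.
# `black_box` is an illustrative stand-in, not an AIX360 call.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
black_box = lambda X: (X @ np.array([2.0, -1.0, 0.0]) > 0).astype(float)

x0 = np.array([0.5, 0.2, -0.1])                      # instance to explain
Xp = x0 + 0.1 * rng.standard_normal((500, 3))        # perturbed neighbourhood
w = np.exp(-np.sum((Xp - x0) ** 2, axis=1) / 0.05)   # proximity weights

surrogate = Ridge(alpha=1.0).fit(Xp, black_box(Xp), sample_weight=w)
print("local feature importances:", surrogate.coef_)
```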

04:00 - 05:00 PM (IST)

Research Showcase