Foundations of Reliable and Lawful Machine Learning for High-Stakes Applications
Abstract: How do we ensure that the machine learning algorithms used in high-stakes applications, such as hiring, lending, and admissions, are fair, explainable, and lawful? Towards addressing this urgent question, this talk will provide some foundational perspectives rooted in information theory, causality, and statistics. In the first part of the talk, I will discuss an emerging problem in policy-compliant explanations: how do we guide a rejected applicant to change the model outcome while also being robust to potential real-world model updates (also called robust counterfactual explanations)? We propose an axiomatic measure to quantify the robustness of counterfactual explanations to real-world model updates and provide strategies to generate explanations that are provably more reliable. In the second part of the talk, I will discuss a question that bridges the fields of fairness, explainability, and law: how do we check whether the disparity in a model is purely due to critical occupational necessities? We propose a systematic measure of legally non-exempt disparity that brings together information theory (Partial Information Decomposition) with causality. Lastly, I will also briefly talk about some of our other research interests in related topics.
Bio: Dr. Sanghamitra Dutta is an assistant professor in the Department of Electrical and Computer Engineering at the University of Maryland, College Park. She is also affiliated with the University of Maryland Center for Machine Learning at UMIACS. Prior to joining UMD, she was a senior research associate at JPMorgan Chase AI Research in the Explainable AI Center of Excellence (XAI CoE). She received her Ph.D. and Master's from Carnegie Mellon University and her B.Tech. from IIT Kharagpur, all in Electrical and Computer Engineering. Her research interests broadly revolve around reliable, lawful, and trustworthy machine learning. She is particularly interested in addressing challenges concerning fairness and explainability by bringing in a novel foundational perspective rooted in information theory, causality, and statistics. Her research has been published in both machine learning and information theory venues, featured in New Scientist, and adopted as part of the fair lending model review at JPMorgan. In her prior work, she has also examined problems in reliable computing for distributed machine learning using coding theory (an emerging area called “coded computing”). She is a recipient of several fellowships, including the 2022 Simons Institute Fellowship for Causality and the 2019 K&L Gates Presidential Fellowship in Ethics and Computational Technologies. Her Ph.D. thesis received the 2021 A. G. Milnes Outstanding Thesis Award.
Event Contact: Iam-Choon Khoo