CSE Colloquium: Costs and Benefits of Invariant Representation Learning
Abstract: The success of supervised machine learning in recent years crucially hinges on the availability of large-scale, unbiased data, which is often time-consuming and expensive to collect. A recent line of work in deep learning focuses on learning invariant representations, which have found abundant applications in both domain adaptation and algorithmic fairness. However, it is not clear what price we have to pay in terms of task utility for such universal representations. In this talk, I will discuss my recent work on understanding and learning invariant representations. In the first part, I will focus on understanding the costs of existing invariant representations by characterizing a fundamental tradeoff between invariance and utility. In particular, I will use domain adaptation as an example to show, both theoretically and empirically, such a tradeoff in achieving small joint generalization error. This result also implies an inherent tradeoff between fairness and utility in both classification and regression settings. In the second part of the talk, I will focus on designing learning algorithms that escape this tradeoff and exploit the benefits of invariant representations. I will show how these algorithms can be used to guarantee equalized treatment of individuals across groups, and discuss what additional problem structure is required to permit efficient domain adaptation through learning invariant representations.
Biography: Han Zhao is a PhD candidate in the Machine Learning Department at Carnegie Mellon University, advised by Geoffrey J. Gordon. Before coming to CMU, he obtained a BEng degree in Computer Science from Tsinghua University (honored as a Distinguished Graduate) and an MMath degree in Mathematics from the University of Waterloo (honored with the Alumni Gold Medal Award). He has also spent time at Huawei Noah's Ark Lab, Baidu Research, Microsoft Research, and the D. E. Shaw Group. His research interests are broadly in machine learning, with a focus on invariant representation learning and tractable probabilistic reasoning.
Event Contact: Daniel Kifer