CSE Colloquium: Interpretable Machine Learning: Theory and Practice

ZOOM INFORMATION: Join from PC, Mac, Linux, iOS or Android: https://psu.zoom.us/j/98281077532?pwd=V2JZUU1hdlRnb3ROU1NRRXZJTkp2QT09 Password: 162955 

or iPhone one-tap (US Toll): +16468769923,98281077532# or +13017158592,98281077532#

or Telephone (US Toll), dial: +1 646 876 9923, +1 301 715 8592, +1 312 626 6799, +1 669 900 6833, +1 253 215 8782, or +1 346 248 7799
Meeting ID: 982 8107 7532
Password: 162955
International numbers available: https://psu.zoom.us/u/abaI9k4GKF

ABSTRACT: The continued and remarkable empirical success of increasingly complicated machine learning models, such as neural networks, without a sound theoretical understanding of their success and failure conditions can leave a practitioner blindsided and vulnerable, especially in critical applications such as self-driving cars and medical diagnosis. Consequently, there has been growing interest in research on building interpretable models as well as on interpreting model predictions. In this talk, I will discuss theoretical and practical aspects of interpretability in machine learning along both of these directions, through the lenses of feature attribution and example-based learning. In the first part of the talk, I will present novel theoretical results that bridge the gap between theory and practice for interpretable dimensionality reduction, i.e., feature selection. Specifically, I will show that feature selection satisfies a weaker form of submodularity. Because of this connection, one can provide constant-factor approximation guarantees for any function that depend solely on the condition number of that function. Moreover, I will argue that the cost of interpretability incurred by selecting features, as opposed to principal components, is not as high as previously thought. In the second part of the talk, I will discuss the development of a probabilistic framework for example-based machine learning that addresses the question "which training data points are responsible for a given test prediction?" This framework generalizes classical influence functions. I will also present an application of this framework to understanding the transfer of adversarially trained neural network models.
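To make the first topic concrete, here is a minimal sketch (not from the speaker's work) of the kind of procedure the weak-submodularity analysis covers: greedy forward feature selection, where the set function is the fraction of variance of the response explained by the chosen feature subset. All function and variable names below are illustrative.

```python
import numpy as np

def r2_of_subset(X, y, S):
    """Set function f(S): fraction of variance of y explained by a
    least-squares fit on the feature columns in S."""
    if not S:
        return 0.0
    Xs = X[:, list(S)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ coef
    return 1.0 - (resid @ resid) / (y @ y)

def greedy_select(X, y, k):
    """Greedy forward selection: repeatedly add the feature with the
    largest marginal gain in explained variance. Weak submodularity of
    f is what yields approximation guarantees for this greedy rule."""
    S = []
    for _ in range(k):
        remaining = [j for j in range(X.shape[1]) if j not in S]
        gains = [r2_of_subset(X, y, S + [j]) - r2_of_subset(X, y, S)
                 for j in remaining]
        S.append(remaining[int(np.argmax(gains))])
    return S

# Toy data: y depends (almost) only on features 2 and 7.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = X[:, 2] + 0.5 * X[:, 7] + 0.1 * rng.standard_normal(200)
print(greedy_select(X, y, 2))  # expected: [2, 7]
```

On this toy problem the greedy rule recovers the two informative features, first the stronger one, then the weaker; the talk's theoretical results concern how close such greedy solutions get to the best size-k subset in general.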

BIOGRAPHY: Rajiv Khanna is currently a postdoctoral researcher in the Department of Statistics at the University of California, Berkeley, working with Professor Michael Mahoney. Previously, he was a Research Fellow in the Foundations of Data Science program at the Simons Institute, also at UC Berkeley. He completed his Ph.D. in the Department of Electrical and Computer Engineering at the University of Texas at Austin in August 2018, working with Professors Joydeep Ghosh and Alexandros G. Dimakis.



Event Contact: Daniel Kifer


About

The School of Electrical Engineering and Computer Science was created in the spring of 2015 to give undergraduate and graduate students greater access to courses offered by both departments and to exciting collaborative research fields.

We offer B.S. degrees in electrical engineering, computer science, computer engineering, and data science, and graduate degrees (master's degrees and Ph.D.s) in electrical engineering and in computer science and engineering. EECS focuses on the convergence of technologies and disciplines to meet today's industrial demands.

School of Electrical Engineering and Computer Science

The Pennsylvania State University

207 Electrical Engineering West

University Park, PA 16802

814-863-6740

Department of Computer Science and Engineering

814-865-9505

Department of Electrical Engineering

814-865-7667