CS Events

Seminar

Interpretability vs. Explainability in Machine Learning

 


Friday, July 17, 2020, 10:00am - 11:30am

 

Join this meeting via Webex:
https://rutgers.webex.com/webappng/sites/rutgers/meeting/download/f858abb590d543f9bb77a82e217964cd?siteurl=rutgers&MTID=meecbc14d8b70aac1b49564fba8b6d21a

Meeting number (access code): 120 870 2906
Meeting password: 1234

 

Speaker: Cynthia Rudin, Duke University

Bio

Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, where she directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. She is also an associate director of the Statistical and Applied Mathematical Sciences Institute (SAMSI). Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is past chair of both the INFORMS Data Mining Section and the Statistical Learning and Data Science Section of the American Statistical Association. She has also served on committees for DARPA, the National Institute of Justice, and AAAI. She has served on three committees for the National Academies of Sciences, Engineering, and Medicine, including the Committee on Applied and Theoretical Statistics, the Committee on Law and Justice, and the Committee on Analytic Research Foundations for the Next-Generation Electric Grid. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics. She was a Thomas Langford Lecturer at Duke University during the 2019-2020 academic year and will be the Terng Lecturer at the Institute for Advanced Study in 2020.

Event Type: Seminar

Abstract: With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black box models are not reliable and can be misleading. If we use interpretable machine learning models, they come with their own explanations, which are faithful to what the model actually computes.

In this talk, I will discuss some of the reasons that black boxes with explanations can go wrong, whereas using inherently interpretable models would not have these same problems. I will give an example of where an explanation of a black box model went wrong; namely, I will discuss ProPublica's analysis of the COMPAS model used in the criminal justice system. ProPublica's explanation of the black box model COMPAS was flawed because it relied on wrong assumptions to identify the race variable as being important. Luckily, in recidivism prediction applications, black box models are not needed because inherently interpretable models exist that are just as accurate as COMPAS.

I will also give examples of interpretable models in healthcare. One of these models, the 2HELPS2B score, is actually used in intensive care units in hospitals; most machine learning models cannot be used when the stakes are so high.

Finally, I will discuss two long-term projects my lab is working on, namely optimal sparse decision trees and interpretable neural networks.
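For readers unfamiliar with the distinction, the sketch below illustrates what an inherently interpretable model can look like in practice: a depth-limited decision tree fit with scikit-learn on a standard toy dataset, whose learned rules can be printed verbatim, so the model is its own explanation. This is only a greedy, illustrative example under my own assumptions (scikit-learn and its built-in breast cancer dataset); it is not the optimal sparse decision tree algorithms or the 2HELPS2B score discussed in the talk.

# Illustrative sketch only: a greedy, depth-limited decision tree via scikit-learn,
# not the optimal sparse decision tree methods from the speaker's lab.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# A shallow tree: every prediction follows a short, readable chain of if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("Held-out accuracy:", tree.score(X_test, y_test))
# The model is its own explanation: print the exact rules it uses.
print(export_text(tree, feature_names=list(data.feature_names)))

Running this prints a held-out accuracy followed by the complete rule list the tree applies to every prediction, which is the sense in which such a model "comes with its own explanation."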

Organization

TRIPODS (Transdisciplinary Research in Principles of Data Science) Seminar Series

Sponsored by the TRIPODS DATA-INSPIRE Institute, a joint collaboration of

DIMACS and the Rutgers Departments of Computer Science, Mathematics, and Statistics

 

Contact / Faculty Host: David Pennock