
Computer Science Department Colloquium

Mathematical Foundations for Trustworthy Machine Learning

 


Thursday, March 07, 2024, 10:30am - 12:00pm

 

Speaker: Lunjia Hu

Bio

Lunjia Hu is a final-year Computer Science PhD student at Stanford University, advised by Moses Charikar and Omer Reingold. He works on advancing the theoretical foundations of trustworthy machine learning, addressing fundamental questions about interpretability, fairness, robustness, and uncertainty quantification. His work on algorithmic fairness and machine learning theory has received Best Student Paper awards at ALT 2022 and ITCS 2023.

Location: CoRE 301

Event Type: Computer Science Department Colloquium

Abstract: Machine learning holds significant potential for positive societal impact. However, in critical applications involving people, such as healthcare, employment, and lending, machine learning raises serious concerns about fairness, robustness, and interpretability. Addressing these concerns is crucial for making machine learning more trustworthy. This talk will focus on three lines of my recent research establishing the mathematical foundations of trustworthy machine learning. First, I will introduce a theory that optimally characterizes the amount of data needed to achieve multicalibration, a recent fairness notion with many impactful applications. This result is an instance of a broader theory developed in my research that gives the first sample complexity characterizations for learning tasks with multiple interacting function classes (ALT'22 Best Student Paper, ITCS'23 Best Student Paper). Next, I will discuss my research on omniprediction, a new approach to robust learning that allows simultaneous optimization of different loss functions and fairness constraints (ITCS'23, ICML'23). Finally, I will present a principled theory of calibration for neural networks (STOC'23). This theory provides an essential tool for understanding uncertainty quantification and interpretability in deep learning, enabling rigorous explanations of interesting empirical phenomena (NeurIPS'23 spotlight, ITCS'24).

Contact: Assistant Professor Aaron Bernstein

Join Zoom Meeting
https://rutgers.zoom.us/j/2014444359?pwd=WW9ybFNCNVFrUWlycHowSHdNZjhzUT09
Meeting ID: 201 444 4359
Password: 550978