CS Events
Computer Science Department Colloquium: Reliable Machine Learning by Integrating Context
Thursday, March 21, 2024, 10:30am - 12:00pm
Speaker: Chengzhi Mao
Bio
Chengzhi Mao is a postdoctoral research fellow who received his Ph.D. from the Department of Computer Science at Columbia University, advised by Prof. Carl Vondrick and Prof. Junfeng Yang. Since 2023, he has also been a core faculty member at Mila, Quebec AI Institute. He received his Bachelor's degree from Tsinghua University. His research lies in trustworthy machine learning and computer vision. His work has led to over 20 publications, including Orals at top computer vision and machine learning conferences, and has been covered by Science and MIT News. He is a recipient of the CVPR doctoral award in 2023.
Location: CoRE 301
Event Type: Computer Science Department Colloquium
Abstract: Machine learning is now widely used and deeply embedded in our lives. However, despite the excellent performance of machine learning models on benchmarks, state-of-the-art methods like neural networks often fail once they encounter realistic settings. Because neural networks often learn correlations without reasoning from the right signals and knowledge, they fail when facing shifting distributions, unforeseen corruptions, and worst-case scenarios. Moreover, because neural networks are black boxes, they are neither interpretable nor trusted by users. In this talk, I will show how to build reliable machine learning by tightly integrating context into the models. The context has two aspects: the intrinsic structure of natural data and the extrinsic structure of domain knowledge. Both are crucial. By capitalizing on the intrinsic structure in natural images, I show that we can create adaptive computer vision systems that are robust, even in the worst case, an analytical result that also enjoys strong empirical gains. Through the integration of external knowledge, such as causal structure, my framework can instruct models to use the right signals for visual recognition, enabling new opportunities for controllable and interpretable models. I will also discuss future work on reliable foundation models.
Contact: Assistant Professor Hao Wang