Congratulations to Profs. Hao Wang and Yongfeng Zhang, who have received an NSF RI Small Grant for their project titled "Enabling Interpretable AI via Bayesian Deep Learning". The award of $499,926 covers a three-year period starting 10/1/2021.
Interpretability is one of the fundamental obstacles to the adoption and deployment of deep-learning-based AI systems across fields such as healthcare, e-commerce, transportation, earth science, and manufacturing. An ideal interpretable model should be able to explain its predictions using human-understandable concepts, conform to conditional dependencies in the real world, and handle uncertainty in data (e.g., how certain the model is about tomorrow's rainfall). The goal of this project is to develop a general interpreter framework that enables deep learning models to natively support these desiderata. Methods developed in this project will be applied in health monitoring to interpret models' reasoning about patient status, and in recommender systems to interpret the items models recommend to users.