CS Events

PhD Defense

Counterfactual Explainable AI for Human and Science



Thursday, November 30, 2023, 10:30am - 12:00pm


Speaker: Juntao Tan

Location: CoRE 305


Committee:

Professor Yongfeng Zhang (Chair)

Professor Jie Gao

Professor Dong Deng

Professor Quanquan Gu, University of California, Los Angeles (UCLA)

Event Type: PhD Defense

Abstract: Artificial Intelligence (AI) goes beyond merely making predictions. Its explainability is crucial not only for enhancing user satisfaction but also for facilitating more effective decision-making. Among the available methods for explainable AI, this dissertation focuses on the specialized domain of counterfactual explanations. Counterfactual explanations offer a unique interpretation of systems by providing "what-if" scenarios that illuminate how a given outcome could differ if the system input were altered. The model-agnostic nature of counterfactual explanations makes them exceptionally well-suited for elucidating the intrinsic mechanisms of advanced AI systems. This is particularly critical in an era where such systems, especially those employing deep neural networks, are becoming increasingly opaque and complex. An in-depth investigation is conducted into the applicability of counterfactual explainable AI across both human-centered and science-oriented AI models. Within the context of human-centered AI systems, such as recommender systems, the incorporation of counterfactual explanations can enhance user trust and satisfaction. In the sciences, counterfactual explainable AI helps researchers identify the key factors behind model predictions in a straightforward manner and promotes trust and credibility in AI-generated outcomes, thereby accelerating both the human comprehension of natural phenomena and the pace of scientific innovation. This dissertation offers a thorough and methodical exploration of counterfactual explainable AI, encompassing its underlying philosophy, stated objectives, methodological framework, practical applications, and evaluation metrics.
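The "what-if" logic described in the abstract can be illustrated with a minimal sketch. This toy example is not taken from the dissertation: the model, feature names, and numbers are all hypothetical. It treats a linear scoring model as a black box and greedily searches for the smallest change to one input feature that flips the model's decision, which is the essence of a counterfactual explanation ("the applicant would have been approved if their income were X instead of Y").

```python
# Hypothetical toy model: approve (1) if the score is positive, else deny (0).
# The weights and threshold are made up for illustration only.
def predict(income, debt):
    score = 0.5 * income - 1.0 * debt - 2.03
    return 1 if score > 0 else 0

# Greedy counterfactual search over a single feature: raise income in small
# steps until the (black-box) model's decision flips, then return the
# minimally altered input. Real methods optimize over all features, but the
# model-agnostic idea is the same: only predict() is queried, never its internals.
def counterfactual(income, debt, step=0.1, max_iter=1000):
    cf_income = income
    for _ in range(max_iter):
        if predict(cf_income, debt) == 1:
            return cf_income, debt
        cf_income += step
    return None  # no counterfactual found within the search budget

original = (3.0, 1.0)
print(predict(*original))          # → 0 (denied)
print(counterfactual(*original))   # smallest income, at this step size, that flips the decision
```

Because the search only queries `predict`, the same procedure works unchanged for any classifier, which is what the abstract means by the model-agnostic nature of counterfactual explanations.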

Contact: Professor Yongfeng Zhang (Chair)