CS Events

Computer Science Department Colloquium

Explainable AI: From Human to Nature

 


Tuesday, December 01, 2020, 10:30am

 

Speaker: Yongfeng Zhang

Bio

Yongfeng Zhang is an Assistant Professor in the Department of Computer Science at Rutgers University. His research interests are in Machine Learning and Data Mining, Information Retrieval and Recommender Systems, Economics of Data Science, and Explainable AI. He is a Siebel Scholar of the class of 2015 and the recipient of the 2018 Best Professor Award for Teaching and Mentoring from Rutgers CSGSS. Recently, he has been working on various approaches to explainable AI, including but not limited to neural logical reasoning, knowledge graph reasoning, causal machine learning, and explainable graph neural networks, as well as their application to human-centered and nature-oriented tasks. His research is generously supported by funds from Rutgers, NSF, Google, Adobe, and NVIDIA. He has served as a PC or SPC member for top-tier computer science conferences, as well as an Associate Editor for the ACM Transactions on Information Systems.

Event Type: Computer Science Department Colloquium

Abstract: AI has become an important part of many research areas and disciplines, spanning from human-centered tasks such as search, recommendation, dialog systems, and social networks, to nature-oriented tasks such as drug discovery, bioscience, and physics. However, how to understand and interpret the decisions or results produced by AI remains a significant problem, which greatly influences the trust between humans and AI. In this talk, we will introduce our recent work on Explainable AI from both technical and application perspectives. From the technical perspective, we will introduce our work on explainable machine learning models, including neural logic and neural-symbolic reasoning, causal and counterfactual reasoning, knowledge graph reasoning, explainable graph neural networks, and techniques for generating natural language explanations. From the application perspective, we will introduce our efforts on both human-centered and nature-oriented tasks, including search engines, recommender systems, dialog systems, molecular classification, and biodiversity conservation, based on the Explainable AI models we have developed.

Organization

Department of Computer Science

Contact / Host: Matthew Stone