
PhD Defense

Neural Graph Reasoning for Explainable Decision-Making



Friday, April 09, 2021, 10:30am - 12:30pm


Speaker: Yikun Xian

Location: Remote via Zoom


Committee:
Prof. Shan Muthukrishnan (Advisor)
Prof. Yongfeng Zhang
Prof. Gerard de Melo
Dr. Lihong Li (Amazon)

Event Type: PhD Defense

Abstract: Modern decision-making systems are primarily accuracy-driven, seeking to learn good representations and deliver accurate predictions via deep learning techniques. However, due to the black-box nature of deep neural networks, explainability is largely ignored, even though it plays a pivotal role in practical applications such as user modeling, digital marketing, and e-commerce platforms. Explanations can be leveraged not only to help model developers understand and debug the decision-making process, but also to foster engagement and trust among the end users who consume the results delivered by the systems. At the same time, explanations should be both consistent with the decision-making process and understandable to human beings, whereas most existing explainable approaches possess only one of these properties. In this thesis, we concentrate on how to generate and evaluate faithful and comprehensible explanations via external heterogeneous graphs in various practical scenarios. Meaningful and versatile graph structures (e.g., knowledge graphs) are shown to be effective in improving model performance and, more importantly, make it possible for an intelligent decision-making system to conduct explicit reasoning over graphs to generate predictions. The benefit is that the resulting graph paths can be directly regarded as explanations for the predictions, because the traceable facts along the paths reflect the decision-making process and can be easily understood by humans. We propose several neural graph reasoning approaches to generate such path-based explainable results, marrying the power of deep neural models with the interpretability of graph structures. The techniques covered range from deep reinforcement learning and imitation learning to neural-symbolic reasoning and neural logic reasoning.
The proposed models are extensively evaluated on real-world benchmarks across different application scenarios such as recommendation and column annotation. The experimental results demonstrate both superior performance and better explainability of these methods.
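To illustrate the core idea of path-based explanations, here is a minimal sketch of reasoning over a toy knowledge graph: a relational path from a user to a candidate item serves simultaneously as the prediction's justification. The entity and relation names, and the exhaustive breadth-first search, are purely illustrative assumptions; the thesis instead learns neural policies (e.g., via reinforcement learning) to guide the path search.

```python
from collections import deque

# Toy knowledge graph as (head, relation, tail) triples.
# All names here are hypothetical, for illustration only.
TRIPLES = [
    ("Alice", "purchased", "Camera"),
    ("Camera", "produced_by", "BrandX"),
    ("BrandX", "produces", "Lens"),
    ("Alice", "viewed", "Tripod"),
]

def build_graph(triples):
    """Adjacency list mapping each entity to its outgoing (relation, tail) edges."""
    graph = {}
    for head, rel, tail in triples:
        graph.setdefault(head, []).append((rel, tail))
    return graph

def explain_path(graph, user, item, max_hops=3):
    """Breadth-first search for a relational path from user to item.

    The returned path doubles as the explanation for recommending `item`:
    each hop is a traceable fact in the graph.
    """
    queue = deque([(user, [user])])
    visited = {user}
    while queue:
        node, path = queue.popleft()
        if len(path) // 2 >= max_hops:  # path stores entity, rel, entity, ...
            continue
        for rel, tail in graph.get(node, []):
            new_path = path + [rel, tail]
            if tail == item:
                return new_path
            if tail not in visited:
                visited.add(tail)
                queue.append((tail, new_path))
    return None

graph = build_graph(TRIPLES)
path = explain_path(graph, "Alice", "Lens")
print(" -> ".join(path))
# Alice -> purchased -> Camera -> produced_by -> BrandX -> produces -> Lens
```

The printed path reads directly as a human-understandable explanation ("recommend the lens because Alice purchased a camera produced by the same brand"), which is the faithfulness-plus-comprehensibility property the abstract describes.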


Zoom Info:
Join Zoom Meeting


Meeting ID: 716 848 2183
Password: 984854
One tap mobile
+13017158592,,7168482183# US (Washington DC)
+13126266799,,7168482183# US (Chicago)

Join By Phone
+1 301 715 8592 US (Washington DC)
+1 312 626 6799 US (Chicago)
+1 646 558 8656 US (New York)
+1 253 215 8782 US (Tacoma)
+1 346 248 7799 US (Houston)
+1 669 900 9128 US (San Jose)

Meeting ID: 716 848 2183
Find your local number: https://rutgers.zoom.us/u/atA1uxURp
