Past Events
PhD Defense: Explanation-Driven Learning-Based Models for Visual Recognition Tasks
Friday, April 24, 2020, 11:00am - 12:30pm
Speaker: Zachary Daniels
Location: Remote via Webex
Committee:
Prof. Dimitris Metaxas (Chair)
Prof. Konstantinos Michmizos
Prof. George Moustakides
Prof. Fuxin Li (External Member, Oregon State University)
Event Type: PhD Defense
Abstract: Safety-critical applications (e.g., autonomous vehicles, human-machine teaming, and automated medical diagnosis) often require computational agents that can understand and reason about the high-level content of real-world scene images in order to make rational, grounded decisions that humans can trust. Many of these agents rely on machine learning-based models, which are increasingly treated as black boxes. One way to increase model interpretability is to make explainability a core principle of the model, e.g., by forcing deep neural networks to explicitly learn grounded and interpretable features. This talk will consist of three parts. First, I will provide a high-level overview of the field of explainable/interpretable machine learning and review some existing approaches for interpreting neural networks used for computer vision tasks. Second, I will introduce several novel approaches for making convolutional neural networks (CNNs) more interpretable by using explainability as a guiding principle when designing the model architecture. Third, I will discuss some possible future research directions involving explanation-driven machine learning.
Link:
https://rutgers.webex.com/rutgers/j.php?MTID=m2c8de56370e163e237d72908d3c475eb
Meeting ID: 798 066 597
Password: uPaHcY6ju39