Multimodal Communication: Commonsense, Grounding, and Computation
Tuesday, June 23, 2020, 10:00am - 12:00pm
Location: Remote via Webex
Matthew Stone (Chair)
Gerard de Melo
Ani Nenkova (external member)
Event Type: PhD Defense
Abstract: From the gestures that accompany speech to the images in social media posts, humans effortlessly combine words with visual presentations. Communication succeeds even though visual and spatial representations are not necessarily wired to syntax and convention, and do not always replicate appearance. Machines, however, are not equipped to understand and generate such presentations, because people rely pervasively on commonsense and world knowledge when relating words to external presentations. I show the potential of discourse modeling for solving the problem of multimodal communication. I start by presenting a computational model for diagram understanding, extending accounts from linguistics to learn the interpretation of schematic elements such as arrows. I then present a novel framework for modeling and learning a deeper combined understanding of text and images by classifying inferential relations to predict temporal, causal, and logical entailments in context. This enables systems to make inferences with high accuracy while revealing author expectations and social-context preferences. Next, I design methods for generating text from visual input that use these inferences to provide users with the key information they request. The results show a dramatic improvement in the consistency and quality of the generated text, cutting spurious information in half. Finally, I describe the design of two multimodal communicative systems that reason about the context of interaction in human-robot collaboration and conversational artificial intelligence, and I present my research vision: building human-level communicative systems and grounded artificial intelligence by leveraging the cognitive science of language use.