Faculty Candidate Talk

Reasoning and Learning in Interactive Natural Language Systems


Tuesday, March 22, 2022, 10:30am - 12:00pm


Zoom Meeting Information:

Topic: Alane Suhr Faculty Candidate Talk
Time: Mar 22, 2022 10:30 AM Eastern Time (US and Canada)

Join Zoom Meeting

Meeting ID: 980 7484 8709
Password: 278066
One tap mobile
+16465588656,,98074848709# US (New York)
+13017158592,,98074848709# US (Washington DC)

Join By Phone
+1 646 558 8656 US (New York)
+1 301 715 8592 US (Washington DC)
+1 312 626 6799 US (Chicago)
+1 669 900 9128 US (San Jose)
+1 253 215 8782 US (Tacoma)
+1 346 248 7799 US (Houston)
Meeting ID: 980 7484 8709

If you have any questions, please contact the Office of Information Technology Help Desk.

Speaker: Alane Suhr


Alane Suhr is a PhD Candidate in the Department of Computer Science at Cornell University, advised by Yoav Artzi. Her research spans natural language processing, machine learning, and computer vision, with a focus on building systems that participate and continually learn in situated natural language interactions with human users. Alane’s work has been recognized by paper awards at ACL and NAACL, and has been supported by fellowships and grants, including an NSF Graduate Research Fellowship, a Facebook PhD Fellowship, and research awards from AI2, ParlAI, and AWS. Alane has also co-organized multiple workshops and tutorials appearing at NeurIPS, EMNLP, NAACL, and ACL. Previously, Alane received a BS in Computer Science and Engineering as an Eminence Fellow at the Ohio State University.

Location: Via Zoom

Event Type: Faculty Candidate Talk

Abstract: Systems that support expressive, situated natural language interactions are essential for expanding access to complex computing systems, such as robots and databases, to non-experts. Reasoning and learning in such natural language interactions is a challenging open problem. For example, resolving sentence meaning requires reasoning not only about word meaning, but also about the interaction context, including the history of the interaction and the situated environment. In addition, the sequential dynamics that arise between user and system in and across interactions make learning from static data, i.e., supervised data, both challenging and ineffective. However, these same interaction dynamics result in ample opportunities for learning from implicit and explicit feedback that arises naturally in the interaction. This lays the foundation for systems that continually learn, improve, and adapt their language use through interaction, without additional annotation effort. In this talk, I will focus on these challenges and opportunities. First, I will describe our work on modeling dependencies between language meaning and interaction context when mapping natural language in interaction to executable code. In the second part of the talk, I will describe our work on language understanding and generation in collaborative interactions, focusing on continual learning from explicit and implicit user feedback.


Rutgers University School of Arts and Sciences

Contact: Yongfeng Zhang