Qualifying Exam: Enhancing Language Models with Logical Reasoning and Automatic Error Analysis
Thursday, April 06, 2023, 3:30pm - 5:00pm
Abstract: Language models have been widely adopted in natural language processing because they learn informative representations and implicitly store large amounts of knowledge. However, two problems have been identified: (1) they do not know what they know, i.e., they lack the ability to reason logically over expert knowledge, and (2) they do not know what they do not know, i.e., they provide answers even in low-confidence scenarios. To address these limitations, we propose two approaches. First, we infuse knowledge, and logical reasoning over that knowledge, into language models using a chain-of-logic framework that applies to all representation learning models on a commonsense knowledge graph link prediction task. Our experiments show that this approach improves all representation learning models on the ConceptNet-100k and WebChild-comparative datasets. Second, we design an automatic slice detection model that analyzes the types of features that incur errors and detects error-prone data points, and we propose a benchmark with 38 potentially error-incurring linguistic features. The model (1) excels at identifying error-prone data points, subsequently improving model performance, and (2) points out the error-incurring features in those data points. Our experiments show promising results on GLUE benchmark tasks and the Jigsaw detoxification task.
Speaker: Wenyue Hua
Location: CoRE 305
Committee:
Professor Yongfeng Zhang (Advisor)
Professor He Zhu
Professor Hao Wang
Professor Peng Zhang
Event Type: Qualifying Exam
Organization:
Rutgers University
School of Arts & Sciences
Department of Computer Science
Contact: Professor Yongfeng Zhang