PhD Defense

Neural Logic Reasoning and Applications


Friday, March 25, 2022, 03:00pm


Zoom Invitation Info:
Hanxiong Chen is inviting you to a scheduled Zoom meeting.

Topic: Hanxiong Chen's Ph.D. Defense
Time: Mar 25, 2022 03:00 PM Eastern Time (US and Canada)

Join Zoom Meeting


Meeting ID: 936 9883 6491
Password: 010437
One tap mobile
+13017158592,,93698836491# US (Washington DC)
+13126266799,,93698836491# US (Chicago)

Join By Phone
+1 301 715 8592 US (Washington DC)
+1 312 626 6799 US (Chicago)
+1 646 558 8656 US (New York)
+1 253 215 8782 US (Tacoma)
+1 346 248 7799 US (Houston)
+1 669 900 9128 US (San Jose)
Meeting ID: 936 9883 6491
Find your local number: https://rutgers.zoom.us/u/aehJyNHi8t

Speaker: Hanxiong Chen

Location: Via Zoom


Committee:

Dr. Yongfeng Zhang (Chair, Advisor)

Dr. He Zhu (Rutgers, Computer Science Dept.)

Dr. Hao Wang (Rutgers, Computer Science Dept.)

Dr. Qingyao Ai (External member from the University of Utah)

Event Type: PhD Defense

Abstract: Recent years have witnessed the success of deep neural networks in many research areas. The fundamental idea behind the design of most neural networks is to learn similarity patterns from data for prediction and inference, which lacks the capacity for cognitive reasoning. However, the ability to reason is critical to many theoretical and practical problems. On the other hand, traditional symbolic reasoning methods excel at logical inference, but they mostly rely on hard, rule-based reasoning, which limits their ability to generalize across tasks, since different tasks may require different rules. In this work, we propose a Neural Logic Reasoning (NLR) framework that integrates the power of deep learning and logical reasoning. NLR is a dynamic, modularized neural architecture that learns basic logical operations such as AND, OR, and NOT as neural modules, and conducts propositional logical reasoning through a logically structured network for inference. Experiments show that our approach achieves state-of-the-art performance in various application scenarios. Moreover, we employ a neural architecture search strategy that allows the model to learn adaptive logical neural architectures automatically, which brings flexibility to our framework.
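The core idea in the abstract — learning AND, OR, and NOT as neural modules and composing them along a propositional expression — can be sketched as follows. This is a minimal illustration, not the dissertation's actual model: the embedding dimension, the one-layer module design, and the random (untrained) weights are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # embedding dimension for logic variables (assumed for this sketch)

def make_module(in_dim, out_dim):
    """A tiny one-layer neural module with random, untrained weights.
    In the real framework these weights would be learned from data."""
    W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    b = np.zeros(out_dim)
    return lambda x: np.tanh(x @ W + b)

# Each logical operation is its own neural module (hypothetical minimal form):
AND_net = make_module(2 * DIM, DIM)  # maps two variable embeddings to one
OR_net  = make_module(2 * DIM, DIM)
NOT_net = make_module(DIM, DIM)

def l_and(a, b): return AND_net(np.concatenate([a, b]))
def l_or(a, b):  return OR_net(np.concatenate([a, b]))
def l_not(a):    return NOT_net(a)

# The network is assembled to mirror the structure of a logical expression,
# e.g. (a AND b) OR (NOT c):
a, b, c = (rng.normal(size=DIM) for _ in range(3))
expr = l_or(l_and(a, b), l_not(c))
print(expr.shape)  # (8,)
```

The key design point the abstract describes is that the same shared modules are reused wherever their operation appears, and the wiring between them is dictated by the logical structure of the input, which is what makes the architecture "dynamic" and, with architecture search, adaptable per task.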


Rutgers University School of Arts and Sciences

Contact: Yongfeng Zhang