
PhD Defense

Scenario Generalization and its Estimation in Data-driven Decentralized Crowd Modeling



Wednesday, December 08, 2021, 10:00am - 12:00pm


Speaker: Gang Qiao

Location: Via Zoom


Committee:

Prof. Petros Faloutsos (York University)

Prof. Yongfeng Zhang

Prof. Mubbasir Kapadia

Prof. Vladimir Pavlovic (chair)

Event Type: PhD Defense

Abstract: In the context of crowd modeling, we propose the notion of scenario generalization, a macroscopic view of the performance of a decentralized crowd model. Based on this notion, we first ask how a training paradigm and a training domain (source) affect the scenario generalization of an imitation learning model when it is applied to a different test domain (target). We evaluate the exact scenario generalization of models built from combinations of imitation learning paradigms and source domains. Our empirical results suggest that (i) Behavior Cloning (BC) generalizes better than Generative Adversarial Imitation Learning (GAIL), and (ii) training samples from a source domain with diverse agent-agent and agent-obstacle interactions help reduce collisions when models are generalized to new scenarios. Second, we note that although exact evaluation of scenario generalization is accurate, it requires training and evaluation on large datasets, together with complex model selection and parameter tuning. To circumvent this challenge, we propose an information-theoretically inspired approach that estimates scenario generalization without training by characterizing both the source and the target domains. It computes an Interaction Score (IS) that captures the task-level inter-agent interaction difficulty of the target scenario domain. When augmented with Diversity Quantification (DQ) on the source, the combined ISDQ score offers a means of estimating the source-to-target generalization of potential models. Various experiments verify the efficacy of ISDQ in estimating scenario generalization, compared against the exact scenario generalization of models trained with imitation learning paradigms (BC, GAIL) and a reinforcement learning paradigm (Proximal Policy Optimization, PPO). ISDQ thus enables rapid selection of the best source-target domain pair among multiple candidates before the actual crowd model is trained and tested.
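The abstract positions ISDQ as a way to rank candidate source domains for a given target before any model is trained. The sketch below is a hypothetical illustration of that selection step only; interaction_score, diversity_quantification, and the product used to combine them are placeholder assumptions, not the estimators defined in the thesis.

```python
# Hypothetical sketch of ISDQ-style source selection (not the thesis code).
# The abstract states that a source-diversity measure (DQ) combined with a
# target interaction-difficulty measure (IS) can rank source-target pairs
# before any training; the functions below are placeholders for those
# estimators.

def interaction_score(target_domain):
    """Placeholder: estimate task-level inter-agent interaction difficulty
    of the target scenario domain."""
    raise NotImplementedError

def diversity_quantification(source_domain):
    """Placeholder: quantify agent-agent / agent-obstacle interaction
    diversity of the source training data."""
    raise NotImplementedError

def isdq(source_domain, target_domain):
    """Combine source diversity with target difficulty into one score.
    A simple product is used here purely as an illustration; the actual
    combination rule is defined in the thesis."""
    return diversity_quantification(source_domain) * interaction_score(target_domain)

def select_source(source_domains, target_domain):
    """Pick the candidate source domain with the highest ISDQ score for
    the given target, without training any crowd model."""
    return max(source_domains, key=lambda s: isdq(s, target_domain))
```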


Rutgers University School of Arts and Sciences

Contact: Prof. Vladimir Pavlovic (chair)

Join Zoom Meeting
Meeting ID: 434 700 3519
Password: 709087
One tap mobile
+13017158592,,4347003519# US (Washington DC)
+13126266799,,4347003519# US (Chicago)
Join By Phone
+1 301 715 8592 US (Washington DC)
+1 312 626 6799 US (Chicago)
+1 646 558 8656 US (New York)
+1 253 215 8782 US (Tacoma)
+1 346 248 7799 US (Houston)
+1 669 900 9128 US (San Jose)
Meeting ID: 434 700 3519