
Qualifying Exam

Generative Video Transformer: Can Objects be the Words?



Tuesday, December 13, 2022, 09:00am


Speaker: Yi-fu Wu

Location: Virtual (https://rutgers.zoom.us/j/96462372427?pwd=Tm5vUmxzcGdZZ1RaWGVWUEE0VDNwQT09)

Committee

Prof. Sungjin Ahn (Chair)

Prof. Hao Wang

Prof. Abdeslam Boularias

Prof. Yongfeng Zhang (external)

Event Type: Qualifying Exam

Abstract: Transformers have been successful for many natural language processing tasks. However, applying transformers to the video domain for tasks such as long-term video generation and scene understanding has remained elusive due to their high computational complexity and the lack of a natural tokenization. In this work, we propose the Object-Centric Video Transformer (OCVT), which uses an object-centric approach to decompose scenes into tokens suitable for a generative video transformer. By factoring the video into objects, our fully unsupervised model is able to learn the complex spatio-temporal dynamics of multiple interacting objects in a scene and generate future frames of the video. Our model is also significantly more memory-efficient than pixel-based models and can therefore be trained on videos up to 70 frames long on a single 48 GB GPU. We compare our model with previous RNN-based approaches as well as other possible video transformer baselines, and demonstrate that OCVT performs well against these baselines in generating future frames. OCVT also develops useful representations for video reasoning, achieving state-of-the-art performance on the CATER task.
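
To make the tokenization idea concrete, the sketch below shows one way per-frame object tokens can feed a causal transformer in PyTorch. It is only a minimal illustration under stated assumptions, not the authors' OCVT implementation: SlotEncoder here is a plain CNN stand-in for a real object-centric encoder (e.g., slot attention), and all names and sizes (OCVTSketch, num_slots, slot_dim) are hypothetical choices made for the example.

import torch
import torch.nn as nn

class SlotEncoder(nn.Module):
    # Stand-in encoder: maps each frame to num_slots object tokens.
    # A real object-centric model would use something like slot attention.
    def __init__(self, num_slots=4, slot_dim=64):
        super().__init__()
        self.num_slots, self.slot_dim = num_slots, slot_dim
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_slots = nn.Linear(64, num_slots * slot_dim)

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)  # (B*T, 64)
        return self.to_slots(feats).view(B, T, self.num_slots, self.slot_dim)

class OCVTSketch(nn.Module):
    # Causal transformer over the flattened (time x slot) token sequence.
    def __init__(self, num_slots=4, slot_dim=64, max_frames=70):
        super().__init__()
        self.encoder = SlotEncoder(num_slots, slot_dim)
        self.pos = nn.Parameter(torch.zeros(1, max_frames * num_slots, slot_dim))
        layer = nn.TransformerEncoderLayer(d_model=slot_dim, nhead=4,
                                           dim_feedforward=256, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(slot_dim, slot_dim)  # next-token prediction

    def forward(self, frames):
        slots = self.encoder(frames)  # (B, T, K, D)
        B, T, K, D = slots.shape
        tokens = slots.reshape(B, T * K, D) + self.pos[:, : T * K]
        # Causal mask: each object token attends only to earlier tokens,
        # so generation can roll the video forward one token at a time.
        mask = nn.Transformer.generate_square_subsequent_mask(T * K)
        return self.head(self.transformer(tokens, mask=mask))

# Smoke test: 2 videos, 5 frames of 64x64 RGB, 4 object tokens per frame.
model = OCVTSketch()
out = model(torch.randn(2, 5, 3, 64, 64))
print(out.shape)  # torch.Size([2, 20, 64]) -> 5 frames x 4 slots

In a full generative model, the predicted slot representations would be decoded back to pixels to render future frames; the head here simply returns next-token embeddings to keep the sketch short.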