PhD Defense

Improving Inference and Generation Process of Generative Adversarial Networks


Tuesday, March 16, 2021, 05:00pm - 07:00pm


Speaker: Yu Tian

Location: Remote via Webex


Committee:

Prof. Dimitris Metaxas (Advisor)

Prof. Yongfeng Zhang

Prof. Hao Wang

Dr. Han Zhang (Google Brain)

Event Type: PhD Defense

Abstract: Generative Adversarial Networks (GANs) have achieved tremendous success in a broad range of Computer Vision applications. This thesis considers two directions closely related to GAN-based image synthesis: (i) GAN-based inference learning and (ii) GAN-based video synthesis. Both directions face many challenges in generation and inference. In GAN-based inference learning, application-driven methods usually outperform approaches with elegant theories. In GAN-based video synthesis, generation quality lags far behind that of contemporary image generators. To tackle these challenges, we conduct extensive studies on improving the inference and generation processes. First, we identify the "incomplete" representation issue in the existing single-pathway framework and propose a two-pathway approach to address it. Self-supervised learning is also employed to make use of both labeled and unlabeled data. Second, as a theoretical extension of the previous work, we analyze three issues that degrade both generation and inference in GAN-based inference learning approaches. To address all three issues in a unified framework, we take the single-pathway approach as a baseline and propose two strategies that solve them. Third, we present a framework that leverages contemporary image generators to render high-resolution videos. We frame video synthesis as discovering a trajectory in the latent space of a pre-trained, fixed image generator. Furthermore, we introduce a new task, cross-domain video synthesis, which allows generating moving objects for which the desired video data is not available. Extensive experiments on various datasets demonstrate the advantages of our methods over existing techniques.
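The video-synthesis idea in the abstract — rendering a video by walking a trajectory through the latent space of a frozen image generator — can be sketched as follows. This is a minimal illustration, not the thesis's actual method: the `generator` function is a toy stand-in for a pre-trained GAN generator, and a simple linear interpolation stands in for the learned latent trajectory.

```python
import numpy as np

def generator(z):
    # Toy stand-in for a pre-trained, *fixed* image generator G(z).
    # A real GAN generator would map z to a high-resolution image;
    # here we deterministically produce a tiny 8x8 "frame".
    return np.tanh(np.outer(z, z)[:8, :8])

def latent_trajectory(z_start, z_end, num_frames):
    # A trajectory through latent space. Real methods discover/learn a
    # motion path; linear interpolation is the simplest placeholder.
    ts = np.linspace(0.0, 1.0, num_frames)
    return [(1.0 - t) * z_start + t * z_end for t in ts]

def synthesize_video(z_start, z_end, num_frames=16):
    # The generator stays frozen: each frame is just G evaluated at a
    # point along the latent trajectory.
    return [generator(z) for z in latent_trajectory(z_start, z_end, num_frames)]

rng = np.random.default_rng(0)
z0 = rng.normal(size=32)
z1 = rng.normal(size=32)
frames = synthesize_video(z0, z1, num_frames=16)
```

Because the generator itself is never retrained, video quality inherits directly from the image model, which is the motivation the abstract gives for reusing contemporary image generators.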


Join from the meeting link

Join by meeting number
Meeting number (access code): 182 073 5787