Qualifying Exam

Learning Latent Representation: A Key to Domain Adaptation and Disentangling

 

Friday, June 14, 2019, 09:30am

 

Abstract:

Variational autoencoders (VAEs) learn compressed representations of their input. These compressed representations carry semantic meaning and can be used in tasks such as supervised learning, reinforcement learning, transfer learning, and zero-shot learning. We assume that the data have been generated from a fixed number of independent factors of variation and aim to learn a representation in which a change in one dimension corresponds to a change in one factor of variation while remaining relatively invariant to changes in the other factors. I will present our proposed method for tackling these problems. In our method, we learn a disentangled representation by choosing a latent dimension large enough to capture the major factors along with minor nuances and noise. By introducing relevance indicator variables, the learning loss (e.g., total correlation loss) attends to the relevant disentangled factors by tolerating large divergence of these factors from the priors specified a priori in the nuisance model, while simultaneously identifying the noise factors as those with small divergence from the same nuisance priors.
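The core intuition above can be sketched in a few lines of numpy: for a diagonal-Gaussian VAE posterior, the per-dimension KL divergence from a standard-normal nuisance prior separates relevant factors (large divergence) from noise dimensions (small divergence). This is a minimal illustration, not the paper's actual loss; the function names, the standard-normal prior, and the threshold value are illustrative assumptions.

```python
import numpy as np

def gaussian_kl_per_dim(mu, logvar):
    """Per-dimension KL( N(mu, sigma^2) || N(0, 1) ): divergence of each
    latent factor's posterior from a standard-normal nuisance prior."""
    return 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def relevance_indicators(mu, logvar, threshold=0.1):
    """Flag a latent dimension as a relevant factor when its batch-averaged
    prior divergence is large; small divergence marks it as noise."""
    kl = gaussian_kl_per_dim(mu, logvar).mean(axis=0)  # average over the batch
    return kl > threshold, kl
```

For example, a batch whose posterior means deviate from zero in only the first latent dimension yields a relevance mask that is true only for that dimension.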

For multi-source data where the generating process differs across sources, a supervised learner trained in one domain will perform poorly when applied to another. Domain adaptation (DA) addresses this problem by transferring knowledge from a label-rich domain (the source domain) to a label-scarce domain or to data drawn from a different distribution (the target domain). In unsupervised domain adaptation specifically, the labels of the target domain are not available during learning. In this work, we propose a way to achieve hypothesis consistency using a Gaussian Process (GP). The GP allows us to induce a hypothesis space of classifiers from the posterior distribution of the latent random functions, turning learning into a problem that is significantly easier to solve than previous approaches based on adversarial min-max optimization.
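The idea of inducing a hypothesis space from a GP posterior can be sketched as follows, under simplifying assumptions: plain GP regression with an RBF kernel stands in for the talk's full classification model, and all names and hyperparameters here are illustrative. Samples from the posterior at unlabeled target points act as candidate classifiers, and low posterior variance indicates that these hypotheses agree (are consistent).

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X_src, y_src, X_tgt, noise=1e-2):
    """Posterior mean and covariance of a GP fit on labeled source data,
    evaluated at unlabeled target points (GP regression used here as a
    simplified stand-in for GP classification)."""
    K = rbf_kernel(X_src, X_src) + noise * np.eye(len(X_src))
    Ks = rbf_kernel(X_tgt, X_src)
    Kss = rbf_kernel(X_tgt, X_tgt)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_src))
    mean = Ks @ alpha            # consensus prediction of the hypothesis space
    v = np.linalg.solve(L, Ks.T)
    cov = Kss - v.T @ v          # low variance = consistent hypotheses
    return mean, cov

# Each draw from N(mean, cov) is one classifier sampled from the induced
# hypothesis space; agreement among draws measures hypothesis consistency.
```

On a target point that coincides with a labeled source point, the posterior mean recovers the source label and the posterior variance is small, i.e. the sampled hypotheses agree there.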

Speaker: Pritish Sahu

Location: CoRE A (301)

Committee

Prof. Vladimir Pavlovic (Chair), Prof. Dimitri Metaxas, Prof. Yongfeng Zhang, Prof. Swastik Kopparty

Event Type: Qualifying Exam

Organization

Rutgers