Pre-Defense
10/19/2015 02:00 pm
CBIM 22

Learning Kernels for Structured Prediction using Polynomial Kernel Transformations

Chetan Tonde, Rutgers University

Defense Committee: Ahmed Elgammal (chair), Pranjal Awasthi and Tina Eliassi-Rad

Abstract

A key challenge in machine learning is to automatically extract relevant feature representations for a given task. This becomes especially formidable for data such as images, which are often highly structured and complex. In this work, we focus on learning good representations of such structured data.

In the core part of the dissertation, we look at the problem of finding a good kernel feature representation for structured prediction. We present a novel framework, Simultaneous Twin Kernel Learning (STKL), which uses polynomial expansions of kernels to learn kernels over structured input and output data simultaneously. We also propose an efficient matrix-decomposition-based algorithm for learning these expansions. We apply this approach to learning the covariance function of Twin Gaussian Processes and demonstrate state-of-the-art experimental results on synthetic and several real-world datasets.
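
The abstract leaves the exact form of these expansions to the talk; as a minimal sketch of the underlying idea only, assume the learned kernel is a polynomial in a base kernel, applied elementwise with nonnegative coefficients, which preserves positive semidefiniteness by the Schur product theorem. The names rbf_kernel, polynomial_kernel_transform, and alphas below are hypothetical and not taken from STKL; in STKL the coefficients would be learned rather than fixed.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Base RBF kernel matrix between the rows of X and Y.
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

def polynomial_kernel_transform(K, alphas):
    # Elementwise polynomial expansion: sum_d alphas[d] * K**d.
    # Nonnegative alphas keep the result a valid (PSD) kernel,
    # since Hadamard powers of PSD matrices are PSD.
    K_new = np.zeros_like(K)
    for d, a in enumerate(alphas):
        K_new += a * K**d  # K**0 is the all-ones matrix
    return K_new

# Illustrative usage with fixed (not learned) coefficients.
X = np.random.randn(50, 5)
K = rbf_kernel(X, X)
K_poly = polynomial_kernel_transform(K, [0.1, 0.5, 0.3, 0.1])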

In the latter part of this work, I will present a framework that learns explicit feature representations of input data for supervised dimensionality reduction (SDR). We propose learning features that maximize the non-linear statistical distance correlation between the learned features and the labels. Here too, the learned features achieve state-of-the-art results on multiple datasets.
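
The statistic referenced here, distance correlation (Székely, Rizzo, and Bakirov, 2007), has a compact sample estimator; the sketch below computes it in NumPy from double-centered pairwise Euclidean distance matrices. The function names are illustrative, and the feature-learning optimization that maximizes this quantity is not shown.

import numpy as np

def _centered_distance_matrix(Z):
    # Pairwise Euclidean distances, then double-centering
    # (subtract row and column means, add back the grand mean).
    D = np.sqrt(np.sum((Z[:, None, :] - Z[None, :, :])**2, axis=-1))
    return (D - D.mean(axis=0, keepdims=True)
              - D.mean(axis=1, keepdims=True)
              + D.mean())

def distance_correlation(X, Y):
    # Sample distance correlation between the rows of X and Y.
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    if X.ndim == 1:
        X = X[:, None]
    if Y.ndim == 1:
        Y = Y[:, None]  # e.g. a vector of labels
    A = _centered_distance_matrix(X)
    B = _centered_distance_matrix(Y)
    dcov2 = max((A * B).mean(), 0.0)   # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0.0 else np.sqrt(dcov2 / denom)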