Qualifying Exam
2/18/2014 12:00 pm
CBIM Multipurpose Room (Room 22)

Simultaneous Twin Kernel Learning for Structured Prediction

Chetan Tonde, Rutgers University

Examination Committee: Prof. Ahmed Elgammal (Advisor), Prof. Swastik Kopparty, Prof. Tina Eliassi-Rad and Prof. Amélie Marian


Many learning problems in computer vision can be posed as structured prediction problems, where the input and output instances are trees, graphs, or strings, among others, rather than single labels $\{+1,-1\}$ or real values $\mathbb{R}$. Positive definite kernel functions allow us to empirically capture the similarity between a pair of instances over these arbitrary domains. Kernel methods such as Structured Support Vector Machines, Twin Gaussian Processes (TGPs), and vector-valued RKHS methods, among others, offer a powerful way to perform inference and prediction over these domains. Appropriate selection of the kernel function, which implicitly defines the feature space of an algorithm, plays a crucial role in the success of these methods; a poor choice of kernel often results in poor system performance. Automatic kernel-selection methods based on the input data have been developed, but they have focused only on kernels over the input domain ('one-way' kernel learning). In this work, we propose a novel and efficient algorithm for learning kernel functions on both the input and output domains simultaneously, which we call Twin Kernel Learning (TKL). The technique is general in the sense that it can learn arbitrary continuous kernel functions, and it includes 'one-way' kernel learning as a special case. For illustration, we formulate this problem as learning the covariance kernels of TGPs for structured prediction and compare empirically against results where no kernel learning is performed. Our experimental evaluation on synthetic and several real-world datasets demonstrates consistent improvement in the performance of TGPs with the learned kernels.
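As a rough illustration of the TGP idea the abstract builds on, the sketch below implements a heavily simplified, candidate-restricted variant of Twin Gaussian Process prediction with fixed RBF kernels: the predicted output is the training output whose output-kernel similarities best match the query's input-kernel similarities. This is an assumption-laden toy (the function names, the restriction to training outputs as candidates, and all parameter values are illustrative), not the talk's method, which additionally learns the two kernels.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel -- one common positive definite choice."""
    return float(np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2)))

def tgp_predict(X, Y, x_star, gamma_x=1.0, gamma_y=1.0, lam=1e-3):
    """Candidate-restricted Twin-GP-style prediction (illustrative only):
    score each training output y by how well its output-kernel similarities
    align with the query's input-kernel similarities, and return the best."""
    n = len(X)
    KX = np.array([[rbf(X[i], X[j], gamma_x) for j in range(n)] for i in range(n)])
    kx = np.array([rbf(xi, x_star, gamma_x) for xi in X])
    u = np.linalg.solve(KX + lam * np.eye(n), kx)   # regularized input weights

    def score(y):
        ky = np.array([rbf(y, yj, gamma_y) for yj in Y])
        return rbf(y, y, gamma_y) - 2.0 * ky @ u    # lower is better

    return min(Y, key=score)

# Toy 1-D demo: outputs follow y = 2x, so a query near x = 1 should
# select the training output y = [2.0].
X = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
Y = [np.array([0.0]), np.array([2.0]), np.array([4.0])]
print(tgp_predict(X, Y, np.array([1.1])))  # -> [2.]
```

In the full TGP formulation the prediction is an optimization over the whole output space rather than over training outputs, and the kernel parameters here are exactly what the proposed Twin Kernel Learning would tune on both domains instead of fixing by hand.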