Our society is inundated by massive amounts of high-dimensional data in the form of images, videos, and documents. Automatic analysis of such data is crucial, yet fundamentally difficult due to the "curse of dimensionality". However, my research has shown that problem transformations can greatly facilitate such learning. I will discuss two types of transformations: one that applies to the objective function of a given learning problem, and the other directly to the data.
1. OBJECTIVE: Sophisticated models such as deep networks involve high-dimensional nonconvex optimization, which is generally intractable. However, I have developed a theory that yields good solutions efficiently for smooth, nonconvex problems. The main idea is to simplify the objective function by evolving it under the diffusion (heat) equation. I will present applications of this theory to improving the speed and accuracy of deep learning and image alignment.
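The objective transformation can be sketched as graduated (coarse-to-fine) optimization with Gaussian smoothing, a standard discretization of evolving the objective under the heat equation. Everything below (function names, the test objective, the schedule, and the step sizes) is illustrative, not taken from the talk:

```python
import numpy as np

def smoothed_grad(f, x, sigma, n=4000, rng=None):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed
    objective f_sigma(x) = E_z[f(x + sigma*z)], z ~ N(0,1).
    Subtracting f(x) as a baseline keeps the estimator's variance low."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(n)
    return np.mean((f(x + sigma * z) - f(x)) * z) / sigma

def graduated_descent(f, x0, sigmas=(2.0, 1.0, 0.5, 0.2, 0.05),
                      lr=0.1, steps=100, seed=0):
    """Descend on successively less-smoothed objectives, coarse to fine:
    large sigma washes out spurious local minima; shrinking sigma tracks
    the minimizer back to the original landscape."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    for sigma in sigmas:
        for _ in range(steps):
            x -= lr * smoothed_grad(f, x, sigma, rng=rng)
    return x

# A nonconvex toy objective with many local minima; its global minimum
# lies near x ~ 2.35, far from the starting point x0 = -3.
f = lambda x: (x - 2.0) ** 2 / 4.0 + np.cos(4.0 * x)
x_star = graduated_descent(f, x0=-3.0)
```

Plain gradient descent from the same starting point would stall in one of the cosine ripples; the coarse-to-fine schedule first follows the smoothed (essentially quadratic) landscape toward the global basin, then refines within it.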
2. DATA: Sparse and low-rank structures are prevalent in high-dimensional data and can make learning tractable. However, these structures are often obscured by interfering processes. I will discuss scenarios where transformations can undo the interfering effects and unveil the underlying parsimony in the data. I will present my work on image segmentation and 3D reconstruction, which builds on this idea and has led to state-of-the-art results.
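A toy illustration of the data transformation idea: here the "interfering process" is an unknown cyclic shift of each row (a crude stand-in for misalignment or deformation), which hides the low-rank structure of a rank-1 matrix; undoing the shifts restores it. The shift model and all names are assumptions for illustration, not the method from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
# A rank-1 "texture": the outer product of two smooth vectors.
M = np.outer(np.sin(np.linspace(0, 3, 40)), np.cos(np.linspace(0, 5, 60)))

# Interference: an unknown cyclic shift applied to each row.
shifts = rng.permutation(60)[:40]
D = np.stack([np.roll(row, s) for row, s in zip(M, shifts)])

def numrank(A, tol=1e-6):
    """Numerical rank: count singular values above a relative threshold."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

r_corrupted = numrank(D)   # misalignment inflates the rank well above 1
A = np.stack([np.roll(row, -s) for row, s in zip(D, shifts)])
r_aligned = numrank(A)     # undoing the transformation restores rank 1
```

In practice the interfering transformation is unknown and must be estimated jointly with the low-rank structure; the sketch only shows why recovering it pays off, since the aligned matrix is dramatically more parsimonious than the observed one.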
I will conclude by describing my future plans for exploring a broader set of applications. Specifically, I will discuss some problems in NLP, robotics, and biocomputing that may benefit from the theories and algorithms I have developed for high-dimensional learning.