PhD Defense
8/25/2016 08:30 am
CBIM 22

Generalized Distributed Learning Under Uncertainty for Camera Networks

Sejong Yoon, Dept. of Computer Science

Defense Committee: Prof. Vladimir Pavlovic (chair), Prof. Dimitris Metaxas, Prof. Mubbasir Kapadia, Prof. Norman I. Badler (University of Pennsylvania)

Abstract

Consensus-based distributed learning is a machine learning problem in which local learning models are reconciled to reach a consensus that achieves a global objective. It has attracted increasing interest due to its applications in sensor networks. Distributed learning offers many benefits over traditional centralized learning, such as faster computation and reduced communication cost. In this dissertation, we focus on the fact that distributed learning can be performed in a fully decentralized way, which sets it apart from parallel computing approaches.

First, we propose a general distributed probabilistic learning framework based on distributed optimization using the Alternating Direction Method of Multipliers (ADMM). We show that it can be applied to computer vision algorithms that have traditionally assumed a centralized computational setting. We demonstrate that our probabilistic interpretation of the decentralized processing is useful for handling missing values, which are not explicitly addressed in prior work. We provide empirical evaluations on a computer vision problem, distributed affine structure from motion (SfM).
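To make the setting concrete, the sketch below shows a generic consensus ADMM iteration in which each node fits its own parameter to local data while all nodes are driven toward a shared global estimate. It uses a simple least-squares loss as a stand-in for each node's probabilistic model and a centralized averaging step for clarity; it is an illustration of the general technique, not the dissertation's exact distributed probabilistic PCA updates (in a fully decentralized network the averaging would be replaced by neighbor-to-neighbor message passing).

```python
import numpy as np

def consensus_admm(local_data, rho=1.0, n_iters=100):
    """Generic consensus ADMM: each node i fits a parameter x_i to its own
    data (A_i, b_i) while all nodes are pulled toward a common estimate z.
    The least-squares local loss is a placeholder for a local model."""
    n_nodes = len(local_data)
    dim = local_data[0][0].shape[1]
    x = [np.zeros(dim) for _ in range(n_nodes)]
    u = [np.zeros(dim) for _ in range(n_nodes)]   # scaled dual variables
    z = np.zeros(dim)                             # global consensus variable

    for _ in range(n_iters):
        # Local updates: each node solves its own regularized subproblem.
        for i, (A, b) in enumerate(local_data):
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(dim),
                                   A.T @ b + rho * (z - u[i]))
        # Global update: average of local estimates plus duals.
        z = np.mean([x[i] + u[i] for i in range(n_nodes)], axis=0)
        # Dual updates: accumulate each node's consensus violation.
        for i in range(n_nodes):
            u[i] += x[i] - z
    return z
```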

Second, we propose two useful extensions of the distributed probabilistic learning framework. We first extend the framework so that it can incrementally update the learned model in an online fashion, using a Bayesian inference model based on Bregman ADMM (B-ADMM). Next, we show that distributed learning tasks can be carried out more quickly by introducing adaptive update strategies into the underlying ADMM optimization. By adaptively balancing the primal and dual residuals of ADMM, we achieve faster empirical convergence in a fully decentralized setting without limiting the range of applications of ADMM-based optimization.
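As background for the residual-balancing idea, the sketch below shows the standard heuristic for adapting the ADMM penalty parameter based on the relative sizes of the primal and dual residuals (with the commonly used constants mu and tau). It illustrates the general mechanism the adaptive strategy builds on; it is not the dissertation's exact update rule.

```python
import numpy as np

def update_penalty(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    """Residual-balancing heuristic for the ADMM penalty parameter rho.
    If the primal residual dominates, increase rho to enforce consensus
    more strongly; if the dual residual dominates, decrease rho."""
    r = np.linalg.norm(primal_res)
    s = np.linalg.norm(dual_res)
    if r > mu * s:
        return rho * tau   # primal residual too large: tighten consensus
    if s > mu * r:
        return rho / tau   # dual residual too large: relax consensus
    return rho             # residuals balanced: keep rho unchanged
```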

Finally, we introduce a potential application of consensus-based distributed optimization to the human trajectory estimation problem. We formulate trajectory estimation as a global optimization problem with constraints encoding prior knowledge about conditions that are allowed or forbidden in real-world situations. We show that our method can effectively recover trajectories from the noisy, corrupted output of off-the-shelf human trackers, which could assist human crowd analysis and simulation.
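To illustrate the flavor of such a constrained formulation, the sketch below smooths a noisy 2-D trajectory from a tracker while keeping it outside a hypothetical forbidden circular region. The quadratic fidelity-plus-smoothness objective, the circular constraint, and the SLSQP solver are all illustrative assumptions, not the dissertation's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_trajectory(observed, forbidden_center, forbidden_radius, lam=1.0):
    """Estimate a smooth 2-D trajectory close to noisy tracker output while
    staying outside a hypothetical forbidden circular region.
    observed: (T, 2) array of noisy positions from an off-the-shelf tracker."""
    T = observed.shape[0]

    def objective(x):
        traj = x.reshape(T, 2)
        fidelity = np.sum((traj - observed) ** 2)        # stay near the data
        smoothness = np.sum(np.diff(traj, axis=0) ** 2)  # penalize jerky motion
        return fidelity + lam * smoothness

    def outside_forbidden(x):
        traj = x.reshape(T, 2)
        # >= 0 for every point that lies outside the forbidden circle
        return np.sum((traj - forbidden_center) ** 2, axis=1) - forbidden_radius ** 2

    result = minimize(objective, observed.ravel(), method="SLSQP",
                      constraints=[{"type": "ineq", "fun": outside_forbidden}])
    return result.x.reshape(T, 2)
```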