Seminar

Recent Advances in Stochastic Gradient Methods: From Convex to Non-convex Optimization and Deep Learning

 

Wednesday, November 06, 2019, 11:00am

 

For many large-scale optimization and machine learning problems, first-order methods and their accelerated variants based on momentum have been a leading approach for computing low-to-medium accuracy solutions because of their cheap iterations and mild dependence on the problem dimension and data size. Even though the momentum-based accelerated gradient (AG) methods proposed by Nesterov for convex optimization converge provably faster than gradient descent (GD) in the absence of noise, the comparison is no longer clear when the gradients are stochastic, i.e., contaminated with random errors.

In the first part of the talk, we focus on stochastic gradient descent (SGD) and accelerated stochastic gradient (ASG) methods for convex optimization when the gradient has random errors. We study the trade-off between convergence rate and robustness to gradient errors in designing a first-order algorithm and provide a systematic way of trading off these two in an optimal fashion. Our results show that stochastic momentum methods can achieve acceleration while being more robust to random gradient errors. Our framework also leads to "optimal" algorithms that can perform better than other state-of-the-art methods in the presence of random gradient noise. We also discuss extensions of our results and algorithms to distributed convex optimization problems.
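To make the rate-versus-robustness trade-off concrete, here is a minimal, self-contained Python sketch (not the speaker's algorithm or analysis): it runs plain SGD and a heavy-ball momentum variant on a strongly convex quadratic with noisy gradients and reports the late-stage suboptimality. The quadratic, step size, momentum value, and noise level are assumptions chosen for illustration; with a fixed step size, momentum tends to speed up the initial transient but settle at a higher noise floor, which is the kind of trade-off the talk's framework quantifies and optimizes.

    # Illustrative toy experiment (assumed setup, not the talk's method):
    # compare plain SGD and heavy-ball momentum SGD on f(x) = 0.5 * x^T A x
    # when every gradient is corrupted by additive Gaussian noise.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 50
    eigs = np.linspace(0.1, 1.0, d)        # eigenvalues in [mu, L]
    A = np.diag(eigs)
    mu, L = eigs[0], eigs[-1]

    def run(momentum, steps=2000, sigma=0.1, lr=1.0 / L):
        """Heavy-ball iteration x_{k+1} = x_k - lr*g_k + momentum*(x_k - x_{k-1})."""
        x = np.ones(d)
        x_prev = x.copy()
        gaps = []
        for _ in range(steps):
            grad = A @ x + sigma * rng.standard_normal(d)      # noisy gradient
            x, x_prev = x - lr * grad + momentum * (x - x_prev), x
            gaps.append(0.5 * x @ A @ x)                       # f(x) - f(x*)
        return np.mean(gaps[-200:])                            # late-stage error

    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
    print("plain SGD    :", run(momentum=0.0))
    print("momentum SGD :", run(momentum=beta))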

In the second part of the talk, we focus on SGD for non-convex optimization and deep learning. The gradient noise (GN) in the SGD algorithm is often considered to be Gaussian in the large-data regime by assuming that the classical central limit theorem (CLT) kicks in. This assumption is often made for mathematical convenience, since it enables SGD to be analyzed as a stochastic differential equation (SDE) driven by a Brownian motion. We argue that the Gaussianity assumption might fail to hold in deep learning settings and hence render the Brownian motion-based analyses inappropriate. Inspired by non-Gaussian natural phenomena, we consider the GN in a more general context and invoke the generalized CLT (GCLT), which suggests that the GN converges to a heavy-tailed α-stable random variable. Accordingly, we propose to analyze SGD as an SDE driven by a Lévy motion. Such SDEs can incur 'jumps', which force the SDE to transition from narrow minima to wider minima, as proven by existing metastability theory. To validate the α-stable assumption, we conduct extensive experiments on common deep learning architectures and show that in all settings, the GN is highly non-Gaussian and admits heavy tails. We further investigate the tail behavior across varying network architectures and sizes, loss functions, and datasets. Our results open up a different perspective and shed more light on the belief that SGD prefers wide minima.
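As a rough illustration of how one can probe the tail behavior of gradient noise, the sketch below collects minibatch-gradient noise for a synthetic linear-regression loss and applies a Hill estimator of the tail index. The model, data, minibatch size, and the choice of the Hill estimator are assumptions made here for a self-contained example; the talk's experiments use deep networks and a tail-index estimator designed for α-stable distributions. The underlying question is the same: is the estimated index well below 2 (heavy-tailed) or not?

    # Illustrative sketch (assumed setup, not the talk's experiments):
    # estimate the tail index of SGD gradient noise with a Hill estimator.
    import numpy as np

    rng = np.random.default_rng(1)
    n, d, batch = 10_000, 20, 32
    X = rng.standard_normal((n, d))
    y = X @ rng.standard_normal(d) + rng.standard_normal(n)
    w = rng.standard_normal(d)                       # current iterate

    full_grad = X.T @ (X @ w - y) / n                # full-batch gradient

    def minibatch_grad():
        idx = rng.choice(n, size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ w - yb) / batch

    # Gradient-noise samples along one coordinate.
    noise = np.array([minibatch_grad()[0] - full_grad[0] for _ in range(5000)])

    def hill_tail_index(samples, k=200):
        """Hill estimator on the k largest magnitudes; values well below 2
        suggest heavy (alpha-stable-like) tails, larger values lighter tails."""
        a = np.sort(np.abs(samples))[::-1]           # descending order statistics
        return k / np.sum(np.log(a[:k]) - np.log(a[k]))

    print("estimated tail index:", hill_tail_index(noise))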

Speaker: Mert Gürbüzbalaban

Bio

Mert Gürbüzbalaban is an assistant professor at Rutgers University. Previously, he was a postdoctoral associate at the Laboratory for Information and Decision Systems (LIDS) at MIT. He is broadly interested in optimization and computational science driven by applications in large-scale information and decision systems. He received his B.Sc. degrees in Electrical Engineering and Mathematics as a valedictorian from Boğaziçi University, Istanbul, Turkey, the “Diplôme d’ingénieur” degree from École Polytechnique, France, and the M.S. and Ph.D. degrees in (Applied) Mathematics from the Courant Institute of Mathematical Sciences, New York University.

Dr. Gürbüzbalaban received the Kurt Friedrichs Prize (given by the Courant Institute of New York University for an outstanding thesis) in 2013, the Bronze Medal in the École Polytechnique Scientific Project Competition in 2006, the Nadir Orhan Bengisu Award (given by the Electrical-Electronics Engineering Department of Boğaziçi University to the best graduating undergraduate student) in 2005, and the Bülent Kerim Altay Award from the Electrical-Electronics Engineering Department of Middle East Technical University in 2001. He has received funding from a variety of sources, including multiple programs at the U.S. National Science Foundation. He is also a co-recipient of the Best Paper Runner-Up Award at the International Conference on Machine Learning in 2019.

Location: CoRE A 301

Committee

Rutgers/DIMACS Theory of Computing Seminar

Event Type: Seminar

Organization

Rutgers