PhD Defense
10/2/2018 12:00 pm

Optimization in Sparse Learning: from Convexity to Non-convexity

Bo Liu, Dept. of Computer Science

Defense Committee: Prof. Dimitris Metaxas (Chair), Prof. Vladimir Pavlovic, Prof. Ahmed Elgammal, Prof. Michael Katehakis (Management Science and Information Systems)


Powerful machine learning models and large-scale training data have driven the rapid adoption of AI methods in applications such as data science, computer vision, and natural language processing. The explosive growth in model complexity and training data scale creates an urgent need for highly efficient model training algorithms. Optimization for model training, a fundamental issue in machine learning, continues to receive extensive attention from both academia and industry.

In this presentation, I will introduce my research on the design and analysis of optimization algorithms for sparse model learning problems. The learning objectives include optimizing convex models with sparsity-inducing regularizers, as well as cardinality-constrained minimization, which is a non-convex problem. In addition to single-machine algorithms, I will also present my recent progress on communication-efficient distributed sparse model learning. The algorithm designed for each specific problem significantly improves model training efficiency compared to baseline algorithms.
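To make the two problem classes above concrete, the sketch below contrasts a standard convex formulation (L1-regularized least squares, solved here with proximal gradient / ISTA) against its cardinality-constrained non-convex counterpart (solved here with iterative hard thresholding). These are textbook baselines chosen for illustration, not the specific algorithms developed in the thesis; all function names are my own.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each coordinate toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, b, lam, step, iters=500):
    """Proximal gradient (ISTA) for the convex sparse-regularized problem:
        min_x 0.5*||Ax - b||^2 + lam*||x||_1
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the smooth loss
        x = soft_threshold(x - step * grad, step * lam)
    return x

def iht(A, b, k, step, iters=500):
    """Iterative hard thresholding for the cardinality-constrained
    (non-convex) problem:
        min_x 0.5*||Ax - b||^2   s.t.   ||x||_0 <= k
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = x - step * grad
        # Projection onto the L0 ball: keep only the k largest-magnitude entries.
        x[np.argsort(np.abs(x))[:-k]] = 0.0
    return x
```

The only structural difference between the two loops is the thresholding step: soft thresholding is the proximal map of the convex L1 penalty, while hard thresholding is the (non-convex) projection onto the set of k-sparse vectors. A typical step size is 1 over the squared spectral norm of A.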