
Computer Science Department Colloquium: Optimization in Sparse Learning: from Convexity to Non-convexity

Abstract: 

Powerful machine learning models and large-scale training data have driven the rapid adoption of AI methods in applications such as data science, computer vision, and natural language processing. The explosive growth in model complexity and training data scale creates an urgent need for highly efficient and stable model training algorithms. Optimization for model training, a fundamental problem in machine learning, continues to receive extensive attention from both academia and industry.

In this talk, I will introduce my work on the design and analysis of optimization algorithms for sparse model learning. The learning objectives include convex models with sparsity-inducing regularizers as well as l0-constrained minimization, which is a non-convex problem. Beyond single-machine algorithms, I will also present my recent progress on communication-efficient distributed sparse model learning. The algorithms designed for each specific problem significantly improve model training efficiency over baseline algorithms.
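For context, l0-constrained minimization of the kind mentioned above is commonly illustrated with iterative hard thresholding, i.e., projected gradient descent with a projection onto the l0 ball. The sketch below is a minimal, generic example of that idea applied to a least-squares loss; it is not the speaker's algorithm, and the function name, step-size choice, and parameters are illustrative assumptions.

```python
import numpy as np

def iterative_hard_thresholding(A, y, k, step=None, n_iters=200):
    """Approximately solve min_x ||Ax - y||^2 s.t. ||x||_0 <= k.

    Projected gradient descent where the projection onto the l0 ball
    keeps only the k largest-magnitude entries of the iterate.
    """
    n = A.shape[1]
    if step is None:
        # Conservative step size: 1 / sigma_max(A)^2, the Lipschitz
        # constant of the least-squares gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)   # gradient of the least-squares loss
        z = x - step * grad        # plain gradient step
        # Hard thresholding: zero out all but the k largest-magnitude entries.
        idx = np.argsort(np.abs(z))[:-k]
        z[idx] = 0.0
        x = z
    return x

# Tiny synthetic sparse-recovery example.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
y = A @ x_true
x_hat = iterative_hard_thresholding(A, y, k=3)
print(np.nonzero(x_hat)[0])  # recovered support, ideally {3, 17, 42}
```

The hard-thresholding projection is what makes the problem non-convex: unlike an l1 (convex) regularizer, the l0 constraint set is non-convex, so the projected gradient iteration is a heuristic whose analysis requires different tools than standard convex optimization.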

Speaker: 
Bo Liu
Location: 
CBIM 22
Event Date: 
06/14/2018 - 2:00pm
Committee: 
Prof. Dimitris Metaxas (Chair), Prof. Vladimir Pavlovic, Prof. Ahmed Elgammal, Prof. Michael Katehakis (Rutgers, Mgmt Science & Info Systems)
Event Type: 
Computer Science Department Colloquium
Organization: 
Dept. of Computer Science