Powerful machine learning models and large-scale training data have driven the rapid adoption of AI methods in applications such as data science, computer vision, and natural language processing. The explosive growth in model complexity and training data scale creates an urgent need for highly efficient and stable model training algorithms. Optimization for model training, a fundamental problem in machine learning, continues to receive extensive attention from both academia and industry.
In this talk, I will introduce my work on the design and analysis of optimization algorithms for sparse model learning. The learning objectives include convex models with sparsity-inducing regularizers as well as l0-constrained minimization, which is a non-convex problem. Beyond single-machine algorithms, I will also present my recent progress on communication-efficient distributed sparse model learning. For each specific problem, the designed algorithm significantly improves model training efficiency over baseline algorithms.
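To make the two problem classes concrete, below is a minimal sketch of two textbook baselines, not the algorithms presented in the talk: proximal gradient descent (ISTA) for the l1-regularized convex case, and iterative hard thresholding (IHT) for the l0-constrained non-convex case. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm: shrinks each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(A, b, lam, step, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x

def iht(A, b, k, step, iters=500):
    """Iterative hard thresholding for min_x 0.5*||Ax - b||^2 s.t. ||x||_0 <= k."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - b)    # plain gradient step
        idx = np.argsort(np.abs(x))[:-k]    # all but the k largest-magnitude entries
        x[idx] = 0.0                        # hard-threshold them to zero
    return x
```

Both methods alternate a gradient step on the smooth loss with a projection-like step that enforces sparsity: soft thresholding is the exact proximal map of the l1 penalty, while hard thresholding projects onto the non-convex set of k-sparse vectors.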