CS Events
PhD Defense: Computational Learning Theory through a New Lens: Scalability, Uncertainty, Practicality, and Beyond
Wednesday, January 31, 2024, 12:10pm - 02:10pm
Speaker: Chen Wang
Location: CoRE 301
Committee:
Assistant Professor Sepehr Assadi (Advisor)
Assistant Professor Aaron Bernstein
Professor Jie Gao
Rajesh Jayaram (External)
Professor Qin Zhang (External)
Event Type: PhD Defense
Abstract: Computational learning theory studies the design and analysis of learning algorithms and is integral to the foundations of machine learning. In the modern era, classical computational learning theory is increasingly unable to keep up with new practical demands. In particular, problems arise in scalability to the input size, uncertainty in how the input is formed, and the discrepancy between theoretical and practical efficiency. There are several promising approaches to tackling these challenges. For scalability, we can consider learning algorithms in sublinear models, e.g., the streaming and sublinear-time models, that use resources substantially smaller than the input size. For uncertainty, we can resort to learning algorithms that naturally take noisy inputs, e.g., algorithms for multi-armed bandits (MABs). Finally, for practicality, we should design algorithms that strike a balance between theoretical guarantees and experimental performance.

In light of the above discussion, this talk covers results in three areas of study. In the first part, we present recent results on streaming multi-armed bandits, where the arms arrive one by one in a stream. We study the fundamental problems of pure exploration and regret minimization in this model and present optimal algorithms and lower bounds. In the second part, we discuss graph clustering problems in sublinear settings. We consider two important problems, correlation clustering and hierarchical clustering, and give various sublinear algorithms for them in the streaming, sublinear-time, and parallel computation settings. Finally, in the third part, we move to the more practically driven problems of differentially private (DP) range queries and weak-strong oracle learning. Both problems are motivated by practical industry settings, and we give near-optimal algorithms with strong experimental performance.
Contact: Assistant Professor Sepehr Assadi