Project Guidelines

For the course project you can choose a topic of your liking and read 2-3 related papers, work on an open problem, or do an empirical evaluation. In all cases your project must involve a significant theoretical component. There are two deliverables: an online blog post and a 30-minute in-class presentation. Below is a list of possible project topics. The list will be continuously updated.

Project Schedule

  • Dec 8: 12pm: Abhishek Bhrushundi
  • Dec 8: 12:30pm: Jeremy Bierema
  • Dec 8: 1pm: Aditi Dudeja
  • Dec 8: 1:30pm: Nathaniel Hobbs
  • Dec 8: 2pm: Mahyar Khayatkhoei
  • Dec 8: 2:30pm: Aditya Potukuchi
  • Dec 8: 3pm: Hareesh Ravi
  • Dec 8: 3:30pm: Pritish Sahu
  • Dec 12: 12pm: Vladimir Ivanov
  • Dec 12: 12:30pm: Neelesh Kumar
  • Dec 12: 1pm: Chaowei Tan

Open Problems

    Learning monotone DNFs: Let H be the class of monotone DNF formulas over n variables with poly(n) terms. It is an open question to design a PAC learning algorithm for this class under the uniform distribution. See this paper on learning random monotone DNFs for state-of-the-art results and related work: [Learning random monotone DNFs]
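
    A minimal sketch of the example oracle in this setting, assuming NumPy; the helper names (random_monotone_dnf, evaluate_dnf) are made up for illustration. A hidden monotone DNF labels uniformly random Boolean inputs, and the open question asks for a poly(n)-time learner from such samples.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def random_monotone_dnf(n, num_terms, term_size):
        """A monotone DNF: each term is a set of (un-negated) variable indices."""
        return [rng.choice(n, size=term_size, replace=False) for _ in range(num_terms)]

    def evaluate_dnf(terms, x):
        """x is a 0/1 vector; the DNF is 1 iff some term has all of its variables set to 1."""
        return int(any(x[t].all() for t in terms))

    # PAC examples under the uniform distribution: x ~ Uniform({0,1}^n), y = f(x).
    n, m = 50, 1000
    terms = random_monotone_dnf(n, num_terms=n, term_size=int(np.log2(n)))
    X = rng.integers(0, 2, size=(m, n))
    y = np.array([evaluate_dnf(terms, x) for x in X])
    ```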

    Learning sparse parities with noise: Let H be the class of all parity functions over k out of n Boolean variables, where k is much smaller than n. In the parity with noise problem, random examples are generated from the uniform distribution and labeled according to an unknown parity function. Each label is then flipped i.i.d. with probability \eta < 0.5. The goal is to learn the unknown parity function. The best known algorithm has runtime n^{0.8k}. Improving the 0.8 factor in the exponent would be very interesting. See this paper for state of the art: [Learning sparse parities with noise]
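
    As a concrete baseline, here is a small sketch, assuming NumPy, of the sampling model together with the naive enumeration over all C(n, k) candidate parities (the function names are invented here). The naive search takes roughly n^k time; the open problem is to get meaningfully below that exponent, as the cited n^{0.8k} algorithm already does.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)

    def noisy_parity_samples(n, S, eta, m):
        """Uniform x in {0,1}^n; label = XOR of the coordinates in S, flipped with prob. eta."""
        X = rng.integers(0, 2, size=(m, n))
        y = X[:, S].sum(axis=1) % 2
        flips = rng.random(m) < eta
        return X, (y ^ flips).astype(int)

    def brute_force_sparse_parity(X, y, k):
        """Try every size-k parity and keep the best empirical fit: ~n^k time."""
        n = X.shape[1]
        best, best_err = None, 1.0
        for S in combinations(range(n), k):
            err = np.mean((X[:, list(S)].sum(axis=1) % 2) != y)
            if err < best_err:
                best, best_err = S, err
        return best

    X, y = noisy_parity_samples(n=20, S=[3, 7, 11], eta=0.1, m=2000)
    print(brute_force_sparse_parity(X, y, k=3))  # should recover (3, 7, 11) w.h.p.
    ```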

    Attribute efficient learning: Let H be the class of length-r decision lists over n Boolean variables. We know how to PAC learn this class in poly(n) time. A big open question is to design a learning algorithm that is more efficient when the list is sparse and runs in time poly(r, log(n)). See this paper for state of the art and related work: [Attribute efficient learning](http://www.phillong.info/publications/LS06_attr_eff.pdf)
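
    To fix notation, here is a small sketch, assuming NumPy, of a length-r decision list over n variables and of how labeled examples are drawn (the helper name eval_decision_list is made up). Only r of the n attributes are relevant, which is the sparsity the open question asks learners to exploit.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def eval_decision_list(rules, default, x):
        """rules is a list of (index, literal_value, output); return the output of the
        first rule whose literal fires, else the default bit."""
        for i, v, out in rules:
            if x[i] == v:
                return out
        return default

    # A length-r list touching only r of the n variables (sparse when r << n).
    n, r, m = 1000, 5, 200
    idx = rng.choice(n, size=r, replace=False)
    rules = [(int(i), int(rng.integers(0, 2)), int(rng.integers(0, 2))) for i in idx]
    X = rng.integers(0, 2, size=(m, n))
    y = np.array([eval_decision_list(rules, 0, x) for x in X])
    # Known algorithms use poly(n) time and examples; the goal is poly(r, log n).
    ```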

    Matrix completion: Matrix completion is the problem of inferring an unknown low-rank matrix from a random set of its entries. Typically it is assumed that each entry of the matrix is revealed i.i.d. with some probability p. Two types of algorithms exist for this problem: convex programming based approaches and gradient descent algorithms. Now consider a model for matrix completion where each entry is revealed i.i.d. with probability p and then an adversary can decide to reveal additional entries to the algorithm. Intuitively, extra revealed entries should only help, and indeed the convex programming based approaches continue to work. However, it is unclear whether gradient descent still provably works or whether it can be arbitrarily bad. Investigate this question. This is also a good project for empirical evaluation.
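
    A minimal experimental sketch of this question, assuming NumPy; the particular adversarial augmentation below (revealing one full extra row) is just an arbitrary illustrative choice. It runs gradient descent on the factored objective with the i.i.d. mask and with the augmented mask and compares the recovery error.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ground-truth rank-r matrix and an i.i.d. mask with revelation probability p.
    d, r, p = 100, 5, 0.3
    M = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
    mask = rng.random((d, d)) < p

    # Adversarial augmentation (one arbitrary choice): also reveal all of row 0.
    adv_mask = mask.copy()
    adv_mask[0, :] = True

    def factored_gd(M, mask, r, lr=2e-3, steps=2000):
        """Gradient descent on ||mask * (U V^T - M)||_F^2 over the factors U, V."""
        d1, d2 = M.shape
        U = 0.1 * rng.standard_normal((d1, r))
        V = 0.1 * rng.standard_normal((d2, r))
        for _ in range(steps):
            R = mask * (U @ V.T - M)  # residual on observed entries only
            U, V = U - lr * (R @ V), V - lr * (R.T @ U)
        return U @ V.T

    for name, m_ in [("iid mask", mask), ("iid + adversarial", adv_mask)]:
        err = np.linalg.norm(factored_gd(M, m_, r) - M) / np.linalg.norm(M)
        print(name, err)
    ```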

    Non-convex optimization: A recent line of work shows that gradient descent + noise can reach local minima of non-convex functions without getting stuck at saddle points. Empirically and formally investigate the performance of gradient descent + momentum in the same setting.
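
    To make the comparison concrete, here is a tiny sketch, assuming NumPy; the toy objective and step sizes are arbitrary choices and not the setting analyzed in the papers. It contrasts gradient descent + noise with a noisy heavy-ball (momentum) variant on a function whose only critical points are a strict saddle at the origin and two minima at (+1, 0) and (-1, 0).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Gradient of f(x, y) = x^4/4 - x^2/2 + y^2/2: saddle at (0, 0), minima at (+-1, 0).
    def grad(w):
        x, y = w
        return np.array([x**3 - x, y])

    def perturbed_gd(w, lr=0.1, noise=1e-2, steps=500):
        """Gradient descent + isotropic noise, in the spirit of the escaping-saddles results."""
        for _ in range(steps):
            w = w - lr * grad(w) + noise * rng.standard_normal(2)
        return w

    def momentum_gd(w, lr=0.1, beta=0.9, noise=1e-2, steps=500):
        """Heavy-ball momentum with the same noise; the project asks how this behaves."""
        v = np.zeros_like(w)
        for _ in range(steps):
            v = beta * v - lr * grad(w) + noise * rng.standard_normal(2)
            w = w + v
        return w

    w0 = np.array([0.0, 0.0])  # start exactly at the saddle point
    print(perturbed_gd(w0), momentum_gd(w0))
    ```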


Possible Project Topics

  • Do a survey on the chaining method
    [See Chapter 5 of this book] [See Chapter 8 of this book]
  • Algorithms for stochastic multi-armed bandits
    [Survey by Bubeck and Cesa-Bianchi] [UCB algorithm]
  • Algorithms for online convex optimization in the bandit setting
    [paper by Bubeck, Eldan and Lee]
  • Deep Nets
    [hardness result] [hardness result] [structure of local minima]
  • Computational lower bounds
    [hardness of DNFs] [hardness of halfspaces]
  • Tensor methods in ML
    [Tensor methods for latent variable models] [Tensor methods for HMMs]
  • Adaptive data analysis
    [On holdout reuse]
  • Membership query learning
    [Learning decision trees] [Learning DNFs]
  • Learning graphical models
    [Structure learning of Ising models]
  • Hypothesis testing
    [Testing closeness of distributions]
  • Training Algorithms for Deep Nets
    [paper 1] [paper 2]
  • Generalization for Deep Nets
    [paper 1] [paper 2] [paper 3]