This is a graduate course in supervised learning. The course will cover the theory and practice of methods and problems such as point estimation, naive Bayes, decision trees, nearest neighbors, linear classification and regression, kernel methods, learning theory, cross-validation and model selection, boosting, optimization, graphical models, semi-supervised learning, reinforcement learning, and deep nets.

There is no required textbook for the course; the lecture notes will serve as the primary reference. The following books are good sources for additional reading:

[link]

[link]

[link]

| Score | Grade |
|---|---|
| > 90 | A |
| [85, 90] | B+ |
| [80, 85] | B |
| [75, 80] | C+ |
| [70, 75] | C |
| < 70 | F |

All homeworks must be typeset in LaTeX and follow the [ACM formatting guidelines]. Homeworks must be submitted via Sakai by 6pm EST on the due date. **Late homeworks are not accepted.** Due to the large size of the class, we will not accommodate any requests for regrading; the TA's decision in all such matters is final. You are encouraged to discuss the homework problems, provided you have spent enough time (> 24 hours) thinking about the solution yourself. A discussion is meant to be a collaborative effort to help everyone involved understand the problem better; asking for solutions is not a collaborative effort. **In the end, you must write all the solutions in your own words.** You must also list the names and Rutgers IDs of the people with whom you discussed the homework problems. We will follow the [Rutgers academic policy on cheating].
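As a starting point, the skeleton below shows one way a homework submission might be set up with the `acmart` document class; this is a minimal sketch, and the class options and required metadata may differ from what the ACM formatting guidelines actually specify, so check them before submitting.

```latex
% Minimal homework skeleton (assumes the acmart class is installed).
% The sigconf option is one common choice; verify the required option
% against the ACM formatting guidelines linked above.
\documentclass[sigconf]{acmart}

\begin{document}

\title{Homework 1}
\author{Your Name}
\maketitle

\section{Problem 1}
Your solution, written in your own words.

\section{Collaborators}
Names and Rutgers IDs of students you discussed the problems with.

\end{document}
```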

You can use any programming language for the homework problems; it is your responsibility to make sure that your code runs without errors on the CS machines. You can use any programming environment for the final project as well. You can download Matlab to your computer from the university's [software portal]. Matlab is also installed on the CS machines; just type "matlab" at the prompt. Below are some useful tutorials:

- [Ben Taskar's Matlab Tutorial]
- [MIT Matlab Tutorial 1]
- [MIT Matlab Tutorial 2]