Main Page Contents
About this course
Schedule
News
Lectures
Materials
Links
Course People
Matthew Stone
Aynur Dayanik

Schedule
Class
Tuesday/Thursday 4:30-5:50, Core 301A
Office Hours
MS: Thursday 10-12 (Core 328).
AD: Tuesday 2-4 (Core 333).
News
 Tuesday, Dec 16, 4:00 pm
Final, Hill 254.
The exam will be 90 minutes.
Those with another 6pm exam may begin at 3:15pm.
 Dec 11
Selected answers to sample problems (no pictures).
 Dec 4
Sample problems.
 Nov 20
Additional readings will soon be available in the Math
library and on electronic reserve.
 Nov 13
Homework 3 corrected. A numerical change will
make answering 1h easier.
 Nov 6
Homework 3 out, due Nov 18.
Final paper out, due Dec 9.
 The midterm, Oct 21
will take place back in Hill 254.
 Oct 2
Homework 2 out, due Oct 14.
 Sep 30
A reminder: hand in your homework 1 program electronically
by email to mdstone@cs.
 Sep 11
Quick clarification on correlation, as used by Paskin to
describe the SLAM problem. Correlation can be
understood intuitively as the interdependence of variable
quantities. It is measured in terms of covariance.
...it is difficult to employ the covariance as an absolute measure
of dependence because its value depends on the scale of measurement
and so it is hard to determine whether a particular covariance is
large at first glance. This problem can be eliminated by
standardizing its value, using the simple coefficient of linear
correlation. The population linear coefficient of
correlation, ρ, is related to the covariance and is defined
as ρ = covariance(X,Y)/(σ(X)σ(Y))
where σ(X) and σ(Y) are the standard deviations of X
and Y respectively. [Mendenhall et al., Mathematical Statistics
with Applications, PWS-Kent, 1990]
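The quoted definition can be illustrated with a short sketch (hypothetical code, not part of the course materials): computing the covariance and standardizing it by the two standard deviations gives ρ, and rescaling one variable changes the covariance but leaves ρ unchanged.

```python
import math

def correlation(xs, ys):
    """Pearson correlation: covariance(X, Y) / (sigma(X) * sigma(Y))."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Population covariance of X and Y.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # Population standard deviations.
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# Rescaling Y changes its covariance with X, but not the correlation,
# illustrating why rho is the scale-free measure of dependence.
xs = [1, 2, 3, 4]
ys = [2, 3, 5, 4]
print(correlation(xs, ys))
print(correlation(xs, [100 * y for y in ys]))
```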
 Sep 09
Reminder: Class is moved to Core 301A.
Homework One out.
 Sep 02
Initial revision of this web page.
Announcement: Who should take this class?
Lecture Schedule, AI Events, Notes
 Sep 02
What is AI?
 Sep 04
Research practice and ideas in AI.
Readings:
Mark Paskin. Thin Junction Tree Filters for Simultaneous Localization and Mapping. IJCAI 2003, pages 1157-1164.
Darse Billings, Neil Burch, Aaron Davidson, Robert Holte, Jonathan Schaeffer, Terence Schauenberg and Duane Szafron. Approximating Game-theoretic Optimal Strategies for Full-scale Poker. IJCAI 2003, pages 661-668.
 Module 1: A Prototypical Case Study in AI
Reading: Agents in
the Real World
Supplementary material: Clemen, Decision Analysis,
Chapters 1-5.
Available on reserve.
Homework: Decision Analysis, due Sep 30.
Lectures: Sep 09-Sep 18.
 Sep 09
Decision analysis as a computational model (1)
Actions, observations, outcomes; probability and utility.
 Sep 11
Decision analysis as a computational model (2)
Interpreting models; solving for and carrying out policies in
agents.
 Sep 16
Decision analysis as a computational model (3)
Design and evaluation; sensitivity analysis, statistical
hypotheses about running agents.
 Sep 18
Decision analysis as a computational model (4)
Computational complexity; training data; model estimation and
model induction; generalization, model selection, data sparsity.
 Module 2: Perception and Bayesian Inference
Lectures:
Sep 23-Oct 14. Resources: Andrew Moore's
computational statistics tutorials. Supplementary material:
Duda, Hart and Stork, Pattern Classification, Chapters
1-3. Available on
reserve.
 Sep 23
Bayesian analysis and models for classification (1)
Motivation for a Bayesian approach. Resources: Probabilistic
and Bayesian analytics.
 Sep 25
Bayesian analysis and models for classification (2)
Naive Bayes models (discrete case).
Text classification.
 Sep 30
Bayesian analysis and models for classification (3)
Classifying with Naive Bayes models;
feature space and linear classifiers.
 Oct 02
Bayesian analysis and models for classification (4)
Training Naive Bayes models.
Homework: Model assumptions, due Oct 14.
 Oct 07
Bayesian analysis and models for classification (5)
Continuous variables, normal distributions, linear
classifiers.
Resources: Probability
Density Functions.
Resources: Gaussians.
 Oct 09
Bayesian analysis and models for classification (6)
Multivariate Gaussians.
Classifying with Gaussians. Resources: Gaussian
Bayes Classifiers.
 Oct 14
Bayesian analysis and models for classification (7)
Maximum likelihood estimation.
Density estimation.
Clustering.
Resources: Gaussian
Mixture Models.
 Oct 16
Review.
 Oct 21
Midterm.
 Module 3: Time
Lectures: Oct 23-Nov 06.
Resources: Andrew Moore's
Tutorial Slides on Hidden Markov Models
Larry Rabiner's tutorial article on HMMs.
(Lawrence R. Rabiner, A tutorial on Hidden Markov Models and
Selected Applications in Speech Recognition. Proceedings of the
IEEE 77(2):257-286, 1989).
Greg Welch and Gary Bishop's web resource on the
Kalman filter.
Peter
Maybeck's introduction to the Kalman filter.
Source: Maybeck, 1979
Stochastic Models, Estimation and Control, Chapter 1,
"Introduction", pp 1-15.
 Oct 23
Models of time (1)
Markov models and hidden Markov models.
 Oct 28
Models of time (2)
HMM inference. Forward/backward.
 Oct 30
Models of time (3)
HMM decoding. Part-of-speech tagging.
 Nov 04
HMM inference: Training. Speech and gesture
recognition.
Graphical representations of probabilistic models.
 Nov 06
Models of time (4)
The Kalman filter. Tracking and learning with
Gaussian priors.
 Module 4: Planning
 Nov 11
General probabilistic inference: belief networks.
Reading: Russell and Norvig, Probabilistic Reasoning (chapter
15 of Artificial Intelligence).
Charniak, Bayesian Networks without Tears, from AI
Magazine, 1991.
Tutorial
Slides on Bayesian networks.
 Nov 13
General probabilistic inference: influence diagrams.
Reading: Russell and Norvig, Making simple decisions
(chapter 16 of Artificial Intelligence).
Supplementary reading: Clemen, Decision Analysis, Chapters
3 and 4.
 Nov 18
Markov decision processes: Value iteration.
Reading: Russell and Norvig, Making complex decisions
(chapter 17 of Artificial Intelligence).
Tutorial
Slides on Markov decision processes.
 Nov 20
Markov decision processes: Policy iteration; reinforcement
learning.
Reading: Russell and Norvig, Making complex decisions and
Reinforcement learning
(chapters 17 and 20 of Artificial Intelligence).
Supplementary reading: Howard, Dynamic Programming and Markov
Processes, excerpts, especially Chapter 7.
Tutorial
Slides on Reinforcement learning.
 Nov 25
Games, strategies, and coordination.
Tutorial
Slides on game theory.
 Module 5: Evaluation
 Dec 02
Evaluation (1).
Pitfalls and methodology.
Reading: Cohen, Empirical Methods for AI, Chapter 3.
 Dec 04
Evaluation (2).
Performance metrics.
Reading: Cohen, Empirical Methods for AI, Chapter 6.
 Dec 09
Evaluation (3).
Training and test data. Reliability. Cross-validation.
 Tuesday, Dec 16, 4:00 pm
Final.
Materials
This class will work from an eclectic combination of survey and
tutorial papers, research articles, course notes, and other resources.
This list will grow as the semester proceeds.
Links
 General AI References
 Cool AI Systems
 Robots and the Media
 More AI-related Rutgers stuff
