Machine Learning

Course dates – MT – week beginning Monday 5th November 2018 – for Year 1 students
Varun Kanade


This course will consist of two essentially separate parts.

The first part will introduce key ideas from unsupervised learning. In particular, we'll focus on dimensionality reduction and clustering.


  • Principal component analysis (PCA)
  • Kernel PCA
  • Johnson–Lindenstrauss transform (time permitting)
  • k-means clustering
  • Hierarchical clustering algorithms
  • Spectral clustering
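As a taste of the first part, here is a minimal NumPy sketch of PCA via the eigendecomposition of the sample covariance matrix. This is illustrative only, not course material; the function name and synthetic data are my own, and the course will develop the method and its theory properly.

```python
import numpy as np

def pca_project(X, k):
    """Project data X (n x d) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                  # centre the data
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance (d x d)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    top = eigvecs[:, ::-1][:, :k]            # top-k eigenvectors, largest first
    return Xc @ top                          # n x k projection

# Toy example on synthetic data (chosen for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca_project(X, 2)
print(Z.shape)  # → (100, 2)
```

By construction, the first projected coordinate captures at least as much variance as the second, which is the defining property of the method.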

The second part will focus on the multi-armed bandit problem. The goal of this part is to prepare students for the Reinforcement Learning course. We will cover some basic algorithms and their detailed mathematical analysis.


  • Stochastic multi-armed bandit problem
  • ε-Greedy, UCB
  • Lower bounds on the regret for bandit problems
  • Thompson Sampling (time permitting)
  • Non-stochastic multi-armed bandits (time permitting)
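To give a flavour of the second part, the following is a minimal sketch of the ε-greedy algorithm on a stochastic Bernoulli bandit. The function name, arm means, and parameter choices are illustrative assumptions, not course code; the course will analyse this algorithm (and UCB) rigorously.

```python
import numpy as np

def epsilon_greedy(means, epsilon=0.1, horizon=10_000, seed=0):
    """Run epsilon-greedy on a Bernoulli bandit with the given arm means.

    Returns the total reward collected and the pull count per arm.
    """
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.zeros(k)      # number of pulls per arm
    estimates = np.zeros(k)   # empirical mean reward per arm
    total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = int(rng.integers(k))       # explore: uniform random arm
        else:
            arm = int(np.argmax(estimates))  # exploit: empirically best arm
        reward = float(rng.random() < means[arm])  # Bernoulli(means[arm]) reward
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total, counts

total, counts = epsilon_greedy([0.3, 0.5, 0.7])
```

Over a long horizon, the best arm (here the one with mean 0.7) ends up pulled far more often than the others, while the ~ε fraction of uniform exploration keeps the estimates of all arms accurate.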

Other Sources
  • C. M. Bishop, 'Pattern Recognition and Machine Learning', 2006, Springer
  • K. P. Murphy, 'Machine Learning: A Probabilistic Perspective', 2012, MIT Press
  • A. Gelman, 'Bayesian Data Analysis', 1996, Chapman & Hall
  • D. Barber, 'Bayesian Reasoning and Machine Learning' (pdf available for free), 2012, Cambridge University Press. Chapters: 1; 3.1, 3.3; 7.1-7.2; 8; 9.1-9.2; 10; 17.1-17.4; 24.1-24.4.
  • D. MacKay, 'Information Theory, Inference, and Learning Algorithms' (pdf available for free), 2003, Cambridge University Press. Chapters: 2.1-2.4; 3; 36.
Assessment Mode

As part of the assessment, students will be asked to choose one of the following:

I.        experiments with clustering or dimensionality reduction on text datasets

II.        simulations of bandit algorithms in different settings to compare the algorithms

III.        reproduction of experiments from a bandit-related paper of their choice (a list of suggestions will be posted on the website, but students may choose their own, provided the topic is reasonable)