582638 Unsupervised machine learning
Lecture course in English, 4-6 cu (ECTS), 2009-2010
Teachers
Lectures: Aapo Hyvärinen
Exercises and computer projects: Doris Entner and Michael Gutmann
Schedule
In the 4th period, starts 16/03/2010, ends 30/04/2010. Sessions are Tuesdays, Thursdays and Fridays, 14:15-15:45, in lecture room C222.
There are typically two lecture sessions each week (Tue, Thu) and one exercise session (Fri), but there are some exceptions. Here's the detailed schedule:
Tue 16 Mar | Lecture
Thu 18 Mar | Lecture
Fri 19 Mar | Exercises

Tue 23 Mar | Lecture
Thu 25 Mar | Lecture
Fri 26 Mar | Exercises

Tue 30 Mar | No teaching
Thu 1 Apr | Easter break
Fri 2 Apr | Easter break

Tue 6 Apr | Easter break
Thu 8 Apr | Exercises (1 h) + Lecture (1 h)
Fri 9 Apr | Lecture

Tue 13 Apr | Lecture
Thu 15 Apr | Lecture
Fri 16 Apr | Exercises

Tue 20 Apr | Lecture
Thu 22 Apr | Lecture
Fri 23 Apr | Exercises

Tue 27 Apr | Lecture
Thu 29 Apr | Lecture
Fri 30 Apr | Exercises
Registration
Please register using the ILMO system. If you don't have permission to use this system, register at the first lecture.
Target audience
Master's students in statistics (incl. EuroBayes), computer science (specialization in algorithms & machine learning, intelligent systems, or bioinformatics), or applied mathematics (specialization e.g. in stochastics).
Description
Unsupervised learning is one of the main streams of machine learning,
and closely related to exploratory data analysis and data mining. This
course describes some of the main methods in unsupervised learning.
In recent years, machine learning has become heavily dependent on
statistical theory, which is why this course is somewhere on the
borderline between statistics and computer science. Emphasis is put
both on the statistical formulation of the methods
and on their computational implementation. The goal is not
only to introduce the methods on a theoretical level but also to show
how they can be implemented in scientific computing environments
such as Matlab or R. Computer projects are an important part of the
course.
How to obtain the credits
There are two ways of getting credits for this course:
- Taking the exam which consists of solving mathematical problems. The time and place of the exam are here (subject to change!)
- Doing computer projects which consist of programming in either Matlab or R (you can choose which one you use)
If you do one of these, you get 4 cu. If you do both of them, you get 6 cu. You are strongly encouraged to do both of them.
Mathematical exercises (Friday sessions) are not obligatory, but they give extra points which are added to your points in the exam and/or the computer assignments, up to a maximum of 25% more points.
If you do both the exam and the computer assignments, the grade will be the average of the two (with possible exercise points added as explained above).
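As a purely hypothetical illustration of the bonus rule: if the exam were graded out of 50 points, you scored 40, and your Friday exercises earned a 20% bonus, your exam points would become 40 x 1.20 = 48. (The point totals here are made up; only the 25% cap comes from the rules above.)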
Prerequisites
- Statistics majors: Bachelor's degree recommended.
- Mathematics majors: Bachelor's degree recommended. It should include basic courses in analysis (including vector analysis), linear algebra I&II, introduction to probability, introduction to statistical inference. (Preferably also some more statistics courses.)
- Computer science majors: Bachelor's degree recommended. It should include the mathematics courses listed above for mathematics majors. Preferably you should also have taken both the courses "Introduction to machine learning" and "Probabilistic models", or their previously lectured counterpart "Computational data analysis I".
Contents
- Introduction
- supervised vs. unsupervised learning
- applications of unsupervised learning
- probabilistic formulation: generative models or latent variable models
- overview of the topics below
- Numerical optimization
- gradient method, Newton's method, stochastic gradient, alternating variables
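As a foretaste of the computer projects, here is a minimal R sketch of the gradient method on a simple quadratic function (the target vector, step size and iteration count are made-up illustration values, not course material):

  # Gradient descent on f(x) = ||x - a||^2, whose gradient is 2*(x - a).
  a <- c(3, -1)                   # minimum of the function (illustrative)
  grad <- function(x) 2 * (x - a)
  x <- c(0, 0)                    # starting point
  step <- 0.1                     # fixed step size
  for (i in 1:100) {
    x <- x - step * grad(x)       # move against the gradient
  }
  print(x)                        # converges towards a = (3, -1)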
- Principal component analysis and factor analysis
- formulation as minimization of reconstruction error or maximization of component variance
- computation using the covariance matrix and its eigenvalue decomposition
- factor analysis and interpretation of PCA as estimation of gaussian generative model
- factor rotations
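For illustration, a minimal R sketch of PCA computed via the eigenvalue decomposition of the covariance matrix, as listed above (the toy data are generated on the spot):

  set.seed(1)
  X <- matrix(rnorm(200 * 3), 200, 3)           # toy data, 200 points in 3-D
  X[, 2] <- X[, 1] + 0.1 * X[, 2]               # introduce correlation
  Xc <- scale(X, center = TRUE, scale = FALSE)  # center the data
  C <- cov(Xc)                                  # sample covariance matrix
  e <- eigen(C)                                 # eigenvalue decomposition
  W <- e$vectors                                # principal directions (columns)
  Z <- Xc %*% W                                 # principal components (scores)
  e$values / sum(e$values)                      # proportion of variance explained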
- Independent component analysis
- problem of blind source separation, why non-gaussianity is needed for identifiability
- correlation vs. independence
- ICA as maximization of non-gaussianity, measurement of non-gaussianity by cumulants
- likelihood of the model and maximum likelihood estimation
- information-theoretic approach, connections between different approaches
- implementation by gradient methods and FastICA
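A minimal R sketch of blind source separation with FastICA, assuming the CRAN package fastICA is installed (the two sources and the mixing matrix are made up for illustration; this is not part of the official course material):

  library(fastICA)
  set.seed(1)
  s1 <- sin(seq(0, 20, length.out = 1000))  # smooth, sub-gaussian source
  s2 <- runif(1000, -1, 1)                  # uniform, non-gaussian source
  S <- cbind(s1, s2)
  A <- matrix(c(1, 1, 0.5, 2), 2, 2)        # mixing matrix
  X <- S %*% A                              # observed mixtures
  ica <- fastICA(X, n.comp = 2)             # estimate 2 components
  head(ica$S)                               # estimated independent components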
- Clustering
- k-means algorithm
- formulation as mixture of gaussians
- maximization of likelihood: alternating variables method, EM algorithm
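A minimal R sketch of the k-means algorithm using base R's kmeans() on toy data with three clusters (all numbers are illustrative):

  set.seed(1)
  X <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
             matrix(rnorm(100, mean = 4), ncol = 2),
             matrix(rnorm(100, mean = 8), ncol = 2))
  km <- kmeans(X, centers = 3, nstart = 10)  # 10 random restarts
  km$centers                                 # estimated cluster means
  table(km$cluster)                          # cluster sizes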
- Nonlinear dimension reduction
- non-metric multi-dimensional scaling and related methods, e.g. kernel PCA, IsoMap
- Kohonen's self-organizing map
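A minimal R sketch of non-metric multi-dimensional scaling with isoMDS() from the MASS package, applied to R's built-in eurodist road-distance data (this particular data set is just an illustrative choice):

  library(MASS)
  fit <- isoMDS(eurodist, k = 2)  # 2-dimensional configuration
  head(fit$points)                # coordinates of the cities
  fit$stress                      # Kruskal stress (badness of fit)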
Course material
The complete lecture notes are here. Just to keep search engines away, you need the login uml and password uml. There is no book for the course.
Exercises are here.
Aapo Hyvärinen, Dec 2009. Last update 19 Apr 2010.