9.520 Statistical Learning Theory and Applications

As taught in: Spring 2003


Designing and building a system that will function the same way as a human visual system, but without getting bored, and with a greater degree of accuracy. (Image courtesy of Poggio Laboratory, MIT Department of Brain and Cognitive Sciences.)




Dr. Ryan Rifkin

Dr. Sayan Mukherjee

Prof. Tomaso Poggio

Alex Rakhlin

Course Features

Course Highlights

Support vector machines have proven to be very useful in classification tasks. SVMs are now used in driver-assistance systems for pedestrian detection and avoidance, one of the first large-scale real-world applications of this technology.
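The classifier behind such applications can be illustrated with a minimal sketch: a linear SVM trained by stochastic subgradient descent on the regularized hinge loss. The toy data, learning rate, and regularization constant below are illustrative assumptions, not part of the course materials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: two Gaussian clouds (labels +1 / -1).
X_pos = rng.normal(loc=2.0, size=(20, 2))
X_neg = rng.normal(loc=-2.0, size=(20, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([1.0] * 20 + [-1.0] * 20)

w = np.zeros(2)
b = 0.0
lam = 0.01   # regularization strength (assumed value)
lr = 0.1     # learning rate (assumed value)

for epoch in range(200):
    for i in rng.permutation(len(X)):
        # Hinge loss is active only when the margin is below 1.
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:
            w -= lr * lam * w

preds = np.sign(X @ w + b)
accuracy = (preds == y).mean()
```

On this well-separated toy set, the learned hyperplane classifies essentially all training points correctly; a real pedestrian detector would of course use far richer features and a nonlinear kernel.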

This course is for upper-level graduate students who are planning careers in computational neuroscience. The assignments focus on some of the functions needed to make problem-solving more efficient for computer systems. The project topics students can choose from are based on unsolved problems in the field today. By the conclusion of this course, students should be able to solve one or two of these problems, and should be able to frame an approach to the rest of them.

Course Description

Focuses on the problem of supervised learning from the perspective of modern statistical learning theory starting with the theory of multivariate function approximation from sparse data. Develops basic tools such as Regularization including Support Vector Machines for regression and classification. Derives generalization bounds using both stability and VC theory. Discusses topics such as boosting and feature selection. Examines applications in several areas: computer vision, computer graphics, text classification and bioinformatics. Final projects and hands-on applications and exercises are planned, paralleling the rapidly increasing practical uses of the techniques described in the subject.
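The regularization tools the description mentions can be made concrete with Tikhonov-regularized least squares (ridge regression), whose minimizer has the closed form w = (XᵀX + λnI)⁻¹Xᵀy. This is a generic sketch of that estimator, not code from the course; the data and λ are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data: y = X w_true + small noise (assumed setup).
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=n)

lam = 1e-3  # regularization parameter (assumed value)

# Tikhonov-regularized least squares: solve (X^T X + lam * n * I) w = X^T y.
w_hat = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)
```

With low noise and a small λ, `w_hat` recovers `w_true` closely; increasing λ shrinks the estimate toward zero, trading variance for bias, which is the stability property used in the generalization bounds the course derives.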
