Topics in Artificial Intelligence (Learning Theory)


Toyota Technological Institute at Chicago.
Lecture notes for the course "Topics in Artificial Intelligence (Learning Theory)", Spring 2008.
Instructors: Sham Kakade and Ambuj Tewari.
Contents:
Mistake Bound Model, Halving Algorithm, Linear Classifiers
Perceptron and Winnow
Online Convex Programming and Gradient Descent
Exponentiated Gradient Descent and Applications of OCP
Game Playing, Boosting
AdaBoost
Probabilistic Setup and Empirical Risk Minimization
Concentration, ERM, and Compression Bounds
Rademacher Averages
Massart’s Finite Class Lemma and Growth Function
VC Dimension and Sauer’s Lemma
VC Dimension of Multilayer Neural Networks, Range Queries
Online to Batch Conversions
Exponentiated Stochastic Gradient Descent for L1 Constrained Problems
Covering Numbers
Dudley’s Theorem, Fat Shattering Dimension, Packing Numbers
Fat Shattering Dimension and Covering Numbers
Rademacher Composition and Linear Prediction
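
As a small illustration of the first topics in the contents list above (mistake-bound learning with linear classifiers), the following is a minimal Python sketch of the classical Perceptron update. It is not taken from the lecture notes; the function name, toy data, and return values are illustrative assumptions only.

```python
import numpy as np

def perceptron(examples, dimension):
    """Online Perceptron: one pass over (x, y) pairs with labels y in {-1, +1}.
    Returns the learned weight vector and the number of mistakes made."""
    w = np.zeros(dimension)
    mistakes = 0
    for x, y in examples:
        # Predict with the current linear classifier; a margin <= 0 counts as a mistake.
        if y * np.dot(w, x) <= 0:
            # Mistake-driven update: move w toward the misclassified example.
            w = w + y * x
            mistakes += 1
    return w, mistakes

# Toy usage on two linearly separable points (hypothetical data).
data = [(np.array([1.0, 0.5]), +1), (np.array([-1.0, -0.5]), -1)]
w, m = perceptron(data, dimension=2)
print(w, m)
```

The mistake-bound analysis covered in the notes bounds the number of such updates in terms of the margin and the norm of the examples, independently of the length of the sequence.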
Note: The single PDF file was assembled from the separate PDF files of the individual lectures, so the printed page numbers do not line up. Bookmarks have been added, however, which make it easier to find a topic of interest than browsing the individual lecture PDFs.

Author(s): Kakade, S.; Tewari, A.

Language: English
Tags: Computer Science and Computing; Artificial Intelligence