Statistical Learning Theory and Stochastic Optimization: École d'Été de Probabilités de Saint-Flour XXXI - 2001

Statistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea, as is often done in practice, to use a notoriously "wrong" (i.e. over-simplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included; they are meant to provide a better understanding of the stochastic optimization algorithms commonly used to compute estimators. The author focuses on non-asymptotic bounds on the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results.
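
As a rough sketch of the central object (the notation below is illustrative, not taken from the book's own conventions): given a prior distribution $\pi$ on a parameter set $\Theta$, an empirical risk $r_n(\theta)$ computed from $n$ observations, and an inverse temperature $\beta > 0$, the Gibbs measure is the probability distribution

\[
\rho_\beta(d\theta) \;\propto\; \exp\bigl(-\beta\, r_n(\theta)\bigr)\,\pi(d\theta),
\]

and PAC-Bayesian inequalities bound the risk of estimators drawn from $\rho_\beta$ in terms of the relative entropy (Kullback-Leibler divergence) between $\rho_\beta$ and $\pi$.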

Author(s): Olivier Catoni (auth.), Jean Picard (ed.)
Series: Lecture Notes in Mathematics 1851
Edition: 1
Publisher: Springer-Verlag Berlin Heidelberg
Year: 2004

Language: English
Pages: 284
Tags: Statistical Theory and Methods; Optimization; Artificial Intelligence (incl. Robotics); Information and Communication, Circuits; Probability Theory and Stochastic Processes; Numerical Analysis

Introduction....Pages 1-4
1. Universal lossless data compression....Pages 5-54
2. Links between data compression and statistical estimation....Pages 55-69
3. Non cumulated mean risk....Pages 71-95
4. Gibbs estimators....Pages 97-154
5. Randomized estimators and empirical complexity....Pages 155-197
6. Deviation inequalities....Pages 199-222
7. Markov chains with exponential transitions....Pages 223-260
References....Pages 261-265
Index....Pages 267-269
List of participants and List of short lectures....Pages 271-273