Learning Representation and Control in Markov Decision Processes: New Frontiers


From the series Foundations and Trends in Machine Learning, NOWPress, 2008, 163 pp.
This paper describes a novel machine learning framework for solving sequential decision problems called Markov decision processes (MDPs) by iteratively computing low-dimensional representations and approximately optimal policies. A unified mathematical framework for learning representation and optimal control in MDPs is presented based on a class of singular operators called Laplacians, whose matrix representations have nonpositive off-diagonal elements and zero row sums. Exact solutions of discounted and average-reward MDPs are expressed in terms of a generalized spectral inverse of the Laplacian called the Drazin inverse. A generic algorithm called representation policy iteration (RPI) is presented which interleaves computing low-dimensional representations and approximately optimal policies. Two approaches for dimensionality reduction of MDPs are described based on geometric and reward-sensitive regularization, whereby low-dimensional representations are formed by diagonalization or dilation of Laplacian operators. Model-based and model-free variants of the RPI algorithm are presented; they are also compared experimentally on discrete and continuous MDPs. Some directions for future work are finally outlined.
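The abstract's two central ideas — exact average-reward solutions via the Drazin (group) inverse of the Laplacian L = I − P, and basis construction by diagonalizing a Laplacian — can be sketched on a toy chain. This is a minimal illustration, not the text's algorithm: the three-state transition matrix and reward vector are made up, the Drazin inverse is computed through the standard fundamental-matrix identity L^D = (L + Π)^−1 − Π with Π = 1π^T, and the Laplacian is symmetrized before diagonalization purely so that `eigh` applies.

```python
import numpy as np

# Made-up ergodic three-state chain under a fixed policy.
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
n = P.shape[0]

# Laplacian operator: nonpositive off-diagonal entries, zero row sums.
L = np.eye(n) - P

# Stationary distribution pi: solve pi P = pi with pi summing to 1.
A_sys = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi = np.linalg.lstsq(A_sys, b, rcond=None)[0]

# Drazin (group) inverse of L via the fundamental-matrix identity:
# L^D = (L + Pi)^{-1} - Pi, where Pi = 1 pi^T is the limiting matrix.
Pi = np.outer(np.ones(n), pi)
LD = np.linalg.inv(L + Pi) - Pi

# Exact average-reward solution for an illustrative reward vector:
# gain rho = pi . r and differential (bias) values h = L^D r.
r = np.array([1.0, 0.0, 0.5])
rho, h = pi @ r, LD @ r

# Basis construction by diagonalization: the smoothest eigenvectors
# of a (here symmetrized) Laplacian give low-dimensional features.
Lsym = 0.5 * (L + L.T)
_, vecs = np.linalg.eigh(Lsym)
basis = vecs[:, :2]  # two smoothest basis functions, one per column
```

The assertions that make `LD` a group inverse — L L^D L = L, L^D L L^D = L^D, and L L^D = L^D L — all hold for this construction, which is what lets discounted and average-reward solutions be written in terms of it.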
Introduction
Sequential Decision Problems
Laplacian Operators and MDPs
Approximating Markov Decision Processes
Dimensionality Reduction Principles in MDPs
Basis Construction: Diagonalization Methods
Basis Construction: Dilation Methods
Model-Based Representation Policy Iteration
Basis Construction in Continuous MDPs
Model-Free Representation Policy Iteration
Related Work and Future Challenges

Author(s): Mahadevan S.

Language: English
Tags: Computer Science and Computing Technology; Artificial Intelligence