An Introduction to Optimal Control Theory: The Dynamic Programming Approach


This book introduces optimal control problems for large families of deterministic and stochastic systems with a discrete or continuous time parameter. These families include most of the systems studied in disciplines such as economics, engineering, operations research, and management science, among many others.

The main objective is to give a concise, systematic, and reasonably self-contained presentation of some key topics in optimal control theory. To this end, most of the analyses are based on the dynamic programming (DP) technique. This technique is applicable to almost all control problems that appear in theory and applications. These include, for instance, finite- and infinite-horizon control problems in which the underlying dynamic system follows either a deterministic or stochastic difference or differential equation. In the infinite-horizon case, the book also uses DP to study undiscounted problems, such as those with an ergodic or long-run average cost criterion.
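In its simplest setting, a discrete-time deterministic system with finite horizon, the DP technique amounts to a backward recursion on the value function. The following sketch illustrates the idea; the grid dynamics, costs, and function names are invented for this example and are not taken from the book.

```python
# A minimal sketch of finite-horizon dynamic programming by backward
# induction, for a discrete-time deterministic system x_{t+1} = f(x_t, a_t).
# The dynamics, costs, and names below are illustrative only.

def backward_induction(states, actions, f, cost, terminal_cost, T):
    """Return (V_0, policy): V_0 maps each state to its optimal
    cost-to-go over horizon T, and policy[t][x] is an optimal action."""
    V = {x: terminal_cost(x) for x in states}   # V_T = terminal cost
    policy = []
    for t in reversed(range(T)):                # DP equation, backwards in time
        V_prev, pi_t = {}, {}
        for x in states:
            # minimize stage cost plus cost-to-go of the successor state
            best_value, best_action = min(
                (cost(x, a) + V[f(x, a)], a) for a in actions(x)
            )
            V_prev[x], pi_t[x] = best_value, best_action
        V = V_prev
        policy.insert(0, pi_t)
    return V, policy

# Toy example: reach state 4 on the grid {0,...,4} in T = 4 steps,
# paying 1 per unit move and a terminal penalty of 10 for missing the goal.
states = range(5)
actions = lambda x: [a for a in (-1, 0, 1) if 0 <= x + a <= 4]
V0, policy = backward_induction(
    states, actions,
    f=lambda x, a: x + a,
    cost=lambda x, a: abs(a),
    terminal_cost=lambda x: 0 if x == 4 else 10,
    T=4,
)
print(V0[0])  # optimal cost from x = 0: four unit moves, so 4
```

The same backward recursion, suitably generalized (expectations over transition kernels, measurable selectors, and so on), underlies the stochastic and continuous-time developments in the later chapters.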

After a general introduction to control problems, the book is divided into four parts, each devoted to a different class of dynamical systems: control of discrete-time deterministic systems, discrete-time stochastic systems, ordinary differential equations, and finally general continuous-time Markov control processes (MCPs), with applications to stochastic differential equations.

The first and second parts should be accessible to undergraduate students with some knowledge of elementary calculus, linear algebra, and some concepts from probability theory (random variables, expectations, and so forth). The third and fourth parts are appropriate for advanced undergraduates or graduate students with a working knowledge of mathematical analysis (derivatives, integrals, and so on) and stochastic processes.



Author(s): Onésimo Hernández-Lerma, Leonardo R. Laura-Guarachi, Saul Mendoza-Palacios, David González-Sánchez
Series: Texts in Applied Mathematics, 76
Publisher: Springer
Year: 2023

Language: English
Pages: 278
City: Cham

Preface
Contents
1 Introduction: Optimal Control Problems
2 Discrete–Time Deterministic Systems
2.1 The Dynamic Programming Equation
2.2 The DP Equation and Related Topics
2.2.1 Variants of the DP Equation
2.2.2 The Minimum Principle
2.3 Infinite–Horizon Problems
2.3.1 Discounted Case
2.3.2 The Minimum Principle
2.3.3 The Weighted-Norm Approach
2.4 Approximation Algorithms
2.4.1 Value Iteration
2.4.2 Policy Iteration
2.5 Long–Run Average Cost Problems
2.5.1 The AC Optimality Equation
2.5.2 The Steady–State Approach
2.5.3 The Vanishing Discount Approach
3 Discrete–Time Stochastic Control Systems
3.1 Stochastic Control Models
3.2 Markov Control Processes: Finite Horizon
3.3 Conditions for the Existence of Measurable Minimizers
3.4 Examples
3.5 Infinite–Horizon Discounted Cost Problems
3.6 Policy Iteration
3.7 Long-Run Average Cost Problems
3.7.1 The Average Cost Optimality Inequality
3.7.2 The Average Cost Optimality Equation
3.7.3 Examples
4 Continuous–Time Deterministic Systems
4.1 The HJB Equation and Related Topics
4.1.1 Finite–Horizon Problems: The HJB Equation
4.1.2 A Minimum Principle from the HJB Equation
4.2 The Discounted Case
4.3 Infinite–Horizon Discounted Cost
4.4 Long-Run Average Cost Problems
4.4.1 The Average Cost Optimality Equation (ACOE)
4.4.2 The Steady-State Approach
4.4.3 The Vanishing Discount Approach
4.5 The Policy Improvement Algorithm
4.5.1 The PIA: Discounted Cost Problems
4.5.2 The PIA: Average Cost Problems
5 Continuous–Time Markov Control Processes
5.1 Markov Processes
5.2 The Infinitesimal Generator
5.3 Markov Control Processes
5.4 The Dynamic Programming Approach
5.5 Long–Run Average Cost Problems
5.5.1 The Ergodicity Approach
5.5.2 The Vanishing Discount Approach
6 Controlled Diffusion Processes
6.1 Diffusion Processes
6.2 Controlled Diffusion Processes
6.3 Examples: Finite Horizon
6.4 Examples: Discounted Costs
6.5 Examples: Average Costs
Appendix A Terminology and Notation
Lower Semicontinuous Functions
Appendix B Existence of Measurable Minimizers
Appendix C Markov Processes
Continuous–Time Markov Processes
Theorem of C. Ionescu–Tulcea
Bibliography
Index