Model-Based Reinforcement Learning: From Data to Continuous Actions with a Python-based Toolbox

Model-Based Reinforcement Learning

Explore a comprehensive and practical approach to reinforcement learning

Reinforcement learning is an essential paradigm of machine learning in which an intelligent agent learns to act optimally by interacting with its environment. While this paradigm has gained tremendous success and popularity in recent years, previous scholarship has focused either on theory (optimal control and dynamic programming) or on algorithms (most of which are simulation-based).

Model-Based Reinforcement Learning bridges these two aspects, offering a holistic treatment of model-based online learning control. The authors develop a model-based framework for data-driven control that connects system identification from data, model-based reinforcement learning, and optimal control, together with the applications of each. This fresh treatment of classical results enables more efficient reinforcement learning. At its heart, the book provides an end-to-end framework, from design to application, for a more tractable model-based reinforcement learning technique.

Readers of Model-Based Reinforcement Learning will also find:

  • A useful textbook to use in graduate courses on data-driven and learning-based control that emphasizes modeling and control of dynamical systems from data
  • Detailed comparisons of the impact of different techniques, such as the basic linear quadratic controller, learning-based model predictive control, model-free reinforcement learning, and structured online learning
  • Applications and case studies, including ground vehicles with nonholonomic dynamics and quadrotor helicopters
  • An online, Python-based toolbox that accompanies the contents covered in the book, as well as the necessary code and data

Model-Based Reinforcement Learning is a useful reference for senior undergraduate students, graduate students, research assistants, professors, process control engineers, and roboticists.
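As a taste of the material covered (linear quadratic regulation via the differential Riccati equation, Sections 2.3 and 2.3.4 in the contents below), here is a minimal sketch in NumPy. It is not code from the book's toolbox; the double-integrator plant and all numerical choices are illustrative assumptions.

```python
import numpy as np

# Hypothetical plant: a double integrator, x1' = x2, x2' = u
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state cost weight
R = np.array([[1.0]])  # control cost weight
Rinv = np.linalg.inv(R)

# Euler-integrate the differential Riccati equation
#   dP/dtau = A^T P + P A - P B R^{-1} B^T P + Q,  P(0) = 0,
# in reversed time; as the horizon grows, P converges to the
# stabilizing solution of the algebraic Riccati equation.
P = np.zeros((2, 2))
dt = 1e-3
for _ in range(20000):
    P = P + dt * (A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P + Q)

# Optimal state-feedback gain, u = -K x; analytically K = [1, sqrt(3)] here
K = Rinv @ B.T @ P

# The closed-loop matrix A - B K should be Hurwitz
eigs = np.linalg.eigvals(A - B @ K)
```

For this particular plant the gain can be checked by hand against the algebraic Riccati equation, which is the kind of sanity check the book's comparisons (LQR versus learning-based controllers) build on.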

Author(s): Jun Liu, Milad Farsi
Series: IEEE Press Series on Control Systems Theory and Applications
Publisher: Wiley-IEEE Press
Year: 2023

Language: English
Pages: 273
City: Piscataway

Cover
Title Page
Copyright
Contents
About the Authors
Preface
Acronyms
Introduction
Chapter 1 Nonlinear Systems Analysis
1.1 Notation
1.2 Nonlinear Dynamical Systems
1.2.1 Remarks on Existence, Uniqueness, and Continuation of Solutions
1.3 Lyapunov Analysis of Stability
1.4 Stability Analysis of Discrete Time Dynamical Systems
1.5 Summary
Bibliography
Chapter 2 Optimal Control
2.1 Problem Formulation
2.2 Dynamic Programming
2.2.1 Principle of Optimality
2.2.2 Hamilton–Jacobi–Bellman Equation
2.2.3 A Sufficient Condition for Optimality
2.2.4 Infinite‐Horizon Problems
2.3 Linear Quadratic Regulator
2.3.1 Differential Riccati Equation
2.3.2 Algebraic Riccati Equation
2.3.3 Convergence of Solutions to the Differential Riccati Equation
2.3.4 Forward Propagation of the Differential Riccati Equation for Linear Quadratic Regulator
2.4 Summary
Bibliography
Chapter 3 Reinforcement Learning
3.1 Control‐Affine Systems with Quadratic Costs
3.2 Exact Policy Iteration
3.2.1 Linear Quadratic Regulator
3.3 Policy Iteration with Unknown Dynamics and Function Approximations
3.3.1 Linear Quadratic Regulator with Unknown Dynamics
3.4 Summary
Bibliography
Chapter 4 Learning of Dynamic Models
4.1 Introduction
4.1.1 Autonomous Systems
4.1.2 Control Systems
4.2 Model Selection
4.2.1 Gray‐Box vs. Black‐Box
4.2.2 Parametric vs. Nonparametric
4.3 Parametric Model
4.3.1 Model in Terms of Bases
4.3.2 Data Collection
4.3.3 Learning of Control Systems
4.4 Parametric Learning Algorithms
4.4.1 Least Squares
4.4.2 Recursive Least Squares
4.4.3 Gradient Descent
4.4.4 Sparse Regression
4.5 Persistence of Excitation
4.6 Python Toolbox
4.6.1 Configurations
4.6.2 Model Update
4.6.3 Model Validation
4.7 Comparison Results
4.7.1 Convergence of Parameters
4.7.2 Error Analysis
4.7.3 Runtime Results
4.8 Summary
Bibliography
Chapter 5 Structured Online Learning‐Based Control of Continuous‐Time Nonlinear Systems
5.1 Introduction
5.2 A Structured Approximate Optimal Control Framework
5.3 Local Stability and Optimality Analysis
5.3.1 Linear Quadratic Regulator
5.3.2 SOL Control
5.4 SOL Algorithm
5.4.1 ODE Solver and Control Update
5.4.2 Identified Model Update
5.4.3 Database Update
5.4.4 Limitations and Implementation Considerations
5.4.5 Asymptotic Convergence with Approximate Dynamics
5.5 Simulation Results
5.5.1 Systems Identifiable in Terms of a Given Set of Bases
5.5.2 Systems to Be Approximated by a Given Set of Bases
5.5.3 Comparison Results
5.6 Summary
Bibliography
Chapter 6 A Structured Online Learning Approach to Nonlinear Tracking with Unknown Dynamics
6.1 Introduction
6.2 A Structured Online Learning for Tracking Control
6.2.1 Stability and Optimality in the Linear Case
6.3 Learning‐based Tracking Control Using SOL
6.4 Simulation Results
6.4.1 Tracking Control of the Pendulum
6.4.2 Synchronization of Chaotic Lorenz System
6.5 Summary
Bibliography
Chapter 7 Piecewise Learning and Control with Stability Guarantees
7.1 Introduction
7.2 Problem Formulation
7.3 The Piecewise Learning and Control Framework
7.3.1 System Identification
7.3.2 Database
7.3.3 Feedback Control
7.4 Analysis of Uncertainty Bounds
7.4.1 Quadratic Programs for Bounding Errors
7.5 Stability Verification for Piecewise‐Affine Learning and Control
7.5.1 Piecewise Affine Models
7.5.2 MIQP‐based Stability Verification of PWA Systems
7.5.3 Convergence of ACCPM
7.6 Numerical Results
7.6.1 Pendulum System
7.6.2 Dynamic Vehicle System with Skidding
7.6.3 Comparison of Runtime Results
7.7 Summary
Bibliography
Chapter 8 An Application to Solar Photovoltaic Systems
8.1 Introduction
8.2 Problem Statement
8.2.1 PV Array Model
8.2.2 DC–DC Boost Converter
8.3 Optimal Control of PV Array
8.3.1 Maximum Power Point Tracking Control
8.3.2 Reference Voltage Tracking Control
8.3.3 Piecewise Learning Control
8.4 Application Considerations
8.4.1 Partial Derivative Approximation Procedure
8.4.2 Partial Shading Effect
8.5 Simulation Results
8.5.1 Model and Control Verification
8.5.2 Comparative Results
8.5.3 Model‐Free Approach Results
8.5.4 Piecewise Learning Results
8.5.5 Partial Shading Results
8.6 Summary
Bibliography
Chapter 9 An Application to Low‐level Control of Quadrotors
9.1 Introduction
9.2 Quadrotor Model
9.3 Structured Online Learning with RLS Identifier on Quadrotor
9.3.1 Learning Procedure
9.3.2 Asymptotic Convergence with Uncertain Dynamics
9.3.3 Computational Properties
9.4 Numerical Results
9.5 Summary
Bibliography
Chapter 10 Python Toolbox
10.1 Overview
10.2 User Inputs
10.2.1 Process
10.2.2 Objective
10.3 SOL
10.3.1 Model Update
10.3.2 Database
10.3.3 Library
10.3.4 Control
10.4 Display and Outputs
10.4.1 Graphs and Printouts
10.4.2 3D Simulation
10.5 Summary
Bibliography
A Appendix
A.1 Supplementary Analysis of Remark 5.4
A.2 Supplementary Analysis of Remark 5.5
Index
EULA