Optimal Control

This textbook, now in its second edition, results from lectures, practical problems, and workshops on Optimal Control given by the authors at Irkutsk State University, Far Eastern Federal University (both in Russia), and Kwangwoon University (Seoul, South Korea). In this work, the authors cover the theory of linear and nonlinear systems, touching on the basic problem of establishing necessary and sufficient conditions for optimal processes. Readers will find two new chapters, with results of potential interest to researchers focused on the theory of optimal control, as well as to those interested in applications in engineering and related sciences. In addition, several improvements have been made throughout the text. The book is structured in three parts. Part I starts with a gentle introduction to the basic concepts of Optimal Control. In Part II, the theory of linear control systems is constructed on the basis of the separation theorem and the concept of a reachability set. The authors prove the closedness of the reachability set in the class of piecewise continuous controls and address the problems of controllability, observability, identification, performance, and terminal control. Part III, in turn, is devoted to nonlinear control systems. Using the method of variations and the Lagrange multiplier rule for nonlinear problems, the authors prove the Pontryagin maximum principle for problems with moving trajectory endpoints. Problem sets at the end of each chapter and a list of additional tasks, provided in the appendix, are offered for students seeking to master the subject. The exercises have been chosen not only as a way to assimilate the theory but also to encourage applying this knowledge to more advanced problems.
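The blurb above mentions the reachability set and its extreme principle (Chapter 3). As a minimal numerical sketch — not taken from the book; the system, step sizes, and function name are illustrative — the following computes a boundary point of the reachability set of the double integrator x₁′ = x₂, x₂′= u, |u| ≤ 1, starting from the origin. The extreme principle says the boundary point in direction ψ is reached by the bang-bang control u(t) = sign(ψᵀΦ(T, t)B), where here Φ(T, t) = [[1, T−t], [0, 1]].

```python
def support_point(psi, T=1.0, steps=10000):
    """Boundary point of the reachability set in direction psi,
    obtained by integrating x' = Ax + Bu with the extremal control."""
    dt = T / steps
    x1 = x2 = 0.0                      # start from the origin
    for k in range(steps):
        t = (k + 0.5) * dt             # midpoint of the step
        s = psi[0] * (T - t) + psi[1]  # psi^T Phi(T,t) B for this system
        u = 1.0 if s >= 0 else -1.0    # extremal (bang-bang) control
        x1 += x2 * dt                  # forward Euler step for x1' = x2
        x2 += u * dt                   # forward Euler step for x2' = u
    return x1, x2

# Direction (1, 0): u = +1 throughout, giving (T^2/2, T) up to discretization.
print(support_point((1.0, 0.0)))   # ≈ (0.5, 1.0)
```

Since the initial state is the origin and the control constraint is symmetric, the set is symmetric: `support_point((-1.0, 0.0))` returns approximately (−0.5, −1.0).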

Author(s): Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal
Edition: 2
Publisher: Springer Nature Switzerland AG
Year: 2021

Language: English
Pages: 251
City: Cham
Tags: Optimal Control, Reachability, Controllability, Observability

Preface
Preface to the Second Edition
Contents
Notations
Part I: Introduction
Chapter 1: The Subject of Optimal Control
1.1 «Mass-Spring» Example
1.2 Subject and Problems of Optimal Control
1.3 Place of Optimal Control
Chapter 2: Mathematical Model for Controlled Object
2.1 Controlled Object
2.2 Control and Trajectory
2.3 Mathematical Model
2.4 Existence and Uniqueness of a Process
2.5 Linear Models
2.6 Example
Part II: Control of Linear Systems
Chapter 3: Reachability Set
3.1 Cauchy Formula
3.2 Properties of the Fundamental Matrix
3.3 Examples
3.4 Definition of a Reachability Set
3.5 Limitation and Convexity
3.5.1 Limitation
3.5.2 Convexity
3.6 Closure
3.7 Continuity
3.8 Extreme Principle
3.9 Application of the Extreme Principle
Chapter 4: Controllability of Linear Systems
4.1 Point-to-Point Controllability
4.2 Analysis of the Point-to-Point Controllability Criteria
4.3 Auxiliary Lemma
4.4 Kalman Theorem
4.5 Control with Minimal Norm
4.6 Construction of Control with Minimum Norm
4.7 Total Controllability of Linear System
4.8 Synthesis of Control with a Minimal Norm
4.9 Krasovskii Theorem
4.10 Total Controllability of Stationary System
4.11 Geometry of a Non-controllable System
4.12 Transformation of Non-controllable System
4.13 Controllability of Transformed System
Chapter 5: Minimum Time Problem
5.1 Statement of the Problem
5.2 Existence of a Solution of the Minimum Time Problem
5.3 Criterion of Optimality
5.4 Maximum Principle for the Minimum Time Problem
5.5 Stationary Minimum Time Problem
Chapter 6: Synthesis of the Optimal System Performance
6.1 General Scheme to Apply the Maximum Principle
6.2 Control of Acceleration of a Material Point
6.3 Concept of Optimal Control Synthesis
6.4 Examples of Synthesis of Optimal Systems Performance
6.4.1 Eigenvalues of Matrix A Are Real and Distinct
6.4.2 The Eigenvalues of Matrix A Are Complex
Chapter 7: The Observability Problem
7.1 Statement of the Problem
7.2 Criterion of Observability
7.3 Observability in Homogeneous System
7.4 Observability in Nonhomogeneous System
7.5 Observability of an Initial State
7.6 Relation Between Controllability and Observability
7.7 Total Observability of a Stationary System
Chapter 8: Identification Problem
8.1 Statement of the Problem
8.2 Criterion of Identifiability
8.3 Restoring the Parameter Vector
8.4 Total Identification of Stationary System
Part III: Control of Nonlinear Systems
Chapter 9: Types of Optimal Control Problems
9.1 General Characteristics
9.2 Objective Functionals
Terminal Functional (Mayer Functional)
Mayer-Bolza Functional
9.3 Constraints on the Ends of a Trajectory. Terminology
9.4 The Simplest Problem
9.5 Two-Point Minimum Time Problem
9.6 General Optimal Control Problem
9.7 Problem with Intermediate States
9.8 Common Problem of Optimal Control
Chapter 10: Small Increments of a Trajectory
10.1 Statement of a Problem
10.2 Evaluation of the Increment of Trajectory
10.3 Representation of Small Increments of Trajectory
10.4 Relation of the Ends of Trajectories
Chapter 11: The Simplest Problem of Optimal Control
11.1 Simplest-Problem. Functional Increment Formula
11.2 Maximum Principle for the Simplest Problem
11.3 Boundary Value Problem of the Maximum Principle
11.4 Continuity of the Hamiltonian
11.5 Sufficiency of the Maximum Principle
11.6 Applying the Maximum Principle to the Linear Problem
11.7 Solution of the Mass-Spring Example
Chapter 12: General Optimal Control Problem
12.1 General Problem. Functional Increment Formula
12.2 Variation of the Process
12.3 Necessary Conditions for Optimality
12.4 Lagrange Multiplier Rule
12.5 Universal Lagrange Multipliers
12.6 Maximum Principle for the General Problem
12.7 Comments
12.8 Sufficiency of the Maximum Principle
12.9 Maximum Principle for Minimum Time Problem
12.10 Maximum Principle and Euler-Lagrange Equation
12.11 Maximum Principle and Optimality of a Process
Chapter 13: Problem with Intermediate States
13.1 Problem with Intermediate State. Functional Increment Formula
13.2 Preliminary Necessary Conditions of Optimality
13.3 Maximum Principle for the Problem with an Intermediate State
13.4 Discontinuous Systems
Chapter 14: Extremals Field Theory
14.1 Specifying of the Problem
14.2 Field of Extremals
14.3 Exact Formula for Large Increments of a Functional
14.4 Sufficiency of the Maximum Principle
14.5 Invariance of the Systems
14.6 Examples of an Invariant System
Chapter 15: Sufficient Optimality Conditions
15.1 Common Problem of Optimal Control
15.2 Basic Theorems
15.3 Analytical Construction of the Controller
15.4 Relation with Dynamic Programming
Conclusion
Appendix
A.1. Elements of Multidimensional Geometry
A.1.1. Finite-Dimensional Vector Space
A.1.2. Geometric Objects in Rⁿ
A.2. Elements of Convex Analysis
A.2.1. Separability of Sets
A.2.2. Reference Plane
A.2.3. Representation of a Convex Set
A.2.4. Convex Hull of a Set
A.3. Maximum and Minimum of a Function
A.3.1. Properties of a Maximum and Minimum
A.3.2. Continuity of a Maximum and Minimum
A.4. Tasks and Solutions
A.4.1. Tasks
Fundamental Matrix
Reachability Set
Reference Plane
Point-to-Point Controllability
Total Controllability
Minimum Time Problem
Observation Problem
Identification Problem
Synthesis of Control
Variants of Tasks. Tests
Variants of Tasks. Verification of a Process on Optimality
A.4.2. Examples of a Solution
Point-to-Point Controllability
Non-stationary System. Rank Criterion
Non-stationary System. Krasovskii's Theorem
Stationary System. Total Controllability
Minimum Time Problem
S-Problem
G-Problem
Variational Problem
Bibliography