Mathematical Control Theory: An Introduction

This textbook presents, in a mathematically precise manner, a unified introduction to deterministic control theory. With the exception of a few more advanced concepts required for the final part of the book, the presentation requires only knowledge of basic facts from linear algebra, differential equations, and calculus. In addition to classical concepts and ideas, the author covers the stabilization of nonlinear systems using topological methods, realization theory for nonlinear systems, impulsive control and positive systems, the control of rigid bodies, the stabilization of infinite-dimensional systems, and the solution of minimum energy problems.

This second edition includes new chapters introducing a variety of topics, such as controllability with vanishing energy, boundary control systems, and delayed systems. With additional proofs, theorems, and results, and a substantially larger index, the new edition will be an invaluable resource for students and researchers of control theory. Mathematical Control Theory: An Introduction is ideal for a beginning graduate course in mathematical control theory, or for self-study by professionals who need a complete picture of the mathematical theory underlying the applications of control theory.
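
As a glimpse of the material in Part I (a standard formulation, not quoted from the book), the prototypical object of study is the finite-dimensional linear control system

    \dot{x}(t) = A x(t) + B u(t), \qquad x(0) = x_0 \in \mathbb{R}^n,

where A is an n x n matrix, B is an n x m matrix, and u(.) is the control function. The rank condition of Chapter 1 (the Kalman criterion) states that the system is controllable, i.e., any state can be steered to any other state in finite time, if and only if

    \mathrm{rank}\,[B \;\; AB \;\; \cdots \;\; A^{n-1}B] = n.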

Author(s): Jerzy Zabczyk
Series: Systems & Control: Foundations & Applications
Edition: 2
Publisher: Birkhäuser
Year: 2020

Language: English
Tags: Controllability, Observability, Stability, Dynamic Programming

Preface to the first edition
Preface to the second edition
Contents
Introduction
0.1 Problems of mathematical control theory
0.2 Specific models
Bibliographical notes
Part I Elements of Classical Control Theory
1 Controllability and observability
1.1 Linear differential equations
1.2 The controllability matrix
1.3 Rank condition
1.4 A classification of control systems
1.5 Kalman decomposition
1.6 Observability
Bibliographical notes
2 Stability and stabilizability
2.1 Stable linear systems
2.2 Stable polynomials
2.3 The Routh theorem
2.4 Stability, observability and Lyapunov equation
2.5 Stabilizability and controllability
2.6 Hautus lemma
2.7 Detectability and dynamical observers
Bibliographical notes
3 Controllability with vanishing energy
3.1 Characterization theorems
3.2 Orbital transfer problem
3.3 NCVE and generalized Liouville's theorem
Bibliographical notes
4 Systems with constraints
4.1 Bounded sets of parameters
4.2 Positive systems
Bibliographical notes
5 Realization theory
5.1 Impulse response and transfer functions
5.2 Realizations of the impulse response function
5.3 The characterization of transfer functions
Bibliographical notes
Part II Nonlinear Control Systems
6 Controllability and observability of nonlinear systems
6.1 Nonlinear differential equations
6.2 Controllability and linearization
6.3 Lie brackets
6.4 The openness of attainable sets
6.5 Observability
Bibliographical notes
7 Stability and stabilizability
7.1 Differential inequalities
7.2 The main stability test
7.3 Linearization
7.4 The Lyapunov function method
7.5 La Salle's theorem
7.6 Topological stability criteria
7.7 Exponential stabilizability and the robustness problem
7.8 Necessary conditions for stabilizability
7.9 Stabilization of the Euler equations
Bibliographical notes
8 Realization theory
8.1 Input–output maps
8.2 Partial realizations
Bibliographical notes
Part III Optimal Control
9 Dynamic programming
9.1 Introductory comments
9.2 Bellman equation and the value function
9.3 The linear regulator problem and the Riccati equation
9.4 The linear regulator and stabilization
Bibliographical notes
10 Viscosity solutions of Bellman equations
10.1 Viscosity solution
10.2 Dynamic programming principle
10.3 Regularity of the value function
10.4 Existence of the viscosity solution
10.5 Uniqueness of the viscosity solution
Bibliographical notes
11 Dynamic programming for impulse control
11.1 Impulse control problems
11.2 An optimal stopping problem
11.3 Iterations of convex mappings
11.4 The proof of Theorem 11.1
Bibliographical notes
12 The maximum principle
12.1 Control problems with fixed terminal time
12.2 An application of the maximum principle
12.3 The maximum principle for impulse control problems
12.4 Separation theorems
12.5 Time-optimal problems
Bibliographical notes
13 The existence of optimal strategies
13.1 A control problem without an optimal solution
13.2 Filippov's theorem
Bibliographical notes
Part IV Infinite-Dimensional Linear Systems
14 Linear control systems
14.1 Introduction
14.2 Semigroups of operators
14.3 The Hille–Yosida theorem
14.4 Phillips' theorem
14.5 Important classes of generators
14.6 Specific examples of generators
14.7 The integral representation of linear systems
14.8 Delay systems
14.9 Existence of solutions to delay equations
14.10 Semigroup approach to delay systems
14.11 State space representation of delay systems
Bibliographical notes
15 Controllability
15.1 Images and kernels of linear operators
15.2 The controllability operator
15.3 Various concepts of controllability
15.4 Systems with self-adjoint generators
15.5 Controllability of the wave equation
Bibliographical notes
16 Stability and stabilizability
16.1 Various concepts of stability
16.2 Spectrum determined growth assumptions
16.3 Stability of delay equations
16.4 Lyapunov's equation
16.5 Stability of abstract hyperbolic systems
16.6 Stabilizability and controllability
Bibliographical notes
17 Linear regulators in Hilbert spaces
17.1 Introduction
17.2 The operator Riccati equation
17.3 The finite horizon case
17.4 Regulator problem for delay systems
17.5 The infinite horizon case: Stabilizability and detectability
Bibliographical notes
18 Boundary control systems
18.1 Semigroup approach to boundary control systems
18.2 Boundary systems with general controls
18.3 Heating systems
18.4 Approximate controllability of the heating systems
18.5 Null controllability for a boundary control system
Bibliographical notes
Appendix
A.1 Measurable spaces
A.2 Metric spaces
A.3 Banach spaces
A.4 Hilbert spaces
A.5 Bochner's integral
A.6 Spaces of continuous functions
A.7 Spaces of measurable functions
References
Index