Optimal Control Theory: An Introduction

Author(s): Donald E. Kirk
Publisher: Dover Publications
Year: 2004

Language: English
Pages: 472

Preface
Table of Contents
PART I: DESCRIBING THE SYSTEM AND EVALUATING ITS PERFORMANCE
1.1 Problem Formulation
1.2 State Variable Representation of Systems
1.3 Concluding Remarks
Problems
2.1 Performance Measures for Optimal Control Problems
2.2 Selecting a Performance Measure
2.3 Selection of a Performance Measure: The Carrier Landing of a Jet Aircraft
Problems
PART II: DYNAMIC PROGRAMMING
3.1 The Optimal Control Law
3.2 The Principle of Optimality
3.3 Application of the Principle of Optimality to Decision-Making
3.4 Dynamic Programming Applied to a Routing Problem
3.5 An Optimal Control System
3.6 Interpolation
3.7 A Recurrence Relation of Dynamic Programming
3.8 Computational Procedure for Solving Control Problems
3.9 Characteristics of Dynamic Programming Solution
3.10 Analytical Results: Discrete Linear Regulator Problems
3.11 The Hamilton-Jacobi-Bellman Equation
3.12 Continuous Linear Regulator Problems
3.13 The Hamilton-Jacobi-Bellman Equation: Some Observations
3.14 Summary
References
Problems
PART III: THE CALCULUS OF VARIATIONS AND PONTRYAGIN'S MINIMUM PRINCIPLE
4. The Calculus of Variations
4.1 Fundamental Concepts
4.2 Functionals of a Single Function
4.3 Functionals Involving Several Independent Functions
4.4 Piecewise-Smooth Extremals
4.5 Constrained Extrema
4.6 Summary
Problems
5.1 Necessary Conditions for Optimal Control
5.2 Linear Regulator Problems
5.3 Pontryagin's Minimum Principle and State Inequality Constraints
5.4 Minimum-Time Problems
5.5 Minimum Control-Effort Problems
5.6 Singular Intervals in Optimal Control Problems
5.7 Summary and Conclusions
References
Problems
PART IV: ITERATIVE NUMERICAL TECHNIQUES FOR FINDING OPTIMAL CONTROLS AND TRAJECTORIES
6. Numerical Determination of Optimal Trajectories
6.1 Two-Point Boundary-Value Problems
6.2 The Method of Steepest Descent
6.3 Variation of Extremals
6.4 Quasilinearization
6.5 Summary of Iterative Techniques for Solving Two-Point Boundary-Value Problems
6.6 Gradient Projection
References
Problems
PART V: CONCLUSION
7.1 The Relationship Between Dynamic Programming and the Minimum Principle
7.2 Summary
7.3 Controller Design
References
Appendix 1. Useful Matrix Properties and Definitions
Appendix 2. Difference Equation Representation of Linear Sampled-Data Systems
Appendix 3. Special Types of Euler Equations
Appendix 4. Answers to Selected Problems
Index