Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications

This book is a basic treatise on real-time computing, with particular emphasis on predictable scheduling algorithms. The main objectives of the book are to introduce the basic concepts of real-time computing, illustrate the most significant results in the field, and provide the basic methodologies for designing predictable computing systems useful in supporting critical control applications.
Hard Real-Time Computing Systems is written for instructional use and is organized to enable readers without a strong knowledge of the subject matter to quickly grasp the material. Technical concepts are clearly defined at the beginning of each chapter, and algorithm descriptions are corroborated through concrete examples, illustrations, and tables. This new, fourth edition includes new sections explaining the variable-rate task model and how to improve predictability and safety in cyber-physical real-time systems that exploit machine learning algorithms, additional coverage of Response Time Analysis, and a new chapter on implementing periodic real-time tasks under Linux.
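
As a taste of the topic of the new Linux chapter, the following is a minimal sketch (not the book's Ptask library) of a periodic task under Linux, built on POSIX threads and clock_nanosleep() with an absolute wake-up time so that release jitter does not accumulate; the task name, period value, and number of jobs are illustrative placeholders.

/*
 * Minimal sketch of a periodic task under Linux (illustrative only).
 * Build: gcc -O2 -o periodic periodic.c -lpthread
 */
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define PERIOD_NS    100000000L   /* 100 ms period (placeholder value) */
#define NSEC_PER_SEC 1000000000L

/* Advance a timespec by a given number of nanoseconds. */
static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= NSEC_PER_SEC) {
        t->tv_nsec -= NSEC_PER_SEC;
        t->tv_sec++;
    }
}

/* Body of the periodic task: execute the job, then sleep until the
 * next absolute activation time. */
static void *periodic_task(void *arg)
{
    struct timespec next;
    (void)arg;

    clock_gettime(CLOCK_MONOTONIC, &next);    /* first activation = now */

    for (int job = 0; job < 10; job++) {
        printf("job %d released\n", job);     /* placeholder for real work */

        timespec_add_ns(&next, PERIOD_NS);    /* next absolute release time */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, periodic_task, NULL);
    pthread_join(tid, NULL);
    return 0;
}

Using TIMER_ABSTIME keeps releases anchored to absolute instants, so a delay in one job does not drift into later periods; a real application would also assign a real-time scheduling policy (e.g., SCHED_FIFO) to the thread, a point the chapter addresses in detail.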

Author(s): Giorgio Buttazzo
Edition: 4
Publisher: Springer
Year: 2023

Language: English
Commentary: Publisher PDF | Published: 16 December 2023
Pages: xvii, 492
City: Cham
Tags: Real-Time Computing; Predictable Scheduling Algorithms; Aperiodic Task Scheduling; Critical Control Applications; Dynamic Priority Servers; Embedded Systems; Feedback Scheduling Techniques; Fixed-Priority Servers; Limited Preemptive Scheduling; Periodic Task Scheduling; Predictability; Priority Servers; Real-Time Embedded Systems; Resource Access Protocols; Resource Reservation; Task Scheduling

Preface
Contents of the Chapters
Difference with the Third Edition
Acknowledgments
Contents
1 A General View
1.1 Introduction
1.2 What Does Real Time Mean?
1.2.1 The Concept of Time
1.2.2 Limits of Current Real-Time Systems
1.2.3 Desirable Features of Real-Time Systems
1.3 Achieving Predictability
1.3.1 DMA
1.3.2 Cache
1.3.3 Interrupts
1.3.3.1 Approach A
1.3.3.2 Approach B
1.3.3.3 Approach C
1.3.4 System Calls
1.3.5 Semaphores
1.3.6 Memory Management
1.3.7 Programming Language
Exercises
2 Basic Concepts
2.1 Introduction
2.2 Types of Task Constraints
2.2.1 Timing Constraints
2.2.2 Precedence Constraints
2.2.3 Resource Constraints
2.3 Definition of Scheduling Problems
2.3.1 Classification of Scheduling Algorithms
2.3.1.1 Guarantee-Based Algorithms
2.3.1.2 Best-Effort Algorithms
2.3.2 Metrics for Performance Evaluation
2.4 Scheduling Anomalies
Exercises
3 Aperiodic Task Scheduling
3.1 Introduction
3.2 Jackson's Algorithm
3.2.1 Examples
3.2.1.1 Example 1
3.2.1.2 Example 2
3.2.2 Guarantee
3.3 Horn's Algorithm
3.3.1 EDF Optimality
3.3.2 Example
3.3.3 Guarantee
3.4 Non-preemptive Scheduling
3.4.1 Bratley's Algorithm (1 | no_preem | feasible)
3.4.2 The Spring Algorithm
3.5 Scheduling with Precedence Constraints
3.5.1 Latest Deadline First (1 | prec, sync | Lmax)
3.5.2 EDF with Precedence Constraints (1 | prec, preem | Lmax)
3.5.2.1 Modification of the Release Times
3.5.2.2 Modification of the Deadlines
3.5.2.3 Proof of Optimality
3.6 Summary
Exercises
4 Periodic Task Scheduling
4.1 Introduction
4.1.1 Processor Utilization Factor
4.2 Timeline Scheduling
4.3 Rate Monotonic Scheduling
4.3.1 Optimality
4.3.2 Calculation of U_lub for Two Tasks
4.3.3 Calculation of U_lub for N Tasks
4.3.4 Hyperbolic Bound for RM
4.4 Earliest Deadline First
4.4.1 Schedulability Analysis
4.4.2 An Example
4.5 Deadline Monotonic
4.5.1 Schedulability Analysis
4.5.2 Response Time Analysis
4.5.3 Improving RTA
4.5.4 Workload Analysis
4.6 EDF with Deadlines Less Than Periods
4.6.1 The Processor Demand Approach
4.6.2 Reducing Test Intervals
4.6.2.1 Example
4.7 Comparison Between RM and EDF
Exercises
5 Fixed-Priority Servers
5.1 Introduction
5.2 Background Scheduling
5.3 Polling Server
5.3.1 Schedulability Analysis
5.3.2 Dimensioning a Polling Server
5.3.3 Aperiodic Guarantee
5.4 Deferrable Server
5.4.1 Schedulability Analysis
5.4.1.1 Calculation of U_lub for RM+DS
5.4.2 Dimensioning a Deferrable Server
5.4.3 Aperiodic Guarantee
5.5 Priority Exchange
5.5.1 Schedulability Analysis
5.5.2 PE Versus DS
5.6 Sporadic Server
5.6.1 Schedulability Analysis
5.7 Slack Stealing
5.7.1 Schedulability Analysis
5.8 Non-existence of Optimal Servers
5.9 Performance Evaluation
5.10 Summary
Exercises
6 Dynamic Priority Servers
6.1 Introduction
6.2 Dynamic Priority Exchange Server
6.2.1 Schedulability Analysis
6.2.2 Reclaiming Spare Time
6.3 Dynamic Sporadic Server
6.3.1 Schedulability Analysis
6.4 Total Bandwidth Server
6.4.1 Schedulability Analysis
6.5 Earliest Deadline Late Server
6.5.1 EDL Server Properties
6.6 Improved Priority Exchange Server
6.6.1 Schedulability Analysis
6.6.2 Remarks
6.7 Improving TBS
6.7.1 An Example
6.7.2 Optimality
6.8 Performance Evaluation
6.9 The Constant Bandwidth Server
6.9.1 Definition of CBS
6.9.2 Scheduling Example
6.9.3 Formal Definition
6.9.4 CBS Properties
6.9.5 Simulation Results
6.9.6 Dimensioning CBS Parameters
6.9.6.1 Taking Overheads into Account
6.10 Summary
Exercises
7 Resource Access Protocols
7.1 Introduction
7.2 The Priority Inversion Phenomenon
7.3 Terminology and Assumptions
7.4 Non-preemptive Protocol
7.4.1 Blocking Time Computation
7.5 Highest Locker Priority Protocol
7.5.1 Blocking Time Computation
7.6 Priority Inheritance Protocol
7.6.1 Protocol Definition
7.6.1.1 Examples
7.6.2 Properties of the Protocol
7.6.3 Blocking Time Computation
7.6.3.1 Example
7.6.3.2 Exact Method
7.6.4 Implementation Considerations
7.6.5 Unsolved Problems
7.6.5.1 Chained Blocking
7.6.5.2 Deadlock
7.7 Priority Ceiling Protocol
7.7.1 Protocol Definition
7.7.1.1 Example
7.7.2 Properties of the Protocol
7.7.3 Blocking Time Computation
7.7.4 Implementation Considerations
7.8 Stack Resource Policy
7.8.1 Definitions
7.8.1.1 Priority
7.8.1.2 Preemption Level
7.8.1.3 Resource Units
7.8.1.4 Resource Requirements
7.8.1.5 Resource Ceiling
7.8.1.6 System Ceiling
7.8.2 Protocol Definition
7.8.2.1 Observations
7.8.2.2 Example
7.8.3 Properties of the Protocol
7.8.4 Blocking Time Computation
7.8.5 Sharing Runtime Stack
7.8.6 Implementation Considerations
7.9 Schedulability Analysis
7.10 Summary
Exercises
8 Limited Preemptive Scheduling
8.1 Introduction
8.1.1 Terminology and Notation
8.2 Non-preemptive Scheduling
8.2.1 Feasibility Analysis
8.3 Preemption Thresholds
8.3.1 Feasibility Analysis
8.3.2 Selecting Preemption Thresholds
8.4 Deferred Preemptions
8.4.1 Feasibility Analysis
8.4.2 Longest Non-preemptive Interval
8.5 Task Splitting
8.5.1 Feasibility Analysis
8.5.2 Longest Non-preemptive Interval
8.6 Selecting Preemption Points
8.6.1 Selection Algorithm
8.7 Assessment of the Approaches
8.7.1 Implementation Issues
8.7.2 Predictability
8.7.3 Effectiveness
8.8 Conclusions
Exercises
9 Handling Overload Conditions
9.1 Introduction
9.1.1 Load Definitions
9.1.2 Terminology
9.2 Handling Aperiodic Overloads
9.2.1 Performance Metrics
9.2.2 Online Versus Clairvoyant Scheduling
9.2.3 Competitive Factor
9.2.3.1 Task Generation Strategy
9.2.3.2 Proof of the Bound
9.2.3.3 Extensions
9.2.4 Typical Scheduling Schemes
9.2.5 The RED Algorithm
9.2.6 D^over: A Competitive Algorithm
9.2.7 Performance Evaluation
9.3 Handling Overruns
9.3.1 Resource Reservation
9.3.2 Schedulability Analysis
9.3.3 Handling Wrong Reservations
9.3.4 Resource Sharing
9.3.4.1 Solution 1: Budget Check
9.3.4.2 Solution 2: Budget Overrun
9.4 Handling Permanent Overloads
9.4.1 Job Skipping
9.4.1.1 Schedulability Analysis
9.4.1.2 Example
9.4.1.3 Skips and Bandwidth Saving
9.4.1.4 Example
9.4.2 Period Adaptation
9.4.2.1 Examples
9.4.2.2 The Elastic Model
9.4.2.3 Springs with No Length Constraints
9.4.2.4 Introducing Length Constraints
9.4.2.5 Compressing Tasks' Utilizations
9.4.2.6 Decompression
9.4.3 Implementation Issues
9.4.3.1 Period Rescaling
9.4.3.2 Concluding Remarks
9.4.4 Service Adaptation
9.4.4.1 Rate-Adaptive Tasks
Exercises
10 Kernel Design Issues
10.1 Structure of a Real-Time Kernel
10.2 Process States
10.3 Data Structures
10.4 Miscellaneous
10.4.1 Time Management
10.4.2 Task Classes and Scheduling Algorithm
10.4.3 Global Constants
10.4.4 Initialization
10.5 Kernel Primitives
10.5.1 Low-Level Primitives
10.5.2 List Management
10.5.3 Scheduling Mechanism
10.5.4 Task Management
10.5.5 Semaphores
10.5.6 Status Inquiry
10.6 Intertask Communication Mechanisms
10.6.1 Cyclical Asynchronous Buffers
10.6.2 CAB Implementation
10.7 System Overhead
10.7.1 Accounting for Interrupt
11 Application Design Issues
11.1 Introduction
11.2 Time Constraints Definition
11.2.1 Automatic Braking System
11.2.2 Robot Deburring
11.2.3 Multilevel Feedback Control
11.3 Hierarchical Design
11.3.1 Examples of Real-Time Robotics Applications
11.3.1.1 Assembly: Peg-in-Hole Insertion
11.3.1.2 Surface Cleaning
11.3.1.3 Object Tactile Exploration
11.3.1.4 Catching Moving Objects
11.4 A Robot Control Example
11.5 AI-Based Control Systems
11.5.1 Predictability Issues
11.5.1.1 GPU Issues
11.5.1.2 FPGA Issues
11.5.2 Mixed Criticality Issues
11.5.3 Safety and Security Issues
12 Implementing Periodic Tasks in Linux
12.1 Linux and the Pthread Library
12.2 Ptask Design Choices
12.3 Implementing Ptask Functions
12.4 An Example
13 Real-Time Operating Systems and Standards
13.1 Standards for Real-Time Operating Systems
13.1.1 RT-POSIX
13.1.2 OSEK/VDX
13.1.2.1 Details on the OSEK/VDX Standard
13.1.2.2 AUTOSAR OS
13.1.3 ARINC-APEX
13.1.4 Micro-ITRON
13.2 Commercial Real-Time Systems
13.2.1 VxWorks
13.2.2 OSE
13.2.3 QNX Neutrino
13.3 Linux-Related Real-Time Kernels
13.3.1 RTLinux
13.3.2 RTAI
13.3.3 Xenomai
13.3.4 PREEMPT_RT
13.3.5 SCHED_DEADLINE
13.3.6 Linux/RK
13.4 Open-Source Real-Time Research Kernels
13.4.1 Erika Enterprise
13.4.1.1 Efficient EDF Implementation
13.4.1.2 Multicore Support
13.4.2 Shark
13.4.2.1 Kernel Architecture
13.4.2.2 Scheduling Modules
13.4.2.3 Shared Resource Access Protocols
13.4.2.4 Device Management
13.4.3 Marte OS
13.4.3.1 Supported Functionality
13.4.3.2 Application-Defined Scheduling
13.4.3.3 Interrupt Management at Application Level
13.4.3.4 Drivers Framework
13.4.3.5 Ada Support
13.5 Development Tools
13.5.1 Timing Analysis Tools
13.5.2 Schedulability Analysis
13.5.3 Scheduling Simulators
14 Solutions to the Exercises
14.1 Solutions for Chap. 1
14.2 Solutions for Chap. 2
14.3 Solutions for Chap. 3
14.4 Solutions for Chap. 4
14.5 Solutions for Chap. 5
14.6 Solutions for Chap. 6
14.7 Solutions for Chap. 7
14.8 Solutions for Chap. 8
14.9 Solutions for Chap. 9
Glossary
References
Index