Lecture 14: Real-Time Concepts in Embedded Computing Systems

Lecture 14: Real-Time Concepts in Embedded Computing Systems
Mikko Lipasti
Based on slides and textbook from Wayne Wolf, High-Performance Embedded Computing, © 2007 Elsevier

Topics
- Ch. 4 in textbook
- Real-time scheduling
- Scheduling for power/energy
- Operating system mechanisms and overhead

Real-time scheduling terminology
- Process: a unique execution of a program.
- Context switch: the operating system's switch from one process to another.
- Time quantum: the time between OS interrupts.
- Schedule: a sequence of process executions or context switches.
- Thread: a process that shares an address space with other threads.
- Task: a collection of processes.
- Subtask: one process in a task.

Real-time scheduling algorithms
- Static scheduling algorithms determine the schedule off-line.
  - Constructive algorithms do not have a complete schedule until the end of the algorithm.
  - Iterative improvement algorithms build a schedule, then modify it.
- Dynamic scheduling algorithms build the schedule during system operation.
  - Priority schedulers assign priorities to processes.
  - Priorities may be static or dynamic.

Timing requirements
- Real-time systems have timing requirements.
  - Hard: missing a deadline causes system failure.
  - Soft: missing a deadline does not cause failure.
- Deadline: the time at which the computation must finish.
- Release time: the first time at which the computation may start.
- Period (T): the interval between deadlines.
- Relative deadline: the interval from release time to deadline.

Timing behavior
- Initiation time: the time when the process actually starts executing.
- Completion time: the time when the process finishes.
- Response time = completion time - release time.
- Execution time (C): the amount of CPU time required to run the process.

Utilization
- The total execution time C required to execute processes 1..n is the sum of the Ci for those processes.
- Given available time t, utilization U = C/t.
  - Generally expressed as a percentage.
  - The CPU can't deliver more than 100% utilization.
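
To make the arithmetic concrete, here is a minimal sketch that sums made-up execution times Ci over an available window t and prints U = C/t; the task values are purely illustrative.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical execution times (ms) for processes 1..n. */
        double c[] = {10.0, 25.0, 15.0};
        int n = sizeof(c) / sizeof(c[0]);
        double t = 100.0;            /* available time window (ms) */

        double total = 0.0;          /* total execution time C */
        for (int i = 0; i < n; i++)
            total += c[i];

        double u = total / t;        /* utilization U = C / t */
        printf("U = %.1f%%\n", u * 100.0);   /* 50.0% here; cannot exceed 100% */
        return 0;
    }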

Static scheduling algorithms
- Often take advantage of data dependencies.
  - Resource dependencies come from the implementation.
- As-soon-as-possible (ASAP): schedule each process as soon as its data dependencies allow.
- As-late-as-possible (ALAP): schedule each process as late as data dependencies and deadlines allow.
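
A minimal ASAP sketch, assuming a small hypothetical task graph listed in topological order: each process's start time is pushed out to the finish time of its latest predecessor.

    #include <stdio.h>

    #define N 4

    int main(void) {
        /* Hypothetical task graph: exec[i] is the execution time of process i,
         * dep[i][j] != 0 means process j depends on (must follow) process i.
         * Processes are assumed to be listed in topological order. */
        int exec[N] = {3, 2, 4, 1};
        int dep[N][N] = {
            {0, 1, 1, 0},   /* P0 -> P1, P0 -> P2 */
            {0, 0, 0, 1},   /* P1 -> P3 */
            {0, 0, 0, 1},   /* P2 -> P3 */
            {0, 0, 0, 0}
        };

        int start[N] = {0};
        /* ASAP: each process starts as soon as all of its predecessors finish. */
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                if (dep[i][j] && start[i] + exec[i] > start[j])
                    start[j] = start[i] + exec[i];

        for (int i = 0; i < N; i++)
            printf("P%d: ASAP start = %d\n", i, start[i]);
        return 0;
    }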

List scheduling
- A common form of constructive scheduler.

Priority-driven scheduling
- Each process has a priority.
- Processes may be ready or waiting.
- The highest-priority ready process runs in the current quantum.
- Priorities may be static or dynamic.
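
A minimal sketch of the dispatch decision described above, using a made-up process table: among the ready processes, the one with the highest static priority is chosen for the next quantum.

    #include <stdio.h>

    struct proc {
        const char *name;
        int priority;   /* convention chosen here: larger number = higher priority */
        int ready;      /* 1 if ready, 0 if waiting */
    };

    /* Return index of the highest-priority ready process, or -1 if none is ready. */
    static int pick_next(struct proc *p, int n) {
        int best = -1;
        for (int i = 0; i < n; i++)
            if (p[i].ready && (best < 0 || p[i].priority > p[best].priority))
                best = i;
        return best;
    }

    int main(void) {
        struct proc procs[] = {
            {"ui",      2, 1},
            {"network", 5, 0},   /* waiting, so not eligible */
            {"control", 4, 1},
        };
        int next = pick_next(procs, 3);
        if (next >= 0)
            printf("run %s this quantum\n", procs[next].name);  /* prints "control" */
        return 0;
    }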

Rate-monotonic scheduling
- Liu and Layland: proved properties of static-priority scheduling under these assumptions:
  - No data dependencies between processes.
  - Process periods may have arbitrary relationships.
  - Ideal (zero) context-switching time.
  - The release time of each process is the start of its period.
  - Process execution time is fixed.

Critical instant (figure: the worst case occurs when a process is released at the same time as all higher-priority processes)

Critical instant analysis
- Process 1 has the shorter period; process 2 has the longer period.
- If process 2 has the higher priority, then at the critical instant both deadlines can be met only if C1 + C2 <= T1, a very restrictive schedulability condition.
- Assigning the higher priority to the shorter-period process (rate-monotonic priority assignment) gives the least upper bound on utilization: U = C1/T1 + ... + Cn/Tn <= n(2^(1/n) - 1); a small feasibility check is sketched below.
- As the number of processes grows, this bound approaches ln 2, about 69%.
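
A small sketch of the Liu and Layland utilization test with a hypothetical task set; note the test is sufficient but not necessary, so exceeding the bound does not prove the set unschedulable.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Hypothetical periodic task set: execution time Ci and period Ti. */
        double c[] = {1.0, 2.0, 3.0};
        double t[] = {4.0, 8.0, 16.0};
        int n = sizeof(c) / sizeof(c[0]);

        double u = 0.0;
        for (int i = 0; i < n; i++)
            u += c[i] / t[i];

        /* Liu-Layland bound: n(2^(1/n) - 1); approaches ln 2 ~ 0.69 for large n. */
        double bound = n * (pow(2.0, 1.0 / n) - 1.0);

        printf("U = %.3f, bound = %.3f\n", u, bound);
        if (u <= bound)
            printf("schedulable under RMS (sufficient test passed)\n");
        else
            printf("test inconclusive: U exceeds the bound, RMS may still work\n");
        return 0;
    }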

Earliest-deadline-first (EDF) scheduling
- Liu and Layland: a dynamic-priority algorithm.
  - The process closest to its deadline has the highest priority.
- Relative deadline D.
- The process set must satisfy C1/T1 + ... + Cn/Tn <= 1 (utilization may go up to 100%).
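
The sketch below, with hypothetical numbers, checks the EDF utilization condition and then shows the dynamic-priority rule: the ready job with the earliest absolute deadline runs next.

    #include <stdio.h>

    struct job { const char *name; double remaining; double abs_deadline; };

    int main(void) {
        /* Utilization test for EDF with deadlines equal to periods: sum Ci/Ti <= 1. */
        double c[] = {1.0, 2.0, 3.0}, t[] = {4.0, 8.0, 16.0};
        double u = 0.0;
        for (int i = 0; i < 3; i++)
            u += c[i] / t[i];
        printf("U = %.3f -> %s\n", u, u <= 1.0 ? "feasible" : "infeasible");

        /* Dynamic priority: the ready job with the earliest absolute deadline runs. */
        struct job jobs[] = { {"a", 1.0, 12.0}, {"b", 0.5, 9.0}, {"c", 2.0, 20.0} };
        int best = 0;
        for (int i = 1; i < 3; i++)
            if (jobs[i].abs_deadline < jobs[best].abs_deadline)
                best = i;
        printf("EDF runs job %s next\n", jobs[best].name);   /* prints "b" */
        return 0;
    }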

Least-laxity-first (LLF) scheduling
- Laxity or slack: the time remaining until the deadline minus the remaining computation time.
  - The process with the smallest laxity has the highest priority.
- Unlike EDF, LLF takes the computation time into account in addition to the deadline.
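
A small illustration with made-up numbers: the laxity of each ready process is computed at the current time, and the smallest-laxity process is selected.

    #include <stdio.h>

    int main(void) {
        /* Laxity at time `now`: (deadline - now) - remaining computation time. */
        double now = 5.0;
        const char *name[] = {"a", "b", "c"};
        double deadline[]  = {12.0, 9.0, 20.0};
        double remaining[] = { 1.0, 3.5,  2.0};

        int best = 0;
        for (int i = 0; i < 3; i++) {
            double laxity = (deadline[i] - now) - remaining[i];
            printf("%s: laxity = %.1f\n", name[i], laxity);
            if (laxity < (deadline[best] - now) - remaining[best])
                best = i;
        }
        printf("LLF runs %s next\n", name[best]);   /* "b", with laxity 0.5 */
        return 0;
    }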

Priority inversion
- RMS and EDF assume no dependencies or outside resources.
- When processes use external resources, scheduling must take those resources into account.
- Priority inversion: an external resource can make a low-priority process continue to execute as if it had higher priority, blocking a higher-priority process that needs the resource.

Priority inversion example (figure)

Priority inheritance protocols
- Sha et al.: the basic priority inheritance protocol and the priority ceiling protocol.
- Basic priority inheritance: a process in a critical section executes at the highest priority of any process that shares that critical section (see the toy sketch below).
  - Can deadlock.
- Priority ceiling protocol: each semaphore has its own priority ceiling.
  - The priority required to obtain a semaphore depends on the priorities of the other locked semaphores.
- Schedulability test, with Bi the worst-case blocking time of process i: C1/T1 + ... + Ci/Ti + Bi/Ti <= i(2^(1/i) - 1) for every process i.
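
The following toy sketch illustrates only the basic inheritance idea (not the ceiling protocol, and with no blocking queue): when a higher-priority process blocks on a held resource, the holder temporarily inherits that priority and drops back to its base priority on release. The names and structure are hypothetical, not a real RTOS API.

    #include <stdio.h>

    struct proc { const char *name; int base_prio; int cur_prio; };
    struct resource { struct proc *holder; };

    /* Basic priority inheritance: if the requester has higher priority than the
     * current holder, the holder temporarily inherits the requester's priority. */
    static void request(struct resource *r, struct proc *p) {
        if (r->holder == NULL) {
            r->holder = p;
        } else if (p->cur_prio > r->holder->cur_prio) {
            printf("%s blocks; %s inherits priority %d\n",
                   p->name, r->holder->name, p->cur_prio);
            r->holder->cur_prio = p->cur_prio;
        }
    }

    static void release(struct resource *r) {
        r->holder->cur_prio = r->holder->base_prio;  /* drop back to base priority */
        r->holder = NULL;
    }

    int main(void) {
        struct proc low = {"low", 1, 1}, high = {"high", 9, 9};
        struct resource sem = {0};
        request(&sem, &low);    /* low-priority process enters the critical section */
        request(&sem, &high);   /* high-priority process blocks; low inherits 9 */
        release(&sem);          /* low leaves; its priority returns to 1 */
        printf("low priority restored to %d\n", low.cur_prio);
        return 0;
    }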

Scheduling for dynamic voltage scaling
- Dynamic voltage scaling (DVS): change the processor voltage to save power.
  - Power consumption goes down as V^2, while performance goes down only linearly with V.
- Must make sure that each process still meets its deadline at the reduced speed.
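
A back-of-the-envelope sketch of the tradeoff, under the simplified model that clock frequency scales linearly with voltage and dynamic power scales as V^2 * f: lowering the voltage cuts energy per job roughly quadratically, but the setting is only usable if the stretched execution time still fits the deadline. All numbers are invented.

    #include <stdio.h>

    int main(void) {
        double cycles = 1e8;                  /* work for one job (hypothetical) */
        double deadline = 0.4;                /* seconds */
        double v_full = 1.0, f_full = 5e8;    /* nominal voltage and frequency */

        for (double scale = 1.0; scale >= 0.5; scale -= 0.25) {
            double v = v_full * scale;
            double f = f_full * scale;        /* performance drops ~ linearly with V */
            double time = cycles / f;
            double power = v * v * f;         /* relative units, ~ V^2 * f */
            double energy = power * time;     /* ~ V^2 per cycle */
            printf("V=%.2f: time=%.2fs power=%.2e energy=%.2e %s\n",
                   v, time, power, energy,
                   time <= deadline ? "(meets deadline)" : "(misses deadline)");
        }
        return 0;
    }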

Yao et al.: DVS for real-time
- The intensity of an interval defines a lower bound on the average speed required to create a feasible schedule.
- The interval that maximizes the intensity is the critical interval.
- The speed of the optimal schedule over the critical interval equals that interval's intensity.
- Average rate heuristic: at any time, run at the sum of the densities Cj/(dj - rj) of the jobs that are currently active (released and not yet past their deadlines).
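
A hedged sketch of the interval-intensity computation, using the common formulation (total work of jobs whose release-to-deadline windows lie inside the interval, divided by the interval length) and a made-up job set; a brute-force scan over release/deadline boundary points finds the critical interval.

    #include <stdio.h>

    struct job { double release, deadline, work; };

    /* Intensity of [lo, hi]: total work of jobs whose [release, deadline] lies
     * inside the interval, divided by the interval's length. */
    static double intensity(struct job *j, int n, double lo, double hi) {
        double total = 0.0;
        for (int i = 0; i < n; i++)
            if (j[i].release >= lo && j[i].deadline <= hi)
                total += j[i].work;
        return total / (hi - lo);
    }

    int main(void) {
        struct job jobs[] = { {0, 4, 2}, {1, 5, 3}, {6, 10, 1} };
        /* Scan intervals bounded by release times and deadlines for the maximum. */
        double pts[] = {0, 1, 4, 5, 6, 10};
        double best = 0.0, blo = 0, bhi = 0;
        for (int a = 0; a < 6; a++)
            for (int b = a + 1; b < 6; b++) {
                double g = intensity(jobs, 3, pts[a], pts[b]);
                if (g > best) { best = g; blo = pts[a]; bhi = pts[b]; }
            }
        printf("critical interval [%.0f, %.0f], intensity %.2f\n", blo, bhi, best);
        return 0;
    }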

DVS with discrete voltages
- Ishihara and Yasuura: when only a finite set of discrete voltage levels is available, at most two of them are needed; the optimal schedule uses the two available levels that bracket the ideal continuous voltage.
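
A small sketch of the two-level idea under the usual simplification that speed is proportional to voltage: split the job's work between the two available frequencies bracketing the ideal continuous frequency so the deadline is met exactly. The numbers are hypothetical, and the paper's exact formulation may differ.

    #include <stdio.h>

    int main(void) {
        /* A job needs `cycles` of work by `deadline`; only two discrete settings
         * f_lo < f_hi are available, chosen here so f_lo < ideal < f_hi. */
        double cycles = 3.0e8, deadline = 1.0;
        double f_lo = 2.0e8, f_hi = 4.0e8;

        /* Ideal continuous frequency just meets the deadline. */
        double f_ideal = cycles / deadline;            /* 3.0e8 here */

        /* Run t_hi at f_hi and (deadline - t_hi) at f_lo so all cycles finish:
         * f_hi*t_hi + f_lo*(deadline - t_hi) = cycles  =>  solve for t_hi. */
        double t_hi = (cycles - f_lo * deadline) / (f_hi - f_lo);
        double t_lo = deadline - t_hi;

        printf("ideal f = %.2e; run %.2fs at f_hi and %.2fs at f_lo\n",
               f_ideal, t_hi, t_lo);
        return 0;
    }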

Procrastination scheduling
- A family of algorithms that maximizes the lengths of idle periods.
  - The CPU can be turned off during idle periods, further reducing energy consumption.
- Jejurikar et al.: power consumption P = PAC + PDC + Pon.
- Minimum breakeven time tth = Esd/Pidle, where Esd is the energy cost of a shutdown/wakeup cycle and Pidle is the idle power.
- Deadlines are guaranteed as long as the amount by which each task is procrastinated is bounded so that the task set remains schedulable.
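
A minimal sketch of the breakeven rule with invented numbers: the processor is shut down only when the predicted idle interval exceeds tth = Esd/Pidle, i.e., when the energy saved while off exceeds the shutdown/wakeup cost.

    #include <stdio.h>

    int main(void) {
        double e_sd = 0.3;     /* energy cost of one shutdown + wakeup (J), hypothetical */
        double p_idle = 0.1;   /* idle power when left on (W), hypothetical */
        double t_th = e_sd / p_idle;   /* minimum breakeven time: 3 s here */

        double idle[] = {1.0, 2.5, 4.0, 10.0};   /* predicted idle intervals (s) */
        for (int i = 0; i < 4; i++)
            printf("idle %.1fs: %s\n", idle[i],
                   idle[i] > t_th ? "shut down (saves energy)" : "stay idle");
        return 0;
    }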

Performance estimation
- Multiple processes interfere with one another in the cache.
  - Single-process performance evaluation cannot take into account the effects of a dynamic schedule.
- Kirk and Strosnider: segment the cache and allow processes to lock themselves into a segment.
- Mueller: use software methods to partition the cache.

Cache modeling and scheduling
- Li and Wolf: each process has a stable footprint in the cache.
- Two-state model:
  - The process is in the cache.
  - The process is not in the cache.
- Characterize the execution time in each state off-line.
- Use CPU time measurements along with the cache state to estimate process performance at each quantum.
- Kastner and Thiesing: a scheduling algorithm that takes the cache state into account.
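
A toy sketch of the two-state idea (not Li and Wolf's actual model or measurements): each task has a measured warm and cold execution time, the estimate for a quantum depends on whether the task's footprint is assumed to still be in the cache, and a crude eviction rule updates the state after every quantum.

    #include <stdio.h>

    struct task {
        const char *name;
        double c_warm;   /* measured execution time with footprint in the cache */
        double c_cold;   /* measured execution time starting with a cold cache */
        int in_cache;    /* two-state model: 1 = footprint present, 0 = evicted */
    };

    static double estimate(struct task *t) {
        return t->in_cache ? t->c_warm : t->c_cold;
    }

    int main(void) {
        struct task a = {"a", 2.0, 3.5, 0}, b = {"b", 1.0, 1.8, 0};
        struct task *tasks[] = {&a, &b};
        const char *sched[] = {"a", "b", "a", "a"};

        for (int q = 0; q < 4; q++) {
            struct task *cur = (sched[q][0] == 'a') ? &a : &b;
            printf("quantum %d: %s estimated %.1f\n", q, cur->name, estimate(cur));
            /* Crude state update: running a task loads its footprint and, in this
             * toy model, evicts the other task's footprint entirely. */
            for (int i = 0; i < 2; i++)
                tasks[i]->in_cache = (tasks[i] == cur);
        }
        return 0;
    }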

General-purpose vs. real-time OS
- Schedulers have very different goals in real-time and general-purpose operating systems:
  - A real-time scheduler must meet deadlines.
  - A general-purpose scheduler tries to distribute time equally among processes.
- Early real-time operating systems:
  - The Hunter/Ready OS for microcontrollers was developed in the early 1980s.
  - Mach ran on the VAX and other large platforms and provided real-time characteristics.

Memory management
- Memory management allows the RTOS to run applications from outside sources.
  - Cell phones run downloaded, user-installed programs.
- Memory management helps the RTOS manage a large virtual address space.
- Flash may be used as a paging device.

Windows CE memory management
- Flat 32-bit address space.
- Top 2 GB for the kernel.
  - Statically mapped.
- Bottom 2 GB for user processes.

WinCE user memory space
- 64 slots of 32 MB each.
- Slot 0 is the currently running process.
- Slots 1-33 hold the processes.
  - 32 processes max.
- Object store, memory-mapped files, and resource mappings occupy the upper slots:
  Slot 63: resource mappings
  Slots 33-62: object store, memory-mapped files
  ...
  Slot 3: process
  Slot 2: process
  Slot 1: DLLs
  Slot 0: current process
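
Since the user space is 64 slots of 32 MB, the slot holding a user-space address follows from integer division by 32 MB; the sketch below shows the arithmetic with a hypothetical address (details of the real mapping are simplified).

    #include <stdio.h>

    int main(void) {
        /* 64 slots x 32 MB = 2 GB of user space; slot base = slot * 32 MB. */
        const unsigned long SLOT_SIZE = 32UL * 1024 * 1024;

        unsigned long addr = 0x06123456UL;           /* hypothetical user address */
        unsigned long slot = addr / SLOT_SIZE;       /* same as addr >> 25 */
        unsigned long offset = addr % SLOT_SIZE;

        printf("address 0x%08lx -> slot %lu, offset 0x%lx\n", addr, slot, offset);
        printf("slot %lu base = 0x%08lx\n", slot, slot * SLOT_SIZE);
        return 0;
    }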

Mechanisms for real-time operation
- Two key mechanisms for real time:
  - The interrupt handler.
  - The scheduler.
- The interrupt handler is part of the priority system.
  - It also introduces overhead.
- The scheduler determines the ability to meet deadlines.

Interrupt handling in RTOSs
- Interrupts have priorities set in hardware; these priorities supersede process priorities.
- We want to spend as little time as possible at hardware interrupt priority to avoid interfering with the scheduler.
- Two layers of processing:
  - The interrupt service routine (ISR) is dispatched by hardware.
  - The interrupt service thread (IST) is a process.
- Spend as little time as possible in the ISR (hardware priorities) and do most of the work in the IST (scheduler priorities), as sketched below.
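
Below is a host-side sketch of the ISR/IST split using POSIX threads to stand in for the hardware path (a real RTOS would register the ISR with the kernel instead): the stand-in ISR does almost nothing beyond signalling a semaphore, and the IST, running at ordinary thread priority, does the actual device handling. Compile with -pthread.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t irq_event;   /* ISR -> IST signalling */

    /* Stands in for the ISR: do the bare minimum (acknowledge the device,
     * capture volatile state), signal the IST, and return quickly. */
    static void fake_isr(void) {
        /* ...acknowledge hardware, mask the interrupt source... */
        sem_post(&irq_event);             /* wake the interrupt service thread */
    }

    /* Interrupt service thread: runs under the scheduler's (process) priorities
     * and performs the bulk of the device handling. */
    static void *ist(void *arg) {
        (void)arg;
        for (int i = 0; i < 3; i++) {
            sem_wait(&irq_event);
            printf("IST: handling interrupt %d at thread priority\n", i);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t;
        sem_init(&irq_event, 0, 0);
        pthread_create(&t, NULL, ist, NULL);
        for (int i = 0; i < 3; i++) {     /* simulate three device interrupts */
            usleep(1000);
            fake_isr();
        }
        pthread_join(t, NULL);
        sem_destroy(&irq_event);
        return 0;
    }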

Windows CE interrupts
- Two types of ISRs:
  - Static ISRs are built into the kernel and have one-way communication to the IST.
  - Installable ISRs can be dynamically loaded and use shared memory to communicate with the IST.

Static ISR
- Built into the kernel.
  - On SHx and MIPS it must be written in assembler, with limited register availability.
- One-way communication from ISR to IST.
  - They can share a buffer, but its location must be predefined.
- Nested ISR support depends on the CPU and the OEM's initialization.
- The stack is provided by the kernel.

Installable ISR
- Can be dynamically loaded into the kernel.
- Loaded as a C DLL.
- Can use shared memory for communication.
- ISRs are processed in the order in which they were installed.
- Limited stack size.

WinCE 4.x interrupts (figure: a hardware device raises an interrupt; the ISR/ISH in the OAL and kernel runs with higher-priority interrupts enabled, masks the interrupt's ID, and sets an event; the IST thread performs the processing; the interrupt ID is then re-enabled, with all interrupts enabled again)

Interprocess communication
- IPC is often used for large-scale communication in general-purpose systems.
- Mailboxes are specialized memories used for small, fast transfers.
- Multimedia systems can be supported by quality-of-service (QoS)-oriented interprocess communication services.

Power management
- The Advanced Configuration and Power Interface (ACPI) standard defines power management levels:
  - G3: mechanical off.
  - G2: soft off.
  - G1: sleeping.
  - G0: working.
  - Legacy state.

Summary
- Ch. 4 in textbook
- Real-time scheduling
- Scheduling for power/energy
- Operating system mechanisms and overhead
© 2006 Elsevier