CPS 110: Implementing threads
Landon Cox

Recap and looking ahead
[Diagram: the applications / OS / hardware stack. Where we've been: the application side. Where we're going: the OS and the hardware beneath it.]

Recall, thread interactions
1. Threads can access shared data
   - E.g., use locks, monitors
   - What we've done so far
2. Threads also share hardware
   - CPU and memory
   - For this class, assume a uni-processor
   - Single CPU core: one thread runs at a time
   - Unrealistic in the multicore era!

Hardware, OS interfaces
[Diagram: applications (Job 1, Job 2, Job 3) sit on top of the OS, which manages the hardware's CPU and memory. The thread lectures up to this point covered the application layer; the remaining thread lectures cover how the OS manages the CPU, and the memory lectures cover how it manages memory.]

The play analogy
- A process is like a play performance
- A program is like the play's script
- One CPU is like a one-man show (the actor switches between roles)
[Diagram: threads within an address space]

Threads that aren't running
- What is a non-running thread?
  - A thread is a "stream of executing instructions"
  - A non-running thread is "paused execution"
  - Blocked/waiting, or suspended but ready
- Must save the thread's private state
  - Leave the stack etc. in memory where it lies
  - Save registers to memory
  - Reload registers to resume the thread

Private vs. global thread state
- What state is private to each thread?
  - PC (where the actor is in his/her script)
  - Stack, SP (the actor's mindset)
- What state is shared?
  - Code (like the lines of a play)
  - Global variables, heap (props on the set)
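To make the private/shared split concrete, here is a small illustration using POSIX threads (pthreads are used only for illustration; the course builds its own thread library, and the names below are made up for this sketch). The global counter is shared by every thread, while each thread's stack variable is private to it.

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;                /* global: shared by all threads    */

    static void *worker(void *arg) {
        (void)arg;
        int private_count = 0;             /* on this thread's stack: private  */
        for (int i = 0; i < 1000; i++) {
            private_count++;               /* no race: each thread has its own */
            shared_counter++;              /* racy: every thread shares this   */
        }
        printf("private=%d\n", private_count);   /* always 1000 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared=%d\n", shared_counter);   /* may be < 2000 without a lock */
        return 0;
    }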

Thread control block (TCB)
- The software that manages threads and schedules/dispatches them is the thread system or "OS"
- The OS must maintain data to describe each thread
  - Thread control block (TCB)
  - Container for a non-running thread's private data
  - Values of PC, SP, other registers ("context")
  - Each thread also has a stack
- Other OS data structures (scheduler queues, locks, waiting lists) reference these TCB objects (a sketch follows)
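A minimal TCB sketch, assuming a user-level thread library built on ucontext (the primitive Project 1 uses); the struct and field names are illustrative, not the course's actual definitions.

    #include <ucontext.h>

    enum thread_state { READY, RUNNING, BLOCKED, FINISHED };

    struct tcb {
        ucontext_t context;         /* saved PC, SP, and other registers      */
        char *stack;                /* base of this thread's private stack    */
        enum thread_state state;    /* where the thread is in its lifecycle   */
        struct tcb *next;           /* link for the ready queue/waiting lists */
    };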

Thread control block
[Diagram: an address space containing code, per-thread stacks, and three TCBs, each holding a saved PC, SP, and registers. Thread 1 is running on the CPU; the ready queue points to the other TCBs.]

Thread states
- Running
  - Currently using the CPU
- Ready (suspended)
  - Ready to run when the CPU is next available
- Blocked (waiting or sleeping)
  - Stuck in lock(), wait(), or down()

Switching threads
- What needs to happen to switch threads?
1. Thread returns control to the OS
   - For example, via the "yield" call
2. OS chooses the next thread to run
3. OS saves the state of the current thread
   - To its thread control block
4. OS loads the context of the next thread
   - From its thread control block
5. Run the next thread
- Project 1: swapcontext (sketched below)
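A minimal, self-contained sketch of steps 3-5 using the same ucontext primitives Project 1 builds on: swapcontext() saves the current thread's registers into one context (its "TCB") and loads the next thread's registers from another. The stack size and function names here are illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thread2_ctx;

    static void thread2(void) {
        printf("thread 2: running\n");
        swapcontext(&thread2_ctx, &main_ctx);   /* save thread 2, load main       */
        printf("thread 2: resumed\n");
    }

    int main(void) {
        char *stack = malloc(64 * 1024);

        /* Build thread 2's initial context (thread creation is covered later). */
        getcontext(&thread2_ctx);
        thread2_ctx.uc_stack.ss_sp = stack;
        thread2_ctx.uc_stack.ss_size = 64 * 1024;
        thread2_ctx.uc_link = &main_ctx;        /* where to go if thread2 returns */
        makecontext(&thread2_ctx, thread2, 0);

        printf("main: switching to thread 2\n");
        swapcontext(&main_ctx, &thread2_ctx);   /* save main, run thread 2        */
        printf("main: back, switching again\n");
        swapcontext(&main_ctx, &thread2_ctx);   /* resume thread 2 where it paused */
        printf("main: done\n");
        free(stack);
        return 0;
    }

Each swapcontext call performs steps 3-5 in one shot: it saves the caller's registers, loads the other context's registers, and jumps to its saved PC.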

1. Thread returns control to OS
- How does the thread system get control?
- Voluntary internal events
  - Thread might block inside lock or wait
  - Thread might call into the kernel for service (system call)
  - Thread might call yield
- Are internal events enough?

1. Thread returns control to OS
- Involuntary external events (events not initiated by the thread)
- Hardware interrupts
  - Transfer control directly to OS interrupt handlers
  - From 104: the CPU checks for interrupts while executing, then jumps to OS code with the interrupt mask set
- The OS may preempt the running thread (force a yield) when an interrupt gives the OS control of its CPU
- Common interrupt: the timer interrupt (see the sketch below)
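A sketch of the involuntary path, assuming a user-level thread system: a periodic timer signal (setitimer/SIGALRM) stands in for the hardware timer interrupt. For simplicity the handler only records that a preemption is due; a real system would force the running thread to yield at the next safe point. The flag name is made up.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    static volatile sig_atomic_t preempt_pending = 0;

    static void timer_handler(int sig) {
        (void)sig;
        preempt_pending = 1;             /* the "OS" now knows a time slice expired */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = timer_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval tv;
        tv.it_interval.tv_sec = 0;
        tv.it_interval.tv_usec = 10000;  /* re-fire every 10 ms, like a timer interrupt */
        tv.it_value = tv.it_interval;
        setitimer(ITIMER_REAL, &tv, NULL);

        for (int slices = 0; slices < 5; ) {
            /* ...the running thread's work would go here... */
            if (preempt_pending) {
                preempt_pending = 0;
                slices++;
                printf("timer fired: would force the running thread to yield\n");
            }
        }
        return 0;
    }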

2. Choosing the next thread
- If no ready threads, just spin
  - Modern CPUs: execute a "halt" instruction
  - Project 1: exit if no ready threads
- Loop switches to a thread if one is ready (a minimal ready-queue sketch follows)
- Many ways to prioritize ready threads
  - Will discuss a little later in the semester
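A tiny sketch of the choice itself, assuming a FIFO ready queue of TCBs and the Project 1 policy of exiting when nothing is ready; the names and the dispatch placeholder are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    struct tcb { int id; struct tcb *next; };

    static struct tcb *ready_head = NULL, *ready_tail = NULL;

    static void ready_push(struct tcb *t) {          /* append to the ready queue */
        t->next = NULL;
        if (ready_tail) ready_tail->next = t; else ready_head = t;
        ready_tail = t;
    }

    static struct tcb *pick_next(void) {             /* FIFO: pop the head */
        struct tcb *t = ready_head;
        if (!t) return NULL;
        ready_head = t->next;
        if (!ready_head) ready_tail = NULL;
        return t;
    }

    int main(void) {
        struct tcb a = {1, NULL}, b = {2, NULL};
        ready_push(&a);
        ready_push(&b);

        for (;;) {
            struct tcb *next = pick_next();
            if (!next) {                             /* no ready threads: exit */
                printf("no ready threads, exiting\n");
                exit(0);
            }
            printf("dispatch thread %d\n", next->id);
            /* ...swapcontext to next's saved context would go here... */
        }
    }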

3. Saving state of current thread
- What needs to be saved?
  - Registers, PC, SP
- What makes this tricky?
  - Self-referential sequence of actions
  - Need registers to save state, but you're trying to save all the registers
  - Saving the PC is particularly tricky

Saving the PC
- Why won't this work?

    address 100: store PC in TCB
    address 101: switch to next thread

- The returning thread will execute the instruction at address 100
  - And just re-execute the switch
  - Really want to save address 102
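In practice this is why getcontext()/swapcontext() save a resume point just past the call itself, so the "address 102" problem goes away. A small demo (the flag is only there to stop the example from looping forever):

    #include <stdio.h>
    #include <ucontext.h>

    int main(void) {
        ucontext_t ctx;
        volatile int resumed = 0;    /* volatile: must survive the jump back   */

        getcontext(&ctx);            /* saved PC points just past this call    */
        if (!resumed) {
            resumed = 1;
            printf("first pass: context saved\n");
            setcontext(&ctx);        /* resume at the point after getcontext() */
        }
        printf("second pass: resumed without re-running the save\n");
        return 0;
    }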

4. OS loads the next thread
- Where is the thread's state/context?
  - Thread control block (in memory)
- How to load the registers?
  - Use load instructions to grab them from memory
- How to load the stack?
  - The stack is already in memory; just load SP

5. OS runs the next thread
- How to resume the thread's execution?
  - Jump to the saved PC
- On whose stack are these steps running? (i.e., who jumps to the saved PC?)
  - The thread that called yield (or was interrupted, or called lock/wait)
- How does this thread run again?
  - Some other thread must switch to it

Example thread switching

    Thread 1:
      print "start thread 1"
      yield()
      print "end thread 1"

    Thread 2:
      print "start thread 2"
      yield()
      print "end thread 2"

    yield() (as run by thread 1):
      print "start yield (thread %d)"
      swapcontext(tcb1, tcb2)
      print "end yield (thread %d)"
      return

    swapcontext(tcb1, tcb2) expands to:
      save regs to tcb1
      load regs from tcb2     // SP points to tcb2's stack now!
      jump tcb2.pc            // when this thread later resumes, SP must point to tcb1's stack!

Output (in chronological order):
    start thread 1          (thread 1)
    start yield (thread 1)  (thread 1)
    start thread 2          (thread 2)
    start yield (thread 2)  (thread 2)
    end yield (thread 1)    (thread 1)
    end thread 1            (thread 1)
    end yield (thread 2)    (thread 2)
    end thread 2            (thread 2)

Note: this assumes no pre-emptions. If the OS is preemptive, then other interleavings are possible.

Thread states (transition diagram)
- Ready -> Running: the thread is scheduled
- Running -> Ready: the thread is pre-empted (or yields)
- Running -> Blocked: the thread calls lock or wait (or makes an I/O request)
- Blocked -> ?: another thread calls unlock or signal (or I/O completes)

Creating a new thread
- Also called "forking" a thread
- Idea: create initial state, put it on the ready queue
1. Allocate, initialize a new TCB
2. Allocate a new stack
3. Make it look like the thread was about to call a function
   - PC points to the first instruction in the function
   - SP points to the new stack
   - Stack contains the arguments passed to the function
   - Project 1: use makecontext (sketched below)
4. Add the thread to the ready queue
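A sketch of thread creation with makecontext (the Project 1 primitive): allocate a TCB and a stack, make the saved context look like the thread is about to call func(arg), and put it on a ready queue. The TCB layout, queue, and dispatch loop here are illustrative, not the project's actual code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <ucontext.h>

    #define STACK_SIZE (64 * 1024)

    struct tcb {
        ucontext_t context;
        char *stack;
        struct tcb *next;
    };

    static struct tcb *ready_head = NULL;
    static ucontext_t scheduler_ctx;              /* where finished threads return  */

    static void thread_create(void (*func)(int), int arg) {
        struct tcb *t = malloc(sizeof *t);        /* 1. allocate a TCB              */
        t->stack = malloc(STACK_SIZE);            /* 2. allocate a stack            */

        getcontext(&t->context);                  /* start from a valid context     */
        t->context.uc_stack.ss_sp = t->stack;     /* SP will point to the new stack */
        t->context.uc_stack.ss_size = STACK_SIZE;
        t->context.uc_link = &scheduler_ctx;      /* run scheduler when func returns */
        makecontext(&t->context, (void (*)(void))func, 1, arg);  /* 3. PC = func    */

        t->next = ready_head;                     /* 4. add to the ready queue      */
        ready_head = t;
    }

    static void child(int id) {
        printf("child %d runs\n", id);
    }

    int main(void) {
        thread_create(child, 1);
        thread_create(child, 2);

        while (ready_head) {                      /* tiny dispatch loop             */
            struct tcb *t = ready_head;
            ready_head = t->next;
            swapcontext(&scheduler_ctx, &t->context);
            free(t->stack);
            free(t);
        }
        printf("all threads finished\n");
        return 0;
    }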

Creating a new thread
[Diagram: compared with an ordinary call/return, thread_create lets the parent continue with its own work while the child runs its work concurrently.]

Thread join
- How can the parent wait for the child to finish?
[Diagram: the parent calls thread_create, continues with its own work, and then calls join to wait for the child's work to finish.]

Thread join
- Will this work?

    parent() {
      create child thread
      print "parent works"
      yield()
      print "parent continues"
    }

    child() {
      print "child works"
    }

    Hoped-for output:
      parent works
      child works
      parent continues

- Sometimes, assuming:
  - Uni-processor
  - No pre-emptions
  - The child runs after the parent thread
- Never, ever assume these things!
  - Yield is like slowing down the CPU
  - The program must work +/- any yields (adding or removing yields must not break it)

Thread join
- Will this work?

    parent() {
      create child thread
      lock
      print "parent works"
      wait
      print "parent continues"
      unlock
    }

    child() {
      lock
      print "child works"
      signal
      unlock
    }

    Desired output:
      parent works
      child works
      parent continues

- No. The child can call signal first; the signal is missed and the parent waits forever.
- Would this work with semaphores?
  - Yes
  - No missed signals (the up/post increments the semaphore's value); a sketch follows
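The semaphore version can be sketched as follows (POSIX threads and semaphores are used only for illustration): the child's up/post is remembered in the semaphore's count, so the parent cannot miss it even if the child runs first.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t child_done;                 /* initialized to 0 */

    static void *child(void *arg) {
        (void)arg;
        printf("child works\n");
        sem_post(&child_done);               /* "up": recorded even if no one is waiting yet */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        sem_init(&child_done, 0, 0);

        pthread_create(&t, NULL, child, NULL);
        printf("parent works\n");

        sem_wait(&child_done);               /* "down": blocks only if the post hasn't happened */
        printf("parent continues\n");

        pthread_join(t, NULL);
        sem_destroy(&child_done);
        return 0;
    }

With real pre-emptive threads the first two lines may appear in either order, but "parent continues" always comes after "child works".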

How can we solve this?
- Pair off for a couple of minutes

    parent() {

    }

    child() {

    }

    Desired output:
      parent works
      child works
      parent continues
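For comparison, one common monitor-style solution is to add a shared "done" flag protected by the lock and to wait in a while loop, so it does not matter whether the child signals before the parent waits. Sketched here with POSIX primitives purely for illustration:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    static int child_done = 0;               /* shared state, protected by m */

    static void *child(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m);
        printf("child works\n");
        child_done = 1;                      /* record the event ...                         */
        pthread_cond_signal(&c);             /* ... the signal can come "early", the flag not */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        printf("parent works\n");
        pthread_create(&t, NULL, child, NULL);

        pthread_mutex_lock(&m);
        while (!child_done)                  /* re-check: safe even if the child ran first */
            pthread_cond_wait(&c, &m);
        pthread_mutex_unlock(&m);

        printf("parent continues\n");
        pthread_join(t, NULL);
        return 0;
    }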