Class 6
CS 170
21 October 2020

On the board
------------

1. Last time

2. Intro to concurrency wrap up

3. Managing concurrency: protect critical sections

4. Mutexes

---------------------------------------------------------------------------

2. Intro to concurrency wrap up

--handout: 3: incorrect count in buffer

    all of these are called race conditions; not all of them are
    errors, though

--worst part of errors from race conditions is that a program may work
  fine most of the time but only occasionally show problems. why?
  (because the instructions of the various threads or processes or
  whatever get interleaved in a non-deterministic order.)

--and it's worse than that because inserting debugging code may change
  the timing so that the bug doesn't show up

--hardware makes the problem even harder

    --sequential consistency not always in effect

    --sequential consistency means:

        --maintain program order on individual processors

        --ensure that writes happen to each memory location (viewed
          separately) in the order that they are issued

    --assume sequential consistency until we explicitly relax it

3. Managing concurrency

    * critical sections
    * protecting critical sections
    * implementing critical sections

--step 1: the concept of *critical sections*

    --Regard accesses of shared variables (for example, "count" in the
      bounded buffer example) as being in a _critical section_

    --Critical sections will be protected from concurrent execution

    --Now we need a solution to the _critical section_ problem

    --Solution must satisfy 3 properties:

        1. mutual exclusion

            only one thread can be in the c.s. at a time
            [this is the notion of atomicity]

        2. progress

            if no thread is executing in the c.s., one of the threads
            trying to enter a given c.s. will eventually get in

        3. bounded waiting

            once a thread T starts trying to enter the critical
            section, there is a bound on the number of other threads
            that may enter the critical section before T enters

    --Note progress vs. bounded waiting:

        --If no thread can enter the c.s., we don't have progress

        --If thread A is waiting to enter the c.s. while B repeatedly
          leaves and re-enters the c.s. ad infinitum, we don't have
          bounded waiting

    --We will be mostly concerned with mutual exclusion

--step 2: protecting critical sections

    --want lock()/unlock() or enter()/leave() or acquire()/release()

        --lots of names for the same idea

        --mutex_init(mutex_t* m), mutex_lock(mutex_t* m),
          mutex_unlock(mutex_t* m), ...

        --pthread_mutex_init(), pthread_mutex_lock(), ...

    --in each case, the semantics are that once a thread of execution
      is executing inside the critical section, no other thread of
      execution is executing there

--step 3: implementing critical sections

    --"easy" way, assuming a uniprocessor machine:

        enter() --> disable interrupts
        leave() --> reenable interrupts

      [convince yourself that this provides mutual exclusion]

    --we will study other implementations later. for now, focus on the
      use

4. Mutexes

--using critical sections

    --linked list example

    --bounded buffer example

--why are we doing this?

    --because *atomicity* is required if you want to reason about code
      without contorting your brain to reason about all possible
      interleavings

    --atomicity requires mutual exclusion, aka a solution to critical
      sections

    --mutexes provide that solution (see the sketch just below)
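--to make the API concrete, here is a minimal sketch, assuming POSIX
  threads; the names "count" and "incrementer" and the loop bound are
  illustrative, not from the handout. the increment of the shared
  counter is the critical section:

      #include <pthread.h>
      #include <stdio.h>

      static int count = 0;                         /* shared state   */
      static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

      /* count++ is a load, an add, and a store; without the mutex,
         two threads can interleave those steps and lose updates */
      static void *incrementer(void *arg)
      {
          (void)arg;
          for (int i = 0; i < 1000000; i++) {
              pthread_mutex_lock(&m);    /* enter the critical section */
              count++;                   /* shared state touched only  */
              pthread_mutex_unlock(&m);  /* while holding the mutex    */
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, incrementer, NULL);
          pthread_create(&t2, NULL, incrementer, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("count = %d\n", count);  /* 2000000 every time, with the lock */
          return 0;
      }

  [error returns from pthread_create()/pthread_join() are ignored here
   to keep the sketch short; build with something like: cc -pthread]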
--once you have mutexes, don't have to worry about arbitrary
  interleavings. critical sections are interleaved, but those are much
  easier to reason about than individual operations.

    --why? because of _invariants_.

      examples of invariants:

        "list structure has integrity"

        "'count' reflects the number of entries in the buffer"

    --the meaning of lock.acquire() is that if and only if you get past
      that line, it's safe to violate the invariants.

    --the meaning of lock.release() is that right _before_ that line,
      any invariants need to be restored.

--the above is abstract. let's make it concrete:

    invariant: "list structure has integrity"

    so protect the list with a mutex

    only after acquire() is it safe to manipulate the list
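--a minimal sketch of that concrete case, again assuming pthreads (the
  node layout and the name list_insert are illustrative):

      #include <pthread.h>
      #include <stdlib.h>

      struct node {
          int value;
          struct node *next;
      };

      /* shared list; invariant: every node is reachable from head */
      static struct node *head = NULL;
      static pthread_mutex_t list_mutex = PTHREAD_MUTEX_INITIALIZER;

      void list_insert(int value)
      {
          struct node *n = malloc(sizeof *n);  /* still private: no lock
                                                  needed yet */
          if (n == NULL)
              return;
          n->value = value;

          pthread_mutex_lock(&list_mutex);   /* only past this line is it
                                                safe to touch the list   */
          n->next = head;   /* mid-update: n points into the list but is
                               not yet reachable from head */
          head = n;
          /* invariant restored: n is linked in, head reaches every node */
          pthread_mutex_unlock(&list_mutex); /* invariants must hold again
                                                by here */
      }

  --note that every other operation on the list (lookup, removal, etc.)
    must acquire the same mutex, or the mutual exclusion buys nothing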