Optimistic Concurrency Control & ARIES: Database Logging and Recovery
Zachary G. Ives
University of Pennsylvania
CIS 650 – Implementing Data Management Systems
September 30, 2008
Some content on recovery courtesy of Hellerstein, Ramakrishnan, and Gehrke
Administrivia
Next reading assignments:
§ System-R*
§ Mariposa
Write-up, due Tuesday: briefly summarize how the two systems differ in their scope of distribution and in their basic strategies
Serial Validation
§ Simple: writes won’t be interleaved, so test
  1. Ti finishes writing before Tj starts reading (serial), or
  2. WS(Ti) is disjoint from RS(Tj), and Ti finishes writing before Tj writes
§ Put the validation into a critical section:
  § Get TN
  § Test conditions 1 and 2 against everyone up to TN
  § Write
§ A long critical section limits the parallelism of validation, so we can optimize:
  § Outside the critical section, get a TN and validate up to there
  § Before writing, in the critical section, get a new TN, validate up to that, then write
§ Reads: no need for a TN – just validate up to the highest TN at the end of the read phase (no critical section)
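A minimal sketch of the serial validation test, assuming each transaction carries its read set, write set, and start/finish transaction numbers; the Transaction class and validate_serial name are illustrative, not taken from the Kung & Robinson paper.

```python
# Sketch of serial validation (illustrative names and structures).

class Transaction:
    def __init__(self, xid):
        self.xid = xid
        self.read_set = set()    # object ids read during the read phase
        self.write_set = set()   # object ids (to be) written
        self.start_tn = 0        # highest TN committed when this Xact started reading
        self.finish_tn = None    # TN assigned when validation succeeds

def validate_serial(tj, committed):
    """Check Tj against every committed Ti that finished after Tj started reading."""
    for ti in committed:
        if ti.finish_tn <= tj.start_tn:
            continue                  # condition 1: Ti finished before Tj started
        if ti.write_set & tj.read_set:
            return False              # condition 2 violated: Tj may have missed Ti's write
        # The "Ti finishes writing before Tj writes" half of condition 2 holds
        # automatically here, because validation and the write phase run inside
        # a single critical section in the serial scheme.
    return True
```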
Parallel Validation
§ For allowing interleaved writes
§ Save the set of active transactions (finished reading, not yet finished writing)
§ Abort if your read/write set intersects an active transaction’s write set
§ Validate:
  § CRIT: get TN; copy the active set; add self to the active set
  § Check (1), (2) against everything from start to finish
  § Check (3) against the whole active-set copy
  § If OK, write
  § CRIT: increment the TN counter, remove self from the active set
§ Drawback: might conflict in condition (3) with someone who gets aborted
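A sketch of the parallel-validation structure, reusing the Transaction objects from the serial sketch above; the lock, the global counters, and the placement of the write phase are assumptions for illustration.

```python
import threading

crit = threading.Lock()   # guards the TN counter and the active set
active = set()            # Xacts that finished reading but have not finished writing
tn_counter = 0

def validate_parallel(tj, committed):
    """Two short critical sections bracket the longer, unlocked validation pass."""
    global tn_counter
    with crit:                                        # CRIT #1
        finish_tn = tn_counter                        # "get TN": validate up to here
        active_copy = set(active)
        active.add(tj)
    ok = True
    for ti in committed:                              # conditions (1) and (2)
        if tj.start_tn < ti.finish_tn <= finish_tn and (ti.write_set & tj.read_set):
            ok = False
    for ti in active_copy:                            # condition (3): concurrent writers
        if ti.write_set & (tj.read_set | tj.write_set):
            ok = False
    if ok:
        pass    # Tj's buffered writes would be applied to the database here
    with crit:                                        # CRIT #2
        tn_counter += 1
        tj.finish_tn = tn_counter
        active.discard(tj)
    return ok
```

The drawback on the slide shows up in the second loop: Tj may fail condition (3) against a transaction in active_copy that itself later aborts.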
Who’s the Top Dog? Optimistic vs. Non-Optimistic
§ Drawbacks of the optimistic approach:
  § Generally requires some sort of global state, e.g., the TN counter
  § If there’s a conflict, requires an abort and a full restart
§ Study by Agrawal et al. comparing optimistic vs. locking:
  § Need load control with low resources
  § Locking is better with moderate resources
  § Optimistic is better with infinite or high resources
Providing ACIDity
ACID properties:
§ Atomicity: transactions may abort (“rollback”) due to error or deadlock (Mohan et al.)
§ Consistency: guarantee of consistency between transactions
§ Isolation: guarantees serializability of schedules (Gray et al., Kung & Robinson)
§ Durability: guarantees recovery if the DBMS stops running (Mohan et al.)
Last time: isolation
Review from Last Time – Rollback and Recovery
The Recovery Manager provides:
§ Atomicity: transactions may abort (“rollback”) to the start or to a “savepoint”
§ Durability: what if the DBMS stops running? (Causes?)
Desired behavior after the system restarts:
§ T1, T2 & T3 should be durable
§ T4 & T5 should be aborted (effects not seen)
[Figure: timeline of T1–T5; T1, T2, and T3 complete before the crash, while T4 and T5 are still running when the crash occurs]
Assumptions in Recovery Schemes
We’re using concurrency control via locks
§ Strict 2PL, at least at the page level, possibly at the record level
Updates are happening “in place”
§ No shadow pages: data is overwritten on (deleted from) the disk
ARIES:
§ Algorithms for Recovery and Isolation Exploiting Semantics
§ Attempts to provide a simple, systematic scheme that guarantees atomicity & durability with good performance
Let’s begin with some of the issues faced by any DBMS recovery scheme…
Enforcing Persistence of Buffer Pages
The buffer pool is finite, so…
§ Issue: how do we guarantee durability of committed data?
§ Solution: a policy on what happens when a transaction completes, and on what transactions can do to get more pages
Two policy questions:
§ Force: force-write a Xact’s dirty buffer pages to disk at commit?
  § Provides durability, but poor response time
§ Steal: steal buffer-pool frames from uncommitted Xacts?
  § If not, poor throughput
  § If so, how can we ensure atomicity?
FORCE + NO-STEAL makes recovery trivial; NO-FORCE + STEAL is the desired combination
More on Steal and Force
STEAL (why enforcing Atomicity is hard)
§ To steal frame F: the current page P in F gets written to disk while some Xact still holds a lock on P
  § What if the Xact with the lock on P aborts?
  § Must remember the old value of P at steal time (to support UNDOing the write to page P)
NO FORCE (why enforcing Durability is hard)
§ What if the system crashes before a modified page is written to disk?
§ Write as little as possible at commit time, to support REDOing modifications
Basic Idea: Logging
Record REDO and UNDO information, for every update, in a log
§ Sequential writes to the log (put it on a separate disk)
§ Minimal info (a diff) is written to the log, so multiple updates fit in a single log page
Log: an ordered list of actions to REDO/UNDO
§ A log record contains: <XID, pageID, offset, length, old data, new data>
§ …and additional control info (which we’ll see soon)
UNDO info will be described in terms of operations, not at the page level
Write-Ahead Logging (WAL)
The Write-Ahead Logging protocol:
1. Force the log record for an update before the corresponding data page gets to disk
   § Guarantees Atomicity
2. Write all log records for a Xact before commit
   § Guarantees Durability (can always rebuild from the log)
Is there a systematic way of doing write-ahead logging (and recovery!)?
§ The ARIES family of algorithms
The Log in WAL
Each log record has a unique Log Sequence Number (LSN)
§ LSNs always increase
Each data page contains a pageLSN
§ The LSN of the most recent log record for an update to that page
The system keeps track of flushedLSN
§ The max LSN flushed to disk so far; the “log tail” beyond it still sits in RAM
WAL: before a page is written to disk,
§ pageLSN ≤ flushedLSN
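A minimal sketch of the check a buffer manager would make before writing a dirty page out, enforcing the rule above; the Log class, flush_through, and the page dictionary layout are assumptions for illustration.

```python
class Log:
    """Toy stand-in for the log manager."""
    def __init__(self):
        self.flushed_lsn = -1            # max LSN known to be on stable storage

    def flush_through(self, lsn):
        # Force the in-RAM log tail to disk at least through `lsn` (I/O elided).
        self.flushed_lsn = max(self.flushed_lsn, lsn)

def write_page_to_disk(page, log):
    """Enforce WAL: pageLSN <= flushedLSN must hold before the page hits disk."""
    if page['page_lsn'] > log.flushed_lsn:
        log.flush_through(page['page_lsn'])
    # ... actual page I/O would happen here ...
```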
Log Records
LogRecord fields:
§ prevLSN, XID, type
§ Only in update records: pageID, length, offset, before-image, after-image
The log record types:
§ Update
§ Commit
§ Abort
§ End (signifies the end of commit or abort)
§ Compensation Log Records (CLRs)
  § A log of UNDO actions
  § “Cancel out” an update step
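The fields above, written out as a sketch; the dataclass layout and field types are assumptions (ARIES does not prescribe an on-disk encoding).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogRecord:
    lsn: int
    prev_lsn: Optional[int]                 # previous log record of the same Xact
    xid: int
    rec_type: str                           # 'update', 'commit', 'abort', 'end', or 'CLR'
    # Present only in update records (and, apart from the before-image, in CLRs):
    page_id: Optional[int] = None
    offset: Optional[int] = None
    length: Optional[int] = None
    before_image: Optional[bytes] = None    # UNDO information
    after_image: Optional[bytes] = None     # REDO information
    undo_next_lsn: Optional[int] = None     # CLRs only: next LSN of this Xact to undo
```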
Other Log-Related State
Transaction Table:
§ One entry per active Xact
§ Contains XID, status (running/committed/aborted), and lastLSN
Dirty Page Table:
§ One entry per dirty page in the buffer pool
§ Contains recLSN – the LSN of the log record that first caused the page to be dirty
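A sketch of the two in-memory tables as plain dictionaries; the exact shapes and the example entries are made up for illustration.

```python
# Transaction table: XID -> {status, lastLSN}
xact_table = {
    7: {'status': 'running', 'last_lsn': 120},   # example entry (made up)
}

# Dirty page table: pageID -> recLSN (the LSN of the record that first dirtied the page)
dirty_page_table = {
    5: 100,                                      # example entry (made up)
}
```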
The Big Picture: What’s Stored Where
LOG (on disk):
§ Log records (prevLSN, XID, type, pageID, length, offset, before-image, after-image)
§ The master record (LSN of the most recent checkpoint)
DB (on disk):
§ Data pages, each with a pageLSN
RAM:
§ Xact Table (lastLSN, status)
§ Dirty Page Table (recLSN)
§ flushedLSN
Normal Execution of a Transaction
§ A series of reads & writes, followed by commit or abort
§ We will assume that a write is atomic on disk
  § In practice, additional details are needed to deal with non-atomic writes
§ Strict 2PL
§ STEAL, NO-FORCE buffer management, with Write-Ahead Logging
Checkpointing
Periodically, the DBMS creates a checkpoint
§ Minimizes recovery time in the event of a system crash
§ Write to the log:
  § begin_checkpoint record: when the checkpoint began
  § end_checkpoint record: the current Xact table and dirty page table
§ A “fuzzy checkpoint”:
  § Other Xacts continue to run, so these tables are accurate only as of the time of the begin_checkpoint record
  § No attempt to force dirty pages to disk; the effectiveness of a checkpoint is limited by the oldest unwritten change to a dirty page (so it’s a good idea to periodically flush dirty pages to disk!)
§ Store the LSN of the checkpoint record in a safe place (the master record)
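A sketch of a fuzzy checkpoint under the same assumed structures; log.append is an assumed helper that appends a record and returns its LSN, and master stands for the well-known master-record location.

```python
def take_fuzzy_checkpoint(log, xact_table, dirty_page_table, master):
    """Write begin/end checkpoint records; no dirty pages are forced out."""
    begin_lsn = log.append({'type': 'begin_checkpoint'})
    # Other Xacts keep running, so the snapshots below are accurate only
    # as of the begin_checkpoint record.
    log.append({'type': 'end_checkpoint',
                'xact_table': dict(xact_table),
                'dirty_page_table': dict(dirty_page_table)})
    master['checkpoint_lsn'] = begin_lsn   # remember where recovery should start
```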
Simple Transaction Abort, 1/2
For now, consider an explicit abort of a Xact
§ (No crash involved)
We want to “play back” the log in reverse order, UNDOing updates
§ Get the Xact’s lastLSN from the Xact table
§ Follow the chain of log records backward via the prevLSN field
§ When do we quit?
Before starting UNDO, write an Abort log record
§ For recovering from a crash during UNDO!
Abort, 2/2
To perform UNDO, we must have a lock on the data!
§ No problem – no one else can be locking it
Before restoring the old value of a page, write a CLR:
§ You continue logging while you UNDO!!
§ A CLR has one extra field: undoNextLSN
  § Points to the next LSN to undo (i.e. the prevLSN of the record we’re currently undoing)
§ CLRs are never Undone (but they might be Redone when repeating history: this guarantees Atomicity!)
At the end of UNDO, write an “end” log record
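A sketch of rolling back one transaction along its prevLSN chain, writing a CLR per undone update; log.append/log.fetch and the page layout (a dict with a bytearray `data` and a `page_lsn`) are assumptions.

```python
def rollback(xid, last_lsn, log, pages):
    """Explicit abort: undo the Xact's updates newest-first, logging a CLR for each."""
    log.append({'type': 'abort', 'xid': xid})
    lsn = last_lsn
    while lsn is not None:
        rec = log.fetch(lsn)                 # assumed: returns the record stored at lsn
        if rec['type'] == 'update':
            clr_lsn = log.append({'type': 'CLR', 'xid': xid,
                                  'page_id': rec['page_id'],
                                  'undo_next_lsn': rec['prev_lsn']})
            page = pages[rec['page_id']]
            lo, hi = rec['offset'], rec['offset'] + rec['length']
            page['data'][lo:hi] = rec['before_image']   # restore the old value
            page['page_lsn'] = clr_lsn                  # the CLR is now the page's latest update
        lsn = rec['prev_lsn']
    log.append({'type': 'end', 'xid': xid})
```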
Transaction Commit
§ Write a commit record to the log
§ All log records up to the Xact’s lastLSN are flushed
  § Guarantees that flushedLSN ≥ lastLSN
  § Note that log flushes are sequential, synchronous writes to disk
  § Many log records fit per log page
§ Commit() returns
§ Write an end record to the log
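A sketch of the commit path under NO-FORCE, using the same assumed log helpers as above; no data pages are written at commit.

```python
def commit(xid, last_lsn, log):
    """Commit = append a commit record and force the log; data pages stay put."""
    commit_lsn = log.append({'type': 'commit', 'xid': xid, 'prev_lsn': last_lsn})
    log.flush_through(commit_lsn)     # after this, flushedLSN >= the Xact's lastLSN
    # Commit() can return to the caller here.
    log.append({'type': 'end', 'xid': xid})
```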
Crash Recovery: Big Picture
Start from a checkpoint (found via the master record)
Three phases:
1. Figure out which Xacts committed since the checkpoint and which failed (Analysis)
2. REDO all actions (repeat history), starting from the smallest recLSN in the dirty page table after Analysis
3. UNDO the effects of failed Xacts, back to the oldest log record of a Xact active at the crash
[Figure: Analysis scans forward from the last checkpoint to the crash; REDO starts at the smallest recLSN; UNDO works backward from the crash]
Recovery: The Analysis Phase
Reconstruct state as of the checkpoint
§ Via the end_checkpoint record
Scan the log forward from the checkpoint
§ End record: remove the Xact from the Xact table (no longer active)
§ Other records: add the Xact to the Xact table, set lastLSN=LSN, change the Xact’s status on commit
§ Update record: if page P is not in the Dirty Page Table, add P to the D.P.T. and set its recLSN=LSN
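A sketch of the Analysis scan over the same assumed structures; the record dictionaries and the helpers read_checkpoint_tables and scan_from are assumptions.

```python
def analysis(log, checkpoint_lsn):
    """Rebuild the Xact table and dirty page table as of the crash."""
    xact_table, dirty_page_table = log.read_checkpoint_tables(checkpoint_lsn)
    for lsn, rec in log.scan_from(checkpoint_lsn):          # forward scan
        xid = rec.get('xid')
        if xid is None:
            continue                                        # e.g. checkpoint records
        if rec['type'] == 'end':
            xact_table.pop(xid, None)                       # Xact is no longer active
            continue
        entry = xact_table.setdefault(xid, {'status': 'running'})
        entry['last_lsn'] = lsn
        if rec['type'] == 'commit':
            entry['status'] = 'committed'
        if rec['type'] in ('update', 'CLR'):
            dirty_page_table.setdefault(rec['page_id'], lsn)   # recLSN = first dirtier
    return xact_table, dirty_page_table
```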
Recovery: The REDO Phase
Repeat history to reconstruct state at the crash:
§ Reapply all updates (even those of aborted Xacts!), and redo CLRs
§ Puts us in a state where we know UNDO can do the right thing
Scan forward from the log record containing the smallest recLSN in the D.P.T.
For each CLR or update log record with sequence number LSN, REDO the action unless:
§ The affected page is not in the Dirty Page Table, or
§ The affected page is in the D.P.T., but has recLSN > LSN, or
§ pageLSN (in the DB) ≥ LSN
To REDO an action:
§ Reapply the logged action
§ Set pageLSN to LSN. Don’t log this!
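A sketch of the REDO pass with the three skip tests above; pages.fetch stands in for the buffer manager and is an assumption.

```python
def redo(log, dirty_page_table, pages):
    """Repeat history from the smallest recLSN, skipping work already on disk."""
    if not dirty_page_table:
        return
    start = min(dirty_page_table.values())
    for lsn, rec in log.scan_from(start):
        if rec['type'] not in ('update', 'CLR'):
            continue
        pid = rec['page_id']
        if pid not in dirty_page_table:           # page was flushed before the crash
            continue
        if dirty_page_table[pid] > lsn:           # recLSN > LSN: change already on disk
            continue
        page = pages.fetch(pid)                   # assumed buffer-manager call
        if page['page_lsn'] >= lsn:               # pageLSN >= LSN: already applied
            continue
        lo, hi = rec['offset'], rec['offset'] + rec['length']
        page['data'][lo:hi] = rec['after_image']  # reapply the logged action
        page['page_lsn'] = lsn                    # set pageLSN; no new log record written
```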
Recovery: The UNDO Phase
ToUndo = { lastLSN of each “loser” Xact }
Repeat:
§ Choose the largest LSN in ToUndo
§ If this LSN is a CLR and undoNextLSN == NULL
  § Write an End record for this Xact
§ If this LSN is a CLR and undoNextLSN != NULL
  § Add undoNextLSN to ToUndo
§ Else this LSN is an update
  § Undo the update, write a CLR, add prevLSN to ToUndo
Until ToUndo is empty
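A sketch of the UNDO loop over the loser transactions, reusing the assumed log and page helpers from the earlier sketches; the End-record bookkeeping is simplified.

```python
def undo(log, losers, pages):
    """losers: xid -> lastLSN for every Xact still active at the crash."""
    to_undo = set(losers.values())
    while to_undo:
        lsn = max(to_undo)                        # always undo the newest record first
        to_undo.discard(lsn)
        rec = log.fetch(lsn)
        if rec['type'] == 'CLR':
            if rec['undo_next_lsn'] is None:
                log.append({'type': 'end', 'xid': rec['xid']})   # loser fully undone
            else:
                to_undo.add(rec['undo_next_lsn'])
        elif rec['type'] == 'update':
            clr_lsn = log.append({'type': 'CLR', 'xid': rec['xid'],
                                  'page_id': rec['page_id'],
                                  'undo_next_lsn': rec['prev_lsn']})
            page = pages.fetch(rec['page_id'])
            lo, hi = rec['offset'], rec['offset'] + rec['length']
            page['data'][lo:hi] = rec['before_image']
            page['page_lsn'] = clr_lsn
            if rec['prev_lsn'] is not None:
                to_undo.add(rec['prev_lsn'])
        # other record types (e.g. abort) would simply pass their prevLSN along
```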
Example of Recovery
LOG (records of the same Xact are chained via prevLSN):
LSN 00: begin_checkpoint
LSN 05: end_checkpoint
LSN 10: update: T1 writes P5
LSN 20: update: T2 writes P3
LSN 30: T1 abort
LSN 40: CLR: Undo T1 LSN 10
LSN 45: T1 End
LSN 50: update: T3 writes P1
LSN 60: update: T2 writes P5
CRASH, RESTART
(RAM state to reconstruct: Xact Table with lastLSN & status, Dirty Page Table with recLSN, flushedLSN, ToUndo)
Example: Crash During Restart
LOG (CLRs carry undoNextLSN pointers):
LSN 00, 05: begin_checkpoint, end_checkpoint
LSN 10: update: T1 writes P5
LSN 20: update: T2 writes P3
LSN 30: T1 abort
LSN 40, 45: CLR: Undo T1 LSN 10; T1 End
LSN 50: update: T3 writes P1
LSN 60: update: T2 writes P5
CRASH, RESTART
LSN 70: CLR: Undo T2 LSN 60
LSN 80, 85: CLR: Undo T3 LSN 50; T3 End
CRASH, RESTART
LSN 90: CLR: Undo T2 LSN 20; T2 End
Additional Crash Issues
What happens if the system crashes during Analysis? During REDO?
How do you limit the amount of work in REDO?
§ Flush dirty pages asynchronously in the background
§ Watch “hot spots”!
How do you limit the amount of work in UNDO?
§ Avoid long-running Xacts
Summary of Logging/Recovery
§ The Recovery Manager guarantees Atomicity & Durability
§ Use WAL to allow STEAL/NO-FORCE without sacrificing correctness
§ LSNs identify log records; records are linked into backward chains per transaction (via prevLSN)
§ pageLSN allows comparison of a data page against the log
Summary, Continued
§ Checkpointing: a quick way to limit the amount of log to scan on recovery
§ Recovery works in 3 phases:
  § Analysis: forward from the checkpoint
  § Redo: forward from the oldest recLSN
  § Undo: backward from the end to the first LSN of the oldest Xact alive at the crash
§ During Undo, write CLRs
§ Redo “repeats history”: this simplifies the logic!
Reminder: Next Time’s Reading
Next reading assignments:
§ System-R*
§ Mariposa
Write-up, due Tuesday: briefly summarize how the two systems differ in their scope of distribution and in their basic strategies