Memory Management - Memory Allocation - Garbage Collection

Memory Allocation
• Memory pool: a large block of contiguous memory
• The memory manager allocates memory by returning a handle to the user
• The term heap refers to free memory accessed by a dynamic memory management scheme
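
As a rough illustration of the "handle" idea, here is a minimal sketch in C of a manager that hands out table indices rather than raw addresses; the type and function names (MMHandle, MemoryManager, mm_deref) are illustrative, not from the slides.

```c
#include <stddef.h>

/* A handle is an index into a table owned by the memory manager, rather
   than a raw address, so the manager remains free to move the block later. */
typedef int MMHandle;

typedef struct {
    char  *pool;          /* the memory pool: one large contiguous block */
    size_t pool_size;
    void  *table[128];    /* handle -> current address of each allocation */
} MemoryManager;

/* Translate a handle into the block's current address.  The allocation
   routine itself (which would search free memory and fill in table[h])
   is omitted in this sketch. */
void *mm_deref(const MemoryManager *mm, MMHandle h) {
    return mm->table[h];
}
```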

Dynamic Allocation
• Blocks of any size may be requested, in any order, from the free-list
• For a request of m words filled by a block of size k, between m and k words are tied up by the request
• This can result in fragmentation if m != k

Fragmentation in Dynamic Allocation
• External fragmentation: many small free blocks accumulate on the free-list
• Internal fragmentation: an entire block of size k is allocated to satisfy a request of m < k words. This kind of allocation is easier to manage, if less space-efficient
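
For example, if a request for m = 10 words is filled with a block of k = 16 words, the 6 unused words inside the block are internal fragmentation; if instead the free-list degenerates into many scattered 2- and 3-word blocks too small to satisfy typical requests, that is external fragmentation.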

Sequential Fit Method
• Attempt to find a “good” block to service each request
• The free-list is organized as a doubly linked list
• Each block carries a tag bit and a block-size field
• The memory manager searches the free-list for a block of “suitable” size
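
A plausible layout for such a free-list record, sketched in C; the field names are illustrative, not taken from the slides.

```c
#include <stddef.h>

/* One node of the doubly linked free-list.  Each block carries a tag bit
   marking it free or reserved, and its size, so the manager can both
   search for a suitable block and merge neighbours later. */
typedef struct FreeBlock {
    unsigned          tag : 1;   /* 0 = free, 1 = reserved */
    size_t            size;      /* size of this block */
    struct FreeBlock *prev;      /* previous block on the free-list */
    struct FreeBlock *next;      /* next block on the free-list */
} FreeBlock;
```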

Three sequential fit methods
• First fit
  – Start the search from the beginning of the list (or from the middle, where the previous search left off)
  – May waste larger blocks by breaking them up
• Best fit
  – Examines the entire list
  – Maximizes external fragmentation, but is more likely to be able to service large requests
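
The two policies differ only in when the search stops. A minimal sketch in C, using a simplified free-list record (singly linked here just to keep the sketch short):

```c
#include <stddef.h>

typedef struct FreeBlock {
    size_t            size;
    struct FreeBlock *next;   /* doubly linked in practice; next suffices here */
} FreeBlock;

/* First fit: return the first block large enough for the request. */
FreeBlock *first_fit(FreeBlock *head, size_t request) {
    for (FreeBlock *b = head; b != NULL; b = b->next)
        if (b->size >= request)
            return b;
    return NULL;              /* request cannot be serviced */
}

/* Best fit: scan the whole list and return the smallest block that is
   still large enough, leaving bigger blocks intact for large requests. */
FreeBlock *best_fit(FreeBlock *head, size_t request) {
    FreeBlock *best = NULL;
    for (FreeBlock *b = head; b != NULL; b = b->next)
        if (b->size >= request && (best == NULL || b->size < best->size))
            best = b;
    return best;
}
```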

Three sequential fit methods (continued)
• Worst fit
  – Allocates from the largest block, found by a sequential search
  – Minimizes external fragmentation
• Which is best? It depends on the expected pattern of memory requests
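
Worst fit is the same scan with the comparison reversed; a sketch under the same assumptions as the previous one:

```c
#include <stddef.h>

typedef struct FreeBlock {          /* same simplified record as above */
    size_t            size;
    struct FreeBlock *next;
} FreeBlock;

/* Worst fit: scan the whole list and take the largest suitable block,
   so the remainder left after splitting is as large as possible. */
FreeBlock *worst_fit(FreeBlock *head, size_t request) {
    FreeBlock *worst = NULL;
    for (FreeBlock *b = head; b != NULL; b = b->next)
        if (b->size >= request && (worst == NULL || b->size > worst->size))
            worst = b;
    return worst;
}
```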

Sequential Fit
• A search of the free-list is in Θ(n) in the worst case
• Want to merge adjacent free blocks
• Need additional space to support the memory manager's operations and the linked list
• Is there anything that can be improved?
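
One way the merging of adjacent free blocks might look, sketched in C; it assumes blocks are laid out contiguously in the pool and that size counts the whole block including its header, which is an assumed layout rather than anything specified on the slides.

```c
#include <stddef.h>

typedef struct Block {
    int           free;          /* tag: nonzero if the block is free */
    size_t        size;          /* total size of the block, header included */
    struct Block *prev, *next;   /* links on the free-list */
} Block;

/* If the block physically following 'b' in the pool is also free,
   absorb it into 'b' and unlink it from the free-list. */
void coalesce_with_successor(Block *b, char *pool_end) {
    Block *succ = (Block *)((char *)b + b->size);
    if ((char *)succ < pool_end && succ->free) {
        b->size += succ->size;
        if (succ->prev) succ->prev->next = succ->next;
        if (succ->next) succ->next->prev = succ->prev;
    }
}
```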

The Buddy Method
• Assume that memory is of size 2ⁿ for some n
• Both free and reserved blocks will be of size 2ᵏ for k ≤ n
• The buddy system keeps a separate list of free blocks for each size

The Buddy Method
• For a request of size m, find the smallest k where 2ᵏ ≥ m
• If a free block of size 2ᵏ exists, allocate it from that list
• If not, take the next larger free block on the lists and split it in half repeatedly until a block of size 2ᵏ is created
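
A sketch of the size computation and the splitting step in C, under simplifying assumptions (sizes are measured in bytes, each size class holds at most one free block, and the pool must be seeded into the largest class before first use); the names are illustrative.

```c
#include <stddef.h>

#define MAX_K 20   /* pool size is 2^MAX_K bytes (assumed for this sketch) */

/* Smallest k such that 2^k >= m. */
static int size_class(size_t m) {
    int k = 0;
    while (((size_t)1 << k) < m)
        k++;
    return k;
}

/* free_list[k] holds a free block of size 2^k (a real implementation links
   several blocks per class; one per class keeps the sketch short).
   Before first use, free_list[MAX_K] would be set to the start of the pool. */
static void *free_list[MAX_K + 1];

/* Allocate a block for a request of m bytes, splitting larger blocks in
   half until one of size 2^k is produced.  Returns NULL on failure. */
void *buddy_alloc(size_t m) {
    int k = size_class(m);
    int j = k;
    while (j <= MAX_K && free_list[j] == NULL)   /* find a big-enough block */
        j++;
    if (j > MAX_K)
        return NULL;                             /* request cannot be serviced */
    void *block = free_list[j];
    free_list[j] = NULL;
    while (j > k) {                              /* split until size 2^k */
        j--;
        /* keep the lower half; the upper half (its buddy) goes on list j */
        free_list[j] = (char *)block + ((size_t)1 << j);
    }
    return block;
}
```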

The Buddy Method
• Advantages
  – Less external fragmentation
  – Cheaper search than a linked free-list
  – Merging adjacent blocks is easy (the buddy of any block of size 2ᵏ is the block of the same size whose address is identical except that the kth bit is reversed)
• Disadvantage
  – Allows internal fragmentation
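
The "flip the kth bit" rule makes finding a block's buddy a single XOR. A small sketch in C, where addresses are taken as offsets from the start of the pool (an assumption of the sketch):

```c
#include <stddef.h>

/* Offset (from the start of the pool) of the buddy of the block that
   starts at 'offset' and has size 2^k: the same address with bit k flipped. */
size_t buddy_of(size_t offset, int k) {
    return offset ^ ((size_t)1 << k);
}

/* Example: a block of size 2^4 = 16 at offset 48 (binary 110000) has its
   buddy at offset 32 (binary 100000), and vice versa. */
```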

Other memory allocation methods
• Segregated storage method: break available memory into several memory zones, each with its own management method
• Impose a standard size (cluster scheme)
  – Example: disk file management
  – Leads to internal fragmentation
  – Storage does not need to be contiguous
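
A rough sketch of the fixed-size (cluster) idea in C: every request is rounded up to a standard cluster size, so allocation is just handing out a free cluster; the constants and names are illustrative.

```c
#include <stddef.h>

#define CLUSTER_SIZE 512    /* standard size imposed on every request */
#define NUM_CLUSTERS 1024

static char pool[NUM_CLUSTERS][CLUSTER_SIZE];
static int  free_stack[NUM_CLUSTERS];   /* indices of clusters returned by cluster_free */
static int  top = 0;
static int  next_unused = 0;            /* clusters never handed out yet */

/* Any request up to CLUSTER_SIZE bytes gets a whole cluster; the unused
   tail of the cluster is internal fragmentation. */
void *cluster_alloc(size_t bytes) {
    if (bytes > CLUSTER_SIZE)
        return NULL;                    /* too big for the standard size */
    if (top > 0)
        return pool[free_stack[--top]];
    if (next_unused < NUM_CLUSTERS)
        return pool[next_unused++];
    return NULL;                        /* pool exhausted */
}

void cluster_free(void *p) {
    free_stack[top++] = (int)(((char *)p - (char *)pool) / CLUSTER_SIZE);
}
```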

Failure Policies
• A failure occurs when a memory request of a certain size cannot be serviced
  – If it is due to external fragmentation → compact memory, which physically moves data
  – Use handles if the application relies on the absolute positions of the data, so the data can still be moved
  – The memory request can be deferred (for example, when several processes are running at once)
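
A sketch in C of why handles help with compaction: the application holds only table indices, so the manager can slide data toward the start of the pool and simply update its table. The table layout and the assumption that live blocks are processed in address order are both simplifications of this sketch.

```c
#include <stddef.h>
#include <string.h>

#define MAX_HANDLES 64

static void  *address[MAX_HANDLES];   /* handle -> current location of the data */
static size_t length[MAX_HANDLES];    /* handle -> size of the data in bytes */

/* Slide every live block toward the start of the pool and update the
   handle table.  Assumes live blocks appear in the table in increasing
   address order, so each move only overwrites space already vacated. */
void compact(char *pool, int num_handles) {
    char *dest = pool;
    for (int h = 0; h < num_handles; h++) {
        if (address[h] == NULL)
            continue;                     /* unused handle */
        memmove(dest, address[h], length[h]);
        address[h] = dest;                /* handle stays valid; data has moved */
        dest += length[h];
    }
}
```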

Failure policies: Garbage Collection
• When no program variable points to a block of space, that block is considered garbage (and, if never reclaimed, a memory leak)
• Garbage collection is the process of determining which memory is garbage and recovering it
• Two common methods:
  – Reference counts
  – The mark/sweep strategy

Garbage Collection
• Reference count: each dynamically allocated memory block has a count field that is incremented whenever a pointer is set to refer to the block and decremented whenever a pointer is moved away from it
  – When the count reaches zero, the memory is garbage and is immediately returned to free store
  – Used by the UNIX file system, where the objects are linked together without cycles
  – Useful when the objects are large, such as files
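
A minimal reference-count sketch in C; the struct and function names are illustrative, and real systems fold these calls into pointer-assignment operations.

```c
#include <stdlib.h>

typedef struct RCObject {
    int count;           /* number of pointers currently referring to this block */
    /* ... payload ... */
} RCObject;

/* Called whenever a new pointer is set to refer to the object. */
void rc_retain(RCObject *obj) {
    obj->count++;
}

/* Called whenever a pointer stops referring to the object.  When the
   count reaches zero the block is garbage and is returned to free store
   immediately -- no separate collection phase is needed. */
void rc_release(RCObject *obj) {
    if (--obj->count == 0)
        free(obj);
}
```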

Garbage Collection
• Mark/sweep strategy
  – Uses a single mark bit instead of a count field
  – Works in the presence of cycles, but the DFS is recursive
• The garbage collection phase occurs when free store is exhausted:
  – Clear all mark bits
  – Perform a DFS from each pointer on the variable list, turning on the mark bit of every reachable block
  – Sweep through the memory pool; unmarked elements are garbage and are placed back in free store
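
A compact mark/sweep sketch in C over a fixed pool of objects; the object layout, child-pointer fields, and root list are all assumptions made for the sketch.

```c
#include <stddef.h>

#define MAX_CHILDREN 4
#define POOL_SIZE    256

typedef struct Obj {
    int         marked;                /* single mark bit per object */
    int         in_use;                /* allocated vs. already back on free store */
    struct Obj *child[MAX_CHILDREN];   /* outgoing pointers; may form cycles */
} Obj;

static Obj pool[POOL_SIZE];

/* Mark phase: depth-first search from one root pointer.  The 'marked'
   check stops the recursion, so cycles are handled correctly. */
static void mark(Obj *o) {
    if (o == NULL || o->marked)
        return;
    o->marked = 1;
    for (int i = 0; i < MAX_CHILDREN; i++)
        mark(o->child[i]);
}

/* Full collection: clear all mark bits, mark from every root on the
   variable list, then sweep the pool and reclaim whatever is unmarked. */
void collect(Obj **roots, int num_roots) {
    for (int i = 0; i < POOL_SIZE; i++)
        pool[i].marked = 0;
    for (int r = 0; r < num_roots; r++)
        mark(roots[r]);
    for (int i = 0; i < POOL_SIZE; i++)
        if (pool[i].in_use && !pool[i].marked)
            pool[i].in_use = 0;        /* unmarked => garbage, back to free store */
}
```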