Monday 6 January 2014

Debugging Heaps and Heap Internals Part 1

Personally, I wasn't really sure where to start this blog post at first, but I think it's best to begin by defining the heap and its purpose. The heap is used by the Memory Manager and the Heap Manager (for User-Mode processes). Generally speaking, the heap is an area of free memory which processes can use for allocations of data objects, variables and so on. The heap is not to be confused with the heap data structure, even though there are data structures we can view to explore the heap.

In case you didn't know, we have already discussed the Kernel-Mode version of the heap at length in my previous posts: Paged Pool and Non-Paged Pool are forms of Kernel-Mode heaps. I will not continue the discussion of Kernel-Mode heaps here, and will instead concentrate on the User-Mode versions of the heap.

If you're a programmer or study computer science, then this topic should be easy for you to understand. This is a good point to mention that this is one of the reasons why I suggest studying a language like C or C++.

Okay, getting back to the point, the heap is a potentially large area of free memory which our processes and programs can use for allocations. Unlike allocations made on the stack, allocations made on the heap have to be explicitly deallocated, otherwise we will run into problems such as memory leaks. If you're using a language like C# with its garbage collection, then you will not need to worry about the internals of the heap, since it's managed for you by the run-time. Let's examine a simple program which allocates something onto the heap.


The program creates a pointer called some_pointer of an integer type, and then allocates some space on the heap to store an integer. The pointer itself would be stored on the stack. Since the heap can become exhausted, we have used an exception handler to handle any allocations which fail because of heap exhaustion. The space allocated to hold the integer is then deallocated with the delete keyword.
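
A minimal sketch of such a program, assuming a standard C++ compiler and reusing the some_pointer name described above (the value stored is just an arbitrary choice), might look like this:

#include <iostream>
#include <new>       // std::bad_alloc

int main()
{
    int* some_pointer = nullptr;   // the pointer itself lives on the stack

    try
    {
        // Allocate space on the heap to hold a single integer.
        some_pointer = new int(10);
        std::cout << "Value on the heap: " << *some_pointer << std::endl;
    }
    catch (const std::bad_alloc& e)
    {
        // Thrown if the allocation fails, for example due to heap exhaustion.
        std::cerr << "Allocation failed: " << e.what() << std::endl;
        return 1;
    }

    // Heap allocations must be explicitly released.
    delete some_pointer;
    return 0;
}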

By default, each process has a default heap created by the operating system. It is around 1MB in size, but it can be enlarged with the /HEAP linker flag or can grow automatically as needs require. Processes can also create private heaps for performance reasons, using HeapCreate, and destroy them with HeapDestroy. The process can then allocate and free memory blocks from the newly created private heap with HeapAlloc and HeapFree. A private heap is only accessible within the address space of the process which created it.
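
A rough sketch of how those APIs fit together (the sizes and flags here are purely illustrative assumptions, not recommendations):

#include <windows.h>
#include <stdio.h>

int main()
{
    // Create a growable private heap with an initial size of 64 KB
    // (a maximum size of 0 means the heap can grow as required).
    HANDLE hHeap = HeapCreate(0, 0x10000, 0);
    if (hHeap == NULL)
        return 1;

    // Allocate a zero-initialised block of 256 bytes from the private heap.
    LPVOID block = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 256);
    if (block != NULL)
    {
        printf("Allocated block at %p\n", block);
        HeapFree(hHeap, 0, block);
    }

    // Destroy the private heap and release all of its memory.
    HeapDestroy(hHeap);
    return 0;
}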

Using Windbg, we can view all the currently active heaps with the !heap extension.

We can gather further information by dumping the _HEAP data structure at the heap address. This structure is called the base heap.


This brings me to the point about the Heap Manager and its general structure. The Heap Manager consists of the core heap layer and the front-end heap layer, the latter of which is optional for User-Mode processes.



The Core Heap Layer provides the general core functions, such as heap management (the creation of heap blocks), the management of segments and the blocks which belong to those segments, and the enforcement of policies for the growth of the heap. On the other hand, the Front-End Heap Layer provides the functionality of the Low Fragmentation Heap (LFH).

Low Fragmentation Heap (LFH) and Heap Synchronization

The Heap Manager organises the heap allocations (heap blocks) into 128 different singly linked lists called Look-Aside Lists per heap. Each list is created when the heap is created. This is also the reason why you see all the LIST_ENTRY data structures within the _HEAP data structure. When a process wishes to allocate a new variable onto the heap and there isn't an already existing free block, the Heap Manager will call into the Core Heap Layer and create a new heap block. This can lead to problems, which I will speak about in a moment. Before that, we should take a look at heap synchronization among multiple threads.

With multi-threaded programs, threads can perform allocations and frees at the same time, leading to problems, since some operations require the heap to remain in a consistent state. This is achieved with the use of a global heap lock, which protects the heap from access by other threads. The lock is implemented with a Critical Section Object, and can be taken explicitly with a call to the HeapLock function.




The lock is primarily used to execute the HeapWalk function, which enumerates all the heap blocks within a heap.
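
A minimal sketch of locking a heap and walking it with HeapWalk might look like this (enumerating the process default heap here is just an illustrative choice):

#include <windows.h>
#include <stdio.h>

int main()
{
    HANDLE hHeap = GetProcessHeap();   // the process default heap

    // Take the heap lock so the heap stays in a consistent state while we walk it.
    if (!HeapLock(hHeap))
        return 1;

    PROCESS_HEAP_ENTRY entry;
    entry.lpData = NULL;               // start the enumeration at the beginning

    // HeapWalk returns one heap entry per call until the whole heap has been enumerated.
    while (HeapWalk(hHeap, &entry))
    {
        if (entry.wFlags & PROCESS_HEAP_ENTRY_BUSY)
            printf("Allocated block at %p, size %lu bytes\n",
                   entry.lpData, (unsigned long)entry.cbData);
    }

    // Release the heap lock so other threads can allocate and free again.
    HeapUnlock(hHeap);
    return 0;
}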


Now, back to the discussion of the LFH, and how heap fragmentation can occur and lead to heap exhaustion. The available heap memory is broken into blocks of different sizes depending upon the size of the data being allocated, and those blocks are freed when no longer needed. Over time this leads to fragmentation of the heap, and heap allocations may fail even though there is enough total heap memory to satisfy the request. Some of the free blocks won't be used since they're too small, and therefore will remain as potentially unusable space.

To address the problem, the Low Fragmentation Heap creates predetermined heap block sizes, and then places these block sizes into certain ranges called buckets. These buckets are managed by lookaside lists and a tuning algorithm which automatically enables the LFH under certain conditions.


Each bucket is used for a different range of allocation sizes, with the first bucket being used for sizes between 1 and 8 bytes, and the second bucket used for allocation sizes between 9 and 16 bytes.
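
The tuning algorithm mentioned above normally takes care of enabling the LFH automatically, but it can also be requested explicitly for a heap with HeapSetInformation. A rough sketch, assuming the process default heap:

#include <windows.h>
#include <stdio.h>

int main()
{
    HANDLE hHeap = GetProcessHeap();

    // A HeapCompatibilityInformation value of 2 requests the
    // Low Fragmentation Heap for this heap.
    ULONG heapInfo = 2;
    if (HeapSetInformation(hHeap, HeapCompatibilityInformation,
                           &heapInfo, sizeof(heapInfo)))
        printf("LFH enabled for the default heap.\n");
    else
        printf("HeapSetInformation failed: %lu\n", (unsigned long)GetLastError());

    return 0;
}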




The LFH can't be used for heaps which have a fixed size, or heaps which were created with the HEAP_NO_SERIALIZE flag.


