Page Frame Allocation

==Physical Memory Allocators==
 
These are the algorithms that will provide you with a new page frame when you need one. The client of such an algorithm is usually indifferent to which frame is returned; in particular, a request for n frames doesn't need to return contiguous frames (unless you are allocating memory for DMA operations like network packet buffers).
 
In the following text, N is the size of the memory in pages.
===Bitmap===
A large array of N/8 bytes is used as a bitmap of the memory usage (that is, bit #i in byte #n defines the status of page #n*8+i). Setting the state of a given page is fast (O(1)), but allocating a page may take time (O(N)).
 
* a uint32_t comparison can test up to 32 bits at once and thus speed up allocations
* keeping a pointer to the last allocated bit may improve the performance of the next search (since all the bytes before it are known to have been searched unsuccessfully)
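
As a rough illustration, here is a minimal bitmap allocator in C that uses both optimisations above. The names (<code>alloc_frame</code>, <code>free_frame</code>) and the constants are invented for this example, not taken from any particular kernel.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stddef.h>

#define N_PAGES 32768                   /* N: total number of page frames (example value) */

static uint32_t bitmap[N_PAGES / 32];   /* bit i of word w describes frame w*32 + i */
static size_t   last_word;              /* remember where the last successful search ended */

/* Allocate one free frame; returns its index, or (size_t)-1 if memory is full. */
size_t alloc_frame(void)
{
    for (size_t n = 0; n < N_PAGES / 32; n++) {
        size_t w = (last_word + n) % (N_PAGES / 32);
        if (bitmap[w] != 0xFFFFFFFF) {            /* at least one free bit in this word */
            for (int b = 0; b < 32; b++) {
                if (!(bitmap[w] & (1u << b))) {
                    bitmap[w] |= 1u << b;         /* mark the frame as used */
                    last_word = w;
                    return w * 32 + b;
                }
            }
        }
    }
    return (size_t)-1;                            /* out of physical memory */
}

/* Release a previously allocated frame: O(1). */
void free_frame(size_t frame)
{
    bitmap[frame / 32] &= ~(1u << (frame % 32));
}
</syntaxhighlight>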
 
===Stack/List of pages===
 
The address of each available physical frame is stored in a [[stack]]-like dynamic structure. Allocating a page is fast (O(1)), and so is freeing one, but checking the state of a given page is not practical unless additional metadata is kept sorted by physical address.
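
A minimal sketch of this scheme, assuming a fixed-capacity array is used as the stack (the names <code>pop_frame</code>/<code>push_frame</code> and the capacity are arbitrary):

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stddef.h>

#define MAX_FRAMES 32768                 /* example capacity: one entry per frame */

static uintptr_t free_stack[MAX_FRAMES]; /* physical addresses of free frames */
static size_t    top;                    /* number of entries currently on the stack */

/* O(1): pop the address of a free frame, or 0 if none is left. */
uintptr_t pop_frame(void)
{
    return top ? free_stack[--top] : 0;
}

/* O(1): push a frame back when it is released (or during initialisation). */
void push_frame(uintptr_t phys_addr)
{
    free_stack[top++] = phys_addr;
}
</syntaxhighlight>

During initialisation every usable frame reported by the memory map is pushed once; a common variant threads the list through the free frames themselves, so the allocator needs no storage of its own.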
 
===Sized Portion Scheme===
 buddy[3]---> # # # # # 0 free 32K, 5-bits map
 
A buddy for N pages is about twice the size of a bitmap for the same area, but it allows faster location of collections of pages. The figure above shows a 4-buddy with free pages/blocks denoted as . and used pages/blocks denoted as #. When a block contains at least one used sub-block, it is itself marked as used, and sub-blocks that are part of a larger used block are also marked as used (x in the figure). Say we want to allocate a 12K region on this buddy: we look up the bitmap of free 16K blocks (which says we have one such block starting at page #12 and another starting at page #36). buddy[2]->bit[4] is then set to 'used'. Now we only want 3 pages out of the 4 we got, so the remaining page is returned to the appropriate buddy bitmap (e.g. the single pages map). The resulting buddy is
 
 0 4 8 12 16 20 24 28 32 36
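
The C sketch below mirrors the walkthrough above with one bitmap per order (orders 0 to 3, i.e. 4K to 32K blocks). Freeing and coalescing are left out, and all names and sizes are invented for the example.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define MAX_ORDER 3                  /* orders 0..3: blocks of 1, 2, 4 and 8 pages */
#define N_PAGES   64                 /* example memory size, in pages */

/* buddy_map[o] holds one bit per block of (1 << o) pages; 1 = used (or overlapping a
   used block). Each level is over-allocated to N_PAGES/8 bytes for simplicity. */
static uint8_t buddy_map[MAX_ORDER + 1][N_PAGES / 8];

static bool test_bit(uint8_t *map, size_t i) { return map[i / 8] &  (1u << (i % 8)); }
static void set_bit (uint8_t *map, size_t i) {        map[i / 8] |= (1u << (i % 8)); }
static void clr_bit (uint8_t *map, size_t i) {        map[i / 8] &= ~(1u << (i % 8)); }

/* Mark a block of (1 << order) pages as used at every level it overlaps. */
static void mark_used(size_t page, int order)
{
    for (int o = 0; o <= MAX_ORDER; o++)
        for (size_t p = page; p < page + ((size_t)1 << order); p += (size_t)1 << o)
            set_bit(buddy_map[o], p >> o);
}

/* Allocate 'count' contiguous pages; returns the first page index, or (size_t)-1. */
size_t buddy_alloc(size_t count)
{
    int order = 0;
    while (order < MAX_ORDER && ((size_t)1 << order) < count)
        order++;                                    /* smallest order that fits */
    if (((size_t)1 << order) < count)
        return (size_t)-1;                          /* larger than the largest block */

    for (size_t blk = 0; blk < (N_PAGES >> order); blk++) {
        if (!test_bit(buddy_map[order], blk)) {     /* found a free block of this order */
            size_t page = blk << order;
            mark_used(page, order);
            /* Give the unused tail of the block back to the single-page map,
               like the fourth page in the 12K example above. */
            for (size_t p = page + count; p < page + ((size_t)1 << order); p++)
                clr_bit(buddy_map[0], p);
            return page;
        }
    }
    return (size_t)-1;
}
</syntaxhighlight>
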
[[Image:Flat_list.png]]
 
===Tree-based approach===
 
Since the list is frequently searched for a given address or a given size, it may be interesting to use more efficient data structures. One that still keeps the ability to traverse the whole list is the AVL tree. Each node in the AVL tree describes a memory region and has pointers to the subtree of lower regions and to the subtree of higher regions.
 
[[Image:Tree based.png|none]]
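
For illustration, here is one possible shape of such a node in C, together with an address lookup. Balancing (rotations, height updates) is omitted and the field names are just an example.

<syntaxhighlight lang="c">
#include <stdint.h>
#include <stddef.h>

/* One node per free memory region, ordered by base address. */
struct region_node {
    uintptr_t base;               /* first address of the region */
    size_t    size;               /* length of the region in bytes */
    int       height;             /* subtree height, used for AVL rebalancing */
    struct region_node *lower;    /* regions at lower addresses */
    struct region_node *higher;   /* regions at higher addresses */
};

/* Find the region containing 'addr', or NULL. O(log n) when the tree is balanced. */
struct region_node *find_region(struct region_node *root, uintptr_t addr)
{
    while (root) {
        if (addr < root->base)
            root = root->lower;
        else if (addr >= root->base + root->size)
            root = root->higher;
        else
            return root;
    }
    return NULL;
}
</syntaxhighlight>
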
==See Also==
===Articles===
* [[Writing A Page Frame Allocator]]
* [[Memory Allocation]]
* [[Memory management]]
===Threads===
* [[Topic:10279|Malloc, etc. (tute by curufir)]]
* [[Topic:10747|Physical MM (by Freanan)]]
* [[Post:195116|Concepts and key points on alternative memory management schemes]]
 
===External Links===
* mystran's [http://replay.web.archive.org/20081206102136/http://www.cs.hut.fi/~tvoipio/memtutor.html Basic VMM for Dummies (cached)]
* [[wikipedia:Page replacement algorithm|Page replacement algorithm]] on Wikipedia
 
[[Category:Common_Algorithms]]
[[Category:Memory management]]
[[Category:Physical Memory]]
[[de:Physische Speicherverwaltung]]