.. _memory_pools_v2:

Memory Pools
############

A :dfn:`memory pool` is a kernel object that allows memory blocks
to be dynamically allocated from a designated memory region.
The memory blocks in a memory pool can be of any size,
thereby reducing the amount of wasted memory when an application
needs to allocate storage for data structures of different sizes.
The memory pool uses a "buddy memory allocation" algorithm
to efficiently partition larger blocks into smaller ones,
allowing blocks of different sizes to be allocated and released efficiently
while limiting memory fragmentation concerns.

.. contents::
   :local:
   :depth: 2

Concepts
********

Any number of memory pools can be defined. Each memory pool is referenced
by its memory address.
A memory pool has the following key properties:

* A **minimum block size**, measured in bytes.
  This must be at least 4 bytes long.

* A **maximum block size**, measured in bytes.
  This should be a power of 4 times larger than the minimum block size.
  That is, "maximum block size" must equal "minimum block size" times 4^n,
  where n is greater than or equal to zero.

* The **number of maximum-size blocks** initially available.
  This must be greater than zero.

* A **buffer** that provides the memory for the memory pool's blocks.
  This must be at least "maximum block size" times
  "number of maximum-size blocks" bytes long.

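As a concrete illustration of these sizing rules, the following host-side
C program (a sketch for illustration only, not kernel code; all values are
assumptions) uses a 64-byte minimum and a 4096-byte maximum block size,
which satisfy "max = min times 4^n" with n = 3, and derives the supported
block sizes and the minimum buffer size for 3 maximum-size blocks:

.. code-block:: c

    #include <stdio.h>

    /* Host-side sketch of the sizing rules (not kernel code): a pool with
     * a 64-byte minimum and a 4096-byte maximum block size satisfies
     * max == min * 4^n (here n == 3), so the supported block sizes are
     * 4096, 1024, 256, and 64 bytes. */
    int main(void)
    {
        unsigned int min_size = 64;    /* minimum block size, in bytes */
        unsigned int max_size = 4096;  /* maximum block size, in bytes */
        unsigned int n_max = 3;        /* number of maximum-size blocks */

        /* Each partitioning level divides the block size by 4. */
        for (unsigned int size = max_size; size >= min_size; size /= 4) {
            printf("supported block size: %u bytes\n", size);
        }

        /* The buffer must hold all maximum-size blocks back to back. */
        printf("minimum buffer size: %u bytes\n", max_size * n_max);

        return 0;
    }
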
A thread that needs to use a memory block simply allocates it from a memory
pool. Following a successful allocation, the :c:data:`data` field
of the block descriptor supplied by the thread indicates the starting address
of the memory block. When the thread is finished with a memory block,
it must release the block back to the memory pool so the block can be reused.
If a block of the desired size is unavailable, a thread can optionally wait
for one to become available.

Any number of threads may wait on a memory pool simultaneously;
when a suitable memory block becomes available, it is given to
the highest-priority thread that has waited the longest.

Unlike a heap, more than one memory pool can be defined, if needed. For
example, different applications can utilize different memory pools; this
can help prevent one application from hijacking resources to allocate all
of the available blocks.

Internal Operation
==================

A memory pool's buffer is an array of maximum-size blocks,
with no wasted space between the blocks.
Each of these "level 0" blocks is a *quad-block* that can be
partitioned into four smaller "level 1" blocks of equal size, if needed.
Likewise, each level 1 block is itself a quad-block that can be partitioned
into four smaller "level 2" blocks in a similar way, and so on.
Thus, memory pool blocks can be recursively partitioned into quarters
until blocks of the minimum size are obtained,
at which point no further partitioning can occur.
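
This recursive partitioning can be illustrated with a short host-side
C program (a sketch only, not the kernel's implementation; the 4096-byte
and 64-byte sizes are assumptions carried over from the earlier example).
At each level, every block of the previous level splits into four quarters,
so level *l* holds 4^l blocks per level-0 quad-block:

.. code-block:: c

    #include <stdio.h>

    /* Host-side sketch of quad-block partitioning (not kernel code).
     * Starting from one 4096-byte "level 0" block with a 64-byte minimum,
     * each level splits every block of the previous level into quarters. */
    int main(void)
    {
        unsigned int max_size = 4096;  /* level 0 block size, in bytes */
        unsigned int min_size = 64;    /* no partitioning below this size */
        unsigned int blocks = 1;       /* blocks per level-0 quad-block */

        for (unsigned int level = 0, size = max_size; size >= min_size;
             level++, size /= 4, blocks *= 4) {
            printf("level %u: %u block(s) of %u bytes\n",
                   level, blocks, size);
        }

        return 0;
    }
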
A memory pool keeps track of how its buffer space has been partitioned
using an array of *block set* data structures. There is one block set
for each partitioning level supported by the pool, or (to put it another way)
for each block size. A block set keeps track of all free blocks of its
associated size using an array of *quad-block status* data structures.

When an application issues a request for a memory block,
the memory pool first determines the size of the smallest block
that will satisfy the request, and examines the corresponding block set.
If the block set contains a free block, the block is marked as used
and the allocation process is complete.
If the block set does not contain a free block,
the memory pool attempts to create one automatically by splitting a free block
of a larger size or by merging free blocks of smaller sizes;
if a suitable block can't be created, the allocation request fails.
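
The "smallest block that will satisfy the request" step amounts to rounding
the requested size up to the nearest supported block size. The following
host-side sketch shows that rounding under the same assumed 64-byte minimum
and 4096-byte maximum; ``block_size_for()`` is a hypothetical helper for
illustration, not a kernel API:

.. code-block:: c

    #include <stdio.h>

    /* Hypothetical helper (not a kernel API): round a request up to the
     * smallest supported block size, or return 0 if the request exceeds
     * the maximum block size and cannot be satisfied. */
    static unsigned int block_size_for(unsigned int request,
                                       unsigned int min_size,
                                       unsigned int max_size)
    {
        for (unsigned int size = min_size; size <= max_size; size *= 4) {
            if (request <= size) {
                return size;
            }
        }
        return 0; /* larger than the maximum block size */
    }

    int main(void)
    {
        /* With 64/4096-byte limits, both a 200-byte and a 75-byte request
         * are satisfied by a 256-byte block; a 5000-byte request fails. */
        printf("200 -> %u\n", block_size_for(200, 64, 4096));
        printf("75 -> %u\n", block_size_for(75, 64, 4096));
        printf("5000 -> %u\n", block_size_for(5000, 64, 4096));
        return 0;
    }
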

.. note::
    By default, memory pools will attempt to split a larger block
    before trying to merge smaller blocks. However, they can also
    be configured to merge smaller blocks first, or to skip
    the merging step entirely. In the latter case, merging of smaller
    blocks only occurs when the application explicitly issues
    a request to defragment the entire memory pool.

The memory pool's block merging and splitting process is done efficiently,
but it is a recursive algorithm that may incur significant overhead.
In addition, the merging algorithm cannot combine adjacent free blocks
of different sizes, nor can it merge adjacent free blocks of the same size
if they belong to different parent quad-blocks. As a consequence,
memory fragmentation issues can still be encountered when using a memory pool.

When an application releases a previously allocated memory block,
it is simply marked as a free block in its associated block set.
The memory pool does not attempt to merge the newly freed block,
allowing it to be easily reallocated in its existing form.

Implementation
**************

Defining a Memory Pool
======================

A memory pool is defined using a variable of type :c:type:`struct k_mem_pool`.
However, since a memory pool also requires a number of variable-size data
structures to represent its block sets and the status of its quad-blocks,
the kernel does not support the run-time definition of a memory pool.
A memory pool can only be defined and initialized at compile time
by calling :c:macro:`K_MEM_POOL_DEFINE()`.

The following code defines and initializes a memory pool that has 3 blocks
of 4096 bytes each, which can be partitioned into blocks as small as 64 bytes,
and is aligned to a 4-byte boundary.
(That is, the memory pool supports block sizes of 4096, 1024, 256,
and 64 bytes.)
Observe that the macro defines all of the memory pool data structures,
as well as its buffer.

.. code-block:: c

    K_MEM_POOL_DEFINE(my_pool, 64, 4096, 3, 4);

Allocating a Memory Block
=========================

A memory block is allocated by calling :cpp:func:`k_mem_pool_alloc()`.

The following code builds on the example above. It waits up to 100 milliseconds
for a 200-byte memory block to become available, then fills it with zeroes.
A warning is issued if a suitable block is not obtained.

Note that the application will actually receive a 256-byte memory block,
since that is the closest matching size supported by the memory pool.

.. code-block:: c

    struct k_mem_block block;

    if (k_mem_pool_alloc(&my_pool, &block, 200, 100) == 0) {
        memset(block.data, 0, 200);
        ...
    } else {
        printf("Memory allocation time-out");
    }

Releasing a Memory Block
========================

A memory block is released by calling :cpp:func:`k_mem_pool_free()`.

The following code builds on the example above. It allocates a 75-byte
memory block, then releases it once it is no longer needed. (A 256-byte
memory block is actually used to satisfy the request.)

.. code-block:: c

    struct k_mem_block block;

    k_mem_pool_alloc(&my_pool, &block, 75, K_FOREVER);
    ... /* use memory block */
    k_mem_pool_free(&block);

Manually Defragmenting a Memory Pool
====================================

A memory pool is defragmented manually by calling
:cpp:func:`k_mem_pool_defragment()`.

The following code instructs the memory pool to concatenate unused memory
blocks into their parent quad-blocks wherever possible. Performing a full
defragmentation of the entire memory pool before allocating a number of
memory blocks may be more efficient than relying on the partial
defragmentation that occurs automatically each time a memory block
allocation is requested.

.. code-block:: c

    k_mem_pool_defragment(&my_pool);

Suggested Uses
**************

Use a memory pool to allocate memory in variable-size blocks.

Use memory pool blocks when sending large amounts of data from one thread
to another, to avoid unnecessary copying of the data.

Configuration Options
*********************

Related configuration options:

* :option:`CONFIG_MEM_POOL_AD_BEFORE_SEARCH_FOR_BIGGER_BLOCK`
* :option:`CONFIG_MEM_POOL_AD_AFTER_SEARCH_FOR_BIGGER_BLOCK`
* :option:`CONFIG_MEM_POOL_AD_NONE`

APIs
****

The following memory pool APIs are provided by :file:`kernel.h`:

* :cpp:func:`k_mem_pool_alloc()`
* :cpp:func:`k_mem_pool_free()`
* :cpp:func:`k_mem_pool_defragment()`