k_mem_pool: Complete rework

This patch amounts to a mostly complete rewrite of the k_mem_pool
allocator, which had been the source of historical complaints vs. the
one easily available in newlib.  The basic design of the allocator is
unchanged (it's still a 4-way buddy allocator), but the implementation
has made different choices throughout.  Major changes:

Space efficiency: The old implementation required ~2.66 bytes per
"smallest block" in overhead, plus 16 bytes per log4 "level" of the
allocation tree, plus a global tracking struct of 32 bytes and a very
surprising 12 bytes of overhead (in struct k_mem_block) per active
allocation on top of the returned data pointer.  This new allocator
uses a simple bit array as the only per-block storage and places the
free list into the freed blocks themselves, requiring only ~1.33 bits
per smallest block (one bit per block, summed over the 4-way levels:
1 + 1/4 + 1/16 + ... = 4/3 bits), 12 bytes per level, 32 bytes
globally, and only 4 bytes of per-allocation bookkeeping.  And it
puts more of the generated tree into BSS, slightly reducing binary
sizes for non-trivial pool sizes (even as the code size itself has
increased a tiny bit).
IRQ safe: atomic operations on the store have been cut down to at
most "4 bit sets and dlist operations" (i.e. a few dozen
instructions), reducing latency significantly and allowing us to lock
against interrupts cleanly from all APIs.  Allocations and frees can
now be done from ISRs without limitation (well, obviously you can't
sleep, so "timeout" must be K_NO_WAIT).

Deterministic performance: there is no longer a "defragmentation" step
that must be manually managed.  Block coalescing is done synchronously
at free time and takes deterministic, bounded time (at most one step
per tree level, i.e. log4 of the max/min block size ratio), as the
detection of four free "partner bits" is just a simple shift and
mask operation.
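
To make the partner-bit detection concrete, here is a minimal,
hypothetical sketch (the helper name and bit layout are invented for
this illustration; the patch's actual code differs):

    #include <stdbool.h>
    #include <stdint.h>

    /* One "free" bit per block, packed into 32-bit words.  Partner
     * groups are 4-aligned, so the four bits of a group always land
     * in the same word and can be tested with one shift and one mask.
     */
    static bool partners_free(const uint32_t *level_bits, int block)
    {
            int group = block & ~3;               /* first block of the group */
            uint32_t mask = 0xFu << (group % 32); /* its four partner bits */

            return (level_bits[group / 32] & mask) == mask;
    }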

Cleaner behavior with odd sizes: The old code assumed that the
specified maximum size would be a power-of-four multiple of the
minimum size, making use of non-standard buffer sizes problematic.
This implementation re-aligns the sub-blocks at each level and can
handle situations where alignment restrictions mean fewer than 4x will
be available.  If you want precise layout control, you can still
specify the sizes rigorously.  It just doesn't break if you don't.
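
As a concrete illustration of that re-alignment (a hypothetical
helper, not code from the patch): rounding each sub-block up to the
platform alignment can leave room for fewer than four children:

    #include <stddef.h>

    /* Children of a parent block of size psz: psz/4, rounded up to the
     * alignment; when rounding grows the child, fewer than four fit.
     */
    static int children_per_block(size_t psz, size_t align)
    {
            size_t csz = ((psz / 4) + align - 1) & ~(align - 1);

            return (int)(psz / csz); /* e.g. psz=36, align=8 -> 2 children */
    }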

More portable: the original implementation made use of GNU assembler
macros embedded inline within C __asm__ statements.  Not all
toolchains are actually backed by a GNU assembler, even when they
support the GNU assembly syntax.  This is pure C, albeit with some
hairy macros to expand the compile-time-computed values.

Related changes that had to be rolled into this patch for bisectability:

* The new allocator has a firm minimum block size of 8 bytes (to store
  the dlist_node_t).  It will "work" with smaller requested min_size
  values, but obviously makes no firm promises about layout or how
  many blocks will be available.  Unfortunately many of the tests were
  written with very small 4-byte minimum sizes and assumed exactly how
  many blocks they could allocate.  Bump the sizes to match the
  allocator minimum.

* The mbox and pipes APIs made use of the internals of k_mem_block and
  had to be ported to the new scheme.  Blocks no longer store a
  backpointer to the pool that allocated them (it's an integer ID in a
  bitfield), so if you want to "nullify" them you have to use the
  data pointer.

* test_mbox_api had a bug where it was prematurely freeing k_mem_blocks
  that it sent through the mailbox.  This worked in the old allocator
  because the memory wouldn't be touched when freed, but now we stuff
  list pointers in there and the bug was exposed.

* Remove test_mpool_options: the options it tested (related to
  defragmentation behavior) no longer exist.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>

/*
 * Copyright (c) 2017 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr/kernel.h>
#include <string.h>
#include <zephyr/sys/math_extras.h>
#include <zephyr/sys/util.h>

static void *z_heap_aligned_alloc(struct k_heap *heap, size_t align, size_t size)
{
	void *mem;
	struct k_heap **heap_ref;
	size_t __align;

	/*
	 * Adjust the size to make room for our heap reference.
	 * Merge a rewind bit with align value (see sys_heap_aligned_alloc()).
	 * This allows for storing the heap pointer right below the aligned
	 * boundary without wasting any memory.
	 */
	if (size_add_overflow(size, sizeof(heap_ref), &size)) {
		return NULL;
	}
	__align = align | sizeof(heap_ref);

	mem = k_heap_aligned_alloc(heap, __align, size, K_NO_WAIT);
	if (mem == NULL) {
		return NULL;
	}

	heap_ref = mem;
	*heap_ref = heap;
	mem = ++heap_ref;
	__ASSERT(align == 0 || ((uintptr_t)mem & (align - 1)) == 0,
		 "misaligned memory at %p (align = %zu)", mem, align);

	return mem;
}
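
/*
 * Illustration (added commentary, not part of the original file): for
 * a pointer p returned by z_heap_aligned_alloc(), memory looks like
 *
 *	... | struct k_heap *heap | user data ... |
 *	                          ^-- p (aligned to 'align')
 *
 * so ((struct k_heap **)p)[-1] names the owning heap.  k_free() below
 * rewinds p by one pointer to recover both the heap and the address
 * originally returned by k_heap_aligned_alloc().
 */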
void k_free(void *ptr)
{
	struct k_heap **heap_ref;

	if (ptr != NULL) {
		heap_ref = ptr;
		ptr = --heap_ref;

		SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_heap_sys, k_free, *heap_ref, heap_ref);

		k_heap_free(*heap_ref, ptr);

		SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_heap_sys, k_free, *heap_ref, heap_ref);
	}
}

#if (CONFIG_HEAP_MEM_POOL_SIZE > 0)

K_HEAP_DEFINE(_system_heap, CONFIG_HEAP_MEM_POOL_SIZE);
#define _SYSTEM_HEAP (&_system_heap)

void *k_aligned_alloc(size_t align, size_t size)
{
	__ASSERT(align / sizeof(void *) >= 1
		 && (align % sizeof(void *)) == 0,
		 "align must be a multiple of sizeof(void *)");

	__ASSERT((align & (align - 1)) == 0,
		 "align must be a power of 2");

	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_heap_sys, k_aligned_alloc, _SYSTEM_HEAP);

	void *ret = z_heap_aligned_alloc(_SYSTEM_HEAP, align, size);

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_heap_sys, k_aligned_alloc, _SYSTEM_HEAP, ret);

	return ret;
}

void *k_malloc(size_t size)
{
	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_heap_sys, k_malloc, _SYSTEM_HEAP);

	void *ret = k_aligned_alloc(sizeof(void *), size);

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_heap_sys, k_malloc, _SYSTEM_HEAP, ret);

	return ret;
}

void *k_calloc(size_t nmemb, size_t size)
{
	void *ret;
	size_t bounds;

	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_heap_sys, k_calloc, _SYSTEM_HEAP);

	if (size_mul_overflow(nmemb, size, &bounds)) {
		SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_heap_sys, k_calloc, _SYSTEM_HEAP, NULL);

		return NULL;
	}

	ret = k_malloc(bounds);
	if (ret != NULL) {
		(void)memset(ret, 0, bounds);
	}

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_heap_sys, k_calloc, _SYSTEM_HEAP, ret);

	return ret;
}
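
/*
 * Example (added commentary): on a 32-bit target,
 * k_calloc(0x10000, 0x10000) would naively multiply to 2^32, which
 * wraps to 0 in a size_t; size_mul_overflow() detects the wrap so the
 * call fails with NULL instead of returning a zero-sized allocation.
 */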

void k_thread_system_pool_assign(struct k_thread *thread)
{
	thread->resource_pool = _SYSTEM_HEAP;
}
#else
#define _SYSTEM_HEAP NULL
#endif

void *z_thread_aligned_alloc(size_t align, size_t size)
{
	void *ret;
	struct k_heap *heap;

	if (k_is_in_isr()) {
		heap = _SYSTEM_HEAP;
	} else {
		heap = _current->resource_pool;
	}

	if (heap != NULL) {
		ret = z_heap_aligned_alloc(heap, align, size);
	} else {
		ret = NULL;
	}

	return ret;
}
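
/*
 * Usage sketch (added commentary, assumes CONFIG_HEAP_MEM_POOL_SIZE > 0):
 *
 *	void *buf = k_malloc(128);
 *
 *	if (buf != NULL) {
 *		memset(buf, 0xaa, 128);
 *		k_free(buf);		// heap recovered from buf[-1]
 *	}
 *
 * z_thread_aligned_alloc() routes ISR callers to the system heap and
 * thread callers to their resource_pool, so k_free() works the same
 * no matter which heap supplied the memory.
 */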