SYS_DLIST_FOR_EACH_CONTAINER is preferable to SYS_DLIST_FOR_EACH_NODE,
as it avoids the direct cast, which assumes the node field is always at
the beginning of the containing struct.
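A minimal sketch of the difference (illustrative struct and list names,
not taken from this patch; header path varies across Zephyr versions):

  #include <misc/dlist.h>

  struct my_item {
          int data;
          sys_dnode_t node;       /* node is NOT the first field */
  };

  sys_dlist_t my_list = SYS_DLIST_STATIC_INIT(&my_list);

  void walk(void)
  {
          sys_dnode_t *dn;
          struct my_item *it;

          /* Old pattern: the cast is only valid if 'node' happens to be
           * the first member of struct my_item.
           */
          SYS_DLIST_FOR_EACH_NODE(&my_list, dn) {
                  it = (struct my_item *)dn;      /* fragile */
                  it->data++;
          }

          /* Preferred: the macro applies CONTAINER_OF() internally, so
           * the node field can live anywhere in the struct.
           */
          SYS_DLIST_FOR_EACH_CONTAINER(&my_list, it, node) {
                  it->data++;
          }
  }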
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?
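Sparse emits this warning for file-scope definitions that have no prior
declaration and are not static. A sketch of the kind of declaration that
silences it while keeping the symbols globally visible (illustrative
header excerpt, not necessarily the exact change made here):

  #ifdef CONFIG_OBJECT_TRACING
  /* Declare the trace list heads so each kernel module's definition has
   * a matching declaration; the symbols must stay global for object
   * tracing tools to find them.
   */
  extern struct k_timer *_trace_list_k_timer;
  extern struct k_sem *_trace_list_k_sem;
  /* ...one declaration per traced object type... */
  #endif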
Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
This patch amounts to a mostly complete rewrite of the k_mem_pool
allocator, which had been the source of historical complaints vs. the
one easily available in newlib. The basic design of the allocator is
unchanged (it's still a 4-way buddy allocator), but the implementation
has made different choices throughout. Major changes:
Space efficiency: The old implementation required ~2.66 bytes per
"smallest block" in overhead, plus 16 bytes per log4 "level" of the
allocation tree, plus a global tracking struct of 32 bytes and a very
surprising 12 byte overhead (in struct k_mem_block) per active
allocation on top of the returned data pointer. This new allocator
uses a simple bit array as the only per-block storage and places the
free list into the freed blocks themselves, requiring only ~1.33 bits
per smallest block, 12 bytes per level, 32 bytes globally and only 4
bytes of per-allocation bookkeeping. And it puts more of the generated
tree into BSS, slightly reducing binary sizes for non-trivial pool
sizes (even as the code size itself has increased a tiny bit).
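The "free list inside the freed blocks" trick is the main space win: a
block on the free list is unused by definition, so its first bytes can
hold the list node. A rough sketch of the idea (illustrative names, not
the pool's actual internals):

  #include <misc/dlist.h>

  static sys_dlist_t free_list = SYS_DLIST_STATIC_INIT(&free_list);

  static void push_free_block(void *block)
  {
          /* Overlay the list node on the freed block itself; this is
           * why the allocator needs a minimum block size of
           * sizeof(sys_dnode_t), i.e. 8 bytes on 32-bit targets.
           */
          sys_dnode_t *node = block;

          sys_dlist_append(&free_list, node);
  }

  static void *pop_free_block(void)
  {
          /* Returns NULL when the list is empty */
          return sys_dlist_get(&free_list);
  }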
IRQ safe: atomic operations on the store have been cut down to be at
most "4 bit sets and dlist operations" (i.e. a few dozen
instructions), reducing latency significantly and allowing us to lock
against interrupts cleanly from all APIs. Allocations and frees can
be done from ISRs now without limitation (well, obviously you can't
sleep, so "timeout" must be K_NO_WAIT).
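A rough sketch of the pattern this enables (illustrative function, not
the pool's actual code): because the critical section is only a few bit
operations and list manipulations, it can simply run with interrupts
locked, which is what makes the calls usable from ISRs:

  #include <stdint.h>
  #include <kernel.h>

  static void mark_block_free(uint32_t *bits, int bn)
  {
          unsigned int key = irq_lock();

          /* a handful of bit sets... */
          bits[bn / 32] |= 1u << (bn % 32);
          /* ...plus appending the block to the level's free dlist */

          irq_unlock(key);
  }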
Deterministic performance: there is no more "defragmentation" step
that must be manually managed. Block coalescing is done synchronously
at free time and takes constant time (strictly log4(num_levels)), as
the detection of four free "partner bits" is just a simple shift and
mask operation.
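As an illustration of the partner-bit check (a sketch with hypothetical
names): with one "free" bit per block and buddy quads aligned on
multiples of four, testing whether all four partners are free is one
shift and one mask:

  #include <stdbool.h>
  #include <stdint.h>

  static inline bool partners_free(const uint32_t *bits, int bn)
  {
          int first = bn & ~0x3;  /* first block of this buddy quad */

          /* The quad never straddles a 32-bit word, so four adjacent
           * bits describe all four partner blocks.
           */
          return ((bits[first / 32] >> (first % 32)) & 0xFu) == 0xFu;
  }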
Cleaner behavior with odd sizes. The old code assumed that the
specified maximum size would be a power of four multiple of the
minimum size, making use of non-standard buffer sizes problematic.
This implementation re-aligns the sub-blocks at each level and can
handle situations where alignment restrictions mean fewer than 4x will
be available. If you want precise layout control, you can still
specify the sizes rigorously. It just doesn't break if you don't.
More portable: the original implementation made use of GNU assembler
macros embedded inline within C __asm__ statements. Not all
toolchains are actually backed by a GNU assembler even when they
support the GNU assembly syntax. This is pure C, albeit with some
hairy macros to expand the compile-time-computed values.
Related changes that had to be rolled into this patch for bisectability:
* The new allocator has a firm minimum block size of 8 bytes (to store
the dlist_node_t). It will "work" with smaller requested min_size
values, but obviously makes no firm promises about layout or how
many will be available. Unfortunately many of the tests were
written with very small 4-byte minimum sizes and assumed exactly
how many blocks they could allocate. Bump the sizes to match the allocator
minimum.
* The mbox and pipes API made use of the internals of k_mem_block and
had to be ported to the new scheme. Blocks no longer store a
backpointer to the pool that allocated them (it's an integer ID in a
bitfield), so if you want to "nullify" them you have to use the
data pointer.
* test_mbox_api had a bug where it was prematurely freeing k_mem_blocks
that it sent through the mailbox. This worked in the old allocator
because the memory wouldn't be touched when freed, but now we stuff
list pointers in there and the bug was exposed.
* Remove test_mpool_options: the options (related to defragmentation
behavior) tested no longer exist.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types. This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies. We also convert the PRI printf formatters in the arch
code over to normal formatters.
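For illustration, a typical conversion looks roughly like this
(hypothetical snippet, not a diff from this patch):

  /* Before: C99 fixed-width type plus a PRI* formatter */
  #include <inttypes.h>
  #include <misc/printk.h>

  void report_old(uint32_t ticks)
  {
          printk("elapsed: %" PRIu32 " ticks\n", ticks);
  }

  /* After: Zephyr's short type name and a plain formatter */
  #include <zephyr/types.h>

  void report_new(u32_t ticks)
  {
          printk("elapsed: %u ticks\n", ticks);
  }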
Jira: ZEP-2051
Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.
Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.
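The replacement is mechanical; in each file the multi-line license block
collapses to a one-line tag, roughly:

  /* Before: full Apache 2.0 boilerplate
   *
   * Licensed under the Apache License, Version 2.0 (the "License");
   * you may not use this file except in compliance with the License.
   * ...
   */

  /* After: a single SPDX tag
   *
   * SPDX-License-Identifier: Apache-2.0
   */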
Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.
Manually updated subsys/logging/sys_log.c that had a malformed
header in the original file. Also cleanup several cases that already
had a SPDX tag and we either got a duplicate or missed updating.
Jira: ZEP-1457
Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Some thread fields were 32 bits wide even though they come nowhere close
to using that full range of values. They are changed to 8-bit fields instead.
- prio can fit in one byte, limiting the priority range to -128 to 127
- recursive scheduler locking can be limited to 255; a rollover most
likely indicates a logic error
- flags are split into execution flags and thread states; 8 bits is
enough for each of them currently, with at worst two states and four
flags to spare on x86 (other archs have six flags to spare)
Doing this saves 8 bytes per stack. It also sets up an incoming
enhancement when checking if the current thread is preemptible on
interrupt exit.
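An illustrative sketch of the narrowed fields (hypothetical layout, not
the exact struct from the tree):

  #include <stdint.h>

  struct thread_base_sketch {
          uint8_t execution_flags;  /* 8 bits, four (or six) to spare  */
          uint8_t thread_state;     /* 8 bits, two states to spare     */
          int8_t  prio;             /* priorities limited to -128..127 */
          uint8_t sched_locked;     /* recursion count, rolls over at 255 */
  };

Three former 32-bit fields become four 8-bit ones, going from 12 bytes
to 4 and accounting for the 8 bytes saved per stack mentioned above.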
Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use a short name for this option CONFIG_OBJECT_TRACING.
Change-Id: Id27de7ef9ca299492b6b7d2324d9f5bcf8059a31
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.
Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>