Use this short header style in all Kconfig files:
# <description>
# <copyright>
# <license>
...
Also change all <description>s from
# Kconfig[.extension] - Foo-related options
to just
# Foo-related options
It's clear enough that it's about Kconfig.
The <description> cleanup was done with this command, along with some
manual cleanup (big letter at the start, etc.)
git ls-files '*Kconfig*' | \
xargs sed -i -E '1 s/#\s*Kconfig[\w.-]*\s*-\s*/# /'
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
When small blocks are recombined to create a single block at a shallower
level, it is sufficient to remove those blocks from the free list. There
is no need to mark those small blocks as allocated in the bitmap.
This, in turn, removes the need to mark small blocks back as unallocated
when splitting up a big block, as they'll already be so marked.
Only the first small block needs to be marked allocated and the
remaining blocks only need to be added to the free list.
This makes the code smaller and more efficient, especially since those
removed bit manipulations were located within loops.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This turns the free-bit flag into an alloc-bit flag, effectively
reversing its semantics. This is to make further changes more natural
and easier to understand.
No need to clear the alloc bits at init time as they're located in .bss
and all clear already.
The code remains functionally equivalent after this change.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The realloc function was a bit too intimate with the mempool accounting.
Abstract that knowledge away and move it where it belongs.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Some modules use snprintk to format the settings keys. Unfortunately
snprintk is tied to printk, which is very large for some embedded
systems.
To be able to have settings enabled without also enabling printk
support, change creation of settings key strings to use bin2hex, strlen
and strcpy instead.
A utility function that renders a byte value in decimal is added as
u8_to_dec in lib/os/dec.c.
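For illustration, a minimal sketch of what such a helper could look
like (the actual signature and behavior of u8_to_dec in lib/os/dec.c
may differ):

  #include <zephyr/types.h>

  /* Illustrative only: write 'value' in decimal into 'buf' and return
   * the number of characters written (no terminating NUL added here).
   */
  u8_t u8_to_dec(char *buf, u8_t buflen, u8_t value)
  {
          u8_t divisor = 100;
          u8_t num_digits = 0;

          while (buflen > 0 && divisor > 0) {
                  u8_t digit = value / divisor;

                  if (digit != 0 || divisor == 1 || num_digits != 0) {
                          *buf++ = (char)('0' + digit);
                          buflen--;
                          num_digits++;
                  }
                  value -= digit * divisor;
                  divisor /= 10;
          }

          return num_digits;
  }

Settings keys can then be assembled with strcpy/strlen, bin2hex and
such a helper, without pulling in the printk formatting code.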
Add a new Kconfig option, BT_SETTINGS_USE_PRINTK.
Signed-off-by: Kim Sekkelund <ksek@oticon.com>
The algorithm for converting broken-down civil time to seconds in the
POSIX epoch time scale would produce undefined behavior on a toolchain
that uses a 32-bit time_t in cases where the referenced time could not
be represented exactly.
However, there are use cases in Zephyr for civil time conversions
outside the 32-bit representable range of 1901-12-13T20:45:52Z through
2038-01-19T03:14:07Z inclusive.
Add a new API that specifically returns a 64-bit signed seconds count,
and revise the existing API to detect out-of-range values and convert
them to a diagnosable error.
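A hedged usage sketch follows; the 64-bit entry point name follows the
commit, while the header path, return types and the errno/ERANGE error
reporting of the revised timeutil_timegm() are assumptions here:

  #include <errno.h>
  #include <time.h>
  #include <zephyr/types.h>
  #include <sys/timeutil.h>
  #include <sys/printk.h>

  void civil_to_epoch_example(void)
  {
          struct tm tm = {
                  .tm_year = 2040 - 1900, /* beyond 32-bit time_t range */
                  .tm_mon = 0,
                  .tm_mday = 1,
          };

          /* New API: always a 64-bit signed seconds count. */
          s64_t secs64 = timeutil_timegm64(&tm);

          /* Existing API: now diagnoses values that don't fit in
           * time_t (reporting via errno is an assumption).
           */
          time_t secs = timeutil_timegm(&tm);

          if (secs == (time_t)-1 && errno == ERANGE) {
                  printk("out of range for time_t, 64-bit value: %lld\n",
                         (long long)secs64);
          }
  }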
Closes #18465
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
timeutil_timegm() does not modify the passed structure, so it should
indicate that in the signature (even though the GNU extension does not).
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
The semi-automated API changes weren't checkpatch aware. Fix up
whitespace warnings that snuck into the previous patches. Really this
should be squashed, but that's somewhat difficult given the structure
of the series.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.
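For illustration only, here is a hand-written approximation of the
generated pattern for a hypothetical syscall taking a 64-bit argument
on a 32-bit target (names, signatures and the word-splitting details
are illustrative, not actual generator output):

  #include <stdint.h>
  #include <zephyr/types.h>

  /* Hypothetical syscall implementation (prototype only): */
  s32_t z_impl_foo_sleep(u64_t ticks);

  /* User-supplied, typesafe verification function: same signature as
   * the implementation function.
   */
  static s32_t z_vrfy_foo_sleep(u64_t ticks)
  {
          /* argument validation would go here */
          return z_impl_foo_sleep(ticks);
  }

  /* Generated unmarshalling step: the wide argument arrives split
   * across two machine words and is reassembled before verification.
   */
  static uintptr_t z_mrsh_foo_sleep(uintptr_t arg0, uintptr_t arg1,
                                    void *ssf)
  {
          u64_t ticks = ((u64_t)arg1 << 32) | (u32_t)arg0;

          (void)ssf; /* stack frame pointer, unused in this sketch */
          return (uintptr_t)z_vrfy_foo_sleep(ticks);
  }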
This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
For systems with userspace, the sys_sem exists in user memory, working
as a counting semaphore for user mode threads. The implementation of
sys_sem is based on k_futex, and the majority of the synchronization
operations are performed in user mode to reduce the number of system
calls.
For systems without userspace enabled, sys_sem behaves like k_sem.
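A hedged usage sketch (exact names and signatures in <sys/sem.h> may
differ slightly):

  #include <kernel.h>
  #include <sys/sem.h>

  SYS_SEM_DEFINE(data_ready, 0, 1); /* initial count 0, limit 1 */

  void producer(void)
  {
          sys_sem_give(&data_ready);
  }

  void consumer(void)
  {
          /* In user mode the uncontended path stays out of the kernel. */
          if (sys_sem_take(&data_ready, K_FOREVER) == 0) {
                  /* consume the data */
          }
  }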
Fixes: #15139.
Signed-off-by: Wentong Wu <wentong.wu@intel.com>
User mode isn't allowed to generate a panic, and attempting to do so
would lead to a confusing privilege violation exception.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add a generic API to provide the inverse operation for gmtime and as a
home for future generic time-related functions that are not in POSIX.
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
The mempool allocator implementation recursively breaks a memory block
into 4 sub-blocks until it minimally fits the requested memory size.
The size of each sub-block is rounded up to the next word boundary to
preserve word alignment on the returned memory, and this is a problem.
Let's consider max_sz = 2072 and n_max = 1. That's our level 0.
At level 1, we get one level-0 block split into 4 sub-blocks whose
size is WB_UP(2072 / 4) = 520. However, 4 * 520 = 2080, so we must
discard the 4th sub-block since it doesn't fit inside our 2072-byte
parent block.
We're down to 3 * 520 = 1560 bytes of usable memory.
Our memory usage efficiency is now 1560 / 2072 = 75%.
At level 2, we get 3 level-1 blocks, and each of them may be split
into 4 sub-blocks whose size is WB_UP(520 / 4) = 132. But 4 * 132 =
528, so the 4th sub-block has to be discarded again.
We're down to 9 * 132 = 1188 bytes of usable memory.
Our memory usage efficiency is now 1188 / 2072 = 57%.
At level 3, we get 9 level-2 blocks, each split into sub-blocks of
WB_UP(132 / 4) = 36 bytes. Again, 4 * 36 = 144, so the 4th sub-block
is discarded.
We're down to 27 * 36 = 972 bytes of usable memory.
Our memory usage efficiency is now 972 / 2072 = 47%.
What should be done instead is to round sub-block sizes _down_, not
_up_. This way, sub-blocks still align to word boundaries, and they
always fit within their parent block since the total size can no
longer exceed the parent's size.
Using the same max_sz = 2072 would yield a memory usage efficiency of
99% at level 3, so let's demo a worst case 2044 instead.
Level 1: 4 sub-blocks of WB_DN(2044 / 4) = 508 bytes.
We're down to 4 * 508 = 2032 bytes of usable memory.
Our memory usage efficiency is now 2032 / 2044 = 99%.
Level 2: 4 * 4 sub-blocks of WB_DN(508 / 4) = 124 bytes.
We're down to 16 * 124 = 1984 bytes of usable memory.
Our memory usage efficiency is now 1984 / 2044 = 97%.
Level 3: 16 * 4 sub-blocks of WB_DN(124 / 4) = 28 bytes.
We're down to 64 * 28 = 1792 bytes of usable memory.
Our memory usage efficiency is now 1792 / 2044 = 88%.
Conclusion: if max_sz is a power of 2 then we get 100% efficiency at
all levels in both cases. But if not, then the rounding-up method has
a far worse degradation curve than the rounding-down method, wasting
more than 50% of memory in some cases.
So let's round sub-block sizes down rather than up, and remove
block_fits(), whose purpose was to identify sub-blocks that didn't fit
within their parent block and which is now useless.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Move the duplicated hex2bin and add a bin2hex function so that
applications can use the functions and avoid code duplication.
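A hedged usage sketch of the pair, assuming signatures of the form
bin2hex(buf, buflen, hex, hexlen) and hex2bin(hex, hexlen, buf, buflen)
in sys/util.h:

  #include <zephyr/types.h>
  #include <sys/util.h>

  void hex_roundtrip_example(void)
  {
          u8_t addr[] = { 0xde, 0xad, 0xbe, 0xef };
          char hex[2 * sizeof(addr) + 1];
          u8_t back[sizeof(addr)];

          /* binary -> "deadbeef"; returns number of hex chars written */
          size_t n = bin2hex(addr, sizeof(addr), hex, sizeof(hex));

          /* "deadbeef" -> binary */
          (void)hex2bin(hex, n, back, sizeof(back));
  }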
Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
The accounting data stored at the beginning of a memory block used by
malloc must push the returned memory address to a word boundary. This
is already the case on 32-bit systems, but not on 64-bit systems where
e.g. struct k_mem_block_id still has a size of 4.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The "bits" field in struct sys_mem_pool_lvl is unioned with a pointer.
That leaves more space for inline free bits on 64-bit targets.
Let's declare it as an array and adjust its size based on the pointer
size. On 32-bit targets the generated code remains identical.
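A hedged sketch of the resulting declaration (the real struct carries
more context around it):

  #include <zephyr/types.h>
  #include <sys/dlist.h>

  struct sys_mem_pool_lvl {
          union {
                  u32_t *bits_p;                   /* out-of-line bitmap */
                  u32_t bits[sizeof(u32_t *) / 4]; /* 1 word on 32-bit,
                                                    * 2 words on 64-bit */
          };
          sys_dlist_t free_list;
  };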
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Minimum alignment and rounding must be done on a word boundary. Let's
replace _ALIGN4() with WB_UP(), which is equivalent on 32-bit targets
and 64-bit aware.
Also enforce a minimal alignment on the memory pool. This makes a
difference mostly on 64-bit targets, where the widely used 4-byte
alignment is not sufficient.
The _ALIGN4() macro has no users left so it is removed.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This function doesn't do anything and only exists so that it can be
overridden later; exclude it from coverage reports.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
move misc/util.h to sys/util.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
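Roughly, each shim left behind looks like this (illustrative; the
warning text and guard name may differ):

  /* include/misc/util.h -- compatibility shim */
  #ifndef ZEPHYR_INCLUDE_MISC_UTIL_H_
  #define ZEPHYR_INCLUDE_MISC_UTIL_H_

  #ifndef CONFIG_COMPAT_INCLUDES
  #warning "This header file has moved, include <sys/util.h> instead."
  #endif

  #include <sys/util.h>

  #endif /* ZEPHYR_INCLUDE_MISC_UTIL_H_ */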
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/speculation.h to sys/speculation.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/rb.h to sys/rb.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/printk.h to sys/printk.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/mutex.h to sys/mutex.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/mempool.h to sys/mempool.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/mempool_base.h to sys/mempool_base.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/fdtable.h to sys/fdtable.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move misc/__assert.h to sys/__assert.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move ring_buffer.h to sys/ring_buffer.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move json.h to data/json.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move crc.h to sys/crc.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move base64.h to sys/base64.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The free block bitmap uses either extra memory specified by a pointer
in struct sys_mem_pool_lvl or the space occupied by that pointer
directly if the bitmap length is small enough to fit it.
But the test is wrong: the inline bitmap should be used if the number
of required bits is smaller than or _equal_ to the pointer size. Not
doing so would wrongly bounce the free block bitmap to extra memory
when the number of blocks is exactly 32, which disagrees with
Z_MPOOL_LBIT_WORDS() that correctly returns 0 in that case.
In theory this means that the bug would cause an overflow of the free
block bitmap whenever one level has exactly 32 blocks. But right now
there is a separate bug, fixed separately, that over-sizes the extra
block bitmap, mitigating this one.
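A hedged illustration of the corrected comparison (the helper name is
hypothetical):

  #include <stdbool.h>
  #include <stddef.h>
  #include <zephyr/types.h>

  /* One free bit is needed per block; the bitmap fits inline when the
   * level needs no more bits than the unioned pointer provides.
   */
  static inline bool level_bits_fit_inline(size_t num_blocks)
  {
          return num_blocks <= 8 * sizeof(u32_t *); /* <=, not < */
  }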
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The block_fits() predicate was borked. It would check that a block
fits within the bounds of the whole heap. But that's not enough:
because of alignment changes between levels the sub-blocks may be
adjusted forward. It needs to fit inside the PARENT block that it was
split from.
What could happen at runtime is that the last subblocks of a
misaligned parent block would overlap memory from subsequent blocks,
or even run off the end of the heap. That's bad.
Change the API of block_fits() a little so it can extract the parent
region and do this properly.
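A hedged sketch of the corrected predicate (parameters are
illustrative; the real function derives the parent region from the
pool level data):

  #include <stdbool.h>
  #include <stddef.h>
  #include <zephyr/types.h>

  static bool block_fits(void *block, size_t block_sz,
                         void *parent, size_t parent_sz)
  {
          /* The sub-block must end within the parent block it was
           * split from, not merely within the whole heap buffer.
           */
          return ((u8_t *)block + block_sz) <=
                 ((u8_t *)parent + parent_sz);
  }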
Fixes #15279. Passes the test introduced in #16728 to demonstrate
what seems like the same issue.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In z_sys_mem_pool_block_alloc() the size of the first level block
allocation is rounded up to the next 4-byte boundary. This means one
or more of the trailing blocks could overlap the free block bitmap.
Let's consider this code from kernel.h:
#define K_MEM_POOL_DEFINE(name, minsz, maxsz, nmax, align) \
char __aligned(align) _mpool_buf_##name[_ALIGN4(maxsz * nmax) \
+ _MPOOL_BITS_SIZE(maxsz, minsz, nmax)]; \
The static pool allocation rounds up the product of maxsz and nmax,
not the size of individual blocks. If we have, say, maxsz = 10 and
nmax = 20,
the result of _ALIGN4(10 * 20) is 200. That's the offset at which the
free block bitmap will be located.
However, because z_sys_mem_pool_block_alloc() does this:
lsizes[0] = _ALIGN4(p->max_sz);
Individual level 0 blocks will have a size of 12 not 10. That means
the 17th block will extend up to offset 204, 18th block up to 216, 19th
block to 228, and 20th block to 240. So 4 out of the 20 blocks are
overflowing the static pool area and 3 of them are even located
completely outside of it.
In this example, we have only 20 blocks that can't be split so there is
no extra free block bitmap allocation beyond the bitmap embedded in the
sys_mem_pool_lvl structure. This means that memory corruption will
happen in whatever data is located alongside the _mpool_buf_##name
array. But even with, say, 40 blocks, or larger blocks, the extra bitmap
size would be small compared to the extent of the overflow, and it would
get corrupted too of course.
And the data corruption will happen even without allocating any memory
since z_sys_mem_pool_base_init() stores free_list pointer nodes into
those blocks, which in turn may get corrupted if that other data is
later modified instead.
Fixing this issue is simple: rounding on the static pool allocation is
"misparenthesized". Let's turn
_ALIGN4(maxsz * nmax)
into
_ALIGN4(maxsz) * nmax
But that's not sufficient.
In z_sys_mem_pool_base_init() we have:
size_t buflen = p->n_max * p->max_sz, sz = p->max_sz;
u32_t *bits = (u32_t *)((u8_t *)p->buf + buflen);
Considering the same parameters as above, here we're locating the
extra free block bitmap at offset `buflen`, which is 20 * 10 = 200,
again within the reach of the last 4 memory blocks. If the number of
blocks gets past
the size of the embedded bitmap, it will overlap memory blocks.
Also, the block_ptr() call used here to initialize the free block linked
list uses unrounded p->max_sz, meaning that it is initially not locating
dlist nodes within the same block boundaries as what is expected from
z_sys_mem_pool_block_alloc(). This opens the possibility for allocated
adjacent blocks to overwrite dlist nodes, leading to random crashes in
the future.
So a complete fix must round up p->max_sz here too.
Given that runtime usage of max_sz should always be rounded up, it is
then preferable to round it up once at compile time instead and avoid
further mistakes of that sort. The existing _ALIGN4() usages on
p->max_sz at run time are then redundant.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
If multithreading is disabled, thread_entry() never runs
since we cannot create threads; the non-multithreading case
was simply dead code.
Indicate to code coverage that CODE_UNREACHABLE should be
skipped.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
On 64-bit systems the most notable difference is due to longs and
pointers being 64-bit wide. Therefore there must be a distinction
between ints and longs. Similar to the prf.c case, this patch properly
implements the h, hh, l, ll and z length modifiers, along with some
small cleanups.
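Illustrative conversions that rely on these modifiers on LP64 targets,
where long, size_t and pointers are 64-bit:

  #include <limits.h>
  #include <sys/printk.h>

  void printk_width_example(void)
  {
          long l = LONG_MIN;
          unsigned long long ull = ULLONG_MAX;
          size_t z = sizeof(void *);

          /* %ld/%llu/%zu consume the correct argument width; %hd and
           * %hhu apply the matching narrowing conversions.
           */
          printk("%ld %llu %zu %hd %hhu\n",
                 l, ull, z, (short)-2, (unsigned char)200);
  }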
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Per guidelines, all statements should have braces around them. We do not
have a CI check for this, so a few went in unnoticed.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The struct json_obj_descr definition allocates only 2 bits for type
alignment. Instead of storing the alignment value minus 1 to encode 1,
2 or 4, let's store the alignment's shift value so that 1, 2, 4 or 8
can be encoded with the same 2 bits, accommodating 64-bit builds.
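A hedged sketch of the encoding idea (macro and field names are
illustrative, not the actual json.h definitions):

  #include <stddef.h>

  /* Store log2(alignment) in the 2-bit field, so 1, 2, 4 and 8 fit. */
  #define ALIGN_SHIFT_OF(type)  (__alignof__(type) == 8 ? 3 : \
                                 __alignof__(type) == 4 ? 2 : \
                                 __alignof__(type) == 2 ? 1 : 0)

  /* Decoding the stored shift back to an alignment in bytes: */
  static inline size_t descr_alignment(unsigned int align_shift)
  {
          return (size_t)1 << align_shift;
  }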
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Update the files which contain no license information with the
'Apache-2.0' SPDX license identifier. Many source files in the tree are
missing licensing information, which makes it harder for compliance
tools to determine the correct license.
By default all files without license information are under the default
license of Zephyr, which is Apache version 2.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Rename reserved function names in the subsys/ subdirectory, except
for the static _mod_pub_set and _mod_unbind functions in Bluetooth
Mesh cfg_srv.c, which clash with the similarly named global functions.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Permission management is no longer necessary; the former parameter
for the mutex is now simply ignored.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
For systems without userspace enabled, these work the same
as a k_mutex.
For systems with userspace, the sys_mutex may exist in user
memory. It is still tracked as a kernel object, but has an
underlying k_mutex that is looked up in the kernel object
table.
Future enhancements will optimize sys_mutex to not require
syscalls for uncontended sys_mutexes, using atomic ops
instead.
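A hedged usage sketch (API names per <sys/mutex.h>; exact signatures
may differ):

  #include <kernel.h>
  #include <sys/mutex.h>

  SYS_MUTEX_DEFINE(shared_lock); /* may be placed in user memory */

  void update_shared_state(void)
  {
          if (sys_mutex_lock(&shared_lock, K_FOREVER) == 0) {
                  /* critical section */
                  sys_mutex_unlock(&shared_lock);
          }
  }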
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
MISRA-C 8.10.2 defines essential operand types and how to handle them
through rules 10.1 .. 10.5. This commit adds a U suffix to unsigned
constants to avoid implicit casts, and makes if/while statements
evaluate a boolean expression to avoid other types being implicitly
converted to boolean.
MISRA-C rules 10.1, 10.2 and 10.3
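An illustrative example of the kind of change involved:

  /* Integer conditions become explicit comparisons against unsigned
   * constants carrying a U suffix.
   */
  static void drain(unsigned int count)
  {
          while (count != 0U) { /* was: while (count) */
                  count--;
          }
  }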
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>