Commit Graph

68 Commits

Author SHA1 Message Date
Ulf Magnusson bd6e04411e kconfig: Clean up header comments and make them consistent
Use this short header style in all Kconfig files:

    # <description>

    # <copyright>
    # <license>

    ...

Also change all <description>s from

    # Kconfig[.extension] - Foo-related options

to just

    # Foo-related options

It's clear enough that it's about Kconfig.

The <description> cleanup was done with this command, along with some
manual cleanup (capitalizing the first letter, etc.):

    git ls-files '*Kconfig*' | \
        xargs sed -i -E '1 s/#\s*Kconfig[\w.-]*\s*-\s*/# /'

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-11-04 17:31:27 -05:00
Nicolas Pitre bb7c2e82b1 mempool: remove redundant bit set/clear within loops
When small blocks are recombined to create a single block at a shallower
level, it is sufficient to remove those blocks from the free list. There
is no need to mark those small blocks as allocated in the bitmap.

This, in turn, removes the need to mark small blocks back as unallocated
when splitting up a big block, as they'll already be so marked.
Only the first small block needs to be marked allocated and the
remaining blocks only need to be added to the free list.

This makes the code smaller and more efficient, especially since those
removed bit manipulations were located within loops.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-10-04 13:42:59 -04:00
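A minimal sketch of the split path after this change (hypothetical names, not the actual Zephyr code): only the first sub-block is marked allocated, and its siblings simply join the free list with no bitmap writes inside the loop.

    /* Sketch: split one block into 4 sub-blocks at the next level. */
    static void *block_split(struct pool *p, int level, int block)
    {
        int first = block * 4;

        set_alloc_bit(p, level + 1, first);   /* only the returned block */
        for (int i = 1; i < 4; i++) {
            /* no bitmap writes here anymore: just queue the siblings */
            free_list_add(p, level + 1, first + i);
        }
        return block_ptr(p, level + 1, first);
    }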
Nicolas Pitre 1b193e9ece mempool: reverse free bit semantic
This turns the free-bit flag into an alloc-bit flag effectively
reversing its semantic. This is to make further changes more natural
and easier to understand.

No need to clear the alloc bits at init time as they're located in .bss
and all clear already.

The code remains functionally equivalent after this change.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-10-04 13:42:59 -04:00
Nicolas Pitre 2129937d3d realloc(): move mempool internal knowledge out of generic lib code
The realloc function was a bit too intimate with the mempool accounting.
Abstract that knowledge away and move it where it belongs.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-09-30 10:57:24 -07:00
Anas Nashif 50d5e37b8a tests: move util test to be unit tests
Move to a unit test; no need to build this for every platform we have.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-09-27 15:23:26 -04:00
Kim Sekkelund 0450263393 Bluetooth: Host: Remove printk dependency from settings
Some modules use snprintk to format the settings keys. Unfortunately
snprintk is tied to printk, which is very large for some embedded
systems.
To be able to have settings enabled without also enabling printk
support, change creation of settings key strings to use bin2hex, strlen
and strcpy instead.
A utility function that renders a byte value in decimal is added as
u8_to_dec in lib/os/dec.c.
Add a new Kconfig setting, BT_SETTINGS_USE_PRINTK.

Signed-off-by: Kim Sekkelund <ksek@oticon.com>
2019-09-25 17:36:39 +02:00
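A sketch of what such a helper can look like (the exact signature in lib/os/dec.c may differ; this version writes no NUL terminator and returns the digit count):

    /* Sketch: render a byte as decimal without pulling in printk. */
    u8_t u8_to_dec(char *buf, u8_t buflen, u8_t value)
    {
        u8_t divisor = 100U;
        u8_t num_digits = 0U;

        while (buflen > 0U && divisor > 0U) {
            u8_t digit = value / divisor;

            if (digit != 0U || divisor == 1U || num_digits != 0U) {
                *buf++ = (char)('0' + digit);
                num_digits++;
                buflen--;
            }
            value -= digit * divisor;
            divisor /= 10U;
        }

        return num_digits;
    }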
Peter A. Bigot 55ace13c32 lib/timeutil: avoid implementation-defined behavior
The algorithm for converting broken-down civil time to seconds in the
POSIX epoch time scale would produce undefined behavior on a toolchain
that uses a 32-bit time_t in cases where the referenced time could not
be represented exactly.

However, there are use cases in Zephyr for civil time conversions
outside the 32-bit representable range of 1901-12-13T20:45:52Z through
2038-01-19T03:14:07Z inclusive.

Add new API that specifically returns a 64-bit signed seconds count, and
revise the existing API to detect out-of-range values and convert them
to a diagnosable error.

Closes #18465

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-09-19 20:49:51 -04:00
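The resulting API split looks roughly like this (see include/sys/timeutil.h for the authoritative declarations):

    /* 64-bit conversion: the result is always representable. */
    s64_t timeutil_timegm64(const struct tm *tm);

    /* time_t conversion: with a 32-bit time_t, out-of-range inputs now
     * fail with a diagnosable error instead of invoking undefined
     * behavior.
     */
    time_t timeutil_timegm(const struct tm *tm);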
Peter A. Bigot cc1594a59a lib/timeutil: support const correctness for pointer parameter
timeutil_timegm() does not modify the passed structure, so it should
indicate that in the signature (even though the GNU extension does not).

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-09-19 20:49:51 -04:00
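In other words, the parameter gains a const qualifier:

    time_t timeutil_timegm(const struct tm *tm);   /* was: struct tm *tm */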
Andy Ross 643701aaf8 kernel: syscalls: Whitespace fixups
The semi-automated API changes weren't checkpatch aware.  Fix up
whitespace warnings that snuck into the previous patches.  Really this
should be squashed, but that's somewhat difficult given the structure
of the series.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Andy Ross 6564974bae userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words.  So
passing wider values requires splitting them into two registers at
call time.  This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.

Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths.  So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.

Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types.  So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*().  The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function.  It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.

This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs.  Future commits will port the less testable code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
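A sketch of the split/unsplit idea for one 64-bit argument on a 32-bit arch (hypothetical names; the real code is emitted by gen_syscalls.py):

    /* Caller side: the wide value travels as two machine words. */
    u32_t lo = (u32_t)(value & 0xffffffffU);
    u32_t hi = (u32_t)(value >> 32);
    invoke_syscall2(lo, hi, K_SYSCALL_FOO);

    /* Generated z_mrsh_foo() side: reassemble the words, then call the
     * typesafe verification function with the original signature.
     */
    u64_t value = ((u64_t)hi << 32) | lo;
    return z_vrfy_foo(value);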
Wentong Wu 715369350d lib: os: add sys_sem data type
For systems with userspace, sys_sem exists in user memory, working as
a counting semaphore for user mode threads. The implementation of
sys_sem is based on k_futex, and the majority of the synchronization
operations are performed in user mode to reduce the number of system
calls. For systems without userspace enabled, sys_sem behaves like
k_sem.

Fixes: #15139.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-07-24 10:12:25 -07:00
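Typical usage mirrors k_sem (consult include/sys/sem.h for the exact signatures; names here follow that pattern):

    static struct sys_sem my_sem;

    sys_sem_init(&my_sem, 0, 1);   /* initial count 0, limit 1 */

    /* producer: usually no syscall needed on the uncontended path */
    sys_sem_give(&my_sem);

    /* consumer */
    sys_sem_take(&my_sem, K_FOREVER);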
Andrew Boie 39425eaada assert: generate oops if invoked from usermode
User mode isn't allowed to generate a panic and this would
lead to a confusing privilege violation exception.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-20 08:29:39 -04:00
Peter A. Bigot 9d25b671bc sys: timeutil: add module
Add a generic API to provide the inverse operation for gmtime and as a
home for future generic time-related functions that are not in POSIX.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-07-17 14:04:44 +02:00
Nicolas Pitre 629bd85612 mempool: significant reduction of memory waste
The mempool allocator implementation recursively breaks a memory block
into 4 sub-blocks until it minimally fits the requested memory size.

The size of each sub-block is rounded up to the next word boundary to
preserve word alignment on the returned memory, and this is a problem.

Let's consider max_sz = 2072 and n_max = 1. That's our level 0.

At level 1, we get one level-0 block split into 4 sub-blocks whose size
is WB_UP(2072 / 4) = 520. However 4 * 520 = 2080, so we must discard the
4th sub-block since it doesn't fit inside our 2072-byte parent block.

We're down to 3 * 520 = 1560 bytes of usable memory.
Our memory usage efficiency is now 1560 / 2072 = 75%.

At level 2, we get 3 level-1 blocks, and each of them may be split
into 4 sub-blocks whose size is WB_UP(520 / 4) = 132. But 4 * 132 = 528,
so the 4th sub-block has to be discarded again.

We're down to 9 * 132 = 1188 bytes of usable memory.
Our memory usage efficiency is now 1188 / 2072 = 57%.

At level 3, we get 9 level-2 blocks, each split into sub-blocks of
WB_UP(132 / 4) = 36 bytes. Again 4 * 36 = 144, so the 4th sub-block is
discarded.

We're down to 27 * 36 = 972 bytes of usable memory.
Our memory usage efficiency is now 972 / 2072 = 47%.

What should be done instead is to round sub-block sizes _down_,
not _up_. This way, sub-blocks still align to word boundaries, and
they always fit within their parent block, as the total size can
no longer exceed the initial size.

Using the same max_sz = 2072 would yield a memory usage efficiency of
99% at level 3, so let's demo a worst case of 2044 instead.

Level 1: 4 sub-blocks of WB_DN(2044 / 4) = 508 bytes.
We're down to 4 * 508 = 2032 bytes of usable memory.
Our memory usage efficiency is now 2032 / 2044 = 99%.

Level 2: 4 * 4 sub-blocks of WB_DN(508 / 4) = 124 bytes.
We're down to 16 * 124 = 1984 bytes of usable memory.
Our memory usage efficiency is now 1984 / 2044 = 97%.

Level 3: 16 * 4 sub-blocks of WB_DN(124 / 4) = 28 bytes.
We're down to 64 * 28 = 1792 bytes of usable memory.
Our memory usage efficiency is now 1792 / 2044 = 88%.

Conclusion: if max_sz is a power of 2 then we get 100% efficiency at
all levels in both cases. But if not, then the rounding-up method has
a far worse degradation curve than the rounding-down method, wasting
more than 50% of memory in some cases.

So let's round sub-block sizes down rather than up, and remove
block_fits(), whose purpose was to identify sub-blocks that didn't
fit within their parent block and which is now useless.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-07-16 14:21:21 -07:00
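The arithmetic above can be reproduced with a few lines of standalone C (a sketch assuming 4-byte words, as on 32-bit targets):

    #include <stdio.h>

    #define WB_DN(x) ((x) & ~3U)   /* round down to a 4-byte boundary */

    int main(void)
    {
        unsigned int sz = 2044U, blocks = 1U;

        for (int level = 1; level <= 3; level++) {
            sz = WB_DN(sz / 4U);
            blocks *= 4U;
            printf("level %d: %u * %u = %u bytes usable\n",
                   level, blocks, sz, blocks * sz);
        }

        return 0;
    }

This prints 2032, 1984 and 1792 usable bytes for levels 1 to 3, matching the figures above.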
Joakim Andersson 7a93e948a9 kernel: lib: Add convert functions for hex strings and binary arrays
Move the duplicated hex2bin here and add a bin2hex function so that
applications can use the functions and avoid code duplication.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-07-16 12:44:18 +02:00
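Roughly (see include/sys/util.h for the exact prototypes):

    u8_t mac[6] = { 0xde, 0xad, 0xbe, 0xef, 0x00, 0x01 };
    char str[13];

    bin2hex(mac, sizeof(mac), str, sizeof(str));   /* -> "deadbeef0001" */
    hex2bin(str, 12, mac, sizeof(mac));            /* and back again */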
Nicolas Pitre 39cd2ebef7 malloc: make sure returned memory is properly aligned
The accounting data stored at the beginning of a memory block used by
malloc must push the returned memory address to a word boundary. This
is already the case on 32-bit systems, but not on 64-bit systems where
e.g. struct k_mem_block_id still has a size of 4.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-07-03 14:17:29 -07:00
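Conceptually, the fix rounds the accounting header up to a word boundary before offsetting the returned pointer (a sketch with hypothetical helper names, not the literal code):

    /* Reserve a word-aligned slot for the block id, then hand the
     * caller the memory that follows it.
     */
    void *blk = pool_alloc(pool,
                           WB_UP(sizeof(struct k_mem_block_id)) + size);

    *(struct k_mem_block_id *)blk = id;   /* stash the accounting data */
    return (u8_t *)blk + WB_UP(sizeof(struct k_mem_block_id));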
Nicolas Pitre fc4ca923bb mempool: fully use the inline free block bitmap on 64-bit targets
The "bits" field in struct sys_mem_pool_lvl is unioned with a pointer.
That leaves more space for inline free bits on 64-bit targets.
Let's declare it as an array and adjust its size based on the pointer
size. On 32-bit targets the generated code remains identical.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-07-02 19:41:20 -07:00
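The declaration ends up along these lines, with the array sized to exactly overlay the pointer (see include/sys/mempool_base.h):

    struct sys_mem_pool_lvl {
        union {
            u32_t *bits_p;                   /* out-of-line bitmap */
            u32_t bits[sizeof(u32_t *) / 4]; /* 1 word on 32-bit,
                                              * 2 words on 64-bit */
        };
        sys_dlist_t free_list;
    };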
Nicolas Pitre cf974371fb mempool: make alignment/rounding 64-bit compatible
Minimum alignment and rounding must be done on a word boundary. Let's
replace _ALIGN4() with WB_UP(), which is equivalent on 32-bit targets
and 64-bit aware.

Also enforce a minimal alignment on the memory pool. This makes a
difference mostly on 64-bit targets, where the widely used 4-byte
alignment is not sufficient.

The _ALIGN4() macro has no users left so it is removed.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-07-02 19:41:20 -07:00
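For reference, the word-boundary helpers are defined in terms of the pointer size, roughly:

    #define WB_UP(x) ROUND_UP(x, sizeof(void *))   /* next word boundary */
    #define WB_DN(x) ROUND_DOWN(x, sizeof(void *)) /* prev word boundary */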
Andrew Boie d045bd7673 lib: os: exclude z_arch_printk_char_out()
This function doesn't do anything and only exists so that it can be
overridden later; exclude it from coverage reports.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-06-28 20:04:29 -07:00
Andrew Boie 05212e823f lib: os: fix vsnprintk coverage
vsnprintk() had no test coverage. Simply adjust snprintk() to use it
instead of duplicating the logic.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-06-28 20:04:29 -07:00
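Adjusting it amounts to the classic va_list forwarding pattern, so both entry points share one body:

    int snprintk(char *str, size_t size, const char *fmt, ...)
    {
        va_list ap;
        int ret;

        va_start(ap, fmt);
        ret = vsnprintk(str, size, fmt, ap);
        va_end(ap);

        return ret;
    }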
Anas Nashif a2fd7d70ec cleanup: include/: move misc/util.h to sys/util.h
move misc/util.h to sys/util.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
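Each of these shims (this one and those in the following commits) is just a few lines, e.g.:

    /* include/misc/util.h -- compatibility shim */
    #ifndef CONFIG_COMPAT_INCLUDES
    #warning "This header file has moved, include <sys/util.h> instead."
    #endif

    #include <sys/util.h>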
Anas Nashif d222553931 cleanup: include/: move misc/speculation.h to sys/speculation.h
move misc/speculation.h to sys/speculation.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 1859244b64 cleanup: include/: move misc/rb.h to sys/rb.h
move misc/rb.h to sys/rb.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 9ab2a56751 cleanup: include/: move misc/printk.h to sys/printk.h
move misc/printk.h to sys/printk.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 0c9e280547 cleanup: include/: move misc/mutex.h to sys/mutex.h
move misc/mutex.h to sys/mutex.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 08ee8b09ba cleanup: include/: move misc/mempool.h to sys/mempool.h
move misc/mempool.h to sys/mempool.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 1ed300b318 cleanup: include/: move misc/mempool_base.h to sys/mempool_base.h
move misc/mempool_base.h to sys/mempool_base.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 8be9f5de03 cleanup: include/: move misc/fdtable.h to sys/fdtable.h
move misc/fdtable.h to sys/fdtable.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 5eb90ec169 cleanup: include/: move misc/__assert.h to sys/__assert.h
move misc/__assert.h to sys/__assert.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 7435e5e089 cleanup: include/: move ring_buffer.h to sys/ring_buffer.h
move ring_buffer.h to sys/ring_buffer.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 0abdacf3a4 cleanup: include/: move json.h to data/json.h
move json.h to data/json.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 4e48e87fd2 cleanup: include/: move crc.h to sys/crc.h
move crc.h to sys/crc.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif bd977d06f8 cleanup: include/: move base64.h to sys/base64.h
move base64.h to sys/base64.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Nicolas Pitre 1140bd090c mempool: properly use the inline free block bitmap
The free block bitmap uses either extra memory specified by a pointer
in struct sys_mem_pool_lvl or the space occupied by that pointer
directly if the bitmap length is small enough to fit it.

But the test is wrong. The inline bitmap should be used if the number
of required bits is smaller than or _equal_ to the pointer size. Not
doing so would wrongly bounce the free block bitmap to extra memory
when the number of blocks is exactly 32, which is in disagreement with
Z_MPOOL_LBIT_WORDS(), which correctly returns 0 in that case.

In theory this bug would cause an overflow of the free block bitmap
whenever one level has exactly 32 blocks. But right now there is a
separate bug, fixed separately, that over-sizes the extra block bitmap,
mitigating this one.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-25 23:24:05 -04:00
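The fix boils down to the comparison in the bitmap-placement test, roughly (a sketch, not the literal code):

    /* Use the inline bitmap when all the bits fit within the space of
     * the pointer itself (32 bits here, 64 on a 64-bit target).
     */
    if (nblocks <= sizeof(u32_t *) * 8) {   /* was: <, off by one */
        bits = lvl->bits;                   /* inline storage */
    } else {
        bits = lvl->bits_p;                 /* extra memory */
    }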
Andy Ross d0490fe9f9 lib/os/mempool: Fix corruption case with block splitting
The block_fits() predicate was borked.  It would check that a block
fits within the bounds of the whole heap.  But that's not enough:
because of alignment changes between levels the sub-blocks may be
adjusted forward.  It needs to fit inside the PARENT block that it was
split from.

What could happen at runtime is that the last subblocks of a
misaligned parent block would overlap memory from subsequent blocks,
or even run off the end of the heap.  That's bad.

Change the API of block_fits() a little so it can extract the parent
region and do this properly.

Fixes #15279.  Passes the test introduced in #16728 to demonstrate
what seems like the same issue.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-06-25 18:51:08 -07:00
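Conceptually, the predicate goes from a heap-bounds check to a parent-bounds check (hypothetical names, sketch only):

    /* before: only checked against the whole heap */
    return addr >= heap_start && addr + size <= heap_end;

    /* after: the sub-block must also lie within the parent block it
     * was split from, since alignment can shift sub-blocks forward
     */
    parent = parent_block_bounds(p, level, block, &parent_sz);
    return addr >= parent && addr + size <= parent + parent_sz;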
Nicolas Pitre 465b2cf31b mempool: fix corruption of the free block bitmap and beyond
In z_sys_mem_pool_block_alloc() the size of the first level block
allocation is rounded up to the next 4-byte boundary. This means one
or more of the trailing blocks could overlap the free block bitmap.

Let's consider this code from kernel.h:

  #define K_MEM_POOL_DEFINE(name, minsz, maxsz, nmax, align) \
       char __aligned(align) _mpool_buf_##name[_ALIGN4(maxsz * nmax) \
                              + _MPOOL_BITS_SIZE(maxsz, minsz, nmax)]; \

The static pool allocation rounds up the product of maxsz and nmax, not
the size of individual blocks. If we have, say, maxsz = 10 and nmax = 20,
the result of _ALIGN4(10 * 20) is 200. That's the offset at which the
free block bitmap will be located.

However, because z_sys_mem_pool_block_alloc() does this:

        lsizes[0] = _ALIGN4(p->max_sz);

Individual level 0 blocks will have a size of 12, not 10. That means
the 17th block will extend up to offset 204, 18th block up to 216, 19th
block to 228, and 20th block to 240. So 4 out of the 20 blocks are
overflowing the static pool area and 3 of them are even located
completely outside of it.

In this example, we have only 20 blocks that can't be split so there is
no extra free block bitmap allocation beyond the bitmap embedded in the
sys_mem_pool_lvl structure. This means that memory corruption will
happen in whatever data is located alongside the _mpool_buf_##name
array. But even with, say, 40 blocks, or larger blocks, the extra bitmap
size would be small compared to the extent of the overflow, and it would
get corrupted too of course.

And the data corruption will happen even without allocating any memory
since z_sys_mem_pool_base_init() stores free_list pointer nodes into
those blocks, which in turn may get corrupted if that other data is
later modified instead.

Fixing this issue is simple: rounding on the static pool allocation is
"misparenthesized". Let's turn

	_ALIGN4(maxsz * nmax)

into

	_ALIGN4(maxsz) * nmax

But that's not sufficient.

In z_sys_mem_pool_base_init() we have:

        size_t buflen = p->n_max * p->max_sz, sz = p->max_sz;
        u32_t *bits = (u32_t *)((u8_t *)p->buf + buflen);

Considering the same parameters as above, here we're locating the extra
free block bitmap at offset `buflen`, which is 20 * 10 = 200, again
within the reach of the last 4 memory blocks.
the size of the embedded bitmap, it will overlap memory blocks.

Also, the block_ptr() call used here to initialize the free block linked
list uses unrounded p->max_sz, meaning that it is initially not locating
dlist nodes within the same block boundaries as what is expected from
z_sys_mem_pool_block_alloc(). This opens the possibility for allocated
adjacent blocks to overwrite dlist nodes, leading to random crashes in
the future.

So a complete fix must round up p->max_sz here too.

Given that runtime usage of max_sz should always be rounded up, it is
preferable to round it up once at compile time instead and avoid
further mistakes of that sort. The existing _ALIGN4() usages on
p->max_sz at run time are then redundant.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-24 12:10:09 -07:00
Andrew Boie db84a76379 lib: os: remove dead code
If multithreading is disabled, thread_entry() never runs
since we cannot create threads; the non-multithreading case
was simply dead code.

Indicate to code coverage that CODE_UNREACHABLE should be
skipped.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-06-18 09:08:01 -04:00
Nicolas Pitre 2b32059a61 printk: make it 64-bit compatible
On 64-bit systems the most notable difference is due to longs and
pointers being 64-bit wide. Therefore there must be a distinction
between ints and longs. Similar to the prf.c case, this patch properly
implements the h, hh, l, ll and z length modifiers, along with some
small cleanups.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-17 10:28:44 -07:00
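With the modifiers in place, printk distinguishes operand widths the usual way, e.g. (variable names illustrative):

    printk("%d\n", some_int);           /* int */
    printk("%ld\n", some_long);         /* 64-bit on LP64 targets */
    printk("%lld\n", some_long_long);   /* always 64-bit */
    printk("%zu\n", sizeof(buf));       /* size_t */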
Anas Nashif 4c32258606 style: add braces around if/while statements
Per guidelines, all statements should have braces around them. We do not
have a CI check for this, so a few went in unnoticed.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-06 15:20:21 +02:00
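I.e. per the coding guidelines:

    if (ret < 0) {
        return ret;   /* braces even around single statements */
    }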
Nicolas Pitre 4323d381e7 json: make it 64-bit compatible
The struct json_obj_descr definition allocates only 2 bits for type
alignment. Instead of storing the alignment value minus 1 to encode 1,
2, or 4, let's store the alignment's shift (log2) value, so that 1, 2,
4 or 8 can be encoded with the same 2 bits to accommodate 64-bit
builds.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-05 07:47:41 -04:00
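So the 2-bit field stores log2(alignment) rather than alignment - 1, along these lines (field name illustrative):

    /* encode: alignments 1, 2, 4, 8 become shifts 0, 1, 2, 3 */
    descr->align_shift = __builtin_ctz(align);

    /* decode */
    align = 1 << descr->align_shift;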
Anas Nashif 3ae52624ff license: cleanup: add SPDX Apache-2.0 license identifier
Update the files which contain no license information with the
'Apache-2.0' SPDX license identifier.  Many source files in the tree are
missing licensing information, which makes it harder for compliance
tools to determine the correct license.

By default all files without license information are under the default
license of Zephyr, which is Apache version 2.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-04-07 08:45:22 -04:00
Patrik Flykt 4aa48833d8 subsystems: Rename reserved function names
Rename reserved function names in the subsys/ subdirectory except
for static _mod_pub_set and _mod_unbind functions in bluetooth mesh
cfg_srv.c which clash with the similarly named global functions.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-04-03 17:31:00 -04:00
Andrew Boie c8aee7b413 sys_mem_pool: use sys_mutex
Permission management is no longer necessary; the former
parameter for the mutex is now simply ignored.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-03 13:47:45 -04:00
Andrew Boie f0835674a3 lib: os: add sys_mutex data type
For systems without userspace enabled, these work the same
as a k_mutex.

For systems with userspace, the sys_mutex may exist in user
memory. It is still tracked as a kernel object, but has an
underlying k_mutex that is looked up in the kernel object
table.

Future enhancements will optimize sys_mutex to not require
syscalls for uncontended sys_mutexes, using atomic ops
instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-03 13:47:45 -04:00
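Usage tracks k_mutex (see include/sys/mutex.h for the exact API):

    SYS_MUTEX_DEFINE(my_mutex);   /* may live in user memory */

    if (sys_mutex_lock(&my_mutex, K_FOREVER) == 0) {
        /* ... critical section ... */
        sys_mutex_unlock(&my_mutex);
    }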
Pawel Dunaj 2189d9b56d lib: mempool: Alloc and break must happen atomically
This fixes a regression caused by 41e90630d.

Signed-off-by: Pawel Dunaj <pawel.dunaj@nordicsemi.no>
2019-04-03 12:36:36 -04:00
Patrik Flykt 21358baa72 all: Update unsigned 'U' suffix due to multiplication
As the multiplication rule was updated, new unsigned 'U' suffixes are
added in the code.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Patrik Flykt 24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
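For example (illustrative snippet):

    #define RX_TIMEOUT (10U * 1000U)   /* unsigned operands get a U suffix */

    if (len > 0U) {   /* len is unsigned, so compare against 0U */
        process(buf, len);
    }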
Flavio Ceolin c2b25151cb lib: printk: Make if/iterations evaluate boolean operands
MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
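Rule 14.4 requires controlling expressions to be essentially boolean, e.g. (illustrative):

    if (buf != NULL) {          /* not: if (buf) */
        len = strlen(buf);
    }

    while (remaining != 0U) {   /* not: while (remaining) */
        remaining--;
    }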
Flavio Ceolin 44fc55e209 lib: crc16_sw: Add missing U to unsigned constants
Add U to unsigned integer constants to avoid implicit casts.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
Flavio Ceolin ce696e9aa2 lib: rb: Make operands have an appropriate essential type
MISRA-C 8.10.2 defines essential operand types and how to handle them
through rules 10.1 .. 10.5. This commit adds a U to unsigned constants
to avoid implicit casts, and makes if/while statements evaluate a
boolean to avoid other types being cast to boolean.

MISRA-C rules 10.1, 10.2 and 10.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00