Commit Graph

24 Commits

Author SHA1 Message Date
Andy Ross 46dc8a0813 include: Add documentation for spinlocks
The kernel spinlock API didn't have proper API documentation.  Fix
that.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-05-08 10:46:44 +02:00
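
A minimal usage sketch of the API documented above (lock and function
names are illustrative, not taken from the patch; the <kernel.h> path
is the one in use at the time):

    #include <kernel.h>

    static struct k_spinlock data_lock;   /* illustrative lock */
    static int shared_counter;

    void counter_increment(void)
    {
        k_spinlock_key_t key = k_spin_lock(&data_lock);

        shared_counter++;                 /* protected section */
        k_spin_unlock(&data_lock, key);
    }
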
Andy Ross 0dd83b8c2e kernel: Add k_heap synchronized memory allocator
This adds a k_heap data structure, a synchronized wrapper around a
sys_heap memory allocator.  As of this patch, it is an alternative
implementation to k_mem_pool() with somewhat better efficiency and
performance and more conventional (and convenient) behavior.

Note that this commit involves some header motion to break dependencies.
The declaration for struct k_spinlock moves to kernel_structs.h, and a
bunch of includes were trimmed.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-04-14 10:05:55 -07:00
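
A hedged sketch of the k_heap usage this commit enables (heap name and
sizes are illustrative; the macro and function names follow the API
described in the commit):

    #include <kernel.h>

    K_HEAP_DEFINE(my_heap, 1024);   /* 1 KiB statically defined heap */

    void heap_demo(void)
    {
        /* allocate 64 bytes without blocking */
        void *block = k_heap_alloc(&my_heap, 64, K_NO_WAIT);

        if (block != NULL) {
            /* ... use the block ... */
            k_heap_free(&my_heap, block);
        }
    }
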
Oleg Zhurakivskyy b1e1f64d14 global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()
Replace all occurrences of BUILD_ASSERT_MSG() with BUILD_ASSERT()
as a result of merging BUILD_ASSERT() and BUILD_ASSERT_MSG().

Signed-off-by: Oleg Zhurakivskyy <oleg.zhurakivskyy@intel.com>
2020-03-31 07:18:06 +02:00
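
An illustrative before/after of the replacement (the asserted condition
and message are made up for the example):

    /* before the merge */
    BUILD_ASSERT_MSG(sizeof(void *) == 4, "32-bit pointers expected");

    /* after the merge: BUILD_ASSERT() accepts the optional message */
    BUILD_ASSERT(sizeof(void *) == 4, "32-bit pointers expected");
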
Carles Cufi 4b37a8f3a4 Revert "global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()"
This reverts commit 8739517107.

Pull Request #23437 was merged by mistake with an invalid manifest.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-03-19 18:45:13 +01:00
Oleg Zhurakivskyy 8739517107 global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()
Replace all occurrences of BUILD_ASSERT_MSG() with BUILD_ASSERT()
as a result of merging BUILD_ASSERT() and BUILD_ASSERT_MSG().

Signed-off-by: Oleg Zhurakivskyy <oleg.zhurakivskyy@intel.com>
2020-03-19 15:47:53 +01:00
Andrew Boie c1fdf98ba5 kernel: show what spinlock was used incorrectly
Also helps identify corruption cases where the spinlock pointer
used wasn't actually a spinlock.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 10:17:16 -05:00
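
A hedged sketch of the kind of assertion this enables; the exact
message text and helper name are assumptions, not quotes from the
patch:

    /* printing the lock pointer makes it possible to see which lock
     * (or corrupted pointer) tripped the check
     */
    __ASSERT(z_spin_lock_valid(l), "Recursive spinlock %p", l);
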
Danny Oerndrup c9d78401cc spinlock: Make SPIN_VALIDATE a Kconfig option.
SPIN_VALIDATE is, as before, enabled by default when there are fewer
than 4 CPUs and either no flash or a flash size greater than 32 kB.

Small targets that need asserts enabled can now choose whether to
enable spinlock validation, and thereby decide whether the added
overhead is acceptable.

Signed-off-by: Danny Oerndrup <daor@demant.com>
2019-12-20 19:51:16 -05:00
Andrew Boie 4f77c2ad53 kernel: rename z_arch_ to arch_
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.

This allows our documentation generation to create
online documentation for this set of interfaces,
which is worth treating in a more formal way anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
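
An illustrative before/after for one interface affected by the rename:

    unsigned int z_arch_irq_lock(void);   /* old, private z_arch_* namespace */
    unsigned int arch_irq_lock(void);     /* new, documented arch_* namespace */
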
Anas Nashif 529791dff7 ztest: add missing headers
Recent changes to architecture headers did not address ztest headers due
to this bug in sanitycheck. Fixing them now.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-10-23 10:47:22 -04:00
Erwin Rol e6ffb3fdc4 spinlock: Make sure C and C++ have the same sizeof(k_spinlock) value
If CONFIG_SMP and SPIN_VALIDATE are both not defined, the k_spinlock
struct has no members. The result is that in C the sizeof value of
struct k_spinlock is 0, while in C++ it is 1.

This size difference causes problems when the k_spinlock
is embedded into another struct like k_msgq, because C and
C++ will have different ideas on the offsets of the members
that come after the k_spinlock member.

To prevent this, we add a 1-byte dummy member to k_spinlock
when the user selects C++ support and k_spinlock would
otherwise be empty.

Signed-off-by: Erwin Rol <erwin@erwinrol.com>
2019-09-16 14:34:24 -05:00
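
A hedged sketch of the resulting struct layout (the guard symbols and
field names are assumptions based on the commit description):

    struct k_spinlock {
    #ifdef CONFIG_SMP
        atomic_t locked;
    #endif
    #ifdef CONFIG_SPIN_VALIDATE
        uintptr_t thread_cpu;   /* owner bookkeeping for validation */
    #endif
    #if defined(CONFIG_CPLUSPLUS) && !defined(CONFIG_SMP) && \
        !defined(CONFIG_SPIN_VALIDATE)
        /* C++ gives an empty struct size 1 while C gives it size 0;
         * a dummy byte makes both languages agree on member offsets.
         */
        char _unused;
    #endif
    };
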
Anas Nashif 5eb90ec169 cleanup: include/: move misc/__assert.h to sys/__assert.h
move misc/__assert.h to sys/__assert.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif e1e05a2eac cleanup: include/: move atomic.h to sys/atomic.h
move atomic.h to sys/atomic.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
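
An illustrative effect of this and the preceding header move; the old
paths keep working through the compatibility shims:

    #include <sys/__assert.h>   /* was <misc/__assert.h> */
    #include <sys/atomic.h>     /* was <atomic.h> */
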
Nicolas Pitre 0b5d9f71f2 thread_cpu: make it 64-bit compatible
This stores a combination of a pointer and a CPU number held in the
pointer's low 2 bits. On 64-bit systems, the pointer part won't fit
in an int. Let's use uintptr_t for this purpose.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-05-30 09:42:23 -04:00
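
A hedged sketch of the pointer/CPU packing described above (variable
names and the 2-bit mask are illustrative):

    /* pack: pointer in the high bits, CPU id in the low 2 bits */
    uintptr_t thread_cpu = (uintptr_t)thread | (uintptr_t)cpu_id;

    /* unpack, working on both 32- and 64-bit targets */
    struct k_thread *owner = (struct k_thread *)(thread_cpu & ~(uintptr_t)3);
    int cpu = (int)(thread_cpu & 3);
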
Flavio Ceolin 625ac2e79f spinlock: Change function signature to return bool
Functions z_spin_lock_valid and z_spin_unlock_valid are essentially
boolean functions, just change their signature to return a bool instead
of an integer.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
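
The resulting declarations, roughly (argument types follow the existing
functions; previously both returned int):

    #include <stdbool.h>

    struct k_spinlock;

    bool z_spin_lock_valid(struct k_spinlock *l);
    bool z_spin_unlock_valid(struct k_spinlock *l);
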
Andy Ross f37e0c6e4d kernel/spinlock: Fix race in spinlock validation
The k_spin_lock() validation was setting the new owner of the spinlock
BEFORE the actual lock was taken, so it could race against other
processors trying the same thing.  Split the modification step out
into a separate function that can be called after we affirmatively
have the lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
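
A hedged sketch of the corrected ordering; function and field names are
assumptions based on the description, not a quote of the patch:

    k_spinlock_key_t k_spin_lock(struct k_spinlock *l)
    {
        k_spinlock_key_t k;

        k.key = arch_irq_lock();
    #ifdef CONFIG_SPIN_VALIDATE
        __ASSERT(z_spin_lock_valid(l), "Recursive spinlock %p", l);
    #endif
    #ifdef CONFIG_SMP
        while (!atomic_cas(&l->locked, 0, 1)) {
            /* spin until the lock is ours */
        }
    #endif
    #ifdef CONFIG_SPIN_VALIDATE
        z_spin_lock_set_owner(l);   /* only after the lock is really held */
    #endif
        return k;
    }
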
Patrik Flykt 4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Andy Ross 9c2c115716 kernel/spinlock: Predicate spinlock validation on flash size
The spinlock validation isn't super lightweight -- it adds only a few
tens of bytes per call, but there are a LOT of locking calls.  On
smaller platforms with 32 kB of flash, we're bumping into code size
limits on the bigger tests (tests/kernel/poll is a particular
offender).

Check the declared flash size before enabling it.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-27 14:22:06 -08:00
Andy Ross fb505b3cfd spinlock: Support ztest mocking
Spinlocks are built on top of the arch-provided _arch_irq_un/lock()
calls.  But those aren't stubbed by the mocking layer, and as ztest
isn't an "arch" there's no obvious place to put the stubs.  Handle
them in spinlock.h.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross 5aa7460e5c kernel/spinlock: Move validation out of header inlines
The validation checking recently added to spinlocks is useful, but
requires kernel-internals like _current and _current_cpu in a header
context that tends to be needed before those are declared (or where we
don't want them declared), and is causing big header dependency
headaches.

Move it to C code; it's just a validation tool, not a performance
feature.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross aa6e21c24c kernel: Split _Swap() API into irqlock and spinlock variants
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock.  The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments.  The former will be going away once
existing users (not that many!  Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.

Obviously on uniprocessor setups, these produce identical code.  But
SMP requires that the correct API be used to maintain the global lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
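
Illustrative declarations of the two variants described above (return
type and exact parameter types are assumptions):

    /* legacy path: releases/restores an irq_lock() key across the swap */
    int _Swap_irqlock(unsigned int key);

    /* new form: atomically releases the spinlock and restores its key */
    int _Swap(struct k_spinlock *lock, k_spinlock_key_t key);
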
Andy Ross 4ff2dfce09 kernel/spinlock: Force inlining
Something is going wrong with code generation here, potentially the
inline assembly generated by _arch_irq_un/lock(), and these calls are
not being inlined by gcc.  So what should be a ~3 instruction sequence
on most uniprocessor architectures is turning into 8-20 cycles worth
of work to implement the API as written.

Use an ALWAYS_INLINE, which is sort of ugly semantically but produces
much better code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 15:57:21 -05:00
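
A hedged sketch of the uniprocessor fast path once inlining is forced
(SMP and validation paths omitted; the macro is Zephyr's ALWAYS_INLINE):

    static ALWAYS_INLINE k_spinlock_key_t k_spin_lock(struct k_spinlock *l)
    {
        k_spinlock_key_t k;

        ARG_UNUSED(l);
        k.key = _arch_irq_lock();   /* collapses to a few instructions inline */
        return k;
    }
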
Andy Ross 7367b84f8e kernel/spinlock: Augment runtime validation
There was an existing validation layer in the spinlock implementation,
but it was only enabled when both SMP and CONFIG_DEBUG were enabled,
which meant that nothing was using it.  Replace it with a more
elaborate framework that ensures that every lock taken is not already
taken by the current CPU and is released on the same CPU by the same
thread.

This catches the much more common goof of locking a spinlock
recursively, which would "work" on uniprocessor setups but have the
side effect of releasing the lock prematurely at the end of the inner
lock.  We've done that in two spots already.

Note that this patch causes k_spinlock_t to have non-zero size on
builds with CONFIG_ASSERT, so expect a little data and code size
increase.  Worth it IMHO.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-30 13:29:42 -08:00
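
A hedged sketch of the two checks described above, reusing the
owner-plus-CPU word from the later thread_cpu commit; all names and the
2-bit CPU mask are illustrative, not the exact kernel internals:

    bool z_spin_lock_valid(struct k_spinlock *l)
    {
        uintptr_t owner = l->thread_cpu;

        /* reject a lock already held by this CPU: recursive locking */
        if (owner != 0 && (owner & 3) == (uintptr_t)_current_cpu->id) {
            return false;
        }
        return true;
    }

    bool z_spin_unlock_valid(struct k_spinlock *l)
    {
        /* must be released on the same CPU by the same thread that took it */
        if (l->thread_cpu != ((uintptr_t)_current | (uintptr_t)_current_cpu->id)) {
            return false;
        }
        l->thread_cpu = 0;
        return true;
    }
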
Flavio Ceolin 67ca176754 headers: Fix headers across the project
Any identifier starting with an underscore followed by an uppercase
letter or a second underscore is reserved according to C99.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-17 15:49:26 -04:00
Andy Ross 7a023cfb89 kernel: Simple spinlock API
Minimal spinlock API based on the existing atomic.h layer.  Usage
works just like irq_lock(), but takes a pointer to a specific struct
k_spinlock_t to un/lock.  No attempt at implementing fairness or
backoff semantics.  No attempt made at architecture-specific assembly.

When CONFIG_SMP is not enabled, this code falls back to a zero-size
struct and becomes functionally identical to irq_lock/unlock().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
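
Since usage mirrors irq_lock(), a hedged side-by-side sketch (lock and
function names are illustrative, and follow the API as it later
stabilized rather than this exact patch):

    void with_irqlock(void)
    {
        unsigned int key = irq_lock();      /* legacy: one implicit global lock */
        /* ... critical section ... */
        irq_unlock(key);
    }

    static struct k_spinlock my_lock;       /* illustrative lock object */

    void with_spinlock(void)
    {
        k_spinlock_key_t key = k_spin_lock(&my_lock);
        /* ... critical section, correct under SMP as well ... */
        k_spin_unlock(&my_lock, key);
    }
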