In order to release the irq_offload semaphore outside kernel/thread.c, we
make it visible by declaring it non-static when ztest is enabled. This is
needed when, for example, irq_offload() is called to enter interrupt
context and a fatal error happens there: you then have to release the
semaphore in your fatal handler, or irq_offload will remain locked and
can never be used again.
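A minimal sketch of the intended use, assuming the semaphore is exported
as offload_sem under ztest (the handler body is illustrative):

  #include <kernel.h>

  /* Made non-static in kernel/thread.c when ztest is enabled. */
  extern struct k_sem offload_sem;

  /* Illustrative test fatal handler: release the semaphore so
   * irq_offload() can be used again after the error is handled.
   */
  void k_sys_fatal_error_handler(unsigned int reason,
                                 const z_arch_esf_t *esf)
  {
          ARG_UNUSED(esf);
          /* ... test-specific checks on reason ... */
          k_sem_give(&offload_sem);
  }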
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Clean up power management code: remove some duplication and
isolate power management code from the kernel code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
- Remove the SYS_ prefix
- Shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE
and use PM_ as the prefix for all PM-related Kconfigs
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_heap did not have an aligned alloc function, even though
this is supported by the internal sys_heap.
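A usage sketch of the new entry point, assuming it mirrors
sys_heap_aligned_alloc() and is named k_heap_aligned_alloc():

  #include <kernel.h>

  K_HEAP_DEFINE(my_heap, 1024);

  void aligned_alloc_example(void)
  {
          /* 64-byte-aligned 128-byte block; wait up to 100 ms. */
          void *buf = k_heap_aligned_alloc(&my_heap, 64, 128,
                                           K_MSEC(100));

          if (buf != NULL) {
                  /* ... use buf ... */
                  k_heap_free(&my_heap, buf);
          }
  }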
Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
These implemented a k_mem_pool in terms of the now universal k_heap
utility. That's no longer necessary now that the k_mem_pool API has
been removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The mailbox and msgq utilities had API variants that could pass old
mem_pool blocks through the data structure. That API is being
deprecated (and the features were obscure), so remove the internal
support.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The k_mem_pool allocator is no more, and the z_mem_pool compatibility
API is going away. The internal allocator should always be a k_heap.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These were implemented in terms of the mem_pool/block API directly
(for complicated reasons, the pointers returned from this API may have
been allocated from allocators other than the single system heap).
Have them use a k_heap instead.
Requires a tweak to one test which had hard-coded an assumption about
the header size.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Mark all k_mem_pool APIs deprecated for future code. Remaining
internal usage now uses equivalent "z_mem_pool" symbols instead.
Fixes #24358
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove the MEM_POOL_HEAP_BACKEND kconfig, treating it as always true.
Now the legacy mem_pool cannot be enabled, and all usage goes through
the k_heap/sys_heap backend.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Changed the algorithm for checking for a pending thread in mem_slab.
The check for a pending thread is now done only when there is
no memory left.
Signed-off-by: Kamil Lazowski <Kamil.Lazowski@nordicsemi.no>
z_tick_sleep was using int32_t, which could cause an overflow
when converting from k_ticks_t.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This patch activates the FPU feature for the main thread if the
FPU-related configs (FPU, FPU_SHARING) are enabled.
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
With MMU features enabled, we are using 248 out of the 256
available bytes on 32-bit. This is extremely uncomfortable; relax it
to a larger value, like several other arches do.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Adds a K_DELAYED_WORK_DEFINE, matching the K_WORK_DEFINE macro, with
an accompanying Z_DELAYED_WORK_INITIALIZER macro.
Makes k_delayed_work_init a static inline function, like its K_WORK
counterpart.
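A usage sketch (queue and timeout values are arbitrary):

  #include <kernel.h>

  static void blink_handler(struct k_work *work)
  {
          /* ... deferred work ... */
  }

  /* Statically defined, like K_WORK_DEFINE for plain work items. */
  K_DELAYED_WORK_DEFINE(blink_work, blink_handler);

  void start_blinking(void)
  {
          /* Runs blink_handler on the system workqueue in 500 ms. */
          k_delayed_work_submit(&blink_work, K_MSEC(500));
  }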
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
Move banner and boot delay handling out of init.c.
The banner code was scattered all over init.c, making it
unreadable.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Most kernel files were declaring the os log module without providing
a log level, so the default log level was used instead of
CONFIG_KERNEL_LOG_LEVEL.
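The change boils down to passing the level explicitly, e.g.:

  #include <logging/log.h>

  /* Before: LOG_MODULE_DECLARE(os); -- falls back to the default level. */
  /* After: the kernel's configured level is honored. */
  LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);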
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Add a new function to the mem_slab API that lets the user
get the maximum number of slabs used so far.
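A usage sketch, assuming the new accessor is named
k_mem_slab_max_used_get():

  #include <kernel.h>
  #include <sys/printk.h>

  K_MEM_SLAB_DEFINE(my_slab, 64, 8, 4);

  void report_usage(void)
  {
          /* High-water mark of simultaneously allocated blocks. */
          uint32_t max_used = k_mem_slab_max_used_get(&my_slab);

          printk("max slabs used: %u\n", max_used);
  }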
Signed-off-by: Kamil Lazowski <Kamil.Lazowski@nordicsemi.no>
Use GEN_OFFSET_SYM macro to genarate absolute symbols for the
_callee_saved struct and use these new symbols in the assembly code.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This uses the timing functions to gather execution cycles of
threads. This provides greater detail if the arch/SoC/board
uses a timer with higher resolution.
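A usage sketch, assuming CONFIG_THREAD_RUNTIME_STATS is enabled and
the counters are read through k_thread_runtime_stats_get():

  #include <kernel.h>
  #include <sys/printk.h>

  void show_thread_cycles(k_tid_t tid)
  {
          k_thread_runtime_stats_t stats;

          if (k_thread_runtime_stats_get(tid, &stats) == 0) {
                  printk("execution cycles: %llu\n",
                         (unsigned long long)stats.execution_cycles);
          }
  }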
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Since the tracing of threads being switched in/out has the same
instrumentation points, we can roll the tracing function calls
into the thread stats gathering functions.
This avoids duplicating code to call another function.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the bits to gather the first thread runtime statistic:
thread execution time. It provides a rough idea of how much time
a thread spends in active execution. Currently it is not being
used, pending the following commits, where it is combined with the
trace points on context switch, as they instrument the same locations.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The documentation for the SYS_CLOCK_TICKS_PER_SEC kconfig has some
outdated recommendations. Change them to align with other documentation
under kernel timing.
Fixes: #25482
Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
This legacy struct still had a non-standard name. Clean it up to
conform to current naming guidelines.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Fix the issue where the kernel poll code would place the tracking
struct on the caller stack and share it with other threads, thus
creating a cache coherence issue on systems where KERNEL_COHERENCE is
enabled.
This works by eliminating the thread backpointer in struct _poller and
simply placing the (now just two-byte!) struct directly into the
thread struct.
Note that this doesn't attempt to fix the API paradigm that the
natural way to structure a call to k_poll() is to use an array of
k_poll_events on the CALLER's stack. So it's likely that most
"typical" k_poll code is still going to have problems with
KERNEL_COHERENCE. But at least now the kernel internals aren't
fundamentally broken.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The poll code was playing a weird trick with the thread pointer in
the "struct _poller" object for a triggered work item: it would not
point to a thread to wake up, but instead to the (non-polling)
thread operated by the work queue being triggered. The code would
never touch this thread; it was used only as a way to get a pointer
to the enclosing work queue struct.
Just store the work queue pointer in the first place. It's much
simpler, and makes future modifications to remove that thread pointer
possible.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The triggered work scheme uses a trick where it overloads the thread
pointer field of the struct poller (which normally stores the ID of
the thread that is blocked in k_poll()) to be able to find the work
queue to which it will submit.
Give it its own pointer field to break this false dependency.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Somewhat weirdly, k_poll() can do one of two things when the set of
events is signaled: it can wake up a sleeping thread, or it can submit
an unrelated work item to a work queue. The difference in behaviors
is currently captured by a callback, but as there are only two it's
cleaner to put this into a "mode" enumerant. That also shrinks the
size of the data such that the poller struct can be moved somewhere
other than the calling stack.
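The shape of the change, approximately (field and enumerator names
illustrative):

  /* Instead of a callback pointer, record which of the two
   * behaviors to perform when the event set is signaled.
   */
  enum _poller_mode {
          MODE_NONE,
          MODE_POLL,      /* wake the thread blocked in k_poll() */
          MODE_TRIGGERED, /* submit the triggered work item */
  };

  struct _poller {
          volatile bool is_polling;
          uint8_t mode;
  };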
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Some platforms may have multiple RAM regions which are
discontinuous in the physical memory map. We really want
these to be in a continuous virtual region, and we need to
stop assuming that there is just one SRAM region that is
identity-mapped.
We no longer use CONFIG_SRAM_BASE_ADDRESS and CONFIG_SRAM_SIZE
as the bounds of kernel RAM, and no longer assume in the core
kernel that these are identity mapped at boot.
Two new Kconfigs, CONFIG_KERNEL_VM_BASE and
CONFIG_KERNEL_RAM_SIZE now indicate the bounds of this region
in virtual memory.
We are currently only memory-mapping physical device driver
MMIO regions, so we do not need virtual-to-physical calculations
to re-map RAM yet. When the time comes, an architecture interface
will be defined for this.
Platforms which just have one RAM region may continue to
identity-map it.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Replaces all existing variants of value clamping via nested MIN and
MAX macros with the CLAMP macro.
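For example (sys/util.h provides all three macros):

  #include <sys/util.h>

  int clamp_level(int input)
  {
          /* Before: MIN(MAX(input, 0), 255); after: one macro
           * that states the intent directly.
           */
          return CLAMP(input, 0, 255);
  }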
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
k_object_alloc(K_OBJ_THREAD) returns a usable struct k_thread pointer.
This pointer is 4-byte aligned. On x86 and x86_64, struct _thread_arch
has a member which requires stricter alignment. Since this is currently
not supported, k_object_alloc(K_OBJ_THREAD) now returns an error instead
of a misaligned pointer.
Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
Toolchains other than the Zephyr SDK may not support generating
code with thread local storage, so limit TLS to the Zephyr SDK
for now, and only enable TLS on other toolchains as needed.
Fixes #29541
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
k_queue_append and k_queue_alloc_append are in a race
condition.
Scenario:
Using either of the above-mentioned functions from a
preemptive context can lead to the element being appended
to the queue getting lost. The loss happens when
k_queue_append is interrupted at a critical point, while the
list contains only one element and k_queue_get is called
before k_queue_append is allowed to complete.
Fix:
Move sys_sflist_peek_tail inside queue_insert() and add an
additional bool parameter to queue_insert() to indicate an
append/prepend operation, as sketched below.
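An abridged sketch of the fix (signature approximate); the point is
that the tail peek now happens inside queue_insert(), under the
queue's spinlock:

  static int32_t queue_insert(struct k_queue *queue, void *prev,
                              void *data, bool alloc, bool is_append)
  {
          k_spinlock_key_t key = k_spin_lock(&queue->lock);

          if (is_append) {
                  /* Peek the tail while holding the lock, so no
                   * k_queue_get() can race in between.
                   */
                  prev = sys_sflist_peek_tail(&queue->data_q);
          }
          /* ... existing insert logic, still under the lock ... */
          k_spin_unlock(&queue->lock, key);
          return 0;
  }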
Fixes #29257
Signed-off-by: iva kik <megatheriumiva@gmail.com>
For threads that run in supervisor mode for some time before
synchronously dropping to user mode, re-initialize the TLS
area to prevent leakage of potentially sensitive information.
We did this already for CONFIG_THREAD_USERSPACE_LOCAL_DATA
but not the new CONFIG_THREAD_LOCAL_STORAGE.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This enables storing errno in the thread local storage area.
With this enabled, a syscall to access errno can be avoided
when userspace is also enabled.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.
Note that since Cortex-M does not have the thread ID or
process ID register needed to store the TLS pointer at runtime
for the toolchain to access thread data, a global variable is
used instead.
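A sketch of the idea in C (assuming the upstream names; the real
helper may be implemented in assembly):

  #include <stdint.h>

  /* Updated by the context switch code on every swap. */
  extern uintptr_t z_arm_tls_ptr;

  /* The compiler emits calls to this EABI helper to locate
   * thread-local data, since there is no thread ID register.
   */
  void *__aeabi_read_tp(void)
  {
          return (void *)z_arm_tls_ptr;
  }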
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the common struct fields and functions to support
the implementation of thread local storage in individual
architectures. This uses the thread stack to store TLS data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add kconfigs to indicate whether an architecture has support
for thread local storage (TLS), and to enable TLS in the kernel.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Arches which have custom swap to main can have _current be
NULL very, very early in the boot process. Check this to
avoid an infinite loop of fatal errors.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The core kernel does not use this yet, but it will be later used
as part of infrastructure for memory-mapping stacks, as detailed
in #28899.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>