It looks like, at some point in the past, initializing thread stacks
was the responsibility of the arch layer. After that was centralized,
we forgot to remove the related conditional header inclusion. Fixed.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
* z_NanoFatalErrorHandler() is now moved to common kernel code
and renamed z_fatal_error(). Arches dump arch-specific info
before calling.
* z_SysFatalErrorHandler() is now moved to common kernel code
and renamed k_sys_fatal_error_handler(). It is now much simpler;
the default policy is simply to lock interrupts and halt the system
(a sketch follows this list).
If an implementation of this function returns, then the currently
running thread is aborted.
* New arch-specific APIs introduced:
- z_arch_system_halt() simply powers off or halts the system.
* We now have a standard set of fatal exception reason codes,
namespaced under K_ERR_*
* CONFIG_SIMPLE_FATAL_ERROR_HANDLER deleted
* LOG_PANIC() calls moved to k_sys_fatal_error_handler()
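A minimal sketch of that default policy, assuming only the names
listed above (not necessarily the exact implementation):

    /* Default handler in common kernel code; __weak so applications
     * can override the policy.  If an override returns instead of
     * halting, z_fatal_error() aborts the current thread. */
    __weak void k_sys_fatal_error_handler(unsigned int reason,
                                          const z_arch_esf_t *esf)
    {
        ARG_UNUSED(esf);

        LOG_PANIC();                 /* flush log messages, blocking */
        LOG_ERR("Halting system");
        z_arch_system_halt(reason);  /* locks interrupts and halts */
        CODE_UNREACHABLE;
    }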
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Fix a race which seems to have been presenting itself
very sporadically on loaded systems.
The race seems to have caused tests/kernel/sched/schedule_api
to fail at random on native_posix.
The case is a bit convoluted:
When the kernel calls z_new_thread(), the POSIX arch saves
the new thread entry call in that new Zephyr thread stack,
together with a bit of extra info for the POSIX arch.
It then spawns a new pthread (posix_thread_starter()) which
will eventually (after the Zephyr kernel has swapped to it)
call that entry function.
(Note that in principle a thread spawned by pthreads may
be arbitrarily delayed)
The POSIX arch does not try to synchronize with that new
pthread (because why should it) until the first time the
Zephyr kernel tries to swap to that thread.
But the kernel may never try to swap to it,
and therefore that thread's posix_thread_starter() may never
have gotten to run before the thread was aborted and its
Zephyr stack reused for something else by the Zephyr app.
As posix_thread_starter() was relying on looking into that
thread stack, it may now have been looking into another
thread's stack or anything else.
So, this commit fixes it by having posix_thread_starter()
get the input it always needs not from the Zephyr stack,
but from its own pthread_create() parameter, which points
to a structure kept by the POSIX arch.
Note that if the thread was aborted before reaching that
point, posix_thread_starter() will NOT call the Zephyr
thread entry function, but will just clean up.
With this change all "asynchronous" parts of the POSIX arch
should rely only on the POSIX arch's own structures.
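A sketch of the resulting flow, with illustrative names (these are
not necessarily the exact POSIX arch identifiers):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Bookkeeping kept by the POSIX arch, outliving any Zephyr
     * thread stack reuse. */
    struct posix_thread_status {
        void (*entry)(void *, void *, void *); /* Zephyr entry point */
        void *arg1, *arg2, *arg3;
        bool aborted; /* set if the thread is aborted before running */
    };

    static void *posix_thread_starter(void *arg)
    {
        /* All input comes from the arch-owned structure passed via
         * pthread_create(), never from the (possibly already reused)
         * Zephyr thread stack. */
        struct posix_thread_status *status = arg;

        /* ... block here until the Zephyr kernel swaps to this
         * thread for the first time ... */

        if (status->aborted) {
            return NULL; /* aborted before running: just clean up */
        }

        status->entry(status->arg1, status->arg2, status->arg3);
        return NULL;
    }

    /* In z_new_thread():
     *     pthread_create(&tid, NULL, posix_thread_starter, status);
     */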
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Correct the storage type of the thread status pointer,
no longer assuming pointers and integers are both 32 bits.
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
move misc/printk.h to sys/printk.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
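The shim is essentially a forwarding header; a sketch of what it
looks like (guard name illustrative):

    #ifndef ZEPHYR_INCLUDE_MISC_PRINTK_H_
    #define ZEPHYR_INCLUDE_MISC_PRINTK_H_

    #ifndef CONFIG_COMPAT_INCLUDES
    #warning "This header has moved, include <sys/printk.h> instead."
    #endif

    #include <sys/printk.h>

    #endif /* ZEPHYR_INCLUDE_MISC_PRINTK_H_ */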
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
move tracing.h to debug/tracing.h and
create a shim for backward-compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Update the files which contain no license information with the
'Apache-2.0' SPDX license identifier. Many source files in the tree are
missing licensing information, which makes it harder for compliance
tools to determine the correct license.
By default all files without license information are under the default
license of Zephyr, which is Apache version 2.
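For example, each such source file now begins with:

    /* SPDX-License-Identifier: Apache-2.0 */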
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.
As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm, we just don't want any external code
depending on it.
The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269.

Fixes: #14766
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Added an option to stop the execution of the posix arch based
executable on the first fault, even if the fault stemmed from a
non-essential thread.
Having it fail fast, on the first fault, will ease debugging
in many cases.
The option is disabled by default to preserve the old behavior.
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done on both global and static function names
in kernel/include and include/. Other static function names in
kernel/ are renamed by removing the leading underscore. Function
names not starting with any prefix listed above are renamed with
a 'z_' or 'Z_' prefix.
Function names starting with two or three leading underscores are
not automatically renamed, since these names would collide with
the variants with two or three leading underscores.
Various generator scripts, as well as perf, linker and USB files,
have also been updated. These are:
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
The thread switching tracing calls are done by the kernel,
and not by the archs. So, remove the redundant trace call.
Related to #13357
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Just like with _Swap(), we need two variants of these utilities that
can atomically release a lock and context switch. The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional irq_lock key.
Just refactoring; no logic changes.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock. The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments. The former will be going away once
existing users (not that many! Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.
Obviously on uniprocessor setups, these produce identical code. But
SMP requires that the correct API be used to maintain the global lock.
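A sketch of the two signatures as described (declarations are
illustrative):

    /* Legacy variant: atomically releases an irq_lock() key. */
    int _Swap_irqlock(unsigned int key);

    /* New variant: atomically releases a spinlock. */
    int _Swap(struct k_spinlock *lock, k_spinlock_key_t key);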
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
On many platforms this function was doing nothing, so merge its
functionality into the _PrepC function.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
To allow apps to compile with CONFIG_REBOOT, add a weak stub for
sys_arch_reboot() which does nothing in the POSIX arch.
POSIX boards may choose to provide a more sensible implementation
if relevant.
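A minimal sketch of such a stub:

    /* Weak no-op so applications with CONFIG_REBOOT link; POSIX
     * boards may override this with a real implementation. */
    void __weak sys_arch_reboot(int type)
    {
        ARG_UNUSED(type);
    }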
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
MISRA-C requires that all declarations of a specific function or
object use the same names and type qualifiers.
MISRA-C rule 8.3
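For example, rule 8.3 flags mismatches such as (names illustrative):

    void set_speed(int speed);               /* declaration */
    void set_speed(int value) { /* ... */ }  /* definition: parameter
                                              * name differs */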
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Added LOG_PANIC to fault handlers to ensure that the log is flushed
and the logger processes messages in a blocking way in the fault
handler.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Any identifier starting with an underscore followed by an uppercase
letter or a second underscore is reserved according to C99.
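For example:

    int _Reserved;   /* reserved: underscore + uppercase letter */
    int __reserved;  /* reserved: two leading underscores       */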
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The return value of memset is never checked. This patch explicitly
ignores the return to avoid MISRA-C violations.
The only excluded directory was ext/*, since it contains only
imported code.
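The pattern applied is the usual explicit-ignore cast, e.g.:

    (void)memset(buf, 0, sizeof(buf));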
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The __swap function was returning -EAGAIN in some cases, though its
return value was declared as unsigned int.
This commit changes the function to return int, since it can return
a negative value and the return was already being propagated as int.
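Schematically (the parameter type is illustrative):

    -unsigned int __swap(unsigned int key);
    +int __swap(unsigned int key);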
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.
This is problematic:
- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.
Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.
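A sketch of the approach, with field names as an assumption rather
than the exact ones:

    /* Stored directly in struct k_thread when CONFIG_THREAD_MONITOR
     * is enabled, instead of being dug out of the initial stack
     * frame. */
    struct __thread_entry {
        k_thread_entry_t pEntry;
        void *parameter1;
        void *parameter2;
        void *parameter3;
    };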
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
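Schematically, the change replaces the repeated ad hoc pattern with
a single call (sketch):

    /* Before, open-coded in each IPC mechanism: */
    if (_must_switch_threads() && !_is_in_isr()) {
        _Swap(key);
    } else {
        irq_unlock(key);
    }

    /* After: */
    _reschedule_noyield(key);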
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.
Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.
Break out the _Swap definition (only) into a separate header and use
that instead. Cleaner. Seems not to have any more hidden gotchas.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This feature is X86 only, is not used and is not being tested. It is
a legacy feature and no one can prove it actually works. Remove it
until we have proper documentation, samples and multi-architecture
support.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This feature is X86 only, is not used and is not being tested. It is
a legacy feature and no one can prove it actually works. Remove it
until we have proper documentation, samples and multi-architecture
support.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Just some exclusions to coverage in code which cannot be
reached, or can only be reached in error conditions
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Added the possibility to reconfigure CONFIG_SYS_CLOCK_TICKS_PER_SEC
for the native_posix board (before it could only be 100).
Also fixed tickless idle support and made minor fixes in the irq
wrapping.
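For example, an application can now set in its prj.conf:

    CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000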
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Some code in the POSIX architecture is only meant to safely handle
errors which should never occur, and is therefore not covered.
=> We exclude it from the coverage reports.
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Some code in the POSIX arch core will only be executed
in some very atypical cases depending on the host load.
To avoid confusing developers, let's exclude it from the
coverage reports.
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Rename nano_internal.h to kernel_internal.h and modify the
header file name accordingly wherever it is used.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
A new arch (posix) which relies on pthreads to emulate the context
switching.
A new soc for it (inf_clock) which emulates a CPU running at an
infinitely high clock (so when the CPU is awakened it runs to
completion in 0 time).
A new board, which provides a trivial system tick timer and
irq generation.
Origin: Original
Fixes #1891
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>