The various macros that do checks in system call handlers would all
implicitly generate a kernel oops if a check failed.
This is undesirable for a few reasons:
* System call handlers that acquire resources in the handler
have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
to the caller instead of just killing the calling thread,
even though the base API doesn't do these checks.
These macros now all return a value: if nonzero is returned, the
check failed. K_OOPS() now wraps these calls to generate
a kernel oops.
At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.
The macros now use the Z_ notation for private APIs.
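A minimal, self-contained sketch of the new pattern (all names here
are illustrative stand-ins, not the actual kernel symbols): checks
evaluate to zero on success and nonzero on failure, and a wrapper
macro reinstates the old oops-on-failure policy at the call site.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for the kernel's fatal error path */
    static void fake_kernel_oops(void)
    {
        fprintf(stderr, "kernel oops\n");
        abort();
    }

    /* checks now return 0 on success, nonzero on failure */
    #define CHECK_VERIFY(expr)  ((expr) ? 0 : 1)

    /* wrapper preserving the old "oops on failed check" policy */
    #define CHECK_OOPS(check)          \
        do {                           \
            if (check) {               \
                fake_kernel_oops();    \
            }                          \
        } while (0)

    /* a handler that acquired something can clean up and propagate
     * an error instead of unconditionally oopsing */
    static int handler_with_cleanup(int fd, const char *buf)
    {
        if (CHECK_VERIFY(buf != NULL)) {
            /* release whatever was acquired here, then... */
            return -EFAULT;
        }

        /* the default policy is unchanged: failed checks still oops */
        CHECK_OOPS(CHECK_VERIFY(fd >= 0));
        return 0;
    }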
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
variants down to a single _reschedule() function which does almost
exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.
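A hedged sketch of how a caller uses the unified API (the exact
signature isn't shown in this log; an irq_lock() key argument is
assumed):

    /* hypothetical IPC "give" path */
    static void signal_and_maybe_switch(void)
    {
        unsigned int key = irq_lock();

        /* ... make some other thread runnable here ... */

        _reschedule(key);  /* swaps if appropriate, otherwise unlocks */
    }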
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first); the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
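As a before/after illustration (the open-coded form varied from one
call site to the next; this is just one shape of it):

    /* before: each call site decided whether to swap */
    if (!_is_in_isr() && _must_switch_threads()) {
        _Swap(key);
    } else {
        irq_unlock(key);
    }

    /* after: the scheduler owns the decision */
    _reschedule_noyield(key);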
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Traditionally k_thread_abort() of the current thread has done a
synchronous _Swap() to the new context. For this reason, doing it
from an ISR has never worked portably (some architectures can do it,
some can't).
But on Xtensa/asm2, exception handlers now run in interrupt context
and it's a very reasonable requirement for them to abort the excepting
thread.
So simply don't swap, but do the rest of the bookkeeping and return
to the calling context. As a side effect, it's now possible to
terminate threads from interrupts, even if they have themselves been
interrupted.
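A hedged sketch of what this enables (the handler shape and include
path are assumptions; k_thread_abort() is the public API named above):

    #include <kernel.h>

    /* exception handler running in interrupt context */
    void fatal_error_handler(struct k_thread *excepting)
    {
        /* no synchronous _Swap() any more: this only does the abort
         * bookkeeping and returns; the interrupt exit path then picks
         * the next thread to run */
        k_thread_abort(excepting);
    }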
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed _Swap defined, because that
work had to break an include cycle and _Swap no longer got pulled in
from the arch headers.
Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.
Break out the _Swap definition (only) into a separate header and use
that instead. Cleaner. Seems not to have any more hidden gotchas.
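Illustration of the intended include pattern (the new header's name
is not given here; kswap.h is assumed):

    /* a kernel source file that only needs _Swap() includes the
     * dedicated header instead of the whole internal kernel header */
    #include <kswap.h>

    static void pend_and_switch(unsigned int key)
    {
        /* ... queue the current thread on a wait_q ... */
        _Swap(key);
    }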
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Rename nano_internal.h to kernel_internal.h and update the header
file name accordingly wherever it is used.
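The mechanical change at each include site, diff-style:

    -#include <nano_internal.h>
    +#include <kernel_internal.h>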
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
Use some preprocessor trickery to automatically deduce the amount of
arguments for the various _SYSCALL_HANDLERn() macros. Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.
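A sketch of the usual argument-counting trick this refers to (macro
names and the maximum arity here are illustrative, not the exact ones
in the tree):

    /* count the arguments (up to 6 here), then paste the count onto
     * the macro name so a single spelling dispatches to the right
     * fixed-arity macro */
    #define NARG(...)   NARG_(__VA_ARGS__, 6, 5, 4, 3, 2, 1)
    #define NARG_(_1, _2, _3, _4, _5, _6, N, ...) N

    #define CONCAT_(a, b) a##b
    #define CONCAT(a, b)  CONCAT_(a, b)

    /* expands to _SYSCALL_HANDLER1(...), _SYSCALL_HANDLER2(...), ... */
    #define _SYSCALL_HANDLER(...) \
            CONCAT(_SYSCALL_HANDLER, NARG(__VA_ARGS__))(__VA_ARGS__)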
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.
- Macros which define the proper prototype based on number of arguments
- "SIMPLE" variants which create handlers that don't need anything
other than object verification
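Hypothetical examples of the two flavors (exact macro signatures and
the _impl_ naming are assumptions, based on the names in this series):

    /* object-verification-only handler collapses to one line;
     * assumed arguments: API name, object type, pointer type */
    _SYSCALL_HANDLER1_SIMPLE(k_sem_give, K_OBJ_SEM, struct k_sem *);

    /* the non-SIMPLE form still has a body, but the prototype is
     * generated from the argument count */
    _SYSCALL_HANDLER(k_sem_take, sem, timeout)
    {
        _SYSCALL_OBJ(sem, K_OBJ_SEM);
        return _impl_k_sem_take((struct k_sem *)sem, timeout);
    }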
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Use new _SYSCALL_OBJ/_SYSCALL_OBJ_INIT macros.
Use new _SYSCALL_MEMORY_READ/_SYSCALL_MEMORY_WRITE macros.
Some non-obvious checks changed to use _SYSCALL_VERIFY_MSG.
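A hedged sketch of a converted handler exercising all three kinds of
checks (argument order, field names and the extra sanity check are
assumptions):

    _SYSCALL_HANDLER(k_msgq_put, q, data, timeout)
    {
        struct k_msgq *msgq = (struct k_msgq *)q;

        /* q must be a valid, initialized message queue object */
        _SYSCALL_OBJ(q, K_OBJ_MSGQ);

        /* the caller must be allowed to read the source buffer */
        _SYSCALL_MEMORY_READ(data, msgq->msg_size);

        /* non-obvious invariants get an explicit message */
        _SYSCALL_VERIFY_MSG(msgq->msg_size > 0, "zero-size queue");

        return _impl_k_msgq_put(msgq, (void *)data, timeout);
    }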
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Previously, this was only done if an essential thread self-exited,
and was a runtime check that generated a kernel panic.
Now the check is made whenever k_thread_abort() is called on a thread
that is essential to system operation, and it is an assertion rather
than a runtime check.
_NANO_ERR_INVALID_TASK_EXIT checks and printouts removed since this
is now an assertion.
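Roughly the shape of the new check (the assertion macro, flag, and
field names are taken from elsewhere in the kernel; the exact message
is assumed):

    /* inside k_thread_abort(): essential threads must never be aborted */
    __ASSERT(!(thread->base.user_options & K_ESSENTIAL),
             "essential thread %p aborted", thread);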
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.
Also updated doc/porting/application.rst, which had a dependency on
line numbers in a literal include.
Manually updated subsys/logging/sys_log.c, which had a malformed
header in the original file. Also cleaned up several cases that
already had an SPDX tag, where we either got a duplicate or missed
updating it.
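For reference, the replacement tag in C sources is a single line in
place of the full boilerplate block:

    /* SPDX-License-Identifier: Apache-2.0 */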
Jira: ZEP-1457
Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Also remove mentions of the unified kernel in various places in the
kernel, samples, and documentation.
Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>