Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.
This is problematic:
- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.
Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.
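As a sketch, the stored state amounts to something like the following
(field names are illustrative, not necessarily the exact k_thread
layout):

    /* Kept directly in struct k_thread instead of being reconstructed
     * from the initial stack frame; names here are illustrative.
     */
    struct __thread_entry {
            void (*pEntry)(void *p1, void *p2, void *p3); /* entry point */
            void *parameter1;   /* the three entry arguments */
            void *parameter2;
            void *parameter3;
    };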
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The interrupt stack area wasn't being set to a repeating
0xAA pattern at boot as it should be. This is now done in
kernel_arch_init(), which runs before interrupts are
enabled for the first time.
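A minimal sketch of the fix, assuming the conventional
_interrupt_stack symbol and CONFIG_ISR_STACK_SIZE (illustrative, not
the literal patch):

    #include <string.h>

    extern char _interrupt_stack[];

    void kernel_arch_init(void)
    {
            /* Paint the IRQ stack with the 0xAA sentinel before
             * interrupts are ever enabled, so stack usage analysis
             * can later find the high-water mark.
             */
            memset(_interrupt_stack, 0xAA, CONFIG_ISR_STACK_SIZE);

            /* ... rest of early arch setup ... */
    }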
Fixes #7327
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This was a little embarrassing. The swap code got this right, and the
interrupt exit path got it right, but on entry we weren't ever saving
the shift and loop registers for the interrupted context.
This almost always worked anyway as the loop registers aren't ever
used in any Zephyr code (gcc won't generate this style of loop AFAICT)
and the SAR shift amount register is generally used only in two pairs
of adjacent instructions making the chance of hitting that exact cycle
quite low in general.
But of course we have shift-happy crypto code in our tests, so this
got caught, thankfully.
See https://github.com/zephyrproject-rtos/zephyr/issues/6470
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When returning into a different thread than we interrupted, we
obviously need to spill all the existing register windows to make sure
all their values are in the old thread's stack. But the code to do
this forgot to reset the current stack pointer to the value it had at
interrupt time (it was still pointing to the saved context below
that), so the caller of the interrupted function was spilling to the
wrong spot.
This wouldn't show up as an instant failure; it would only happen
when switching BACK to the improperly-spilled thread. And even then
it would be a no-op if the original interrupt handler was deep enough
to have spilled that function naturally.
In practice, this happened only in some instances on ESP-32 (which has
more windowed registers than qemu) when interrupting the idle thread
(which is very shallow) with a (very simple) timer interrupt. Trivial
to see, hard to find.
See https://github.com/zephyrproject-rtos/zephyr/issues/6346 for more
detail.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The xtensa headers use this for simplicity when SMP is not enabled.
It should still build on older platforms that don't include the
asm2-style CPU pointer scheme.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When in SMP mode, the nested/irq_stack/current fields are specific to
the current CPU and not to the kernel as a whole, so we need an array
of these. Place them in a _cpu_t struct and implement a
_arch_curr_cpu() function to retrieve the pointer.
When not in SMP mode, the first CPU's fields are defined as a union
with the first _cpu_t record. This permits compatibility with legacy
assembly on other platforms. Long term, all users, including
uniprocessor architectures, should be updated to use the new scheme.
Fundamentally this is just renaming: the structure layout and runtime
code do not change on any existing platforms and won't until someone
defines a second CPU.
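A sketch of the scheme (type and field names approximate the real
kernel structures; CONFIG_MP_NUM_CPUS is assumed available):

    struct _cpu {
            int nested;                /* IRQ nesting depth on this CPU */
            char *irq_stack;           /* this CPU's interrupt stack */
            struct k_thread *current;  /* thread running on this CPU */
    };
    typedef struct _cpu _cpu_t;

    struct _kernel {
            union {
                    struct {           /* legacy single-CPU aliases */
                            int nested;
                            char *irq_stack;
                            struct k_thread *current;
                    };
                    _cpu_t cpus[CONFIG_MP_NUM_CPUS];
            };
            /* ... kernel-wide state ... */
    };

    _cpu_t *_arch_curr_cpu(void);      /* implemented per arch */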
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Xtensa register windows have a special exception that happens when the
stack pointer needs to be moved, but the caller function has already
spilled its registers below it.
I thought these were unexercised in Zephyr code, but they turn out to
be thrown by the existing mem_pool tests when run in the 32-register
qemu environment (but not on 64-register hardware). Because the effect
of the exception is to unspill the caller, there is no good way to
handle this in a traditional handler. Instead put a 5-instruction
stub in front of the user exception handler (i.e. incurring that cost
on every trap and every L1 interrupt) to test before doing the normal
entry.
Works, but would be nicer to optimize this in the future so that only
true alloca exceptions take that cost.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When using _arch_switch() context switching, the thread return value
is a generic hook and not provided by the architecture.
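A minimal sketch of the generic hook, assuming a swap_retval field in
struct k_thread (identifiers are illustrative):

    static inline void _set_thread_return_value(struct k_thread *thread,
                                                unsigned int value)
    {
            /* Picked up by the C _Swap() when the thread resumes */
            thread->swap_retval = value;
    }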
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This adds vectors for all interrupt levels defined by core-isa.h.
Modify the entry code a little bit to select correct linker sections
(levels 1, 6 and 7 get special names for... no particularly good
reason) and to construct the interrupted PS value correctly (no EPS1
register for exceptions since they had to have interrupted level 0
code and thus differ only in the EXCM bit).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Legacy xtensa had a rather complicated implementation of en/disabling
interrupts, owing to the "software priority" feature (which plays
games with INTENABLE and INTLEVEL to allow for interrupts to interrupt
each other outside their normal priorities). But that's not a Zephyr
feature; it's enabled by an XT_USE_SWPRI value that comes from platform
headers and isn't enabled on any of our boards. Dead code, basically.
Replace with the obvious implementation when asm2 is in use.
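In C-with-inline-assembly terms, the obvious implementation is roughly
this sketch (assuming XEA2's RSIL instruction and XCHAL_EXCM_LEVEL
from core-isa.h; close to, but not literally, the asm2 code):

    static inline unsigned int _arch_irq_lock(void)
    {
            unsigned int key;

            /* Raise INTLEVEL; the old PS value is the lock key */
            __asm__ volatile("rsil %0, %1"
                             : "=r"(key) : "i"(XCHAL_EXCM_LEVEL) : "memory");
            return key;
    }

    static inline void _arch_irq_unlock(unsigned int key)
    {
            /* Restore the saved PS to return to the previous level */
            __asm__ volatile("wsr.ps %0; rsync" :: "r"(key) : "memory");
    }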
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This was a dead API. Nothing ever used it; it wasn't exposed in any
API headers and never appeared in documentation. It's not particularly
clear why a Zephyr app would want to hook architecture-specific
exceptions instead of simply using the portable error framework
anyway. And it's not supported by asm2. Delete.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The existing __swap() mechanism is too high level for some
applications because of its scheduler-awareness. This introduces a
new _arch_switch() mechanism, which is a simpler primitive that looks
like:
void _arch_switch(void *handle, void **old_handle_out);
The new thread handle (typically just a stack pointer) is specified
explicitly instead of being picked up from the scheduler by
per-architecture code, and on return the "old" thread handle that got
switched out is returned through the pointer.
The new primitive (currently available only on xtensa) is selected
when CONFIG_USE_SWITCH is "y". A new C _Swap() implementation based
on this primitive is then added which operates compatibly.
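Roughly, the new C _Swap() is the following sketch (scheduler and
return-value plumbing simplified; switch_handle and swap_retval are
illustrative field names):

    static inline int _Swap(unsigned int key)
    {
            struct k_thread *old = _current;
            struct k_thread *next = _get_next_ready_thread();

            _current = next;
            _arch_switch(next->switch_handle, &old->switch_handle);

            /* Execution resumes here only when something switches
             * back to this thread.
             */
            irq_unlock(key);
            return _current->swap_retval;
    }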
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
SMP needs a new context switch primitive (to disentangle _swap() from
the scheduler) and new interrupt entry behavior (to be able to take a
global spinlock on behalf of legacy drivers). The existing code is
very obtuse, and working with it led me down a long path of "this
would be so much better if..." So this is a new context and entry
framework, intended to replace the code that exists now, at least on
SMP platforms.
New features:
* The new context switch primitive is xtensa_switch(), which takes a
"new" context handle as an argument instead of getting it from the
scheduler, returns an "old" context handle through a pointer
(e.g. to save it to the old thread context), and restores the lock
state (PS register) exactly as it is at entry instead of taking it as
an argument.
* The register spill code understands wrap-around register windows and
can avoid spilling A4-A15 registers when they are unused by the
interrupted function, saving as much as 48 bytes of stack space on
the interrupted stacks.
* The "spill register windows" routine is entirely different, using a
different mechanism, and is MUCH FASTER (to the tune of almost 200
cycles). See notes in comments.
* Even better, interrupt entry can be done via a clever "cross stack
call" I worked up, meaning that the interrupted thread's registers
do not need to be spilled at all until they are naturally pushed out
by the interrupt handler or until we return from the interrupt into
a different thread. This is a big efficiency win for tiny
interrupts (e.g. timers), and a big latency win for all interrupts.
* Interrupt entry is 100% symmetric with respect to medium/high
interrupts, avoiding the problems seen with hooking high priority
interrupts with the current code (e.g. ESP-32's watchdog driver).
* Much smaller code size. No cut and paste assembly. No use of HAL
calls.
* Assumes "XEA2" interrupt architecture, the register window extension
(i.e. no CALL0 ABI), and the "high priority interrupts" extension.
Does not support the legacy processor variants for which we have no
targets. The old code has some stuff in there to support this, but
it seems bitrotten, untestable, and I'm all but certain it doesn't
work.
Note that this simply adds the primitives to the existing tree in a
form where they can be unit tested. It does not replace the existing
interrupt/exception handling or _Swap() implementation.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This feature is X86 only and is not used or tested. It is a legacy
feature and no one can show that it actually works. Remove it until
we have proper documentation, samples, and multi-architecture
support.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Rename the nano_internal.h to kernel_internal.h and modify the
header file name accordingly wherever it is used.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
This needs to be in <arch/cpu.h> so that it can be called
from the k_panic()/k_oops() macros in kernel.h.
Fixes build errors on these arches when using k_panic() or
k_oops().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Unconditionally use CONFIG_SIMULATOR_XTENSA to determine if XT_SIMULATOR
or XT_BOARD should be defined.
If CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC is defined, also define
XT_CLOCK_FREQ. This isn't ideal, as the clock frequency might change
at runtime and this effectively makes it a constant.
Until we can control the clock frequency at runtime, this will
suffice.
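In preprocessor terms, the result is roughly (illustrative sketch,
not the literal patch):

    #ifdef CONFIG_SIMULATOR_XTENSA
    #define XT_SIMULATOR
    #else
    #define XT_BOARD
    #endif

    #ifdef CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC
    /* Fixed at build time, even if the clock later changes at runtime */
    #define XT_CLOCK_FREQ CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC
    #endif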
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
A bad rebase of a patch that moved these defines around
unintentionally reverted a necessary change to the coprocessor
save area.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Historically, space for struct k_thread was always carved out of the
thread's stack region. However, we want more control over where this data
will reside; in memory protection scenarios the stack may only be used
for actual stack data and nothing else.
On some platforms (particularly ARM), including kernel_arch_data.h from
the toplevel kernel.h exposes intractable circular dependency issues.
We create a new per-arch header "kernel_arch_thread.h" with very limited
scope; it only defines the three data structures necessary to instantiate
the arch-specific bits of a struct k_thread.
Change-Id: I3a55b4ed4270512e58cf671f327bb033ad7f4a4f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types. There are a few places we don't convert over to the
new types because of compatibility with ext/HALs or for ease of
transition at this point. Fix up a few of the PRI formatters so we
build with newlib.
Jira: ZEP-2051
Change-Id: I7d2d3697cad04f20aaa8f6e77228f502cd9c8286
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This is a start to move away from the C99 {u}int{8,16,32,64}_t types to
Zephyr defined u{8,16,32,64}_t and s{8,16,32,64}_t. This allows Zephyr
to define the sized types in a consistent manner across all the
architectures we support and not conflict with what various compilers
and libc might do with regards to the C99 types.
We introduce <zephyr/types.h> as part of this and have it include
<stdint.h> for now until we transition all the code away from the C99
types.
We go with u{8,16,32,64}_t and s{8,16,32,64}_t as there are already
some existing variables named u8 & u16, and to be consistent with
Zephyr naming conventions.
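A sketch of what <zephyr/types.h> provides during the transition:

    #ifndef _ZEPHYR_TYPES_H
    #define _ZEPHYR_TYPES_H

    #include <stdint.h>

    typedef uint8_t  u8_t;
    typedef uint16_t u16_t;
    typedef uint32_t u32_t;
    typedef uint64_t u64_t;

    typedef int8_t  s8_t;
    typedef int16_t s16_t;
    typedef int32_t s32_t;
    typedef int64_t s64_t;

    #endif /* _ZEPHYR_TYPES_H */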
Jira: ZEP-2051
Change-Id: I451fed0623b029d65866622e478225dfab2c0ca8
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This commit should fix the concern about uninitialized memory of the
main thread that was raised in https://gerrit.zephyrproject.org/r/#/c/12920/
The issue is more general: if any thread's CPENABLE flag happens to
be set, then any other thread using the CP may cause memory
corruption. I'd prefer to avoid the issue by initializing the CP
descriptor to 0. The descriptor itself is a few words; we set them to
0 up to CP_ASA, which is set to a real value.
As the dummy thread instantiated at kernel startup does not use the
CP, there is no CP area in its thread memory buffer. However, it is
mandatory that it have the CP descriptor and that cpEnable in that
descriptor is set to null. This is ensured by adding XT_CP_DESCR_SIZE
to _K_THREAD_NO_FLOAT_SIZEOF.
Change-Id: I6a36b5b363600ea1e6d98ab679981182b2b5a236
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
The issue was that cpStack was changed to a memory buffer by commit
https://gerrit.zephyrproject.org/r/#/c/12816
However, the assembly code was still expecting a pointer and thus
issued an indirection, which led to wrong addresses.
The fix removes this unnecessary indirection and, with it, the
resulting invalid memory access exception.
Issue: ZEP-1997
Change-Id: I843f049212f2d116a01b05367a284209f463a5e7
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
The CP context area was previously at the bottom of the stack, just
after the thread descriptor. It is now moved inside the thread
descriptor to support some kinds of memory protection.
Change-Id: Id3ebeaecfd9c2475899713fdc8da583a1f9121f9
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
fibers/tasks are now just threads and we should not be using
struct *tcs any more.
Change-Id: Iee5369abcc66b4357a0c75537025fe8edb0ffbb4
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The retval field shall hold the return value itself, not a pointer
to its location.
Change-Id: I3f9e225f2bdd501f88441946b5187ebbd17a71e3
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
This function needs to be declared in a file included by
_thread_entry. It also needs to have the exit function declared as
not returning.
Change-Id: I2a01e7408cf70266351ae5089f45b5d9d009fabe
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
Master branch changed requirements for license headers while this
branch has been in development.
Change-Id: I9bce16ff275057a4bb664019628fc9b6de7aef7c
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>