The way the ready thread cache was implemented meant it was not always
"hot": misses could occur whenever the cached thread was taken out of
the ready queue. When that happened, the cache was not refilled
immediately, since the refill could be wasted work: the flow could be
interrupted and another thread could take the cached thread's place.
This was the more conservative approach, ensuring that moving a thread
to the cache was never wasted.
However, this caused two problems:
1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.
2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache and then it had to find out what thread to run
the long way.
To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even when the current cached
thread is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.
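A rough illustration of the resulting fast path (the real interrupt
exit code is assembly; this C sketch, including the function name, is
only illustrative, with the reschedule triggered by the usual Cortex-M
PendSV write):

    /* illustrative only: the real exit path is in assembly */
    static inline void _interrupt_exit_resched_check(void)
    {
        struct k_thread *next = _kernel.ready_q.cache;

        if (next != _current) {
            /* reschedule needed: pend PendSV to do the switch */
            SCB->ICSR = SCB_ICSR_PENDSVSET_Msk;
        }
        /* otherwise, return straight to the interrupted thread */
    }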
On the ARM FRDM K64F board, this improvement is seen:
Before:
1- Measure time to switch from ISR back to interrupted task
switching time is 215 tcs = 1791 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 315 tcs = 2625 nsec
After:
1- Measure time to switch from ISR back to interrupted task
switching time is 130 tcs = 1083 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 225 tcs = 1875 nsec
These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.
Fixes ZEP-1401.
Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Cortex-M0/M0+ have no faults other than the hard fault at priority -1,
so they do not need to reserve a priority level to allow exceptions to
trigger during the handling of ISRs.
Change-Id: I479e439f7bcac70b4b2b787bcd744a4c65437e80
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This allows using it in _EXC_PRIO() instead of hardcoding 2 and 3.
Change-Id: I3549be54602643e06823ba63beb6a6992f39f776
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use it to flag which CPUs can do zero latency interrupts, which depend
on being able to lock up to a specific interrupt priority.
Change-Id: I09f71366ea1d05486e38c513a09abc270884879f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Add a "memory" clobber to inline asm SVC call to ensure the compiler
does not reorder the instruction relative to other memory accesses.
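A hedged sketch of the pattern (simplified; the real _Swap() call also
passes arguments and reads a return value through registers):

    static inline void trigger_svc(void)
    {
        /* the "memory" clobber keeps the compiler from moving
         * loads/stores across the context switch point
         */
        __asm__ volatile ("svc #0" ::: "memory");
    }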
Issue found by inspecting the source code. There is no evidence to
suggest that this bug will manifest for any current ARM target using a
current compiler.
Change-Id: I32b1e5ede02a6dbea02bb8f98729fff1cca1ef2a
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
This patch fixes the unused parameter warnings found in the
arch/arm/core/fault.c and arch/arm/soc/st_stm32/stm32f1/soc_gpio.c
files.
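The usual fix is to mark the parameter as deliberately unused, for
example with Zephyr's ARG_UNUSED() helper; a sketch of the pattern (the
function shown is hypothetical, not the exact code touched here):

    static void handler(struct device *dev)
    {
        ARG_UNUSED(dev);  /* silences -Wunused-parameter */
        /* ... */
    }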
Change-Id: I5b3013c1514cff30f4e98feb31169fb28546c534
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
Initializing the interrupt stack before initializing (turning off) the
watchdog on the FRDM board pushed the initialization of the watchdog too
late, causing it to fire and reset the board. The board would be kept in
a reboot loop.
Move the initialization of the watchdog earlier: it now runs on the
main stack (the same stack the interrupt stack initialization code runs
on) instead of the interrupt stack.
Change-Id: Ic0006f4f4f4090393571d8355a80dc9390c9fbc6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Make the systick feature optional, selectable by the SoC.
Change-Id: I4a405640b84daecc17fc1882743d3cafb78ff861
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
An IRQ would always register as a zero-latency (ZLI) interrupt.
Change-Id: If82a85f472a60512745652aacc7e8b7dfacaa268
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The assembler was passed immediate values that are too large for the
limited Cortex-M0 Thumb instruction set. Load the values into registers
instead of using immediates.
Change-Id: Ib5541c92dea03e0efb1b88ab91eeb408d151a71b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This patch enables REBOOT when RUNTIME_NMI is selected via the
defconfig file. This is required to prevent compilation errors.
Change-Id: I67c18b2860ac34ba8f96e780737b4857a6063ece
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
If CORTEX_M_SYSTICK is not selected, do not reference
_timer_int_handler. The SoC will need to define a custom system
clock implementation.
Change-Id: I655f3abf66953e434fef69ed16db2d9c2dcc486e
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The Zephyr kernel fails to compile when CONFIG_RUNTIME_NMI is enabled
in the defconfig on ARM architectures.
This patch addresses the following issues:
* In nmi.c, _DefaultHandler() references a function
(_ScbSystemReset()) that is not defined in Zephyr. It has now been
replaced with sys_arch_reboot.
* nmi.h is included in assembly files, and because of its "extern"
declarations the compilation ends with an error. Guard them with the
_ASMLANGUAGE directive to prevent the problem (see the sketch below).
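The guard follows the usual pattern for headers shared between C and
assembly; a sketch (the extern declarations shown are illustrative, not
the literal nmi.h contents):

    #ifndef _ASMLANGUAGE
    extern void _NmiInit(void);
    extern void _NmiHandlerSet(void (*handler)(void));
    #endif /* _ASMLANGUAGE */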
Jira: ZEP-1319
Change-Id: I7623ca97523cde04e4c6db40dc332d93ca801928
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
Move _thread_base initialization to _init_thread_base(), remove the
mention of "nano" in the timeout init and move the timeout init to
_init_thread_base(). Initialize all base fields via _init_thread_base()
in the semaphore groups code.
Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use the main stack during very early boot so that we can call memset on
the interrupt stack. Initialize the interrupt stack before it is used
for the rest of the pre-kernel initialization.
Change-Id: I6fcc9a08678afdb82e83465cda1c7a2a8c849c9b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The ARM Cortex-M early boot was using a custom stack at the end of the
SRAM instead of the interrupt stack. This works as long as no static
data that needs a known initial value occupies that stack space. This
has probably not been an issue because the .noinit section is at the
very end of the image, but it was still wrong to use that region of
memory for that initial stack.
To be able to use the interrupt stack during early boot, the stack has
to be released before an interrupt can happen. Since ARM Cortex-M uses
PendSV as a very low priority exception for context switching, if a
device driver installs and enables an interrupt during the PRE_KERNEL
initialization points, an interrupt could take precedence over PendSV
while the initial dummy thread has not yet been context-switched out,
and thus has not yet released the interrupt stack. To address this,
rather than using
_Swap() and thus triggering PendSV, the initialization logic switches to
the main stack and branches to _main() directly instead.
Fixes ZEP-1309
Change-Id: If0b62cc66470b45b601e63826b5b3306e6a25ae9
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Artifact from microkernel, for handling multiple pending tasks on
nanokernel objects.
Change-Id: I3c2959ea2b87f568736384e6534ce8e275f1098f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Prio should be an int, since values are small integers, not a
fixed-size int32_t; this aligns with the prio parameters of the other
APIs.
Stack size should be size_t.
Change-Id: Id29751b86c4ad7a7c2a7ffe446c2a96ae83c77bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
There was a possible race condition when setting the return value of a
thread that is pending, from an ISR.
A kernel function causes a thread to pend, with the following series of
steps:
- disable interrupts
- move current thread to wait_q
- call _Swap
Depending on whether it is running on M3/4 or M0+, _Swap will either issue a svc #0,
or pend PendSV directly. The same problem exists in both cases.
M3/4:
__svc will:
- enable interrupts
- trigger __pendsv
M0+:
_Swap() will enable interrupts.
__pendsv will:
- save register context including PSP into the thread struct
If an interrupt occurs between interrupts being enabled and __pendsv
saving PSP, and the ISR sets the pending thread's return value, the
following happens: the ISR
- sees the thread in a wait_q
- removes it
- makes it ready
- calls _set_thread_return_value
- _set_thread_return_value looks at the thread's saved PSP to poke
the value
In this scenario, PSP hasn't yet been updated by __pendsv, so it is a
stale value from the previous context switch, resulting in an
unpredictable word on the stack being set to the return value.
In the M0+ case, there is no way to fix this issue and still have the
return value delivered directly in the pending thread's exception stack
frame: there will always be a window between the unlocking of
interrupts and PendSV being handled. On M3/4, it could be possible
thanks to the mix of SVC and PendSV, since the exception stack frame is
created in the __svc handler. However, because we want to keep the two
implementations as close as possible, and there has been talk of moving
M3/4 to using PendSV only to save an exception, the approach taken
solves both cases.
The approach taken is similar to the ARC and Nios2 ports, where
there is a field in the thread structure that holds the return value.
_Swap() then loads r0/a1 with that value just before returning.
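A hedged sketch of the idea (field and helper layout are illustrative;
the real code is split between the arch header and the _Swap()
assembly):

    static inline void
    _set_thread_return_value(struct k_thread *thread, unsigned int value)
    {
        /* stored in the thread struct, not poked into the exception
         * stack frame through a possibly stale PSP
         */
        thread->arch.swap_return_value = value;
    }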
Fixes ZEP-1289.
Change-Id: Iee7e06fe3f8ded84aff918fd43408c7f589344d9
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
When a thread dies, at least print the pointer to it, so we can debug
better.
Change-Id: Ief6bbc0c221e2d5271c240a4b73df16413aa5e22
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
This reverts commit
"kernel/arm: add comment about _is_next_thread_current"
and fixes the interrupt locking issue.
The comment would have been right if only reads were done on the ready
queue, but that is not the case. It turns out that the comment was
written ignoring the fact that _is_next_thread_current() updates the
next thread cache when fetching the next thread.
Change-Id: I21c9230f85f4f87a6bbf14fd4a9eb7e19b59f8c5
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Normally, _is_next_thread_current() must be called with interrupts
locked, but the ARM interrupt exit code does not have to do that. Add
explanation why.
Change-Id: Id383b47a055fdd6fbd5afffa52772e92febde98f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
There was a lot of duplication between architectures for the definition
of threads and the "nanokernel" guts. These have been consolidated.
Now, a common file kernel/unified/include/kernel_structs.h holds the
common definitions. Architectures provide two files to complement it:
kernel_arch_data.h and kernel_arch_func.h. The first one contains at
least the struct _thread_arch and struct _kernel_arch data structures,
as well as the struct _callee_saved and struct _caller_saved register
layouts. The second file contains anything that needs what is provided
by the common stuff in kernel_structs.h. Those two files are only meant
to be included in kernel_structs.h in very specific locations.
The thread data structure has been separated into three major parts:
common struct _thread_base and struct k_thread, and arch-specific struct
_thread_arch. The first and third ones are included in the second.
The struct s_NANO data structure has been split into two: common struct
_kernel and arch-specific struct _kernel_arch. The latter is included in
the former.
Offsets files have also changed: nano_offsets.h has been renamed
kernel_offsets.h and is still included by the arch-specific offsets.c.
Also, since the thread and kernel data structures are now made of
sub-structures, offsets have to be added to make up the full offset.
Some of these additions have been consolidated in shorter symbols,
available from kernel/unified/include/offsets_short.h, which includes an
arch-specific offsets_arch_short.h. Most of the code include
offsets_short.h now instead of offsets.h.
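For example, a short symbol simply adds the outer and inner structure
offsets together (the names shown illustrate the scheme and may not
match the generated headers exactly):

    /* offsets_short.h style: full offset = offset of the sub-struct
     * within the enclosing struct + offset of the field within the
     * sub-struct
     */
    #define _thread_offset_to_prio \
        (___thread_t_base_OFFSET + ___thread_base_t_prio_OFFSET)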
Change-Id: I084645cb7e6db8db69aeaaf162963fe157045d5a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The unified kernel is now the only supported kernel, so this
option is unnecessary. Eliminating this option also enables
the removal of some legacy code that is no longer required.
Change-Id: Ibfc339d643c8de16a2ed2009c9b468848b8b4972
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Verify that thread priorities are within bounds when starting a new
thread and when changing the priority of a thread.
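A minimal sketch of the kind of check involved (helper and macro names
are illustrative; cooperative priorities are negative, preemptible ones
non-negative, and lower numbers mean higher priority):

    static inline int _is_prio_valid(int prio)
    {
        return (prio >= K_HIGHEST_APPLICATION_THREAD_PRIO) &&
               (prio <= K_LOWEST_APPLICATION_THREAD_PRIO);
    }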
Change-Id: I007b3b249e4b80235b6439cbee44cad2f31973bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Symbols now use the K_ prefix which is now standard for the
unified kernel. Legacy support for these symbols is retained
to allow existing applications to build successfully.
Change-Id: I3ff12c96f729b535eecc940502892cbaa52526b6
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
This change is required to support unified kernel.
Change-Id: I47bd644239eb3e624c7a5cb456eedad5aca79e8e
Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
The unified kernel expects the default return value from a _Swap() call
to be set to -EAGAIN by the architecture code. Cortex-M3/M4 does this in
the SVC call handler, but it was missing from the Cortex-M0/M0+ code
before pending PendSV.
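A hedged C rendering of the change (the actual fix is in the
Cortex-M0/M0+ _Swap() assembly, and the field name is illustrative):

    /* in _Swap(), before pending PendSV: pre-load the default return
     * value so a timeout-driven wakeup reports -EAGAIN without the
     * timeout code having to set anything
     */
    _current->arch.swap_return_value = -EAGAIN;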
Change-Id: I3316901186ab409f49043eb4f1972c4b0dd9a4a2
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Adds standard prefix to symbolic option that flags a thread
as essential to system operation.
Change-Id: Ia904a81ce343fdd1cd44caaaeae641d822777f9b
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
All M7 features common to M3/M4 are working. New features like Tightly
Coupled Memory (TCM) are not yet supported.
Change-Id: I5f7b292e70843aec415728f24c973bb003014f4b
Jira: ZEP-977
Signed-off-by: Piotr Mienkowski <Piotr.Mienkowski@schmid-telecom.ch>
Existing code wasn't removing a thread from the kernel's list
of active threads if the thread terminated or aborted. (It did
remove it if the delayed starting of a thread was cancelled.)
Change-Id: Icc97917e33765696480d0e9bf31e882ef555d095
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Gets rid of unnecessary THREAD_MONITOR_INIT() macro, to be
consistent with the approach taken by _thread_monitor_exit().
Aligns x86 code with the approach used on other architectures.
Revises the associated comments and removes unnecessary
doxygen tags.
Change-Id: Ied1aebcd476afb82f61862b77264efb8a7dc66c9
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Renames _thread_exit() to _thread_monitor_exit() to make
its purpose clearer. Revises the associated comments and
removes unnecessary doxygen tags.
Change-Id: I010a328d35d2d79d2a29b9d0b6c02097bb655989
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Not disabling SysTick, as it is optional per the spec.
SVC is not used, as there is no priority-based interrupt masking (only
PendSV is used).
Largely based on previous work done by Euan Mutch <euan@abelon.com>.
Jira: ZEP-783
Change-Id: I38e29bfcf0624c1aea5f9fd7a74230faa1b59e8b
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
Build breaks when enabling CONFIG_NEWLIB_LIBC because it has its own
sched.h file.
This is a symptom of a greater issue: the build system passes many
'-I<path>' options to the compiler, which allows including header files
by simply specifying their names (when located somewhere other than
<zephyr>/include/) and can cause clashes when several files in different
locations have the same name, as in this case.
Fixes ZEP-1062.
Change-Id: I81d1d69ee6669a609cd0c420b1b8f870d17dcb67
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The builtins might not be available for ARMv6 (Cortex-M0/M0+) depending
on the toolchain used (they are not available in Zephyr's SDK GCC), so
move the atomic operations selection to the Cortex-M family Kconfig file.
Change-Id: I20a5a0c5fdd2bcff2d304139f5a7e8502fdb1cb3
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
The unified kernel calls a function (_get_next_ready_thread) to fetch
the next thread to run: it thus must save the caller-saved registers
that are expected to hold a value before calling such a function.
The callee-saved registers are available after saving them in the
outgoing thread's stack, so use some of those instead to reduce the
number of registers to save before calling _get_next_ready_thread. Also,
save caller-saved registers in callee-saved registers instead of on the
stack to reduce memory accesses.
This issue did not show up previously, probably because
_get_next_ready_thread did not use the registers that had to be saved,
but an upcoming optimization to that function stomps on them.
Change-Id: I27dcededace846e623c3870d907f0d4c464173bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Not used anymore as the support for dynamic IRQs was removed by
commit 844e212.
Change-Id: I624bbd777b2dbcbf905cedc61e63cef21e8a0bce
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
Add _arch_irq_is_enabled external interrupt API to find out
if an IRQ is enabled.
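On Cortex-M this boils down to reading the NVIC set-enable register for
the IRQ; a minimal sketch using the CMSIS register layout (assumed, not
necessarily the exact implementation):

    int _arch_irq_is_enabled(unsigned int irq)
    {
        return (NVIC->ISER[irq >> 5] & (1UL << (irq & 0x1f))) != 0;
    }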
Change-id: I8ccbaa6d4640c1ab8369d2d35c01a2cfbb02f6cd
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Do not clear pending IRQs when enabling an IRQ on ARM
architectures. Peripheral and software-generated ISRs may need to
be deferred until the IRQ is enabled later, and the pended
IRQ should be retained so the ISR is vectored to when it is
enabled.
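A hedged before/after sketch in CMSIS terms (register accesses assumed,
not the literal Zephyr code):

    void _arch_irq_enable(unsigned int irq)
    {
        /* previously the pending bit (NVIC->ICPR) was also cleared
         * here, dropping any IRQ pended while disabled
         */
        NVIC->ISER[irq >> 5] = 1UL << (irq & 0x1f);  /* enable only */
    }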
Change-id: I808183018d8a2cc58390a1de3b4797b2bb7c6ec9
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
The ARM architecture port is fitted with support for the unified kernel,
namely:
- the interrupt/exception exit code now pends PendSV if the current
thread is not a coop thread and if the scheduler is not locked
- fiber_abort is replaced by k_thread_abort(), which takes a thread ID
as a parameter (i.e. does not only operate on the current thread)
- the _nanokernel.flags cache of _current.flags is not used anymore
(could be a source of bugs) and is not needed in the scheduling algo
- there is no 'task' field in the _nanokernel anymore: PendSV now calls
_get_next_ready_thread instead
- the _nanokernel.fiber field is replaced by a more sophisticated
ready_q, based on the microkernel's priority-bitmap-based one
- thread initialization initializes new fields in the tcs, and does not
initialize obsolete ones
- nano_private includes nano_internal.h from the unified directory
- The FIBER, TASK and PREEMPTIBLE flags do not exist anymore: the thread
priority drives the behaviour
- the tcs uses a dlist for queuing in both ready and wait queues instead
of a custom singly-linked list
- other new fields in the tcs include a schedule-lock count, a
back-pointer to init data (when the task is static) and a pointer to
swap data, needed when a thread pending on _Swap() must be passed more
than just one value (e.g. k_stack_pop() needs an error code and data)
- the 'fiber' and 'task' fields of _nanokernel are replaced with an O(1)
ready queue (taken from the microkernel)
- fiberRtnValueSet() is aliased to _set_thread_return_value since it
also operates on preempt threads now
- _set_thread_return_value_with_data() sets the swap_data field in
addition to a return value from _Swap()
- convenience aliases are created for shorter names:
- _current is defined as _nanokernel.current
- _ready_q is defined as _nanokernel.ready_q
- _Swap() sets the thread's return code to -EAGAIN before swapping out
so that timeouts do not have to set it (this solves hard issues in some
kernel objects).
Change-Id: I36c03c362bc2908dae064ec67e6b8469fc573983
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
They use data that is unavailable to the unified kernel, and are not
really used anymore. For now, only compile them when CONFIG_GDB_INFO=y,
and never enable CONFIG_GDB_INFO when building the unified kernel.
These files should go away when the unified kernel is made the only
kernel type.
Change-Id: I0a2a917dd453ecaae729125008756e0f676df16d
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Zephyr uses the last priority level for the PendSV exception, but
sharing that priority can still be useful on systems with a reduced set
of priorities, like Cortex-M0/M0+.
Change-Id: I767527419dcd8f67c2b406756b9208afd3b96de0
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>