They are not part of the API, so rename from K_<state> to
_THREAD_<state>.
Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Replace _scs_relocate_vector_table with direct CMSIS register access and
use of the __ISB/__DSB routines. We also clean up the code a little bit to
have just one implementation of relocate_vector_table() on ARMv7-M.
Jira: ZEP-1568
Change-Id: I088c30e680a7ba198c1527a5822114b70f10c510
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
CMSIS provides a complete implementation for reboot, so we can utilize it
directly and reduce Zephyr-specific code.
Jira: ZEP-1568
Change-Id: Ia9d1abd5c1e02e724423b94867ea452bc806ef79
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
As a first step towards removing the custom ARM Cortex-M Core code
present in Zephyr in favor of using CMSIS, this change replaces
the use of the custom core code with CMSIS macros in
enable_floating_point().
Jira: ZEP-1568
Change-id: I544a712bf169358c826a3b2acd032c6b30b2801b
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Support using CMSIS defines and functions: we either pull the expected
defines/enums from the SoC HAL layers via <soc.h>, or we provide a
default set based on whether __NVIC_PRIO_BITS is defined.
We provide defaults (sketched below) for:
IRQn_Type enum
*_REV define (set to 0)
__MPU_PRESENT define (set to 0 - no MPU)
__NVIC_PRIO_BITS define (set to CONFIG_NUM_IRQ_PRIO_BITS)
__Vendor_SysTickConfig (set to 0 - standard SysTick)
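For illustration only, here is a minimal sketch of what such fallback
definitions could look like when <soc.h> does not provide them (the guard,
the enum members and the *_REV name are assumptions, not the exact patch):
#if !defined(__NVIC_PRIO_BITS)
typedef enum {
	/* Core exceptions every Cortex-M has; device IRQs normally come
	 * from the vendor header. */
	NonMaskableInt_IRQn = -14,
	HardFault_IRQn      = -13,
	SVCall_IRQn         = -5,
	PendSV_IRQn         = -2,
	SysTick_IRQn        = -1,
} IRQn_Type;
#define __CM3_REV              0 /* revision unknown, assume 0 */
#define __MPU_PRESENT          0 /* no MPU */
#define __NVIC_PRIO_BITS       CONFIG_NUM_IRQ_PRIO_BITS
#define __Vendor_SysTickConfig 0 /* standard SysTick */
#endif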
Jira: ZEP-1568
Change-Id: Ibc203de79f4697b14849b69c0e8c5c43677b5c6e
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
In preparation for removing the scb/scs layers and using CMSIS directly,
let's remove all the _Scb* and _Scs* functions that are not currently
used.
Jira: ZEP-1568
Change-Id: If4641fb9a6de616b4b8793d4678aaaed48e794bc
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
On other targets, CONFIG_TEXT_SECTION_OFFSET allows the entire image to
be moved in memory to allow space for some type of header. The Mynewt
project bootloader prepends a small header, and this config needs to be
supported for this to work.
The specific alignment requirements of the vector table are chip
specific, and generally will be a power of two larger than the size of
the vector table.
Change-Id: I631a42ff64fb8ab86bd177659f2eac5208527653
Signed-off-by: David Brown <david.brown@linaro.org>
It is called before early SoC initialization, so remove the duplicated
code from other boards and just set it by default when using XIP.
This can later be used when adding bootloader support, as an
additional option could be created to move the VTOR offset to a
different address.
Change-Id: Ia1f5d9a066de61858ee287215cefdd58596b6b1c
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.
Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.
Manually updated subsys/logging/sys_log.c, which had a malformed
header in the original file. Also cleaned up several cases that already
had an SPDX tag, where we either got a duplicate or missed the update.
Jira: ZEP-1457
Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
The Cortex-M7 is an implementation of ARMv7-M. Adjust the Kconfig
support for Cortex-M7 to reflect this and drop the unnecessary
explicit conditional compilation.
Change-Id: I6ec20e69c8c83c5a80b1f714506f7f9e295b15d5
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
Precursor patches have arranged that conditional compilation hanging
on CONFIG_CPU_CORTEX_M3_M4 provides support for ARMv7-M; rename the
config variable to reflect this.
Change-Id: Ifa56e3c1c04505d061b2af3aec9d8b9e55b5853d
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
Precursor patches have arranged all conditional compilation hanging on
CONFIG_CPU_CORTEX_M0_M0PLUS such that it actually represents support
for ARMv6-M; rename the config variable to reflect this.
Change-Id: I553fcf3e606b350a9e823df31bac96636be1504f
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
The ARM code base provides for three mutually exclusive ARM
architecture related conditional compilation choices. M0_M0PLUS,
M3_M4 and M7. Throughout the code base we have conditional
compilation gated around these three choices. Adjust the form of this
conditional compilation to adopt a uniform structure. The uniform
structure always selects code based on the definition of an
appropriate config option rather than the absence of a definition.
Removing the extensive use of #else ensures that when support for
other ARM architecture versions is added we get hard compilation
failures rather than attempting to compile inappropriate code for the
added architecture with unexpected runtime consequences.
Adopting this uniform structure makes it straightforward to replace
the ad hoc CPU_CORTEX_M3_M4 and CPU_CORTEX_M0_M0PLUS configuration
variables with ones that directly represent the actual underlying ARM
architectures we provide support for. This change also paves the way
for folding ad hoc conditional compilation related to CPU_CORTEX_M7
directly into support for ARMv7-M.
This change is mechanical in nature involving two transforms:
1)
#if !defined(CONFIG_CPU_CORTEX_M0_M0PLUS)
...
is transformed to:
#if defined(CONFIG_CPU_CORTEX_M0_M0PLUS)
#elif defined(CONFIG_CPU_CORTEX_M3_M4) || defined(CONFIG_CPU_CORTEX_M7)
...
2)
#if defined(CONFIG_CPU_CORTEX_M0_M0PLUS)
...
#else
...
#endif
is transformed to:
#if defined(CONFIG_CPU_CORTEX_M0_M0PLUS)
...
#elif defined(CONFIG_CPU_CORTEX_M3_M4) || defined(CONFIG_CPU_CORTEX_M7)
...
#else
#error Unknown ARM architecture
#endif
Change-Id: I7229029b174da3a8b3c6fb2eec63d776f1d11e24
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
Adjust the layout of various ARM assembler files to conform to the norm
used in the majority of files.
Change-Id: Ia5007628be5ad36ef587946861c6ea90a8062585
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
These two fields in the thread structure control the preemptibility of a
thread.
sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.
By putting them end-to-end, this means that a thread is non-preemptible
if the bundled value is greater than or equal to 0x0080. This is the
only thing the interrupt exit code has to check to decide to try a
reschedule or not.
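A hedged sketch of the idea in C, assuming a little-endian layout and
using illustrative names rather than the actual kernel definitions:
#include <stdint.h>
struct thread_base_sketch {
	union {
		struct {
			volatile int8_t prio;          /* coop if negative */
			volatile uint8_t sched_locked; /* 0x01..0xff when locked */
		};
		uint16_t preempt; /* the two fields read as one value */
	};
};
/* Non-preemptible if the thread is coop (prio byte is >= 0x80 unsigned)
 * or the scheduler is locked (sched_locked byte is non-zero), which
 * collapses into a single comparison on the bundled value. */
static inline int thread_is_preemptible(const struct thread_base_sketch *b)
{
	return b->preempt < 0x0080;
}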
Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Some thread fields were 32 bits wide even though they do not come close
to using that full range of values. They are instead changed to 8-bit fields.
- prio can fit in one byte, limiting the priority range to -128 to 127
- recursive scheduler locking can be limited to 255; a rollover most
probably results from a logic error
- flags are split into execution flags and thread states; 8 bits is
currently enough for each of them, with at worst two states and four
flags to spare on x86 (other archs have six flags to spare)
Doing this saves 8 bytes per stack. It also sets up an incoming
enhancement when checking if the current thread is preemptible on
interrupt exit.
Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This will allow for an enhancement when checking if the thread is
preemptible when exiting an interrupt.
Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The Cortex-M0(+) and in general processors that support only the ARMv6-M
instruction set have a reduced set of registers and fields compared to
the ARMv7-M compliant processors.
This change goes through all core registers and disables or removes
everything that is not part of the ARMv6-M architecture when compiling
for Cortex-M0.
Jira: ZEP-1497
Change-id: I13e2637bb730e69d02f2a5ee687038dc69ad28a8
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Replace includes of <nanokernel.h> with <kernel.h> everywhere and also fix
any remaining mentions of nanokernel.
Keep the legacy samples/tests as is.
Change-Id: Iac48447bd191e83f21a719c69dc26233216d08dc
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Obsolete, replaced by _set_thread_return_value().
Change-Id: I23e9cfc07e43542f0965817edc3552d456fd2ef3
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.
Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove some old cflags referencing directories that do not exist
anymore.
Also replace references to legacy APIs in doxygen documentation of
various functions.
Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
That module is not used anymore: it was introduced pre-Zephyr to add
some kind of awareness when debugging ARM Cortex-M3 code with GDB but
was never really used by anyone. It has bitrotted, and with the recent
move of the tTCS and tNANO data structures to common _kernel and
k_thread, it does not even compile anymore.
Jira: ZEP-1284, ZEP-951
Change-Id: Ic9afed00f4229324fe5d2aa97dc6f1c935953244
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
And also remove now obsolete ARCH_HAS_TASK_ABORT.
ARC does not need the options either.
Change-Id: Ie52d63178a367ce12b911dacfe2d389f4f75ed2d
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
- does not pull in printk(), for potential footprint gain
- does not pull in k_thread_abort(), for single-threaded systems
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Ibc6a198b81a6cd73117d1e85aa05b92a4501a34d
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.
Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, and only the old APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.
The kernel internals now make use of these APIs instead of the legacy
ones.
Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might not run because the flow could be interrupted and
another thread could take its place. This was the more conservative
approach that ensured that moving a thread to the cache would never be
wasted.
However, this caused two problems:
1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.
2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache and then it had to find out what thread to run
the long way.
To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even when the current cached
thread is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.
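Sketched in C for illustration (a minimal model of the structures named
above, not the actual kernel code):
#include <stddef.h>
struct k_thread;
struct ready_q_sketch {
	struct k_thread *cache; /* best ready thread, kept eagerly filled */
};
static struct {
	struct ready_q_sketch ready_q;
} kernel_sketch;
/* On interrupt exit, picking the next thread is now a single load. */
static inline struct k_thread *next_thread_to_run(void)
{
	return kernel_sketch.ready_q.cache;
}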
On the ARM FRDM K64F board, this improvement is seen:
Before:
1- Measure time to switch from ISR back to interrupted task
switching time is 215 tcs = 1791 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 315 tcs = 2625 nsec
After:
1- Measure time to switch from ISR back to interrupted task
switching time is 130 tcs = 1083 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 225 tcs = 1875 nsec
These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.
Fixes ZEP-1401.
Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Cortex-M0/M0+ do not have faults other than the hard fault at priority
-1, so they do not need to reserve a priority to allow exceptions to
trigger during handling of ISRs.
Change-Id: I479e439f7bcac70b4b2b787bcd744a4c65437e80
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This allows using it in _EXC_PRIO() instead of hardcoding 2 and 3.
Change-Id: I3549be54602643e06823ba63beb6a6992f39f776
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use it to flag which CPUs can do zero latency interrupts, which depend
on being able to lock up to a specific interrupt priority.
Change-Id: I09f71366ea1d05486e38c513a09abc270884879f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Add a "memory" clobber to inline asm SVC call to ensure the compiler
does not reorder the instruction relative to other memory accesses.
Issue found by inspect the source code. There is no evidence to
suggest that this bug will manifest for any current ARM target using a
current compiler.
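For illustration, the kind of change this describes, as a hedged sketch
(the SVC number and surrounding code are assumptions):
/* Without the "memory" clobber the compiler may legally move loads and
 * stores across the SVC that performs the context switch. */
static inline void svc_context_switch_sketch(void)
{
	__asm__ volatile ("svc #0" ::: "memory");
}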
Change-Id: I32b1e5ede02a6dbea02bb8f98729fff1cca1ef2a
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
This patch fixes the unused parameter warning found at the
arch/arm/core/fault.c and arch/arm/soc/st_stm32/stm32f1/soc_gpio.c
files.
Change-Id: I5b3013c1514cff30f4e98feb31169fb28546c534
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
Initializing the interrupt stack before initializing (turning off) the
watchdog on the FRDM board pushed the initialization of the watchdog too
late, causing it to fire and reset the board. The board would be kept in
a reboot loop.
Move the initialization of the watchdog earlier: it now runs on the main
stack instead of the interrupt stack, i.e. the same stack the interrupt
stack initialization code runs on.
Change-Id: Ic0006f4f4f4090393571d8355a80dc9390c9fbc6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Make the SysTick feature optional, so that it can be selected by the SoC.
Change-Id: I4a405640b84daecc17fc1882743d3cafb78ff861
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
An IRQ would always register as a zero-latency interrupt (ZIL).
Change-Id: If82a85f472a60512745652aacc7e8b7dfacaa268
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The assembler was passed immediate values that are too large for the
limited Cortex-M0 Thumb instruction set. Load values into registers
instead of using immediate values.
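An illustrative sketch of the pattern in C inline assembly (the constant
is made up; the real fix is in the Cortex-M0 assembly files):
#include <stdint.h>
static inline uint32_t load_wide_constant_sketch(void)
{
	uint32_t val;
	/* ARMv6-M "movs rX, #imm" only accepts 8-bit immediates, so a wide
	 * constant has to come from the literal pool via "ldr rX, =imm". */
	__asm__ volatile ("ldr %0, =0xFE00" : "=r" (val));
	return val;
}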
Change-Id: Ib5541c92dea03e0efb1b88ab91eeb408d151a71b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This patch enables REBOOT when RUNTIME_NMI is selected via defconfig
file. This action is required to prevent compilation errors.
Change-Id: I67c18b2860ac34ba8f96e780737b4857a6063ece
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
If CORTEX_M_SYSTICK is not selected, do not reference
_timer_int_handler. The SoC will need to define a custom system
clock implementation.
Change-Id: I655f3abf66953e434fef69ed16db2d9c2dcc486e
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The Zephyr kernel is unable to compile when CONFIG_RUNTIME_NMI is enabled
in defconfig on ARM architectures.
This patch addresses the following issues:
* In nmi.c, _DefaultHandler() references a function (_ScbSystemReset())
that is not defined in Zephyr. It has now been replaced with
sys_arch_reboot().
* nmi.h is included in ASM files, and due to the usage of "extern" the
compilation ends with an error. The _ASMLANGUAGE directive has been
added to prevent the problem.
Jira: ZEP-1319
Change-Id: I7623ca97523cde04e4c6db40dc332d93ca801928
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
Move _thread_base initialization to _init_thread_base(), remove mention
of "nano" in timeouts init and move timeout init to _init_thread_base().
Initialize all base fields via _init_thread_base() in the semaphore groups
code.
Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use the main stack during very early boot so that we can call memset on
the interrupt stack. Initialize the interrupt stack before it is used
for the rest of the pre-kernel initialization.
Change-Id: I6fcc9a08678afdb82e83465cda1c7a2a8c849c9b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The ARM Cortex-M early boot was using a custom stack at the end of the
SRAM instead of the interrupt stack. This works as long as no static
data that needs a known initial value occupies that stack space. This
has probably not been an issue because the .noinit section is at the
very end of the image, but it was still wrong to use that region of
memory for that initial stack.
To be able to use the interrupt stack during early boot, the stack has
to be released before an interrupt can happen. Since ARM Cortex-M uses
PendSV as a very low priority exception for context switching, if a
device driver installs and enables an interrupt during the PRE_KERNEL
initialization points, an interrupt could take precedence over PendSV
while the initial dummy thread has not yet been context switched out and
thus released the interrupt stack. To address this, rather than using
_Swap() and thus triggering PendSV, the initialization logic switches to
the main stack and branches to _main() directly instead.
Fixes ZEP-1309
Change-Id: If0b62cc66470b45b601e63826b5b3306e6a25ae9
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Artifact from microkernel, for handling multiple pending tasks on
nanokernel objects.
Change-Id: I3c2959ea2b87f568736384e6534ce8e275f1098f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Prio should be an int, since values are small integers, not a fixed-size
int32_t. It aligns with the prio parameters of the other APIs.
Stack size should be size_t.
Change-Id: Id29751b86c4ad7a7c2a7ffe446c2a96ae83c77bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
There was a possible race condition when setting the return value of a
thread that is pending, from an ISR.
A kernel function causes a thread to pend, with the following series of
steps:
- disable interrupts
- move current thread to wait_q
- call _Swap
Depending on whether it is running on M3/4 or M0+, _Swap will either issue
an svc #0 or pend PendSV directly. The same problem exists in both cases.
M3/4:
__svc will:
- enable interrupts
- trigger __pendsv
M0+:
_Swap() will enable interrupts.
__pendsv will:
- save register context including PSP into the thread struct
If an interrupt occurs between interrupts being enabled and __pendsv
saving PSP, and the ISR sets the pending thread's return value, this
is what happens: the ISR
- sees the thread in a wait_q
- removes it
- makes it ready
- calls _set_thread_return_value
- _set_thread_return_value looks at the thread's saved PSP to poke
the value
In this scenario, PSP hasn't yet been updated by __pendsv so it's a
stale value from the previous context switch, resulting in an unpredictable
word on the stack getting set to the return value.
There is no way to fix this issue and still have the return value being
delivered directly in the pending thread's exception stack frame, in the
M0+ case. There will always be a window between the unlocking of
interrupts and PendSV being handled. On M3/4, it could be possible with
the mix of SVC and PendSV, since the exception stack frame is created in
the __svc handler. However, because we want to keep the two
implementations as close as possible, and there were talks of moving
M3/4 to using PendSV only, to save an exception, the approach taken
solves both cases.
The approach taken is similar to the ARC and Nios2 ports, where
there is a field in the thread structure that holds the return value.
_Swap() then loads r0/a1 with that value just before returning.
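A minimal sketch of the mechanism, with field and helper names assumed
from the description above rather than taken from the actual patch:
struct thread_arch_sketch {
	unsigned int swap_return_value; /* value _Swap() returns in r0/a1 */
};
/* Safe from an ISR: no dependency on the pending thread's saved PSP. */
static inline void set_return_value_sketch(struct thread_arch_sketch *arch,
					   unsigned int value)
{
	arch->swap_return_value = value;
}
/* The context switch code then loads r0/a1 from this field just before
 * returning into the thread, so the value is always delivered reliably. */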
Fixes ZEP-1289.
Change-Id: Iee7e06fe3f8ded84aff918fd43408c7f589344d9
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
When a thread dies, at least print the pointer to it, so we can debug
better.
Change-Id: Ief6bbc0c221e2d5271c240a4b73df16413aa5e22
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
This reverts commit
"kernel/arm: add comment about _is_next_thread_current"
and fixes the interrupt locking issue.
The comment would have been right if only reads were done on the ready
queue, but that is not the case. It turns out that the comment was written
ignoring the fact that _is_next_thread_current() updates the next thread
cache when fetching the next thread.
Change-Id: I21c9230f85f4f87a6bbf14fd4a9eb7e19b59f8c5
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Normally, _is_next_thread_current() must be called with interrupts
locked, but the ARM interrupt exit code does not have to do that. Add
explanation why.
Change-Id: Id383b47a055fdd6fbd5afffa52772e92febde98f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
There was a lot of duplication between architectures for the definition
of threads and the "nanokernel" guts. These have been consolidated.
Now, a common file kernel/unified/include/kernel_structs.h holds the
common definitions. Architectures provide two files to complement it:
kernel_arch_data.h and kernel_arch_func.h. The first one contains at
least the struct _thread_arch and struct _kernel_arch data structures,
as well as the struct _callee_saved and struct _caller_saved register
layouts. The second file contains anything that needs what is provided
by the common stuff in kernel_structs.h. Those two files are only meant
to be included in kernel_structs.h in very specific locations.
The thread data structure has been separated into three major parts:
common struct _thread_base and struct k_thread, and arch-specific struct
_thread_arch. The first and third ones are included in the second.
The struct s_NANO data structure has been split into two: common struct
_kernel and arch-specific struct _kernel_arch. The latter is included in
the former.
Offsets files have also changed: nano_offsets.h has been renamed
kernel_offsets.h and is still included by the arch-specific offsets.c.
Also, since the thread and kernel data structures are now made of
sub-structures, offsets have to be added to make up the full offset.
Some of these additions have been consolidated in shorter symbols,
available from kernel/unified/include/offsets_short.h, which includes an
arch-specific offsets_arch_short.h. Most of the code include
offsets_short.h now instead of offsets.h.
Change-Id: I084645cb7e6db8db69aeaaf162963fe157045d5a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The unified kernel is now the only supported kernel, so this
option is unnecessary. Eliminating this option also enables
the removal of some legacy code that is no longer required.
Change-Id: Ibfc339d643c8de16a2ed2009c9b468848b8b4972
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Verify that thread priorities are within bounds when starting a new
thread and when changing the priority of a thread.
Change-Id: I007b3b249e4b80235b6439cbee44cad2f31973bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Symbols now use the K_ prefix which is now standard for the
unified kernel. Legacy support for these symbols is retained
to allow existing applications to build successfully.
Change-Id: I3ff12c96f729b535eecc940502892cbaa52526b6
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
This change is required to support unified kernel.
Change-Id: I47bd644239eb3e624c7a5cb456eedad5aca79e8e
Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
The unified kernel expects the default return value from a _Swap() call
to be set to -EAGAIN by the architecture code. Cortex-M3/M4 does this in
the SVC call handler, but it was missing from the Cortex-M0/M0+ path
before pending PendSV.
Change-Id: I3316901186ab409f49043eb4f1972c4b0dd9a4a2
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Adds standard prefix to symbolic option that flags a thread
as essential to system operation.
Change-Id: Ia904a81ce343fdd1cd44caaaeae641d822777f9b
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
All M7 features common to M3/M4 are working. New features like Tightly
Coupled Memory (TCM) are not yet supported.
Change-Id: I5f7b292e70843aec415728f24c973bb003014f4b
Jira: ZEP-977
Signed-off-by: Piotr Mienkowski <Piotr.Mienkowski@schmid-telecom.ch>
Existing code wasn't removing a thread from the kernel's list
of active threads if the thread terminated or aborted. (It did
remove it if the delayed starting of a thread was cancelled.)
Change-Id: Icc97917e33765696480d0e9bf31e882ef555d095
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Gets rid of unnecessary THREAD_MONITOR_INIT() macro, to be
consistent with the approach taken by _thread_monitor_exit().
Aligns x86 code with the approach used on other architectures.
Revises the associated comments and removes unnecessary
doxygen tags.
Change-Id: Ied1aebcd476afb82f61862b77264efb8a7dc66c9
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Renames _thread_exit() to _thread_monitoring_exit() to make
its purpose clearer. Revises the associated comments and
removes unnecessary doxygen tags.
Change-Id: I010a328d35d2d79d2a29b9d0b6c02097bb655989
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
SysTick is not disabled, as it is optional per the spec.
SVC is not used, as there is no priority-based interrupt masking (only
PendSV is used).
Largely based on previous work done by Euan Mutch <euan@abelon.com>.
Jira: ZEP-783
Change-Id: I38e29bfcf0624c1aea5f9fd7a74230faa1b59e8b
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
Build breaks when enabling CONFIG_NEWLIB_LIBC because it has its own
sched.h file.
This is a bad symptom of a greater issue: the build system passes many
'-I<path>' options to the compiler, and that allows including header
files by simply specifying their names (when located somewhere else than
<zephyr>/include/) and can cause clashes when several files in different
locations have the same name, like in this case.
Fixes ZEP-1062.
Change-Id: I81d1d69ee6669a609cd0c420b1b8f870d17dcb67
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The builtins might not be available for ARMv6 (Cortex-M0/M0+) depending on
the toolchain used (they are not available in Zephyr's SDK GCC), so move
the atomic operations selection to the Cortex-M family Kconfig file.
Change-Id: I20a5a0c5fdd2bcff2d304139f5a7e8502fdb1cb3
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
The unified kernel calls a function (_get_next_ready_thread) to fetch
the next thread to run: it must thus save any caller-saved registers that
are expected to hold a value before calling such a function.
The callee-saved registers are available after saving them in the
outgoing thread's stack, so use some of those instead to reduce the
number of registers to save before calling _get_next_ready_thread. Also,
save caller-saved registers in callee-saved registers instead of on the
stack to reduce memory accesses.
This issue did not show up previously, probably because
_get_next_ready_thread did not use the registers that had to be saved,
but an upcoming optimization to that function stomps on them.
Change-Id: I27dcededace846e623c3870d907f0d4c464173bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Not used anymore as the support for dynamic IRQs was removed by
commit 844e212.
Change-Id: I624bbd777b2dbcbf905cedc61e63cef21e8a0bce
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
Add _arch_irq_is_enabled external interrupt API to find out
if an IRQ is enabled.
Change-id: I8ccbaa6d4640c1ab8369d2d35c01a2cfbb02f6cd
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Do not clear pending IRQ when enabling an IRQ on ARM
architectures. Peripheral and S/W ISRs may need to be
deferred until they are enabled later, and the pended
IRQ should be retained so that the ISR is vectored to when
it is enabled.
Change-id: I808183018d8a2cc58390a1de3b4797b2bb7c6ec9
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
The ARM architecture port is fitted with support for the unified kernel,
namely:
- the interrupt/exception exit code now pends PendSV if the current
thread is not a coop thread and if the scheduler is not locked
- fiber_abort is replaced by k_thread_abort(), which takes a thread ID
as a parameter (i.e. does not only operate on the current thread)
- the _nanokernel.flags cache of _current.flags is not used anymore
(could be a source of bugs) and is not needed in the scheduling algo
- there is no 'task' field in the _nanokernel anymore: PendSV now calls
_get_next_ready_thread instead
- the _nanokernel.fiber field is replaced by a more sophisticated
ready_q, based on the microkernel's priority-bitmap-based one
- thread initialization initializes new fields in the tcs, and does not
initialize obsolete ones
- nano_private includes nano_internal.h from the unified directory
- The FIBER, TASK and PREEMPTIBLE flags do not exist anymore: the thread
priority drives the behaviour
- the tcs uses a dlist for queuing in both ready and wait queues instead
of a custom singly-linked list
- other new fields in the tcs include a schedule-lock count, a
back-pointer to init data (when the task is static) and a pointer to
swap data, needed when a thread pending on _Swap() must be passed more
than just one value (e.g. k_stack_pop() needs an error code and data)
- the 'fiber' and 'task' fields of _nanokernel are replaced with an O(1)
ready queue (taken from the microkernel)
- fiberRtnValueSet() is aliased to _set_thread_return_value since it
also operates on preempt threads now
- _set_thread_return_value_with_data() sets the swap_data field in
addition to a return value from _Swap()
- convenience aliases are created for shorter names:
- _current is defined as _nanokernel.current
- _ready_q is defined as _nanokernel.ready_q
- _Swap() sets the thread's return code to -EAGAIN before swapping out,
so that timeouts do not have to set it (solves hard issues in some
kernel objects).
Change-Id: I36c03c362bc2908dae064ec67e6b8469fc573983
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
They use data that is unavailable to the unified kernel, and are not
really used anymore. For now, only compile them when CONFIG_GDB_INFO=y,
and never enable CONFIG_GDB_INFO when building the unified kernel.
These files should go away when the unified kernel is made the only
kernel type.
Change-Id: I0a2a917dd453ecaae729125008756e0f676df16d
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Zephyr uses the last priority level for the PendSV exception, but
sharing that priority can still be useful on systems with a reduced set of
priorities, like Cortex-M0/M0+.
Change-Id: I767527419dcd8f67c2b406756b9208afd3b96de0
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
The toolchain headers included an abstraction for defining symbol
names in assembly context in the situation where we're using a
DOS-style assembler that automatically prepends an underscore to
symbol names.
We aren't. Zephyr is an ELF platform. None of our toolchains do
this. Nothing sets the "TOOL_PREPENDS_UNDERSCORE" macro from within
the project, and it surely isn't an industry standard. Yank it out.
Now we can write assembler labels in natural syntax, and a few other
things fall out to simplify too.
(NOTE: these headers contain assembly code and will fail checkpatch.
That is an expected false positive.)
Change-Id: Ic89e74422b52fe50b3b7306a0347d7a560259581
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Completing the terminology change started with change 4008
by updating the Kconfig files processed to produce the
online documentation, plus header files processed by
doxygen. References to 'platform' are changed to 'board'.
Change-Id: Id0ed3dc1439a0ea0a4bd19d4904889cf79bec33e
Jira: ZEP-534
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Add a Kconfig option to select between the Hard and Soft Float ABIs. We
also default to the Hard Float ABI as this is what older SDK versions
supported.
JIRA: ZEP-555
Change-Id: I2180c98cd7556ab49f5ca9b46b31add2c11bd07b
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Arches now select whether they want to use the GCC built-ins,
their own assembly implementation, or the generic C code.
At the moment, the SDK compilers only support builtins for ARM
and X86. ZEP-557 opened to investigate further.
Change-Id: I53e411b4967d87f737338379bd482bd653f19422
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Used by ARC, ARM, Nios II. x86 has alternate code done in assembly.
Linker scripts had some alarming comments about data/BSS overlap,
but the beginning of BSS is aligned so this can't happen even if
the end of data isn't.
The common code doesn't use fake pointer values for the number of
words in these sections, so don't compute or export them.
Change-Id: I4291c2a6d0222d0a3e95c140deae7539ebab3cc3
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Adds Kconfig options CPU_HAS_FPU, FLOAT and FP_SHARING for the ARM
architecture.
NOTE: All SoCs in the MK64F12 family have an FPU, so that makes it a
convenient location to enable the hidden CPU_HAS_FPU option.
Change-Id: I71771d24f20f52079314bb8db9bf8a0aa827ab41
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Enable floating point in the Cortex-M4 processor.
Change-Id: Ibadae12b9197d6486fd5c6a3d4e177fa9e1c71bc
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Saves and loads the non-volatile FP registers (s16-s31) when switching
between threads.
Change-Id: Ib3190452d9a70d722032ac83176eb4fbb92aca3d
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Adds the preemptive floating point registers to the ARM's thread
control structure.
Change-Id: I65fbee6303091ce0658bbc442c4707d306b68e92
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Updates the exception stack frame structure to include floating point
registers.
Change-Id: I0fef784cf4d91dda245180abd75bfd9221825fba
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
The ReST parser dislikes enumerations with no empty lines in
between. Whatever.
Change-Id: I480c08fe5b69f0d0f3ebfacdc64fc9e3ec94da21
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
Fibers initialize this back pointer to NULL as they are (by definition)
not microkernel tasks. Microkernel tasks initialize it to their
corresponding 'ktask_t'.
However for nanokernel systems, the back pointer is always NULL. This
is because there is only one task in a nanokernel system (the background
task) and it can not pend on a nanokernel object--it must poll.
Change-Id: I9840fecc44224bef63d09d587d703720cf33ad57
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
We really should have more faith in the compiler: it generates
code to implement this exactly like the arch-specific assembly
versions, and on ARM it is actually 4 bytes shorter.
FUNC_NO_FP used to disable the usual C preamble to update the
frame/stack pointers, which is how the sizes are still the same
or less. It's debatable how useful the occasional use of
FUNC_NO_FP is in practice since it hinders debugging and in a
production build frame pointers should be globally disabled, but
we can address that later.
Change-Id: I6c4b64ab3e3a9b6f91d52fa8c92e6e79a986fc77
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Of the 3 related functions:
_thread_essential_set()
_thread_essential_clear()
_is_thread_essential()
The first two are parameter-less and always operate on
"_nanokernel.current". The last one takes a 'thread' parameter but will
operate on _nanokernel.current if the parameter is NULL. All calls to
_is_thread_essential() pass NULL!
This change makes the 3 functions consistent by removing the parameter
to the 3rd function. This should also be marginally more efficient,
though consistency was the motivation. This change corrects the doc
preamble to all 3 functions.
(These functions would probably be better as inlines. Also, the choice
of when to use wrappers seems a bit arbitrary. E.g. there's nothing
for setting/testing the "FIBER" flag.)
Change-Id: Ie3589f8a28b227c6d7a3a31b664d3b3e6e9c6d17
Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>
Use SOC_FAMILY and SOC_SERIES to identify SoC families and series
and to point to the correct linker files and files related to a
specific SoC.
Change-Id: I8b1a7339f37d6ea4161d03073d36557a40c0b4a6
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
GCC enables unaligned memory access for ARMv6 and above.
Once unaligned memory access is allowed, there's no need
for the unaligned memory access trap.
The change prevents the situation when the compiler
generates code that uses unaligned memory access, but
the CPU generates an exception when this code runs.
Change-Id: Id33f2264c631772e5c561e76fb579d8b7bc26e1e
Signed-off-by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
Changed names of Kconfig flags, variables, functions, files and
return codes consistent with names used in the RFC. Updated
relevant comments to match the changes.
Origin: Original
Change-Id: Ie7941032d7ad7af61fc02928f74538745e7966e8
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Use of `ldr` triggers an unaligned memory access when loading the SVC
instruction into r0. This is caused by the fact that SVC is a 16-bit
instruction, hence with a 2-byte offset we are performing a non-word-
aligned access. Prevent this by using `ldrh` to load a halfword rather
than a full word.
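For illustration, the equivalent of what the assembly does after this
fix, written as a hedged C sketch (names are assumptions):
#include <stdint.h>
/* The SVC encoding is a 16-bit Thumb instruction located two bytes before
 * the stacked return address, so it must be read with a halfword access. */
static inline uint8_t svc_number_sketch(uintptr_t stacked_pc)
{
	uint16_t insn = *(const uint16_t *)(stacked_pc - 2);
	return insn & 0xff; /* "svc #imm8": the immediate is the low byte */
}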
Change-Id: Ieae60c2ce86c6cfe15c89627d3a450797ce7e714
Signed-off-by: Maciej Borzecki <maciek.borzecki@gmail.com>
Make Kconfig look the same for all architectures.
JIRA: ZEP-107
Change-Id: Ia8100194ec333fc07a1dff4f6f90364ce8bef4d3
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The nmi_on_reset.S functions are used by all ARM platforms. It
makes no sense to repeat the same code for every platform, so move
the code from each SoC implementation to arch/arm/core.
Apply the same treatment to the NMI_INIT() macro, moving it from the
per-SoC implementations to include/arch/arm/cortex_m/nmi.h.
Change-Id: I574d8880a44046cc7b9e1b635e80d6e83657b8c1
Signed-off-by: Dan Kalowsky <daniel.kalowsky@intel.com>
The thread monitor allows iterating over the thread context
structures of each existing thread (fiber/task) in the system.
Thread context structures do not expose thread entry information
directly, although all the information can be scavenged from the
memory stacks. Besides, accessing the information depends on the stack
implementation of each architecture.
By extending the tcs we allow direct access to the thread
entry point and its parameters, but only when the thread monitor is
enabled.
It also allows a task to access its kernel task structure
through the first parameter of the thread.
This allows a debugger application to access the information directly
from the list of thread context structures.
Change-Id: I0a435942b80eddffdf405016ac4056eb7aa1239c
Signed-off-by: Juan Manuel Cruz <juan.m.cruz.alcaraz@intel.com>