Introducing CMake is an important step in a larger effort to make
Zephyr easy to use for application developers working on different
platforms with different development environment needs.
In short, this change retains Kconfig as-is and replaces all
Makefiles with CMakeLists.txt. The DSL-like Make language that KBuild
offers is replaced by a set of CMake extensions. These extensions
either provide simple one-to-one translations of KBuild features or
introduce new concepts that replace KBuild concepts.
This is a breaking change for existing test infrastructure and build
scripts that are maintained out-of-tree. But for the firmware itself,
no porting should be necessary.
For users who just want to continue their work with minimal
disruption, the following should suffice:
1. Install CMake 3.8.2+.
2. Port any out-of-tree Makefiles to CMake (see the sketch after the
commands below).
3. Learn the absolute minimum about the new command-line interface:
$ cd samples/hello_world
$ mkdir build && cd build
$ cmake -DBOARD=nrf52_pca10040 ..
$ make
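For step 2, a minimal application CMakeLists.txt looks roughly like
the sketch below (based on the hello_world sample; the boilerplate
path and source file name may differ in your tree):

    cmake_minimum_required(VERSION 3.8.2)

    # Pull in the Zephyr build system.
    include($ENV{ZEPHYR_BASE}/cmake/app/boilerplate.cmake NO_POLICY_SCOPE)
    project(NONE)

    # Add the application sources to the 'app' target.
    target_sources(app PRIVATE src/main.c)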
PR: zephyrproject-rtos#4692
docs: http://docs.zephyrproject.org/getting_started/getting_started.html
Signed-off-by: Sebastian Boe <sebastian.boe@nordicsemi.no>
Currently this is defined as a k_thread_stack_t pointer.
However, this isn't correct: stacks are defined as arrays. Extern
references to a k_thread_stack_t don't work properly, as the compiler
treats the symbol as a pointer to the stack array and not the array
itself.
Declaring it as an unsized array of k_thread_stack_t doesn't work
well either. The least confusing approach is to leave the
pointer/array status out of the declarations entirely, use pointers
in function prototypes, and define K_THREAD_STACK_EXTERN() to
properly create an extern reference.
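For example (a minimal sketch; the stack name and size are
placeholders, and the defining macro is assumed to be the usual
K_THREAD_STACK_DEFINE()):

    /* In the file that owns the stack: */
    K_THREAD_STACK_DEFINE(my_stack, 1024);

    /* In any other file that needs to reference it: */
    K_THREAD_STACK_EXTERN(my_stack);

    /* Function prototypes keep using the pointer form: */
    void start_worker(k_thread_stack_t *stack, size_t stack_size);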
The definitions of all functions and structs that use
k_thread_stack_t need to be updated, but code that uses them should
be unchanged.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In various places, either a private _thread_entry_t or the full
prototype was being used. Be consistent and use the same typedef
everywhere.
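The shared typedef is the kernel's three-argument thread entry point,
roughly:

    typedef void (*_thread_entry_t)(void *p1, void *p2, void *p3);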
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Previously, this was only done if an essential thread self-exited,
and was a runtime check that generated a kernel panic.
Now if any thread has k_thread_abort() called on it, and that thread
is essential to system operation, this check is made. It is now
an assertion.
The _NANO_ERR_INVALID_TASK_EXIT checks and printouts have been
removed, since this is now an assertion.
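A sketch of the new check (the exact field and flag names here are
assumptions):

    void k_thread_abort(k_tid_t thread)
    {
        __ASSERT(!(thread->base.user_options & K_ESSENTIAL),
                 "essential thread aborted");
        /* ... proceed with the abort ... */
    }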
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Historically, stacks were just character buffers and could be treated
as such if the user wanted to look inside the stack data; they were
also declared as arrays of the desired stack size.
This is no longer the case. Certain architectures will create a memory
region much larger to account for MPU/MMU guard pages. Unfortunately,
the kernel interfaces treat both the declared stack, and the valid
stack buffer within it as the same char * data type, even though these
absolutely cannot be used interchangeably.
We introduce an opaque k_thread_stack_t which gets instantiated by
K_THREAD_STACK_DECLARE(). This is no longer treated by the compiler
as a character pointer, even though underneath it really is one.
To access the real stack buffer within, the result of
K_THREAD_STACK_BUFFER() can be used, which will return a char * type.
This should catch a bunch of programming mistakes at build time:
- Declaring a character array outside of K_THREAD_STACK_DECLARE() and
passing it to k_thread_create()
- Directly examining the stack created by K_THREAD_STACK_DECLARE()
which is not actually the memory desired and may trigger a CPU
exception
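The intended pattern looks roughly like this (names and sizes are
placeholders):

    /* Declares an opaque k_thread_stack_t; the region may be larger
     * than requested to hold MPU/MMU guard pages: */
    K_THREAD_STACK_DECLARE(my_stack, 512);

    /* To look inside the stack, go through the accessor instead of
     * dereferencing my_stack directly: */
    char *buf = K_THREAD_STACK_BUFFER(my_stack);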
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This needs to be in <arch/cpu.h> so that it can be called
from the k_panic()/k_oops() macros in kernel.h.
Fixes build errors on these arches when using k_panic() or
k_oops().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Stack sentinel doesn't prevent corruption, it just notices when
it happens. Any memory could be in a bad state and it's more
appropriate to take the entire system down rather than just kill
the thread.
Fatal testcase will still work since it installs its own
_SysFatalErrorHandler.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
One of the stack sentinel policies was to check the sentinel
any time a cooperative context switch is done (i.e., _Swap is
called).
This was done by adding a hook to _check_stack_sentinel in
every arch's __swap function.
This way is cleaner, as we have the hook in one inline function
rather than implemented in several different assembly dialects.
The check upon interrupt is now made unconditionally, rather than
only when the interrupt is about to call __swap, since that case is
now covered by the check in cooperative _Swap(). The interrupt is
always serviced first.
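The hook now lives in a single C wrapper, shaped roughly like:

    static inline unsigned int _Swap(unsigned int key)
    {
        _check_stack_sentinel();  /* one hook, shared by all arches */
        return __swap(key);       /* the real, per-arch swap */
    }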
Issue: ZEP-2244
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This places a sentinel value at the lowest 4 bytes of a stack
memory region and checks it at various intervals, including when
servicing interrupts or context switching.
This is implemented on all arches except ARC, which supports stack
bounds checking directly in hardware.
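Roughly, the mechanism works as follows (the sentinel value and field
names here are assumptions):

    #define STACK_SENTINEL 0xF0F0F0F0

    /* At thread initialization, write the sentinel to the lowest
     * word of the stack region: */
    u32_t *sentinel = (u32_t *)thread->stack_info.start;
    *sentinel = STACK_SENTINEL;

    /* At each check point (interrupt service, context switch): */
    if (*sentinel != STACK_SENTINEL) {
        /* stack overflow: raise a fatal error */
    }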
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Unlike k_thread_spawn(), the struct k_thread can live anywhere, not
just in the thread's stack region. This will be useful for memory
protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.
This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.
By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.
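Usage of the new API looks roughly like this (names and numbers are
placeholders):

    struct k_thread my_thread;              /* can live anywhere */
    K_THREAD_STACK_DEFINE(my_stack, 1024);

    k_tid_t tid = k_thread_create(&my_thread, my_stack,
                                  K_THREAD_STACK_SIZEOF(my_stack),
                                  my_entry, NULL, NULL, NULL,
                                  5, 0, K_NO_WAIT);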
Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Future tickless kernel patches will insert some code before the call
to swap. To enable this, a macro is created with the current _Swap
name; it first calls the tickless kernel code and then calls the
real __swap().
Jira: ZEP-339
Change-Id: Id778bfcee4f88982c958fcf22d7f04deb4bd572f
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Historically, space for struct k_thread was always carved out of the
thread's stack region. However, we want more control on where this data
will reside; in memory protection scenarios the stack may only be used
for actual stack data and nothing else.
On some platforms (particularly ARM), including kernel_arch_data.h from
the top-level kernel.h exposes intractable circular dependency issues.
We create a new per-arch header "kernel_arch_thread.h" with very limited
scope; it only defines the three data structures necessary to instantiate
the arch-specific bits of a struct k_thread.
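A sketch of the header's shape; the struct names follow the kernel's
conventions and the members are purely illustrative, since the real
contents are per-arch:

    /* kernel_arch_thread.h */
    struct _caller_saved {
        u32_t scratch[4];    /* registers clobbered by calls */
    };

    struct _callee_saved {
        u32_t preserved[8];  /* registers saved across a switch */
    };

    struct _thread_arch {
        u32_t flags;         /* other arch-specific thread state */
    };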
Change-Id: I3a55b4ed4270512e58cf671f327bb033ad7f4a4f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Unlike assertions, these APIs are active at all times. The kernel will
treat these errors in the same way as fatal CPU exceptions. Ultimately,
the policy of what to do with these errors is implemented in
_SysFatalErrorHandler.
If the architecture supports it, a real CPU exception can be triggered,
which will provide a complete register dump and PC value when the
problem occurs. This provides more helpful information than a fake
exception stack frame (_default_esf) passed to the arch-specific
exception handling code.
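Typical usage (the failing conditions here are hypothetical):

    if (do_critical_init() != 0) {
        k_panic();  /* unrecoverable: take the whole system down */
    }

    if (request_buf == NULL) {
        k_oops();   /* the current thread is at fault */
    }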
Issue: ZEP-843
Change-Id: I8f136905c05bb84772e1c5ed53b8e920d24eb6fd
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We do the same thing on all arches right now for thread_monitor_init,
so let's put it in a common place. This should also fix an issue on
xtensa when the thread monitor is enabled (a reference to
_nanokernel.threads).
Change-Id: If2f26c1578aa1f18565a530de4880ae7bd5a0da2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
We do much of the same work on all the arches to set up a new thread,
so let's put that code in a common place to unify it for everyone and
reduce duplicated code.
Change-Id: Ic04121bfd6846aece16aa7ffd4382bdcdb6136e3
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
There are a few places where we used a naked 'unsigned' type; let's
be explicit and make it 'unsigned int'.
Change-Id: I33fcbdec4a6a1c0b1a2defb9a5844d282d02d80e
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types. This handles the remaining includes and the kernel, plus
touching up various points that we skipped because of include
dependencies. We also convert the PRI printf formatters in the arch
code over to normal formatters.
Jira: ZEP-2051
Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types. There are a few places we don't convert over to the new
types, because of compatibility with ext/HALs or for ease of transition
at this point. Fix up a few of the PRI formatters so we build with
newlib.
Jira: ZEP-2051
Change-Id: I7d2d3697cad04f20aaa8f6e77228f502cd9c8286
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This is a start to move away from the C99 {u}int{8,16,32,64}_t types to
Zephyr-defined u{8,16,32,64}_t and s{8,16,32,64}_t. This allows Zephyr
to define the sized types in a consistent manner across all the
architectures we support, without conflicting with what various
compilers and libc might do with regards to the C99 types.
We introduce <zephyr/types.h> as part of this and have it include
<stdint.h> for now until we transition all the code away from the C99
types.
We go with u{8,16,32,64}_t and s{8,16,32,64}_t as there are some
existing variables already named u8 and u16, and to be consistent with
Zephyr naming conventions.
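A sketch of the new header, matching the description above (for now
it simply layers the new names on <stdint.h>):

    /* include/zephyr/types.h */
    #include <stdint.h>

    typedef uint8_t  u8_t;
    typedef uint16_t u16_t;
    typedef uint32_t u32_t;
    typedef uint64_t u64_t;

    typedef int8_t  s8_t;
    typedef int16_t s16_t;
    typedef int32_t s32_t;
    typedef int64_t s64_t;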
Jira: ZEP-2051
Change-Id: I451fed0623b029d65866622e478225dfab2c0ca8
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This private data structure no longer introduces a typedef or
uses CamelCase. It's not necessary to specify the size of extern
arrays, so we don't need a block of #ifdefs for every arch.
Change-Id: I71fe61822ecef29820280a43d5ac2822a61f7082
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This saves asm files from having to explicitly define the _ASMLANGUAGE
symbol themselves.
Change-Id: I71f5a169f75d7443a58a0365a41c55b20dae3029
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.
Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.
Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.
Manually updated subsys/logging/sys_log.c, which had a malformed
header in the original file. Also cleaned up several cases that already
had an SPDX tag, where we either got a duplicate or missed updating.
Jira: ZEP-1457
Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
These two fields in the thread structure control the preemptibility of a
thread.
sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.
By putting them end-to-end, this means that a thread is non-preemptible
if the bundled value is greater than or equal to 0x0080. This is the
only thing the interrupt exit code has to check to decide to try a
reschedule or not.
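In code, the bundling looks roughly like this (little-endian byte
order shown; a big-endian build swaps the two bytes):

    union {
        struct {
            s8_t prio;          /* negative for coop threads */
            u8_t sched_locked;  /* 0xff..0x01 while locked */
        };
        u16_t preempt;          /* >= 0x0080: not preemptible */
    };

    /* Interrupt exit then needs a single comparison: */
    if (thread->base.preempt >= 0x0080) {
        /* non-preemptible: skip the reschedule attempt */
    }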
Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Some thread fields were 32 bits wide, when they are not even close to
using that full range of values. They are instead changed to 8-bit fields.
- prio can fit in one byte, limiting the priority range to -128 to 127
- recursive scheduler locking can be limited to 255; a rollover most
probably results from a logic error
- flags are split into execution flags and thread states; 8 bits is
currently enough for each of them, with at worst two states and four
flags to spare on x86 (on other archs, there are six flags to spare)
Doing this saves 8 bytes per stack. It also sets up an incoming
enhancement when checking if the current thread is preemptible on
interrupt exit.
Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use least significant bits for common flags and high bits for
arch-specific ones.
Change-Id: I982719de4a24d3588c19a0d30bbe7a27d9a99f13
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Replace include <nanokernel.h> with <kernel.h> everywhere and also fix
any remaining mentions of nanokernel.
Keep the legacy samples/tests as-is.
Change-Id: Iac48447bd191e83f21a719c69dc26233216d08dc
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Obsolete, replaced by _set_thread_return_value().
Change-Id: I23e9cfc07e43542f0965817edc3552d456fd2ef3
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.
Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove some old cflags referencing directories that do not exist
anymore.
Also replace references to legacy APIs in doxygen documentation of
various functions.
Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
- does not pull in printk(), for potential footprint gain
- does not pull in k_thread_abort(), for single-threaded systems
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Ibc6a198b81a6cd73117d1e85aa05b92a4501a34d
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.
Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, and only the old APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.
The kernel internals now make use of these APIs instead of the legacy
ones.
Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might not run because the flow could be interrupted and
another thread could take its place. This was the more conservative
approach that ensured that moving a thread to the cache would never be
wasted.
However, this caused two problems:
1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.
2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache and then it had to find out what thread to run
the long way.
To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even in the case the current cached
thread is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.
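The slow path on interrupt exit thus reduces to a plain memory fetch,
something like:

    /* Hedged sketch: the cached thread is now always valid. */
    struct k_thread *next = _kernel.ready_q.cache;
    if (next != _current) {
        /* context-switch to next */
    }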
On the ARM FRDM K64F board, this improvement is seen:
Before:
1- Measure time to switch from ISR back to interrupted task
switching time is 215 tcs = 1791 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 315 tcs = 2625 nsec
After:
1- Measure time to switch from ISR back to interrupted task
switching time is 130 tcs = 1083 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 225 tcs = 1875 nsec
These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.
Fixes ZEP-1401.
Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Also remove NO_METRIC, which is not referenced anywhere anymore.
Change-Id: Ieaedf075af070a13aa3d975fee9b6b332203bfec
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Move _thread_base initialization to _init_thread_base(), remove mention
of "nano" in timeouts init and move timeout init to _init_thread_base().
Initialize all base fields via _init_thread_base() in the semaphore
groups code.
Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
An artifact from the microkernel, for handling multiple tasks pending
on nanokernel objects.
Change-Id: I3c2959ea2b87f568736384e6534ce8e275f1098f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>