These two fields in the thread structure control the preemptibility of a
thread.
sched_locked is decremented when the scheduler gets locked: since the lock
can be taken recursively, the scheduler is locked for values 0xff down to
0x01. A thread is cooperative if its priority is negative, i.e. if the
prio field value is 0x80 to 0xff when viewed as an unsigned value.
By laying the two fields out end-to-end, a thread is non-preemptible if
the bundled 16-bit value is greater than or equal to 0x0080. This is the
only check the interrupt exit code has to make to decide whether to
attempt a reschedule.
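A minimal sketch of the bundled check, with assumed field and macro names
and a little-endian layout (the real definitions live in the kernel
headers):

  struct _thread_base_snip {
          union {
                  struct {
                          int8_t prio;           /* negative => cooperative thread */
                          uint8_t sched_locked;  /* non-zero => scheduler locked   */
                  };
                  uint16_t preempt;              /* both fields viewed as one value */
          };
  };

  /* a thread is preemptible only when the bundled value is below 0x0080 */
  #define _is_preempt(base) ((base)->preempt < 0x0080)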
Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Some thread fields were 32 bits wide even though they come nowhere close
to using that full range of values. They are changed to 8-bit fields instead.
- prio can fit in one byte, limiting the priority range to -128 to 127
- recursive scheduler locking can be limited to 255 levels; a rollover
most probably results from a logic error
- flags are split into execution flags and thread states; 8 bits is
enough for each of them currently, with at worst two states and four
flags to spare (on x86, on other archs, there are six flags to spare)
Doing this saves 8 bytes per stack. It also sets up an upcoming
enhancement to the check of whether the current thread is preemptible on
interrupt exit.
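As a rough sketch of the narrowed fields (names and ordering are
assumptions, not necessarily the tree's):

  struct _thread_base_snip {
          uint8_t execution_flags;  /* was part of a 32-bit flags field        */
          uint8_t thread_state;     /* was part of a 32-bit flags field        */
          int8_t  prio;             /* was int32_t; now limited to -128..127   */
          uint8_t sched_locked;     /* was uint32_t; recursion count, max 255  */
  };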
Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use least significant bits for common flags and high bits for
arch-specific ones.
Change-Id: I982719de4a24d3588c19a0d30bbe7a27d9a99f13
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This will allow for an enhancement when checking if the thread is
preemptible when exiting an interrupt.
Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Replace include <nanokernel.h> with <kernel.h> everywhere and also fix
any remaining mentions of nanokernel.
Keep the legacy samples/tests as is.
Change-Id: Iac48447bd191e83f21a719c69dc26233216d08dc
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Obsolete, replaced by _set_thread_return_value().
Change-Id: I23e9cfc07e43542f0965817edc3552d456fd2ef3
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.
Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Move logging out of misc/ into its own subsystem. Anything related to
logging, and any new logging features or backends, can be added here
instead of the generic misc/ location, which is overcrowded with options
that are not related to each other.
Jira: ZEP-1467
Change-Id: If6a3ea625c3a3562a7a61a0ba5fd7e6ca75518ba
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove some old cflags referencing directories that do not exist
anymore.
Also replace references to legacy APIs in doxygen documentation of
various functions.
Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
- does not pull in printk(), for potential footprint gain
- does not pull in k_thread_abort(), for single-threaded systems
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Ibc6a198b81a6cd73117d1e85aa05b92a4501a34d
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.
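As a hedged illustration of the kind of optimization this enables (the
Kconfig symbol and inline stubs are assumptions):

  #ifdef CONFIG_PREEMPT_ENABLED
  extern void k_sched_lock(void);
  extern void k_sched_unlock(void);
  #else
  /* every thread is cooperative, so locking the scheduler is a no-op */
  static inline void k_sched_lock(void) { }
  static inline void k_sched_unlock(void) { }
  #endif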
Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, and only the old APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.
The kernel internals now make use of these APIs instead of the legacy
ones.
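For reference, the atomic variant is meant for a check-then-idle pattern
like the sketch below, where nothing_to_do() is a placeholder for the
caller's wakeup condition and the unified k_cpu_atomic_idle() name is
assumed:

  unsigned int key = irq_lock();

  if (nothing_to_do()) {
          /* atomically re-enables interrupts and idles the CPU, so a
           * wakeup event arriving after the check cannot be missed */
          k_cpu_atomic_idle(key);
  } else {
          irq_unlock(key);
  }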
Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might not run because the flow could be interrupted and
another thread could take its place. This was the more conservative
approach that ensured that moving a thread to the cache would never be
wasted.
However, this caused two problems:
1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.
2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache, in which case it had to find the next thread to
run the long way.
To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even when the currently cached
thread is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.
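In C terms, the interrupt exit decision now reduces to something like the
sketch below (the real check is done in arch-specific assembly; _current
stands for the interrupted thread):

  if (_kernel.ready_q.cache != _current) {
          /* a different thread is ready: schedule a context switch */
  } else {
          /* return straight to the interrupted thread */
  }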
On the ARM FRDM K64F board, this improvement is seen:
Before:
1- Measure time to switch from ISR back to interrupted task
switching time is 215 tcs = 1791 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 315 tcs = 2625 nsec
After:
1- Measure time to switch from ISR back to interrupted task
switching time is 130 tcs = 1083 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 225 tcs = 1875 nsec
These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.
Fixes ZEP-1401.
Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The ARG_UNUSED macro is added to avoid compiler warnings.
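For reference, the macro is used like this (the function shown is purely
illustrative):

  static int my_init(struct device *dev)
  {
          ARG_UNUSED(dev);  /* silences the -Wunused-parameter warning */

          return 0;
  }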
Change-Id: If0242548849ee5b258bb3fce9fd727b377411343
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
This patch fixes the unused parameter warning found in the
quark_x1000/soc.h file.
Change-Id: I110d7185d8302f95d14efd13060055e7378aea23
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
We have no idea what's in the GDT if we don't set it up ourselves.
Change-Id: I3c2e406370e3ea149252c423d66c97aab95bee17
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The CPU context save function was manipulating the stack and returning
to the C caller. This can corrupt the stack if the calling function has
data saved on it that it pops before entering deep sleep. The sleep
functions were moved into assembly to avoid this.
Jira: ZEP-1345
Change-Id: I8a6d279ec14e42424f764d9ce8cbbef32149fe84
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Also remove NO_METRIC, which is not referenced anywhere anymore.
Change-Id: Ieaedf075af070a13aa3d975fee9b6b332203bfec
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Move _thread_base initialization to _init_thread_base(), remove mention
of "nano" in timeouts init and move timeout init to _init_thread_base().
Initialize all base fields via the _init_thread_base in semaphore groups
code.
Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Artifact from microkernel, for handling multiple pending tasks on
nanokernel objects.
Change-Id: I3c2959ea2b87f568736384e6534ce8e275f1098f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Previous configuration was backwards. From the Intel manual:
"If the segment descriptors in the GDT or an LDT are placed in ROM,
the processor can enter an indefinite loop if software or the
processor attempts to update (write to) the ROM-based segment
descriptors. To prevent this problem, set the accessed bits
for all segment descriptors placed in a ROM. Also, remove
operating-system or executive code that attempts to modify
segment descriptors located in ROM."
Only by some miracle has this not been causing problems.
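In practical terms, descriptors placed in ROM must have their accessed
bit pre-set; as a sketch (flat ring-0 access-byte values, shown for
illustration only, not copied from the tree):

  #define GDT_ROM_CODE_ACCESS 0x9B  /* present | code | readable | accessed */
  #define GDT_ROM_DATA_ACCESS 0x93  /* present | data | writable | accessed */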
Change-Id: I0bb915962a1069876d2486473760112102feae7b
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Prio should be an int, not a fixed-size int32_t, since its values are
small integers. This aligns with the prio parameters of the other APIs.
Stack size should be size_t.
Change-Id: Id29751b86c4ad7a7c2a7ffe446c2a96ae83c77bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
When a thread dies, at least print the pointer to it, so we can debug
better.
Change-Id: Ief6bbc0c221e2d5271c240a4b73df16413aa5e22
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
Most kernel APIs are now ready for inclusion in the API guide.
The APIs largely follow a standard template to provide users
of the API guide with a consistent look-and-feel.
Change-Id: Ib682c31f912e19f5f6d8545d74c5f675b1741058
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Move away from legacy APIs and use the unified kernel instead.
Change-Id: Icae86beec66df1b041405cbe3455913630fc8ad1
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
There was a lot of duplication between architectures for the definition
of threads and the "nanokernel" guts. These have been consolidated.
Now, a common file kernel/unified/include/kernel_structs.h holds the
common definitions. Architectures provide two files to complement it:
kernel_arch_data.h and kernel_arch_func.h. The first one contains at
least the struct _thread_arch and struct _kernel_arch data structures,
as well as the struct _callee_saved and struct _caller_saved register
layouts. The second file contains anything that needs what is provided
by the common stuff in kernel_structs.h. Those two files are only meant
to be included in kernel_structs.h in very specific locations.
The thread data structure has been separated into three major parts:
common struct _thread_base and struct k_thread, and arch-specific struct
_thread_arch. The first and third ones are included in the second.
The struct s_NANO data structure has been split into two: common struct
_kernel and arch-specific struct _kernel_arch. The latter is included in
the former.
Offsets files have also changed: nano_offsets.h has been renamed
kernel_offsets.h and is still included by the arch-specific offsets.c.
Also, since the thread and kernel data structures are now made of
sub-structures, offsets have to be added to make up the full offset.
Some of these additions have been consolidated in shorter symbols,
available from kernel/unified/include/offsets_short.h, which includes an
arch-specific offsets_arch_short.h. Most of the code now includes
offsets_short.h instead of offsets.h.
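The overall shape of the consolidated structures is roughly the
following (member names abridged for illustration):

  struct k_thread {
          struct _thread_base base;   /* common, arch-agnostic fields   */
          /* ... other common kernel fields ... */
          struct _thread_arch arch;   /* provided by kernel_arch_data.h */
  };

  struct _kernel {
          /* ... common fields, e.g. the ready queue ... */
          struct _kernel_arch arch;   /* provided by kernel_arch_data.h */
  };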
Change-Id: I084645cb7e6db8db69aeaaf162963fe157045d5a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Add support for console output via the USB UART.
Note that console input via the USB UART doesn't work.
Also add a simulated poll method for the UART interface exposed by USB.
Jira : ZEP-775
Change-Id: I357827ea52c027eb000baed80225f422df1f3358
Signed-off-by: Jithu Joseph <jithu.joseph@intel.com>
Some Arduino 101 boards have an old bootloader without the context
restore boot flow feature. This handler allows doing deep sleep on those
boards by jumping to the context restore code. It is disabled by default
and can optionally be enabled by the user.
Jira: ZEP-1258
Change-Id: I92e70550fd92c1cac42b3039d667fb0be8cf5bce
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Some bootloaders have power management support to restore context
upon resume from deep sleep. In such cases, the OS startup code
should call the notification hook. Create Kconfig flags to configure
this option.
Jira: 1257
Change-Id: I9f40c5fa077c2f17dc8e9f11604c3ed17e549ed5
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
The _sys_soc_resume hook is overloaded to handle two different
scenarios. It is primarily called to notify the exit of kernel idling
after PM operations. It is also used to notify the exit from deep sleep.
This is very confusing and also makes the implementation of the
hook function very difficult because of the very different conditions
involved in the two use cases. Further, users may not require
either or both use cases depending on their custom boot flow and
power state handling. To simplify, create a separate hook for the
purpose of deep sleep exit notification. Use the existing one only to
notify kernel idling exit after PM operations.
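Schematically, the split looks like this (the name of the new deep sleep
hook is assumed here; the patch itself defines the exact one):

  /* before: one overloaded hook for both situations */
  void _sys_soc_resume(void);

  /* after: two hooks with distinct meanings */
  void _sys_soc_resume(void);                  /* kernel idling exit after PM ops */
  void _sys_soc_resume_from_deep_sleep(void);  /* deep sleep exit (assumed name)  */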
Jira: ZEP-1256
Change-Id: I96350199a0fd37f16590c8ee5302a94a3d71b8ba
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
When waking up from the C2LP state, the timer needs to be reinitialized,
as we cannot know how much time was spent in that state.
In order to reschedule the user application, the timer is expired as
soon as we restart.
Change-Id: Id38a0de71e148ae8d9024a36d3983ab57b1e40d2
Signed-off-by: Julien Delayen <julien.delayen@intel.com>
Updates x86 floating point support to reflect changes that have
been made in recent months.
* Many, many, many cosmetic changes (mostly revisions to comments).
* Elimination of unnecessary function aliases that were needed
to support the task and fiber versions of certain APIs.
* Elimination of run-time code to enable a thread's "FP regs"
option bit if the "SSE regs" option bit was set. The kernel
now recognizes that the thread is using the FPU as long as
either option bit is set. (If the thread has both option bits
enabled this is the same as if only the "SSE regs" bit is set.)
Change-Id: Ic12abc54b6fa78921749b546d8debf23e7ad232d
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
PRIMARY, SECONDARY, NANOKERNEL, MICROKERNEL init levels are now
deprecated.
New init levels introduced: PRE_KERNEL_1, PRE_KERNEL_2, POST_KERNEL
to replace them.
Most existing code has instances of PRIMARY replaced with PRE_KERNEL_1
and SECONDARY with POST_KERNEL, as SECONDARY had a longstanding bug:
the documentation specified that SECONDARY ran before the kernel started
up, but it actually ran afterwards.
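A hedged example of what the migration looks like for a driver or
subsystem init call (the init function is hypothetical):

  static int early_init(struct device *unused)
  {
          ARG_UNUSED(unused);
          return 0;
  }

  /* was SYS_INIT(early_init, PRIMARY, ...); SECONDARY maps to POST_KERNEL */
  SYS_INIT(early_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);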
Change-Id: I771bc634e9caf7f17dbf214a270bc9967eed7d32
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Verify that thread priorities are within bounds when starting a new
thread and when changing a thread's priority.
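The kind of check involved is sketched below, using the public priority
bound macros; the in-kernel assertion may be expressed differently:

  __ASSERT(prio >= K_HIGHEST_APPLICATION_THREAD_PRIO &&
           prio <= K_LOWEST_APPLICATION_THREAD_PRIO,
           "invalid thread priority %d", prio);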
Change-Id: I007b3b249e4b80235b6439cbee44cad2f31973bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The bitwise AND operator was being applied to the boolean expression
"!shared_data->flags" instead of the whole expression because
parentheses were missing.
This bug has been found using Coccinelle using the following spatch,
after finding a similar bug somewhere else in the code base:
@@
expression E1;
expression E2;
@@
- !E1 & E2
+ !(E1 & E2)
No other instance of this defect has been found with this spatch.
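For clarity, the shape of the fix in C (SOME_FLAG is a placeholder for
the actual mask):

  /* before: '!' binds tighter than '&', so only a 0/1 value was masked */
  if (!shared_data->flags & SOME_FLAG) { /* ... */ }

  /* after: mask first, then negate the result */
  if (!(shared_data->flags & SOME_FLAG)) { /* ... */ }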
Change-Id: I6b9ca092f4015c80ddc83c31ce540a92e67cdb11
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Symbols now use the K_ prefix which is now standard for the
unified kernel. Legacy support for these symbols is retained
to allow existing applications to build successfully.
Change-Id: I3ff12c96f729b535eecc940502892cbaa52526b6
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Adds standard prefix to symbolic option that flags a thread
as essential to system operation.
Change-Id: Ia904a81ce343fdd1cd44caaaeae641d822777f9b
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
QMSI 1.3 natively supports restoring the SoC and peripherals
after sleep.
The Zephyr Power Management shim layer is updated
in order to support QMSI functions.
The following functions have been added:
void _sys_soc_set_power_state(enum power_state);
void _sys_soc_power_state_post_ops(void);
In order to fully support deep sleep, the _sys_soc_set_power_state
function now supports saving and restoring the CPU context and returns
to the application. _sys_soc_set_power_state also abstracts the QMSI
CPU states and enables the application to choose between the C1, C2 or
C2LP states.
The QMSI power states are mapped as follows:
SYS_SOC_POWER_STATE_CPU_LPS -> power_cpu_c2lp
SYS_SOC_POWER_STATE_CPU_LPS_1 -> power_cpu_c2
SYS_SOC_POWER_STATE_CPU_LPS_2 -> power_cpu_c1
SYS_SOC_POWER_STATE_DEEP_SLEEP -> power_soc_deep_sleep
SYS_SOC_POWER_STATE_DEEP_SLEEP_1 -> power_soc_sleep
The following functions have been removed:
void _sys_soc_set_power_policy(uint32_t pm_policy);
int _sys_soc_get_power_policy(void);
FUNC_NORETURN void _sys_soc_put_deep_sleep(void);
void _sys_soc_put_low_power_state(void);
void _sys_soc_deep_sleep_post_ops(void);
Those changes are propagated to the samples.
All calls to QMSI are removed.
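A sketch of how an application is expected to call the two new functions
(the chosen state is only an example):

  /* request the C2 low power state; execution resumes here on wakeup */
  _sys_soc_set_power_state(SYS_SOC_POWER_STATE_CPU_LPS_1);

  /* perform any fixups needed after returning from the low power state */
  _sys_soc_power_state_post_ops();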
Jira: ZEP-1045, ZEP-993, ZEP-1047
Change-Id: I26822727985b63be0a310cc3590a3e71b8e72c8c
Signed-off-by: Julien Delayen <julien.delayen@intel.com>
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
A new shared memory area has been added to the QMSI bootloader
in order to handle the restore flow and jump to
the restore trap, where context is restored.
Add the new entry to the Quark SE C1000 linker file
and new Kconfig options:
- CONFIG_BSP_SHARED_RAM_ADDR to set the address of the
shared memory.
- CONFIG_BSP_SHARED_RAM_SIZE to set the size of the
shared memory.
This is only enabled with CONFIG_SYS_POWER_DEEP_SLEEP.
Jira: ZEP-1046
Change-Id: I35d924a100c5583025aa36a9741428ab51809c57
Signed-off-by: Julien Delayen <julien.delayen@intel.com>