Some tick frequencies lend themselves to optimized conversions from ms
to ticks and vice versa:
- 1000Hz, which does not need any conversion
- 500Hz, 250Hz and 125Hz, where the division/multiplication reduces to a
straight shift, since the number of ms per tick is a power of two.
In addition, some other commonly used values are given optimized
conversion equations rather than the generic one, which uses 64-bit math
and often results in calls to compiler intrinsics.
These values are: 100Hz, 50Hz, 25Hz, 20Hz, 10Hz, 1Hz (the last one is
used in some testing).
Besides the performance gain, avoiding the 64-bit math intrinsics has
the additional benefit of using significantly less stack space: 52 bytes
on ARM Cortex-M and 80 bytes on x86.
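For illustration, a minimal sketch of the power-of-two case (the macro
and helper names here are made up, not the actual kernel ones):

    #include <stdint.h>

    /* assumed: CONFIG_SYS_CLOCK_TICKS_PER_SEC is 500, 250 or 125, so a
     * tick lasts 2, 4 or 8 ms and the conversion reduces to a shift
     */
    #if CONFIG_SYS_CLOCK_TICKS_PER_SEC == 500
    #define MS_PER_TICK_SHIFT 1
    #elif CONFIG_SYS_CLOCK_TICKS_PER_SEC == 250
    #define MS_PER_TICK_SHIFT 2
    #elif CONFIG_SYS_CLOCK_TICKS_PER_SEC == 125
    #define MS_PER_TICK_SHIFT 3
    #endif

    static inline int32_t ms_to_ticks(int32_t ms)
    {
            /* round up so a non-zero delay never becomes zero ticks */
            return (ms + (1 << MS_PER_TICK_SHIFT) - 1) >> MS_PER_TICK_SHIFT;
    }

    static inline int32_t ticks_to_ms(int32_t ticks)
    {
            return ticks << MS_PER_TICK_SHIFT;
    }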
Change-Id: I080eb338a2637d6b1c6838c119af1a9fa37fe869
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This limits the execution contexts that can iterate over the loop in
_unpend_first_thread() to ISRs of very high priority that preempt the
system clock timer ISR, and only while it is handling timeouts.
Change-Id: Iaf0500d28a2de5e077c9cf9861a5a70244127d58
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use a short name for this option: CONFIG_OBJECT_TRACING.
Change-Id: Id27de7ef9ca299492b6b7d2324d9f5bcf8059a31
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove mentions of the unified kernel in various places in the
kernel, samples and documentation.
Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Reorganise and clean up kernel Kconfig options, grouping options of the
same area under menus to ease readability and give a better structure
when using menuconfig.
Change-Id: Ic6b39730297861367abd345ede35e41c046c099d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Move those into a separate Kconfig file and include them instead.
Change-Id: Ifa25d6ec92937080ad5970af7ca5c3f07ddec961
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Rename NANOKERNEL_TICKLESS_IDLE_SUPPORTED to TICKLESS_IDLE_SUPPORTED and
remove nanokernel occurrences in Kconfig files.
Make TICKLESS_IDLE depend on hardware that supports it.
Change-Id: I6a2e4fb0f7cf4b45475b48e71823ea089ee98759
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove some old cflags referencing directories that do not exist
anymore.
Also replace references to legacy APIs in doxygen documentation of
various functions.
Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove the now-obsolete ARCH_HAS_TASK_ABORT.
ARC does not need the options either.
Change-Id: Ie52d63178a367ce12b911dacfe2d389f4f75ed2d
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The system work queue spawns a coop thread to handle the work items. If
it is spawned before the kernel is up and the initialization dummy
thread's priority is lower, there will be a context switch into the
system work queue's thread at that point, before the kernel is ready to
handle it.
Change-Id: I879659ab58231c5a5cfaa34f2f65c2eccab99142
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Legacy applications still need this; otherwise, kernel objects are not
configured correctly. It will be removed later.
Change-Id: I22df10e4adcc11f035f9813bea8c93dd1a560a1d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
For very constrained systems, like bootloaders.
Only the main thread is available, so a main() function must be
provided. Kernel objects that rely on pending will not behave as
expected, since the main thread cannot pend, being the only thread in
the system. Usage of such objects should be limited to passing K_NO_WAIT
as the timeout parameter, effectively polling on the object.
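A hedged example of the intended usage pattern (the semaphore and the
handling code are made up for illustration):

    #include <zephyr.h>

    K_SEM_DEFINE(event_sem, 0, 1);  /* e.g. given from an ISR */

    void main(void)
    {
            for (;;) {
                    /* the main thread is the only thread, so it must not
                     * pend: poll the object with K_NO_WAIT instead
                     */
                    if (k_sem_take(&event_sem, K_NO_WAIT) == 0) {
                            /* the semaphore was available: handle it */
                    }
            }
    }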
Change-Id: Iae0261daa98bff388dc482797cde69f94e2e95cc
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
A thread cannot have a coop priority in this case. It turns out a
priority is not needed when a thread is not inserted in the ready queue,
which is the case with the dummy thread.
The comment was also out-of-date, since it referred to a nanokernel
concept.
Change-Id: Id117501164bd72383d53f3df13030cf95dadc38b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.
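As a rough sketch of the kind of optimization this enables (not the
actual implementation of k_sched_lock(); CONFIG_PREEMPT_ENABLED and the
internal helper are used here as assumptions):

    void k_sched_lock(void)
    {
    #ifdef CONFIG_PREEMPT_ENABLED
            /* preemptible threads exist: actually lock the scheduler */
            _sched_lock();
    #else
            /* all threads are cooperative: locking is a no-op */
    #endif
    }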
Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, so only the legacy APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.
The kernel internals now make use of these APIs instead of the legacy
ones.
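A hedged sketch of the kind of single-threaded usage this enables,
assuming the unified-kernel counterparts are named k_cpu_idle() and
k_cpu_atomic_idle() (the polling helper is hypothetical):

    void main(void)
    {
            for (;;) {
                    /* halt the CPU, with interrupts enabled, until the
                     * next interrupt arrives
                     */
                    k_cpu_idle();

                    /* something woke us up: poll for work to do */
                    process_pending_work();
            }
    }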
Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The number of timeouts that expire on a given tick is arbitrary. While
handling them, interrupts were locked, which prevented higher-priority
interrupts from preempting the system clock timer handler.
Instead of looping over the list of timeouts, which requires interrupts
to be locked since the list can be manipulated at interrupt level,
timeouts are now dequeued one by one, unlocking interrupts between each,
and put on a local 'expired' queue that is subsequently drained with
interrupts unlocked. This scheme relies on the fact that no timeout can
be prepended onto the timeout queue while expired timeouts are being
removed from it, since adding a timeout of 0 is prohibited.
Timer handlers now run with interrupts unlocked: the previous behaviour
added potentially horrible non-determinism to the handling of timeouts.
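A simplified sketch of this scheme (the doubly-linked list helpers are
the kernel's sys_dlist API, but the struct _timeout layout and the
first_expired() helper shown here are illustrative stand-ins):

    /* simplified stand-in for the kernel's struct _timeout */
    struct _timeout {
            sys_dnode_t node;
            void (*func)(struct _timeout *t);
            /* ... delta_ticks_from_prev, etc. ... */
    };

    /* returns the head of the timeout queue if it has expired, else NULL */
    struct _timeout *first_expired(sys_dlist_t *timeout_q);

    static void handle_expired_timeouts(sys_dlist_t *timeout_q)
    {
            sys_dlist_t expired;
            sys_dnode_t *node;
            struct _timeout *t;
            unsigned int key;

            sys_dlist_init(&expired);

            /* dequeue expired timeouts one at a time, releasing the
             * interrupt lock between each so higher-priority ISRs can run
             */
            key = irq_lock();
            while ((t = first_expired(timeout_q)) != NULL) {
                    sys_dlist_remove(&t->node);
                    sys_dlist_append(&expired, &t->node);
                    irq_unlock(key);
                    key = irq_lock();
            }
            irq_unlock(key);

            /* drain the local queue and run the handlers with interrupts
             * unlocked: since zero-tick timeouts are prohibited, nothing
             * new can show up at the head of the timeout queue meanwhile
             */
            while ((node = sys_dlist_get(&expired)) != NULL) {
                    t = CONTAINER_OF(node, struct _timeout, node);
                    t->func(t);
            }
    }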
Change-Id: I709085134029ea2ad73e167dc915b956114e14c2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
These tick computations can take a significant amount of time, and
there is no reason to do them with interrupts locked.
Change-Id: I2d8803ec6025b827e9450fa493084bbf8be98bad
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use _INACTIVE instead of hardcoding -1.
_EXPIRED is defined as -2 and will be used for an improvement so that
interrupts are not locked for a non-deterministic amount of time while
handling expired timeouts.
_abort_timeout/_abort_thread_timeout return _INACTIVE instead of -1 if
the timeout has already been disabled.
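In other words, the timeout states amount to roughly:

    #define _INACTIVE (-1)  /* timeout not in use / already handled */
    #define _EXPIRED  (-2)  /* dequeued, its handler is about to run
                             * (reserved for the upcoming improvement)
                             */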
Change-Id: If99226ff316a62c27b2a2e4e874388c3c44a8aeb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might not run, because the flow could be interrupted and
another thread could take its place. This was the more conservative
approach that ensured that moving a thread to the cache would never be
wasted.
However, this caused two problems:
1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.
2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache and then it had to find out what thread to run
the long way.
To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even when the current cached thread
is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.
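Conceptually, the interrupt exit path now reduces to something like the
following (a C sketch of what the per-architecture assembly does; the
switch_to() helper is hypothetical):

    static void interrupt_exit_resched(void)
    {
            struct k_thread *cached = _kernel.ready_q.cache;

            if (cached != _kernel.current) {
                    /* the cache is always hot: a plain memory fetch tells
                     * us which thread to run, no C scheduling call needed
                     */
                    switch_to(cached);
            }
    }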
On the ARM FRDM K64F board, the following improvement is seen:
Before:
1- Measure time to switch from ISR back to interrupted task
switching time is 215 tcs = 1791 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 315 tcs = 2625 nsec
After:
1- Measure time to switch from ISR back to interrupted task
switching time is 130 tcs = 1083 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 225 tcs = 1875 nsec
These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.
Fixes ZEP-1401.
Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Disable the MDEF option and set it only in legacy projects.
Change-Id: I2e1f011eb1f876af929140e36f71f0efb5e955c1
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The ARG_UNUSED macro is added to avoid compiler warnings.
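For example (the function is hypothetical; ARG_UNUSED() expands to a
cast to void):

    static int unused_param_example(struct device *dev)
    {
            ARG_UNUSED(dev);  /* silences the unused-parameter warning */

            return 0;
    }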
Change-Id: Ie9b72c94191318c1d667d7929eb029098c62e993
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
Use uint32_t for counters instead of int to avoid compiler warnings.
Change-Id: Ie96dfaca650b5f91562c0740c18610fc40968be6
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
mem_pool structures use uint32_t for counters and size_t to specify
sizes; however, some routines in mem_pool.c use int for similar
purposes. This commit fixes that by updating those variables to match
the mem_pool data types.
Change-Id: I0aa01c27e512d06d40432e8091ed8fd9d959970c
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
Factor out the code for evaluating the remaining time for _timeout
structs so that it can also be used for other objects besides k_timer
structs (like k_delayed_work, coming in a subsequent patch).
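A rough sketch of the kind of shared helper this enables (the names and
the conversion helper are illustrative, not the actual kernel symbols):

    /* remaining time, in ms, for any object embedding a struct _timeout,
     * whether that object is a k_timer or a k_delayed_work
     */
    static int32_t timeout_remaining_ms(struct _timeout *timeout)
    {
            int32_t remaining_ticks = timeout_remaining_ticks(timeout);

            return ticks_to_ms(remaining_ticks);
    }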
Change-Id: I243a7b29fb2831f06e95086a31f0d3a6c37dad67
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Use the SYS_INIT() mechanism to invoke the sys_rand32_init() function
in random drivers that require an initializer. Remove all empty
sys_rand32_init() instances.
The existing explicit sys_rand32_init() function runs immediately after
PRE_KERNEL_2, before stack canaries are initialized. In order to get
equivalent behaviour, SYS_INIT() is set to initialize the random drivers
at the lowest priority of PRE_KERNEL_2.
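A hedged example of the resulting pattern in a random driver (the init
function name and the exact priority value are illustrative):

    static int rand32_init(struct device *dev)
    {
            ARG_UNUSED(dev);

            /* enable/seed the hardware RNG here */

            return 0;
    }

    /* run at what is assumed to be the lowest (latest) priority of
     * PRE_KERNEL_2, mirroring where the explicit sys_rand32_init() call
     * used to happen
     */
    SYS_INIT(rand32_init, PRE_KERNEL_2, 99);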
Change-Id: I4521e44daac806bc4eef01ce7fdf2ba5367e0587
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
To guarantee that the compiler does not reorder the execution of
irq_lock() with preceding operations, a volatile qualifier is placed on
the declaration of the ticks variable, which ensures that irq_lock() is
executed after the tick calculation but before accessing the ready and
timeout queues.
Without the volatile keyword, interrupts would be disabled during the
calculation of the ticks, which increases interrupt latency
significantly.
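A minimal sketch of the pattern (the conversion and queueing helpers are
illustrative):

    static void add_timeout_example(struct k_thread *thread, int32_t ms)
    {
            /* the volatile qualifier keeps the (potentially slow) tick
             * conversion ahead of irq_lock(), so it runs with interrupts
             * enabled
             */
            volatile int32_t ticks = _ms_to_ticks(ms);
            unsigned int key = irq_lock();

            add_to_timeout_queue(thread, ticks);

            irq_unlock(key);
    }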
Change-Id: I2da82a1282e344f3b8d69e9457b36a4cb1d9ec18
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Use standard library calls like memset/memcpy for setting up the BSS
and DATA sections during system initialization; this helps take
advantage of architecture-specific optimizations in the standard
library.
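An illustrative sketch (the linker symbol names are assumptions and
depend on the linker script):

    #include <string.h>

    extern char __bss_start[], __bss_end[];
    extern char __data_ram_start[], __data_ram_end[];
    extern char __data_rom_start[];

    static void bss_data_init(void)
    {
            /* zero the BSS section */
            memset(__bss_start, 0, __bss_end - __bss_start);

            /* copy initialized data from its load (ROM) address to RAM */
            memcpy(__data_ram_start, __data_rom_start,
                   __data_ram_end - __data_ram_start);
    }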
Change-Id: Ia72b42aa65b44d1df7c22dd1fbc39a44fa001be9
Signed-off-by: Mahavir Jain <mjain@marvell.com>
Make the boot banner more informative by adding the kernel version
string.
Change-Id: I21865ea3a001fba2c30fe58e6e052aae59fef3e2
Signed-off-by: Mahavir Jain <mjain@marvell.com>
If the delayed work has already been submitted or has completed, a
subsequent cancel should return -EINVAL.
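Expected usage after this change (the work item is illustrative and
assumed to have been initialized and submitted earlier):

    #include <zephyr.h>

    struct k_delayed_work my_work;

    void cancel_example(void)
    {
            int err = k_delayed_work_cancel(&my_work);

            if (err == -EINVAL) {
                    /* too late: the work was already submitted to the
                     * workqueue or has already completed
                     */
            }
    }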
Fixes ZEP-1373.
Change-Id: I16bbacca7e31a5a5d8e5a89e729d70302ada6223
Signed-off-by: Mahavir Jain <mjain@marvell.com>
Interrupts must be locked before inserting a timeout into the timeout
queue.
Change-Id: Iab0bf01f393e66a6403d2f85e899dbf737da4afc
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The fact that a thread is timing out was tracked via two flags: the
K_TIMING thread flag bit, and the thread's timeout's
delta_ticks_from_prev being -1 or not. This duplication could
potentially cause discrepancies if the two flags got out of sync, and
there was no benefit to having both.
Since timeouts that are not part of a thread rely on the value of
delta_ticks_from_prev, standardize on it.
Since the K_TIMING bit is removed from the thread's flags, K_READY would
no longer reflect reality. It is removed and replaced by
_is_thread_prevented_from_running(), which looks at the state flags that
are relevant. A thread that is ready is now one that is not prevented
from running and does not have an active timeout.
Change-Id: I902ef9fb7801b00626df491f5108971817750daa
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Also remove NO_METRIC, which is not referenced anywhere anymore.
Change-Id: Ieaedf075af070a13aa3d975fee9b6b332203bfec
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Now that we're out of the unified kernel development phase, turn off
that debugging option.
Change-Id: I89decbdf445b1ba111a829edf2c8a36846419586
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
It was possible for a dummy thread to not be timing out, yet have its
timeout.delta_ticks_from_prev not be -1 at the same time, which is a big
no-no.
Use _init_thread_base() to do a full initialization of the dummy thread.
Fixes ZEP-1312.
Change-Id: I16a2373be3329c142cf26f5dca6bfdbe6014ac5e
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Move _thread_base initialization to _init_thread_base(), remove the
mention of "nano" in the timeouts init and move the timeout init to
_init_thread_base().
Initialize all base fields via _init_thread_base() in the semaphore
groups code.
Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Renamed main_stack and idle_stack to _main_stack and _idle_stack,
respectively, and made them globals. This does not affect performance.
They are still kept as kernel-private symbols and are not part of the
kernel API.
This will allow these symbols to be referenced in calls to the
stack_analyse misc functions to profile stack usage in applications.
Change-Id: Id6b746c5cfda617c26901c6e62c3e17114471f57
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
It's possible that an architecture needs a custom way of switching to
the main() task, rather than using _Swap() with a dummy thread.
Change-Id: I14e9bc67be35174ff16209bcea27b18a069ff754
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>