Commit Graph

1105 Commits

Benjamin Walsh 168695c7ef kernel/arch: inspect prio/sched_locked together for preemptibility
These two fields in the thread structure control the preemptibility of a
thread.

sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.

Putting them end-to-end means that a thread is non-preemptible if the
bundled value is greater than or equal to 0x0080. This is the only
check the interrupt exit code has to perform to decide whether to
attempt a reschedule.
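
For illustration, a minimal sketch of the layout and the resulting
check, assuming a little-endian byte order and using hypothetical names
(the real structure and symbols may differ):

  #include <stdint.h>
  #include <stdbool.h>

  /* prio and sched_locked placed end-to-end so they can also be read
   * as one 16-bit value (little-endian assumed for this sketch).
   */
  struct thread_base {
      union {
          struct {
              int8_t prio;                   /* negative = cooperative  */
              volatile uint8_t sched_locked; /* 0xff..0x01 while locked */
          };
          volatile uint16_t preempt;         /* bundled view            */
      };
  };

  /* Non-preemptible when prio is negative (low byte >= 0x80) or the
   * scheduler is locked (high byte != 0), i.e. preempt >= 0x0080.
   */
  static inline bool thread_is_preemptible(struct thread_base *t)
  {
      return t->preempt < 0x0080;
  }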

Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:25 +00:00
Benjamin Walsh f955476559 kernel/arch: optimize memory use of some thread fields
Some thread fields were 32 bits wide even though they come nowhere
close to using that full range of values. They are instead changed to
8-bit fields.

- prio can fit in one byte, limiting the priorities range to -128 to 127

- recursive scheduler locking can be limited to 255; a rollover results
  most probably from a logic error

- flags are split into execution flags and thread states; 8 bits is
  currently enough for each of them, with at worst two states and four
  flags to spare (on x86; other archs have six flags to spare)

Doing this saves 8 bytes per stack. It also sets up an upcoming
enhancement to the check for whether the current thread is preemptible
on interrupt exit.
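
A hedged sketch of how the arithmetic works out (field names
approximate the descriptions above; the real structure differs):

  #include <stdint.h>

  /* Previously: flags, prio and sched_locked as 32-bit fields = 12 bytes. */
  struct thread_base_fields {
      uint8_t execution_flags;  /* split out of the old flags field    */
      uint8_t thread_state;     /* thread states, also from old flags  */
      int8_t  prio;             /* priority range limited to -128..127 */
      uint8_t sched_locked;     /* recursion count limited to 255      */
  };                            /* 4 bytes total, an 8-byte saving     */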

Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:24 +00:00
Anas Nashif 70a2e138b7 kernel: add LEGACY_KERNEL option
Add a global option for legacy configurations and enable it by default
for backward compatibility. Disable the option in tests, and keep it
enabled in the legacy samples and tests.

Jira: ZEP-964
Change-Id: I0831e2aa74d438b1ac74eb762186cb220a504beb
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-09 19:42:13 +00:00
Anas Nashif f6e039062a kernel: remove dependency on CONFIG_NANO_TIMERS/TIMEOUTS
Remove legacy option and use SYS_CLOCK_EXISTS where appropriate.

Change-Id: I3d524ea2776e638683f0196c0cc342359d5d810f
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-08 18:09:52 +00:00
Benjamin Walsh 66b99f1486 kernel: add _timeout_q dump before and after adding timeout
Kernel debugging aid.

Change-Id: I852ba2f626f133d943be2ecac41354fecca478d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:27 +00:00
Benjamin Walsh 99eef25815 kernel: do not use sys_dlist_insert_at() in _add_timeout()
Similar to _pend_queue, it's more efficient to do the logic inline.

Change-Id: I68ac4fbc26c97b6ec9322caef98504ff6ccc8727
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:26 +00:00
Benjamin Walsh b8c2160a2b kernel: do not use sys_dlist_insert_at() in _pend_thread()
It calls a function on every iteration; it is more efficient to just
do the logic inline.
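
As a generic illustration of the trade-off (plain C, not the actual
sys_dlist code): the sorted insert below walks the list and splices the
node in directly, with no per-iteration callback.

  struct node {
      struct node *next, *prev;
      int prio;
  };

  /* 'head' is the sentinel of a circular doubly-linked list sorted by
   * ascending numeric priority.
   */
  static void insert_sorted_inline(struct node *head, struct node *node)
  {
      struct node *pos = head->next;

      /* comparison done inline instead of through a callback */
      while (pos != head && pos->prio <= node->prio) {
          pos = pos->next;
      }

      node->next = pos;
      node->prev = pos->prev;
      pos->prev->next = node;
      pos->prev = node;
  }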

Change-Id: I166e377d4ffb3056749fd625cb789173030904ac
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:26 +00:00
Benjamin Walsh d779f3d240 kernel/arch: streamline thread flag bits used
Use least significant bits for common flags and high bits for
arch-specific ones.

Change-Id: I982719de4a24d3588c19a0d30bbe7a27d9a99f13
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Benjamin Walsh e6a69cae54 kernel/arch: reverse polarity on sched_locked
This will allow for an enhancement when checking if the thread is
preemptible when exiting an interrupt.

Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Benjamin Walsh 04ed860c68 kernel: make _thread.sched_locked a non-atomic operator variable
An atomic variable is not needed, since only the thread itself can
modify its own sched_locked count.

Change-Id: I3d3d8be548d2b24ca14f51637cc58bda66f8b9ee
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:23 +00:00
Anas Nashif fad7e2dd8d logging: move event_logger to subsys/logging
Jira: ZEP-1337
Change-Id: If1690e19a882cf53caaa3418ccabeb49c783f63d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-25 14:34:43 -05:00
Benjamin Walsh 6209218f40 kernel: optimize ms-to-ticks for certain tick frequencies
Some tick frequencies lend themselves to optimized conversions from ms
to ticks and vice-versa.

- 1000Hz which does not need any conversion
- 500Hz, 250Hz, 125Hz, where the division/multiplication is a straight
  shift since their ratio to 1000 is a power of two.

In addition, some more generally used values are made to use optimized
conversion equations rather than the generic one that uses 64-bit math,
and often results in calling compiler intrinsics.

These values are: 100Hz, 50Hz, 25Hz, 20Hz, 10Hz, 1Hz (the last one used
in some testing).

Avoiding the 64-bit math intrinsics has the additional benefit, besides
increased performance, of using a significantly lower amount of stack
space: 52 bytes on ARM Cortex-M and 80 bytes on x86.
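
A hedged sketch of the special-case conversions (the macro name and the
500Hz example are illustrative; the real kernel selects the case from
the configured tick rate):

  #include <stdint.h>

  #define TICKS_PER_SEC 500   /* example configuration */

  static inline int32_t ms_to_ticks(int32_t ms)
  {
  #if TICKS_PER_SEC == 1000
      return ms;              /* 1 tick == 1 ms: no conversion needed */
  #elif TICKS_PER_SEC == 500
      return ms >> 1;         /* 1000/500 == 2: straight shift        */
  #elif TICKS_PER_SEC == 250
      return ms >> 2;
  #elif TICKS_PER_SEC == 125
      return ms >> 3;
  #else
      /* generic path: 64-bit math, often via compiler intrinsics */
      return (int32_t)((int64_t)ms * TICKS_PER_SEC / 1000);
  #endif
  }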

Change-Id: I080eb338a2637d6b1c6838c119af1a9fa37fe869
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:07 +00:00
Benjamin Walsh eec37e6752 kernel: add flag that tells the system is handling timeouts
This limits the execution contexts that will go over the loop in
_unpend_first_thread() to only very high priority ISRs that preempt the
system clock timer ISR, and only while it is handling timeouts.

Change-Id: Iaf0500d28a2de5e077c9cf9861a5a70244127d58
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:05 +00:00
Anas Nashif dc3d73bf58 kernel: fix all nanokernel usage in comments
Also include kernel.h instead of nanokernel.h

Change-Id: I65dc5e31b5409b809397296817e2d5e7adf28892
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-21 18:45:03 +00:00
Anas Nashif 3d8e86c12c drivers: eliminate nano/micro kernel usage
Jira: ZEP-1415

Change-Id: I4a009ff57edb799750175aef574a865589f96c14
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-21 18:45:02 +00:00
Anas Nashif 2f203c2f92 tracing: rename CONFIG_DEBUG_TRACING_KERNEL_OBJECTS
Use a short name for this option CONFIG_OBJECT_TRACING.

Change-Id: Id27de7ef9ca299492b6b7d2324d9f5bcf8059a31
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Anas Nashif d687a95611 kernel: move kernel code to kernel/ directly
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.

Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Anas Nashif 9463dc0b8f kernel: merge kernel Kconfigs into one
Reorganise and clean up the kernel Kconfig options, grouping options
of the same area under menus to ease readability and to give a better
structure when using menuconfig.

Change-Id: Ic6b39730297861367abd345ede35e41c046c099d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:43 +00:00
Anas Nashif 40b7183326 kernel: fixed description of THREAD_CUSTOM_DATA
Change-Id: I63ebfc6b7cf869d7a00ccbe4f20eca8060edaf43
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:42 +00:00
Anas Nashif 569f0b4105 debug: move debug features from misc to subsys/debug
Change-Id: I446be0202325cf3cead7ce3024ca2047e3f7660d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:40 +00:00
Anas Nashif ed116ace6d kernel: kconfig: move power management options out
Change-Id: I5d7068ca7a5793bb3499f2bf2dc1abc4e337313e
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:37 +00:00
Anas Nashif 666afe5923 kernel: kconfig: move event logger options into file
Change-Id: I1e80375df583c5a5b6f04b216b54ed5b786e4655
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:37 +00:00
Anas Nashif bd10845996 kernel: kconfig: replace task/fiber with threads
Change-Id: I6d44cad8b2cf195137f04808167614390ee2ec55
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:36 +00:00
Anas Nashif 59a7de8ddf kernel: Isolate logger options
Move those into a separate Kconfig file and include them instead.

Change-Id: Ifa25d6ec92937080ad5970af7ca5c3f07ddec961
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:35 +00:00
Anas Nashif cfbe9b05a1 kernel: rename NANOKERNEL_TICKLESS_IDLE_SUPPORTED
Rename NANOKERNEL_TICKLESS_IDLE_SUPPORTED to
TICKLESS_IDLE_SUPPORTED and remove nanokernel occurrences in Kconfig
files.

Make TICKLESS_IDLE depend on hardware that supports it.

Change-Id: I6a2e4fb0f7cf4b45475b48e71823ea089ee98759
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:35 +00:00
Anas Nashif cb888e6805 kernel: remove nano/micro wording and usage
Also remove some old cflags referencing directories that do not exist
anymore, and replace references to legacy APIs in the doxygen
documentation of various functions.

Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:03 +00:00
Benjamin Walsh 1f2a5791bc kernel: add missing ___kernel_t_arch_OFFSET
Change-Id: I9913a1734f00dfb24f41214942230c4a127aa1a8
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-19 19:10:17 +00:00
Benjamin Walsh cef368f578 kernel/arch: rename ARCH_HAS_NANO_FIBER_ABORT to ARCH_HAS_THREAD_ABORT
Also remove the now-obsolete ARCH_HAS_TASK_ABORT.

ARC does not need the options either.

Change-Id: Ie52d63178a367ce12b911dacfe2d389f4f75ed2d
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-17 22:44:40 +00:00
Benjamin Walsh 096d8e9af5 kernel: fix warnings when CONFIG_MULTITHREADING=n
Change-Id: I57c6a225c3eece9e2d4942bacdfcb097f2edaf42
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-17 16:54:45 +00:00
Benjamin Walsh e0ca2109a6 kernel: initialize system work queue after kernel is up
The system work queue spawns a coop thread to handle the work items.
If it is spawned before the kernel is up and the initialization dummy
thread's priority is lower, there will be a context switch into the
system work queue's thread at that time, before the kernel is ready to
handle this.

Change-Id: I879659ab58231c5a5cfaa34f2f65c2eccab99142
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-17 16:24:34 +00:00
Anas Nashif f11fe9eca5 kernel: set CONFIG_MDEF by default
Legacy applications still need it; otherwise kernel objects are not
configured correctly. This will be removed later.

Change-Id: I22df10e4adcc11f035f9813bea8c93dd1a560a1d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-17 10:25:20 -05:00
Benjamin Walsh b12a8e0914 kernel: introduce single-threaded kernel
For very constrained systems, like bootloaders.

Only the main thread is available, so a main() function must be
provided. Kernel objects that rely on pending will not behave as
expected, since the main thread, being the only thread in the system,
cannot pend. Usage of such objects should be limited to passing
K_NO_WAIT as the timeout parameter, effectively polling on the object.
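
A hedged usage sketch (the semaphore and its processing are
hypothetical; header name and main() signature are as used at the
time):

  #include <zephyr.h>

  K_SEM_DEFINE(data_ready, 0, 1);   /* e.g. given from an ISR */

  void main(void)
  {
      for (;;) {
          /* Poll with K_NO_WAIT instead of pending: there is no
           * other thread to switch to while waiting.
           */
          if (k_sem_take(&data_ready, K_NO_WAIT) == 0) {
              /* process the data */
          }

          k_cpu_idle();   /* sleep until the next interrupt */
      }
  }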

Change-Id: Iae0261daa98bff388dc482797cde69f94e2e95cc
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:39 -05:00
Benjamin Walsh a2098393fa kernel: fix dummy init thread prio in preempt-only configurations
A thread cannot have a coop priority in this case. It turns out a
priority is not needed when a thread is not inserted in the ready queue,
which is the case with the dummy thread.

The comment was also out-of-date, since it referred to a nanokernel
concept.

Change-Id: Id117501164bd72383d53f3df13030cf95dadc38b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh 8e4a534ea1 kernel: enable and optimize coop-only configurations
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.

Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh c8cecca192 kernel: add CONFIG_PREEMPT_ENABLED and CONFIG_COOP_ENABLED
Enabled when CONFIG_NUM_PREEMPT_PRIORITIES != 0 and
CONFIG_NUM_COOP_PRIORITIES != 0, respectively.

Change-Id: Ic791518429d9d8ad8127f67087f7927bffeabe44
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh c3a2bbba16 kernel: add k_cpu_idle/k_cpu_atomic_idle()
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, and only the old APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.

The kernel internals now make use of these APIs instead of the legacy
ones.
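
A hedged sketch of the intended usage pattern for the two APIs (the
flag-checking function is hypothetical):

  #include <zephyr.h>

  void wait_for_next_interrupt(void)
  {
      /* Interrupts must be enabled; returns after any interrupt. */
      k_cpu_idle();
  }

  void idle_until_flag_set(volatile int *flag)
  {
      unsigned int key = irq_lock();

      if (!*flag) {
          /* Atomically re-enables interrupts and goes idle, so an
           * interrupt that sets the flag between the check and the
           * idle instruction cannot be missed.
           */
          k_cpu_atomic_idle(key);
      } else {
          irq_unlock(key);
      }
  }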

Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh b889fa8b20 kernel: enhance realtime-ness when handling timeouts
The number of timeouts that expire on a given tick is arbitrary. When
handling them, interrupts were locked, which prevented higher-priority
interrupts from preempting the system clock timer handler.

Instead of looping over the list of timeouts with interrupts locked
the whole time (the list can be manipulated at interrupt level),
timeouts are now dequeued one by one, unlocking interrupts between
each, and put on a local 'expired' queue that is subsequently drained
with interrupts unlocked. This scheme relies on the fact that no
timeout can be prepended onto the timeout queue while expired timeouts
are being removed from it, since adding a timeout of 0 is prohibited.

Timer handlers now run with interrupts unlocked: the previous behaviour
added potentially horrible non-determinism to the handling of timeouts.
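
A self-contained sketch of the scheme (data structures and names are
illustrative, not the actual kernel symbols; irq_lock()/irq_unlock()
stand in for the kernel's interrupt locking primitives):

  #include <stdint.h>
  #include <stddef.h>

  struct timeout {
      struct timeout *next;
      int32_t delta_ticks;                 /* delta from previous entry */
      void (*handler)(struct timeout *t);
  };

  extern unsigned int irq_lock(void);      /* kernel locking primitives */
  extern void irq_unlock(unsigned int key);

  static struct timeout *timeout_q;        /* delta-sorted pending list */

  void tick_announce(void)
  {
      struct timeout *expired = NULL, **tail = &expired;
      unsigned int key;

      /* account for the tick that just elapsed */
      key = irq_lock();
      if (timeout_q != NULL) {
          timeout_q->delta_ticks--;
      }
      irq_unlock(key);

      /* Dequeue expired entries one at a time, unlocking interrupts
       * between each, onto a local 'expired' queue. Nothing can be
       * prepended to the head meanwhile, since adding a timeout of 0
       * ticks is prohibited.
       */
      for (;;) {
          struct timeout *t;

          key = irq_lock();
          t = timeout_q;
          if (t == NULL || t->delta_ticks > 0) {
              irq_unlock(key);
              break;
          }
          timeout_q = t->next;
          irq_unlock(key);

          t->next = NULL;
          *tail = t;
          tail = &t->next;
      }

      /* Drain the local queue, running handlers with interrupts unlocked. */
      while (expired != NULL) {
          struct timeout *t = expired;

          expired = t->next;
          t->handler(t);
      }
  }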

Change-Id: I709085134029ea2ad73e167dc915b956114e14c2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2016-12-15 16:17:22 -05:00
Benjamin Walsh 5596f78c08 kernel: fix typo
Change-Id: I5a9e53100dcac9b78cae655c3c68444357832094
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
Benjamin Walsh 6ca6c28dd3 kernel/timers: move tick computation out of irq_lock block
These tick computations can take a significant amount of time, and
there is no reason to do them with interrupts locked.

Change-Id: I2d8803ec6025b827e9450fa493084bbf8be98bad
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
Benjamin Walsh d211a52fc0 kernel: add defines for delta_ticks_from_prev special values
Use _INACTIVE instead of hardcoding -1.

_EXPIRED is defined as -2 and will be used for an improvement so that
interrupts are not locked for a non-deterministic amount of time while
handling expired timeouts.
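
In other words (values as stated above):

  #define _INACTIVE (-1)   /* timeout not in use                         */
  #define _EXPIRED  (-2)   /* reserved for the upcoming expired handling */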

_abort_timeout/_abort_thread_timeout return _INACTIVE instead of -1 if
the timeout has already been disabled.

Change-Id: If99226ff316a62c27b2a2e4e874388c3c44a8aeb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
Benjamin Walsh 88b3691415 kernel/arch: enhance the "ready thread" cache
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might never run: the flow could be interrupted and another
thread could take its place. This was the more conservative approach,
which ensured that moving a thread to the cache was never wasted work.

However, this caused two problems:

1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.

2. The interrupt exit code would always have to call into C to find
what thread to run when the current thread was not coop and did not
have the scheduler locked. Furthermore, this code path could encounter
a cold cache, and then it had to find the next thread to run the long
way.

To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even when the current cached
thread is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.

On the ARM FRDM K64F board, this improvement is seen:

Before:

1- Measure time to switch from ISR back to interrupted task

   switching time is 215 tcs = 1791 nsec

2- Measure time from ISR to executing a different task (rescheduled)

   switch time is 315 tcs = 2625 nsec

After:

1- Measure time to switch from ISR back to interrupted task

   switching time is 130 tcs = 1083 nsec

2- Measure time from ISR to executing a different task (rescheduled)

   switch time is 225 tcs = 1875 nsec

These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.

Fixes ZEP-1401.
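
A hedged sketch of what the always-hot cache buys the interrupt exit
path: deciding whether to context switch becomes one load of
_kernel.ready_q.cache and a pointer comparison (types and globals below
are illustrative stand-ins):

  #include <stdbool.h>

  struct k_thread;

  struct ready_q {
      struct k_thread *cache;        /* always the next thread to run */
  };

  struct kernel_state {
      struct k_thread *current;
      struct ready_q ready_q;
  };

  extern struct kernel_state _kernel;

  static inline bool must_reschedule_on_irq_exit(void)
  {
      return _kernel.ready_q.cache != _kernel.current;
  }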

Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
Anas Nashif 418058a123 kernel: remove NANOKERNEL and MICROKERNEL configs
Those are legacy and not needed anymore.

Change-Id: I8113114fd60880b3f538612db7702f6129af0a06
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-14 13:45:52 +00:00
Anas Nashif 0859df1eca kernel: disable MDEF by default
Disable MDEF option and set it only in legacy projects.

Change-Id: I2e1f011eb1f876af929140e36f71f0efb5e955c1
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-12 20:25:07 +00:00
Flavio Santes b80db0a41b kernel: Add ARG_UNUSED macro to avoid compiler warnings
The ARG_UNUSED macro is added to avoid compiler warnings.
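
Typical usage (the callback and its signature are hypothetical;
ARG_UNUSED() simply casts its argument to void):

  #include <zephyr.h>

  static void my_callback(void *user_data, int status)
  {
      ARG_UNUSED(user_data);   /* required by the API, unused here */

      if (status != 0) {
          /* handle the error */
      }
  }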

Change-Id: Ie9b72c94191318c1d667d7929eb029098c62e993
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
2016-12-12 20:02:31 +00:00
Flavio Santes 5349af8702 kernel/mem_slab: Use the right data-type
Use uint32_t for counters instead of int to avoid compiler warnings.

Change-Id: Ie96dfaca650b5f91562c0740c18610fc40968be6
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
2016-12-12 20:01:31 +00:00
Flavio Santes 380ee05a58 kernel/mem_pool: Use the right data-type
mem_pool structures use uint32_t for counters and size_t to specify
sizes; however, some routines in mem_pool.c use int for similar
purposes. This commit fixes that by updating those variables to match
the mem_pool data types.

Change-Id: I0aa01c27e512d06d40432e8091ed8fd9d959970c
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
2016-12-12 20:01:31 +00:00
Johan Hedberg f99ad3f0e2 kernel: Refactor remaining time evaluation for timeouts
Factor out the code for evaluating the remaining time for _timeout
structs so that it can also be used for other objects besides k_timer
structs (like k_delayed_work, coming in a subsequent patch).
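
A hedged sketch of what such a shared helper could look like, assuming
a delta-sorted timeout queue (names are illustrative): the remaining
time of an entry is the sum of the deltas from the head of the queue up
to and including that entry.

  #include <stdint.h>
  #include <stddef.h>

  struct timeout {
      struct timeout *next;
      int32_t delta_ticks_from_prev;   /* -1 when inactive */
  };

  extern struct timeout *timeout_q_head;

  int32_t timeout_remaining_ticks(const struct timeout *target)
  {
      int32_t remaining = 0;
      const struct timeout *t;

      if (target->delta_ticks_from_prev == -1) {
          return 0;   /* not active */
      }

      for (t = timeout_q_head; t != NULL; t = t->next) {
          remaining += t->delta_ticks_from_prev;
          if (t == target) {
              break;
          }
      }

      return remaining;
  }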

Change-Id: I243a7b29fb2831f06e95086a31f0d3a6c37dad67
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2016-12-12 18:55:40 +00:00
Marcus Shawcroft a715194d43 random: Rewrite sys_rand32_init() with SYS_INIT()
Use the SYS_INIT() mechanism to invoke the sys_rand32_init() function
in random drivers that require an initializer.  Remove all empty
sys_rand32_init() instances.

The existing explicit sys_rand32_init() function runs immediately after
PRE_KERNEL_2, before stack canaries are initialized. In order to get
equivalent behaviour for sys_rand32_init(), we set SYS_INIT() to
initialize the random drivers at the lowest priority of PRE_KERNEL_2.
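
A hedged sketch of the resulting pattern in a random driver (the
function body and the exact priority value are illustrative; the commit
only pins it to the lowest priority within PRE_KERNEL_2):

  #include <zephyr.h>
  #include <init.h>
  #include <device.h>

  static int sys_rand32_init(struct device *dev)
  {
      ARG_UNUSED(dev);

      /* seed or configure the hardware entropy source here */

      return 0;
  }

  /* assumed: 99 is the lowest (latest-running) PRE_KERNEL_2 priority */
  SYS_INIT(sys_rand32_init, PRE_KERNEL_2, 99);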

Change-Id: I4521e44daac806bc4eef01ce7fdf2ba5367e0587
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
2016-12-11 11:18:18 +00:00
Carles Cufi 9849df8c80 kernel: Disable interrupts after tick calculation in k_sleep()
To guarantee that the compiler does not reorder the execution of
irq_lock() with preceding operations, a volatile qualifier is
placed before the declaration of the ticks variable, which then
ensures that irq_lock() is executed after the tick calculation but
before accessing the ready and timeout queues.
Without the volatile keyword, interrupts would be disabled during the
calculation of the ticks, which would increase interrupt latency
significantly.
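
A hedged sketch of the pattern (helper names are illustrative):

  #include <zephyr.h>

  extern int32_t ms_to_ticks(int32_t ms);        /* illustrative helpers */
  extern void add_current_thread_timeout(int32_t ticks);

  void sleep_for_ms(int32_t duration)
  {
      /* 'volatile' forces the (potentially slow, 64-bit) conversion to
       * be evaluated here, before irq_lock(), instead of being sunk
       * into the locked region by the compiler.
       */
      volatile int32_t ticks = ms_to_ticks(duration);
      unsigned int key = irq_lock();

      add_current_thread_timeout(ticks);   /* queues touched under lock */
      /* ... remove the thread from the ready queue and swap out ... */

      irq_unlock(key);
  }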

Change-Id: I2da82a1282e344f3b8d69e9457b36a4cb1d9ec18
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2016-12-08 16:37:51 +00:00
Mahavir Jain acea24138a kernel: replace .BSS and .DATA setup with standard library calls
Use standard library calls like memset/memcpy for setting up the BSS
and DATA sections during system initialization; this takes advantage of
architecture-specific optimizations in the standard library.
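
A hedged sketch (linker symbol names vary per architecture and linker
script; the ones below are illustrative):

  #include <string.h>

  extern char __bss_start[], __bss_end[];
  extern char __data_ram_start[], __data_ram_end[], __data_rom_start[];

  static void bss_zero(void)
  {
      /* zero .bss with the (often architecture-optimized) libc memset */
      memset(__bss_start, 0, __bss_end - __bss_start);
  }

  static void data_copy(void)
  {
      /* copy .data from its ROM load address into RAM */
      memcpy(__data_ram_start, __data_rom_start,
             __data_ram_end - __data_ram_start);
  }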

Change-Id: Ia72b42aa65b44d1df7c22dd1fbc39a44fa001be9
Signed-off-by: Mahavir Jain <mjain@marvell.com>
2016-12-02 17:44:06 +00:00