Some inconsistent spacing and private types starting with '_'.
Change-Id: I3354b69cc3934717d3b8097cdda98474339c1f32
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
The loop was not tracking the next node in the list correctly.
However, the fix turned out to be much more involved than just
correcting that small issue, due to the way that semaphore group
timeouts work.
Instead of handling timeouts one-by-one, we have to handle all timeouts
in a semaphore group as one. To do that, we use the fact that the
timeout of the real thread is always found first in the kernel's
timeout_q, and if it has expired, we do not even look at the timeouts of
the dummy threads.
Change-Id: Iadcfd06f33c6b335efa2592b2c01eeb5ca67afde
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Timeouts expiring on the same tick are queued in the timeout_q in
reverse order: as soon as a new timeout finds a timeout expiring on the
same tick or later, it gets prepended to that timeout. This allows
exiting the traversal of the timeout_q as early as possible, which is
done with interrupts locked, thus reducing interrupt latency.
However, this has the side-effect of handling the timeouts expiring on
the same tick in the reverse order in which they were queued.
For example:

  thread_c, prio 4:
      uint32_t uptime = k_uptime_get_32();
      while (uptime == k_uptime_get_32()); /* align on tick */
      k_timer_start(&timer_a, 5, 0);
      k_timer_start(&timer_b, 5, 0);

  thread_a, prio 5:
      k_timer_status_sync(&timer_a);
      printk("thread_a got timer_a\n");

  thread_b, prio 5:
      k_timer_status_sync(&timer_b);
      printk("thread_b got timer_b\n");
One could "reasonably" expect thread_a to run first, since both threads
have the same prio, and timer_a was started before timer_b and was thus
inserted in the timeout_q first (time-wise). However, thread_b
will run before thread_a, since timer_b's timeout is prepended to
timer_a's.
This patch keeps the reversing of the order when adding timeouts in the
timeout_q, thus preserving the same interrupt latency; however, when
dequeuing them and adding them to the expired queue, we now reverse that
order _again_, causing the timeouts to be handled in the expected order.
Change-Id: Id83045f63e2be88809d6089b8ae62034e4e3facb
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Modify _get_first_thread_to_unpend() so that it does not remove the
thread from the wait queue. Rename it to _find_first_thread_to_unpend()
to match the new behaviour.
This will be needed to fix a semaphore group bug.
Change-Id: I1b7531c3beecf3b6a86ecf88a93a02449edd0767
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Rather than explicitly checking the thread state bit.
Change-Id: Ic78427d9847e627a0e91d0147d3b6164450597f6
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
This has not bitten us yet, but it was a ticking timebomb.
This is similar to the issue that was found with irq_lock/irq_unlock
implementations on several architectures. Making the sched_lock
variable volatile is not enough to force it to be incremented and
decremented strictly around the accesses to the data it protects.
Instead, a compiler barrier must prevent the compiler from reordering
memory accesses around the setting of sched_lock. This is needed in the
inline implementations _sched_lock()/_sched_unlock_no_reschedule(),
which resolve to a simple decrement/increment of the per-thread
sched_lock variable.
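
A minimal sketch of the pattern, with hypothetical stand-in types and
names for the per-thread data (the real inlines operate on the kernel's
thread structure):

    #include <stdint.h>

    /* Hypothetical stand-ins for the per-thread data, for illustration. */
    struct thread_base { uint8_t sched_locked; };
    static struct { struct thread_base base; } current_thread;

    /* An empty asm statement with a "memory" clobber keeps the compiler
     * from moving memory accesses across it.
     */
    #define compiler_barrier() __asm__ volatile ("" ::: "memory")

    static inline void sched_lock_sketch(void)
    {
        current_thread.base.sched_locked--;  /* 0x00 -> 0xff: locked */
        compiler_barrier();  /* protected accesses stay after this point */
    }

    static inline void sched_unlock_no_reschedule_sketch(void)
    {
        compiler_barrier();  /* protected accesses stay before this point */
        current_thread.base.sched_locked++;
    }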
Change-Id: I06f5b3524889f193efe69caa947118404b1be0b5
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
The Xtensa port uses more stack than the others. This was discussed
with the team and we agreed that it can be accepted for the first beta.
We will investigate later how to avoid allocating coprocessor registers
for the system threads in order to reduce the stack overhead; however,
this will not happen before the port is considered stable.
Change-Id: Icd5b2b0ab68d0906b5408f35f081b100acabc010
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
This patch adds support for using device tree configuration files for
configuring ARM platforms.
In this patch, only the FLASH_SIZE, SRAM_SIZE, NUM_IRQS, and
NUM_IRQ_PRIO_BITS were removed from the Kconfig options. A minimal set
of options was removed so that it would be easier to work through the
plumbing of the build system.
It should be noted that the host system must provide access to the
device tree compiler (DTC). The DTC can usually be installed on host
systems through distribution packages or by downloading and compiling
from https://git.kernel.org/pub/scm/utils/dtc/dtc.git
This patch also requires the Python yaml package.
This change implements parts of each of the following Jira:
ZEP-1304
ZEP-1305
ZEP-1306
ZEP-1307
ZEP-1589
Change-Id: If1403801e19d9d85031401b55308935dadf8c9d8
Signed-off-by: Andy Gross <andy.gross@linaro.org>
Poll events were getting registered even when polling conditions had
already been met, while events whose conditions were met did not get
registered and did not increment the number of events registered. This
caused a possible discrepancy between the number of events registered
and the position of the last registered event in the events array.
As soon as one event condition is met, the next ones in the array should
not get registered even if their condition is not met. This is what the
code does now.
Change-Id: Ibcc3b135ec9d3cf463beb9da3f641fec962b34bf
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
This flag is no longer necessary and TICKLESS_IDLE will be
enabled by default if SYS_POWER_MANAGEMENT is enabled.
Jira: ZEP-1325
Change-Id: Ic6cd4b8dc0a17c6a413cabf6509b215a4558318d
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Due to a limitation in XCC, the inline assembly does not produce the
expected instructions, which results in a wrong code sequence. Plain C
code, on the other hand, works well.
The note about compilers seems not to be an issue on any of our
currently supported compilers.
Change-Id: I9d2ab0fbf8a48d9dad51da3fd54453f205516d74
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
It was in the static initializers, but was missing from the object
runtime init functions.
Change-Id: I10d519760eabdbe640a19cc5cfa9241c1356b070
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
This allows users to install a way of finding out what the event and
the objects are used for without looking at the object itself, or to
tag a group of objects that belong together.
The runtime init function _does not_ take a tag, so that there is no
runtime hit when it is not needed. The static initializer macro _does_
take the tag, so that it does not have to be assigned at runtime when
it is needed, thus also avoiding a runtime hit.
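
A minimal sketch of the two initialization paths, assuming a semaphore
as the polled object (the tag value and names below are made up for
illustration):

    #include <zephyr.h>

    K_SEM_DEFINE(my_sem, 0, 1);

    #define TAG_SENSOR 1  /* made-up tag value */

    /* Static initializer: the tag is filled in at build time. */
    struct k_poll_event tagged_event =
        K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_SEM_AVAILABLE,
                                        K_POLL_MODE_NOTIFY_ONLY,
                                        &my_sem, TAG_SENSOR);

    void init_tagged_event(struct k_poll_event *event)
    {
        /* Runtime init: no tag parameter, so no cost when it is unused;
         * assign the field separately when a tag is wanted.
         */
        k_poll_event_init(event, K_POLL_TYPE_SEM_AVAILABLE,
                          K_POLL_MODE_NOTIFY_ONLY, &my_sem);
        event->tag = TAG_SENSOR;
    }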
Change-Id: I89a36c6f969ff952f9d1673b1bb5136e407535c6
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
When CONFIG_FP_SHARING is enabled without CONFIG_LEGACY, thread.c was
referencing symbols like K_TASK_GROUP_FPU which are defined in
legacy.h.
Change-Id: I4bb1723f91c3e3586c5d1bf05cf23a1c0d3d5aac
Signed-off-by: Jithu Joseph <jithu.joseph@intel.com>
With the recently added k_poll feature, k_fifo_put_list was forgotten
about. Add the necessary code to wake up a k_poll call when
k_fifo_put_list is called.
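
As an illustration (the fifo and item names are made up), a batch
queued with k_fifo_put_list() now also wakes a thread pending in
k_poll() on that fifo:

    #include <zephyr.h>

    K_FIFO_DEFINE(work_fifo);

    struct work_item {
        void *fifo_reserved;  /* first word reserved for the kernel */
        int payload;
    };

    void queue_batch(struct work_item *first, struct work_item *last)
    {
        /* Link the items through their first word and queue them in one
         * operation; a poller waiting on work_fifo is now woken up too.
         */
        first->fifo_reserved = last;
        last->fifo_reserved = NULL;
        k_fifo_put_list(&work_fifo, first, last);
    }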
Change-Id: Ib9baef5ee2bd00620e2eea5afdd81accc4518bd5
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
k_poll() is similar to the POSIX poll() API in spirit in that it allows
a single thread to monitor multiple events without actively polling
them, instead pending until one or more become ready. Such an event
can be a direct event, or a kernel object (currently only semaphores
and fifos are supported).
When a kernel object being polled on is ready, it is not "given" to the
poller: the poller must then acquire it via the regular API for the
object (e.g. k_sem_take()). Only one thread can poll on a particular
object at one time. These restrictions mean that k_poll() is most
effective when a single thread monitors multiple events that are not
subject to contention. For example, being the sole reader on multiple
fifos, or the only thread being signalled by multiple semaphores, or a
combination of both.
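
A minimal usage sketch under those restrictions: one thread is the sole
consumer of a semaphore and a fifo, and acquires whichever became ready
via the regular API (object names are made up):

    #include <zephyr.h>

    K_SEM_DEFINE(my_sem, 0, 1);
    K_FIFO_DEFINE(my_fifo);

    void wait_for_either(void)
    {
        struct k_poll_event events[2];

        k_poll_event_init(&events[0], K_POLL_TYPE_SEM_AVAILABLE,
                          K_POLL_MODE_NOTIFY_ONLY, &my_sem);
        k_poll_event_init(&events[1], K_POLL_TYPE_FIFO_DATA_AVAILABLE,
                          K_POLL_MODE_NOTIFY_ONLY, &my_fifo);

        k_poll(events, 2, K_FOREVER);

        /* The ready object is not "given" to us: acquire it explicitly. */
        if (events[0].state == K_POLL_STATE_SEM_AVAILABLE) {
            k_sem_take(&my_sem, K_NO_WAIT);
        }
        if (events[1].state == K_POLL_STATE_FIFO_DATA_AVAILABLE) {
            void *data = k_fifo_get(&my_fifo, K_NO_WAIT);

            (void)data;  /* process the item here */
        }
    }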
Change-Id: I7035a9baf4aa016fb87afc5f5c0f5f8cb216480f
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Dissociate wait queue initialization from doubly-linked lists, so that
the underlying implementation can be abstracted.
Change-Id: Id7544c6ac506643437f9c4f0ae97e7eecab8d06d
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
The K_<thread option> flags/options available to users were hidden in
the kernel private header files: move them to include/kernel.h to make
them public.
Also, to avoid any future confusion, rename the k_thread.execution_flags
field to user_options.
Change-Id: I65a6fd5e9e78d4ccf783f3304b607a1e6956aeac
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
The execution_flags will store the user-facing states of a thread.
This also fixes a bug where K_ESSENTIAL was already assigned to
execution_flags via the options field of
k_thread_spawn()/K_THREAD_DEFINE().
Change-Id: I91ad7a62b5d180e09eead8985ff519809959ecf2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.
Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Unused.
Reuse its bit for K_FP_REGS to keep the used bits as low as possible.
Change-Id: I5998801ef34156271d4f66d1948a05e0b2ce58f7
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
This will be needed for some thread user options that will move to
kernel.h since they are part of the user API.
Change-Id: I46e302b6cafcdddbad3458134b98feb5b8d45d9b
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
When prio and sched_locked were moved into a struct together to create a
union with the combined preempt field, the volatile qualifier moved from
sched_locked to prio by mistake.
Change-Id: I5a8e01324f14e77e3d7162c12515471826023633
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.
Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.
Manually updated subsys/logging/sys_log.c, which had a malformed
header in the original file. Also cleaned up several cases that already
had an SPDX tag, where we either got a duplicate or missed updating it.
Jira: ZEP-1457
Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Make it depend on CONFIG_LEGACY_KERNEL being enabled.
Change-Id: Id5d3cd35a52d38bf7476ea8e51b71e2c687f0923
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The idle priority was not accounted for.
With this change, the philosophers demo runs in coop-only mode.
Change-Id: I23db33687bcf3b2107d5fc07977143730f62e476
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
If the system's priority inheritance priority ceiling is not the same as
the highest priority in the system, it was possible for a thread owning
the mutex to get its priority lowered instead of left unchanged.
Change-Id: Ic06a1c4a66322c2949b2ba2f53efa03200fb1fc1
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
-1 is reserved for the idle thread in coop-only mode and -1 does not
exist as a priority in preempt-only mode.
With this change, the philosophers demo runs in preempt-only mode.
Change-Id: Id15a6eafc7582966deaf0db9ed6960b5da74be33
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Similar to what was available with nano timers in the original kernel,
allow a user to associate opaque data with a timer.
Fix for ZEP-1558.
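
A minimal sketch of the intended usage (the context structure and names
are made up for illustration):

    #include <zephyr.h>

    struct blink_ctx {
        int led;
    };

    static struct blink_ctx ctx;

    static void blink_expiry(struct k_timer *timer)
    {
        /* Retrieve the opaque pointer associated with this timer. */
        struct blink_ctx *c = k_timer_user_data_get(timer);

        c->led ^= 1;
    }

    K_TIMER_DEFINE(blink_timer, blink_expiry, NULL);

    void start_blinking(void)
    {
        k_timer_user_data_set(&blink_timer, &ctx);
        k_timer_start(&blink_timer, K_MSEC(500), K_MSEC(500));
    }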
Change-Id: Ib8cf998b47988da27eba4ee5cd2658f90366b1e4
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
The default 256-byte stack size for the idle task is not enough, as
the stack grows/shrinks by multiples of 16 bytes on the RISC-V
architecture.
Increase it to 512 bytes for the RISCV32 architecture.
Change-Id: I8321c48e4c1a877b252ba5561f3cbdd1fe475fc7
Signed-off-by: Jean-Paul Etienne <fractalclone@gmail.com>
Added _MOVE_INSTR for the RISCV32 architecture.
The store instruction has a different syntax in RISC-V compared to the
other architectures. Hence, for each architecture, specify the entire
instruction within the _MOVE_INSTR variable.
Change-Id: Iedc421e73411876abd8b698f7d4b46081b473d79
Signed-off-by: Jean-Paul Etienne <fractalclone@gmail.com>
For some of our samples/tests we disable all console support, yet
enable BOOT_BANNER in tests/include/test.config. This can generate
warnings like:
warning: (BOOT_BANNER && BLUETOOTH_DEBUG_LOG && BLUETOOTH_DEBUG_MONITOR)
selects PRINTK which has unmet direct dependencies (CONSOLE_HAS_DRIVER)
So having BOOT_BANNER depend on CONSOLE_HAS_DRIVER cleans things up.
Change-Id: Ia6a6348fc08b0808ea6eaedb8c8833507f82c702
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
The main, idle, interrupt and workqueue call stack definitions are not
available for applications to call stack_analyze() on, but they often
need to be measured empirically to tune their sizes for particular
applications and use cases.
This exposes a new k_call_stacks_analyze() API call that allows the
application to measure the used call stack space for the 4
kernel-defined call stacks.
Additionally, on the ARC architecture, the FIRQ stack is also profiled.
Change-id: I0cde149c7366cb6c4bbe8f9b0ab1cc5b56a36ed9
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
These two fields in the thread structure control the preemptibility of a
thread.
sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.
By putting them end-to-end, this means that a thread is non-preemptible
if the bundled value is greater than or equal to 0x0080. This is the
only thing the interrupt exit code has to check to decide to try a
reschedule or not.
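
An illustrative, self-contained sketch of the layout and of that single
check, assuming a little-endian target (the kernel selects the byte
order at build time; the names here are stand-ins):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct preempt_demo {
        union {
            struct {
                int8_t prio;          /* negative => coop thread */
                uint8_t sched_locked; /* 0xff..0x01 => locked */
            };
            uint16_t preempt;         /* bundled value */
        };
    };

    /* The single test done on interrupt exit: only attempt a reschedule
     * if the bundled value is below 0x0080.
     */
    static bool is_preemptible(const struct preempt_demo *t)
    {
        return t->preempt < 0x0080;
    }

    int main(void)
    {
        struct preempt_demo t = { .prio = 5, .sched_locked = 0 };

        printf("%d\n", is_preemptible(&t));  /* 1: preemptible */

        t.sched_locked--;                    /* lock: 0x00 -> 0xff */
        printf("%d\n", is_preemptible(&t));  /* 0: scheduler locked */

        t.sched_locked = 0;
        t.prio = -1;                         /* coop priority */
        printf("%d\n", is_preemptible(&t));  /* 0: coop thread */

        return 0;
    }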
Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Some thread fields were 32-bit wide, when they are not even close to
using that full range of values. They are instead changed to 8-bit fields.
- prio can fit in one byte, limiting the priority range to -128 to 127
- recursive scheduler locking can be limited to 255; a rollover results
most probably from a logic error
- flags are split into execution flags and thread states; 8 bits is
  enough for each of them currently, with at worst two states and four
  flags to spare on x86 (other archs have six flags to spare)
Doing this saves 8 bytes per stack. It also paves the way for an
upcoming enhancement to the check of whether the current thread is
preemptible on interrupt exit.
Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Add a global option for legacy configurations and enable it by default
for backward compatibility. Disable the option for tests, but keep it
for legacy samples and tests.
Jira: ZEP-964
Change-Id: I0831e2aa74d438b1ac74eb762186cb220a504beb
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Remove legacy option and use SYS_CLOCK_EXISTS where appropriate.
Change-Id: I3d524ea2776e638683f0196c0cc342359d5d810f
Signed-off-by: Anas Nashif <anas.nashif@intel.com>