Commit Graph

2055 Commits

Author SHA1 Message Date
Andy Ross 914205ca85 kernel/timeout: Add k_uptime_ticks() API
Add a call to get the system tick count as an official API (and
redefine the existing millisecond API in terms of it).  Sophisticated
applications need to be able to count ticks directly, and the newer
timeout API supports that.  Uptime should too, for symmetry.
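
A minimal usage sketch (int64_t return assumed; do_work() is a
hypothetical application function):

    int64_t start = k_uptime_ticks();

    do_work();

    int64_t elapsed_ticks = k_uptime_ticks() - start;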

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross 5a5d3daf6f kernel/timeout: Add timeout remaining/expires APIs
Add tick-based (i.e. precision-resistant) inspection APIs for kernel
timeouts visible via k_timer, k_delayed_work and thread timeouts
(i.e. pended/sleeping threads).  These are each available in
"remaining" and "expires" variants returning time values relative to
current time and system start, respectively.  All have system calls
where applicable (i.e. everywhere but k_delayed_work, which is not a
userspace API).

The pre-existing millisecond "remaining_get()" predicates for timer
and delayed work remain, but are expressed in terms of the newer
calls.
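
A minimal sketch of the k_timer variants (exact names assumed;
my_timer is hypothetical):

    k_ticks_t left = k_timer_remaining_ticks(&my_timer); /* from now */
    k_ticks_t when = k_timer_expires_ticks(&my_timer);   /* since boot */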

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross 4c7b77a716 kernel/timeout: Add absolute timeout APIs
Add support for "absolute" timeouts, which are expressed relative to
system uptime instead of deltas from current time.  These allow for
more race-resistant code to be written by allowing application code to
do a single timeout computation, once, and then reuse the timeout
value even if the thread wakes up and needs to suspend again later.
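
A sketch of the pattern this enables (K_TIMEOUT_ABS_TICKS() per this
series; the 100-tick deadline and do_partial_work() are illustrative):

    k_timeout_t deadline = K_TIMEOUT_ABS_TICKS(k_uptime_ticks() + 100);

    do_partial_work();   /* may block, wake, and run for a while */
    k_sleep(deadline);   /* still wakes at the same absolute tick */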

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross cfeb07eded kernel/timeout: Enable 64 bit timeout precision
Add a CONFIG_TIMEOUT_64BIT kconfig that, when selected, makes the
k_ticks_t used in timeout computations pervasively 64 bit.  This will
allow much longer timeouts and much faster (i.e. more precise) tick
rates.  It also enables the use of absolute (not delta) timeouts in an
upcoming commit.
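
Illustrative use once the option is selected (assuming the K_TICKS()
initializer from the accompanying timeout rework; value arbitrary):

    /* With CONFIG_TIMEOUT_64BIT=y, deltas beyond 2^32 ticks fit. */
    k_sleep(K_TICKS(1ULL << 40));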

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross 7832738ae9 kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument.  Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created.  This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.

The existing K_MSEC() et al. macros now return initializers for a
k_timeout_t.

The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to compare these
against user-provided timeouts can now use a K_TIMEOUT_EQ() predicate
to test for equality.
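
A minimal sketch of such a predicate check (my_api() is hypothetical):

    int my_api(k_timeout_t timeout)
    {
            if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
                    return -EBUSY; /* e.g. refuse non-blocking calls */
            }
            /* ... block as requested ... */
            return 0;
    }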

Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.

For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided.  When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.

Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions.  These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig.  These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.

k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
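
The two forms below are equivalent:

    k_sleep(K_MSEC(100)); /* new-style k_timeout_t argument */
    k_msleep(100);        /* original millisecond behavior */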

Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate.  Also, in queue.c, when POLL was
enabled, a similar loop was needlessly used to retry the k_poll()
call after a spurious failure.  But k_poll() does not fail
spuriously, so the loop was removed.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Oleg Zhurakivskyy b1e1f64d14 global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()
Replace all occurrences of BUILD_ASSERT_MSG() with BUILD_ASSERT()
as a result of merging BUILD_ASSERT() and BUILD_ASSERT_MSG().

Signed-off-by: Oleg Zhurakivskyy <oleg.zhurakivskyy@intel.com>
2020-03-31 07:18:06 +02:00
Daniel Leung 4e1637b54e kernel: add sys init level for SMP
This adds a sys init level which allows device and sys_init calls
to run after SMP initialization, z_smp_init(), when all
cores are up and running.
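
A minimal sketch (assuming the new level is named SMP; my_smp_setup()
is hypothetical):

    static int my_smp_setup(struct device *unused)
    {
            /* Runs after z_smp_init(), with all cores online. */
            return 0;
    }
    SYS_INIT(my_smp_setup, SMP, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);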

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-03-25 19:07:28 -04:00
Andrew Boie a4c9190649 kernel: fix oops policy for k_thread_abort()
Don't generate a Z_OOPS() if k_thread_abort() is called on a
thread that isn't running. Just return to the caller instead,
much like how k_thread_join() functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-25 10:23:12 -07:00
Carles Cufi 4b37a8f3a4 Revert "global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()"
This reverts commit 8739517107.

Pull Request #23437 was merged by mistake with an invalid manifest.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-03-19 18:45:13 +01:00
Oleg Zhurakivskyy 8739517107 global: Replace BUILD_ASSERT_MSG() with BUILD_ASSERT()
Replace all occurrences of BUILD_ASSERT_MSG() with BUILD_ASSERT()
as a result of merging BUILD_ASSERT() and BUILD_ASSERT_MSG().

Signed-off-by: Oleg Zhurakivskyy <oleg.zhurakivskyy@intel.com>
2020-03-19 15:47:53 +01:00
Andrew Boie 28be793cb6 kernel: delete separate logic for priv stacks
This never needed to be put in a separate gperf table.
Privilege mode stacks can be generated by the main
gen_kobject_list.py logic, which is what we do here.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie 2dc2ecfb60 kernel: rename struct _k_object
Private type, internal to the kernel, not directly associated
with any k_object_* APIs. It is the return value of z_object_find().
Rename it to struct z_object.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie 2f3a89fa8d kernel: rename _k_object_assignment
Private structure; rename it to z_object_assignment.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie 4bad34e749 kernel: rename _k_thread_stack_element
Private data type, prefix with z_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie f2734ab022 kernel: use a union for kobject data values
Rather than stuffing various values in a uintptr_t based on
type using casts, use a union for this instead.

No functional difference, but the semantics of the data member
are now much clearer to the casual observer since it is now
formally defined by this union.
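
A sketch of the idea (the union and member names here are
hypothetical):

    union z_object_data {
            struct k_thread *thread; /* thread objects */
            size_t stack_size;       /* stack objects */
            unsigned int sem_count;  /* semaphores, etc. */
    };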

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie fb1c29475f kernel: zero app shmem bss via SYS_INIT
Doesn't need to be directly in init.c.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-16 21:40:52 -04:00
Andrew Boie 80a0d9d16b kernel: interrupt/idle stacks/threads as array
The set of interrupt stacks is now expressed as an array. We
also define the idle threads and their associated stacks this
way. This allows for iteration in cases where we have multiple
CPUs.

There is now a centralized declaration in kernel_internal.h.

On uniprocessor systems, z_interrupt_stacks has one element
and can be used in the same way as _interrupt_stack.

The IRQ stack for CPU 0 is now set in init.c instead of in
arch code.

The extern definition of the main thread stack is now removed;
this doesn't need to be in a header.
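
A sketch of the per-CPU iteration this enables (setup_irq_stack() is
hypothetical; the array name is from this patch):

    for (int i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
            setup_irq_stack(i, z_interrupt_stacks[i]);
    }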

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-16 23:17:36 +02:00
Andrew Boie 322816eada kernel: add k_thread_join()
Callers will go to sleep until the thread exits, either normally
or crashing out.
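
A minimal sketch (worker is a hypothetical struct k_thread; timeout
shown in the post-rework k_timeout_t form):

    int ret = k_thread_join(&worker, K_MSEC(500));

    if (ret == -EAGAIN) {
            /* Thread was still alive when the timeout expired. */
    }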

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-13 08:42:43 -04:00
Andrew Boie 5a2619e17a kernel: use z_swap_unlocked in k_thread_abort
z_swap_unlocked() does the same construction of using a
dummy spinlock; just use that and make the code simpler.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-12 10:57:02 -04:00
Andrew Boie 60be4eb653 kernel: remove comment in k_thread_abort()
z_reschedule_unlocked() is a no-op if the caller is
cooperative, because the logic that maintains the ready queue
ensures that the co-op thread is always at the front unless
some special handling is done like in k_yield(), which does
not happen here.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-12 10:57:02 -04:00
Joakim Andersson bd3b4b0caf kernel: Add static threads to k_thread_foreach functions
Add iteration over the static threads to the k_thread_foreach() and
k_thread_foreach_unlocked() iterator functions.
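
A minimal usage sketch (the counting callback is hypothetical):

    static void count_thread(const struct k_thread *t, void *user_data)
    {
            (*(int *)user_data)++;
    }

    int count = 0;
    k_thread_foreach(count_thread, &count); /* includes static threads */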

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2020-03-12 10:48:29 +02:00
Ioannis Glaropoulos 1c56f87321 kernel: fatal: fix indentation in z_fatal_error
Fix indentation error in z_fatal_error().

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
Ioannis Glaropoulos 9d184286d3 kernel: fatal: allow tests to continue upon spurious ISR trigger
We want to be regression-testing the spurious ISR functionality.
Therefore, in z_fatal_error() we need to allow a test to continue
if an error has occurred due to a spurious IRQ being triggered.
Only in test mode do we allow the function to return without
aborting. In normal mode the current thread will be aborted.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
Ioannis Glaropoulos 49fb5d0812 kernel: fatal: check for esf validity when inspecting nested IRQ
For architectures that support detection of nested interrupts,
we need to check the validity of the exception stack frame,
before we can supply it as a pointer to the function that
evaluates whether we are in a nested interrupt context. This
commit adds the required esf pointer checks in z_fatal_error().

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
Andrew Boie c4fbc08347 kernel: fixup thread monitor locking
The lock in kernel/thread.c was pulling double-duty, protecting
both the thread monitor linked list and also serializing access
to k_thread_suspend/resume functions.

The monitor list now has its own dedicated lock.

The object tracing test has been updated to use k_thread_foreach().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 16:09:24 -04:00
Flavio Ceolin 8ae822c9fa kernel: random: ifdef z_early_boot_rand_get
This function had a call to sys_rand_get() even without a random
source.  As Zephyr is built with linkage garbage collection, and this
function is called only if either ENTROPY_HAS_DRIVER or
TEST_RANDOM_GENERATOR is enabled (options which automatically enable
a random source), the call can simply be guarded with an ifdef.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-03-10 21:12:28 +02:00
Andrew Boie 9f0acd44a4 kernel: add APIs for atomic ops on pointers
The existing APIs are insufficient on 64-bit systems as atomic_t
is 32-bits wide.
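
A minimal sketch (assuming the new names are atomic_ptr_set() and
atomic_ptr_get(); my_buffer is hypothetical):

    static atomic_ptr_t slot;

    atomic_ptr_set(&slot, my_buffer); /* store a pointer atomically */
    void *p = atomic_ptr_get(&slot);  /* load it back */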

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 10:18:16 -04:00
Andrew Boie 60e0019751 kernel: fix return type for atomic_cas()
In some cases this was a bool, in some cases an integer value
of 1 or 0. Standardize on bool.
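
Callers can now rely on a boolean result (flag is a hypothetical
atomic_t):

    if (atomic_cas(&flag, 0, 1)) {
            /* We won the race and set the flag. */
    }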

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 10:18:16 -04:00
Andrew Boie 6cf496f324 kernel: use sched lock for k_thread_suspend/resume
This logic should be using the sched_lock and not its own
separate lock for these two functions.

Some simplifications were made; z_thread_single_resume and
z_thread_single_suspend were only used in one place, and there was
some redundant logic for whether to reschedule in the suspend case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 09:57:58 -04:00
Flavio Ceolin fbeaa0a510 kernel: Stack pointer random depends on MULTITHREADING
Don't pretend we have stack randomization without multithreading.
When multithreading is disabled the "main" thread never starts. Zephyr
will run on the stack used for the z_cstart(), which on most
architectures is the interrupt stack.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-03-02 22:49:37 +02:00
Andrew Boie 896e32b414 kernel: remove problematic pend() assertion
This assertion, if built in, allows user threads to crash
the kernel in a critical section by passing a negative timeout
value, creating a DoS attack vector.

Remove this assertion; immediately below it there's a check
which just resets the value to 0 anyway.

Fixes: #22999

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-21 08:57:07 -08:00
Peter Bigot d8146d6c6d kernel: work_q: fix return value in non-error case
A recent patch allowed an error code to be returned even though the
execution path treated it as a non-error condition.  Clear the code
before returning.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-02-20 17:50:05 +02:00
Andy Ross a2f6826f9c kernel/thread: Don't clobber arch initialization of switch_handle
The recent synchronization work required that the kernel guarantee
switch_handle is non-null, but it did it in a way that works for ARC
and x86_64 but would clobber the work xtensa had already done to
populate that field.

There's no point: just make this an assert, as it's always been the
arch layer's job.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-19 08:29:35 -05:00
Luiz Augusto von Dentz 038d727c18 kernel: work: Return error if timeout cannot be aborted
This is aligned with the documentation which states that an error shall
be returned if the work has been completed:

  '-EINVAL Work item is being processed or has completed its work.'

However, in order to be able to resubmit from the handler itself, the
caller needs to be able to distinguish when the work has already
completed, so instead of -EINVAL it returns -EALREADY when the work
is considered to be completed.
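
A sketch from the caller's side (assuming the delayed-work cancel
path; dwork is hypothetical):

    int err = k_delayed_work_cancel(&dwork);

    if (err == -EALREADY) {
            /* Handler already ran to completion; resubmit is safe. */
    }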

Fixes #22803

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2020-02-17 22:37:26 +02:00
Ioannis Glaropoulos 3a3364eef8 kernel: fatal: unlock IRQs in early return points in z_fatal_error
We need to unlock IRQs in the early return points of
z_fatal_error(), not only at the normal
return point.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-14 20:47:37 +02:00
Andy Ross 7353c7f95d kernel/userspace: Move syscall_frame field to thread struct
The syscall exception frame was stored on the CPU struct during
syscall execution, but that's not right.  System calls might "feel
like" exceptions, but they're actually perfectly normal kernel mode
code and can be preempted and migrated between CPUs at any time.

Put the field on the thread struct.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
Andy Ross 8153144de0 kernel/fatal: Fatal errors must not be preempted
The code underneath z_fatal_error() (which is usually run in an
exception context, but is not required to be) was running with
interrupts enabled, which is a little surprising.

The only bug present currently is that the CPU ID extracted for
logging is subject to a race (i.e. it's possible but very unlikely
that such a handler might migrate to another CPU after the error is
flagged and log the wrong CPU ID), but in general users with custom
error handlers are likely to be surprised when their dying threads
get preempted by other code before they can abort.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
Andy Ross eefd3daa81 kernel/smp: arch/x86_64: Address race with CPU migration
Use of the _current_cpu pointer cannot be done safely in a preemptible
context.  If a thread is preempted and migrates to another CPU, the
old CPU record will be wrong.

Add a validation assert to the expression that catches incorrect
usages, and fix up the spots where it was wrong (most important being
a few uses of _current outside of locks, and the arch_is_in_isr()
implementation).

Note that the resulting _current expression now requires locking and
is going to be somewhat slower.  Longer term it's going to be better
to augment the arch API to allow SMP architectures to implement a
faster "get current thread pointer" action than this default.

Note also that this change means that "_current" is no longer
expressible as an lvalue (long ago, it was just a static variable), so
the places where it gets assigned now assign to _current_cpu->current
instead.
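
Illustratively, the assignment changes form (new_thread is
hypothetical):

    /* Before: _current = new_thread;  -- no longer an lvalue. */
    _current_cpu->current = new_thread;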

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
Andrew Boie efc5fe07a2 kernel: overhaul unused stack measurement
The existing stack_analyze APIs had some problems:

1. Not properly namespaced
2. Accepted the stack object as a parameter, yet the stack object
   does not contain the necessary information to get the associated
   buffer region, the thread object is needed for this
3. Caused a crash on certain platforms that do not allow inspection
   of unused stack space for the currently running thread
4. No user mode access
5. Separately passed in thread name

We deprecate these functions and add a new API
k_thread_stack_space_get() which addresses all of these issues.

A helper API log_stack_usage() is also added, which resembles
STACK_ANALYZE() in functionality.
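
A minimal sketch of the new API (illustrative):

    size_t unused;

    if (k_thread_stack_space_get(k_current_get(), &unused) == 0) {
            printk("%zu bytes of stack never used\n", unused);
    }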

Fixes: #17852

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-08 10:02:35 +02:00
Anas Nashif 73008b427c tracing: move headers under include/tracing
Move tracing.h to include/tracing/ to align with subsystem reorg.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-02-07 15:58:05 -05:00
Andrew Boie d1f50122f9 kernel: move timing externs to public header
These arch_timing_ defines get used in certain timer
drivers and need to be in the public include space,
and not the private kernel headers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-06 23:07:37 -05:00
Andy Ross 5737b5c843 kernel/sched: Re-add IPI calls on k_wakeup() and k_thread_priority_set()
These got dropped by an earlier patch, but are required on SMP systems
to synchronously notify other CPUs of changed scheduler state.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-04 17:50:11 -05:00
Andy Ross c44d566aee kernel/sched: Re-fix SMP wait-for-switch on interrupt exit
This got clobbered by commit adac4cbafa in what I think was a rebase
mistake.  Without it, on SMP systems it's possible to select a new
_current thread and try to return into it before another CPU has
actually finished switching away from it.

Interestingly: the frequency with which this bug got caught once it
was reintroduced was much, much higher than it was when it was fixed
the first time due to the instruction pointer poisoning introduced in
the interim.  Incompletely saved threads now have deliberately broken
state when assertions are enabled and will panic synchronously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-04 17:50:11 -05:00
Andy Ross 96ccc46e03 kernel/sched: Put k_thread_start() under a single lock
Similar to the suspend refactoring earlier, this really needs to be
done in an atomic block.  There were two confirmable races here,
though it's not completely clear either was being hit in practice:

1. The bit operations in z_mark_thread_as_started() aren't atomic so
   it needs to be protected.

2. The intermediate state in z_ready_thread() could result in a dead
   or suspended thread being added to the ready queue if another
   context tried a simultaneous abort or suspend.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Andy Ross ed6b4fb21c kernel/sched: Properly synchronize pend()
Kernel wait_q's and the thread pended_on backpointer are scheduler
state and need to be modified under the scheduler lock.  There was one
spot in pend() where they were not.

Also unpack z_remove_thread_from_ready_q() into an unsynchronized
utility so that it can be called by this process in a single lock
block.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Andy Ross b8ff63e3c7 kernel/sem: Fix SMP race
This had the same race that queue did: you have to be 100% done with
state management before calling z_ready_thread(), because another CPU
can pick up the thread before the return value was set.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Daniel Leung adac4cbafa sched: smp: fix thread marked dead but still running
Under SMP, when a thread is marked aborting, this thread may still
be running on another CPU. However, if there is only one thread
available to run, this thread may be selected to run again due to
next_up() not checking for the aborting state. Moreover, when
there is no IPI to signal to other CPUs that k_thread_abort() has
been called, the target thread is marked dead only after a new
thread is selected to run. This causes the original thread calling
k_thread_abort() to mistakenly conclude that the target thread is
no longer running, and to return.

Note that, with working IPI, z_sched_ipi() is called as an ISR
to mark the target thread dead. A new thread is then selected to
run, so that the target thread would not be selected due to it
being dead.

This moves the code to mark the thread dead into next_up(), where
the next best thread is selected and the current thread is
swapped out. z_sched_ipi() now becomes an empty function, and
calls to it are removed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-01-31 11:46:35 -05:00
Anas Nashif 471ffbe77d coverage: do not dump coverage data by default
Only dump data when we are interested in analysing coverage. By
default, just collect the data.

CONFIG_COVERAGE_DUMP is used to control this behaviour.

This will help speed up sanitycheck and will avoid lots of noise in the
log when some tests with coverage enabled failed. Dumping data to
console is also suspected to be one of the reasons why qemu hangs in CI.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-30 16:04:03 -05:00
Anas Nashif 7605ac2c4c kernel: thread: fix string for _THREAD_PRESTART
_THREAD_PRESTART means the thread has not started yet and is being
set up; for example, this is the case when starting a thread with a
timeout. We do not have a 'restart' thread state.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-29 13:17:19 -08:00
Sebastian Bøe fdac7b3319 cmake: Add target for generating header files
Before C sources can be compiled, any generated header that they
include must be generated. Currently, the target 'offsets_h' happens
to depend directly or indirectly on all generated headers.

This means that to compile safely, one can simply depend on
'offsets_h'. But this is coincidental and might not be true in the
future.

To be able to safely depend on a target that represents all generated
headers being ready we introduce the target
'zephyr_generated_headers'.

Any third-party build scripts can now safely depend on
'zephyr_generated_headers' and be protected from any internal changes
to the build system, like the removal of offsets_h.
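
Illustrative CMake usage (my_custom_target is hypothetical):

    add_dependencies(my_custom_target zephyr_generated_headers)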

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2020-01-29 11:44:57 -06:00