Scheduling relative timeouts from within timer callbacks (i.e. from sys
clock ISR context) differs from scheduling relative timeouts from an
application context.
This change documents and explains the rationale for this distinction.
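A minimal sketch of the two situations being distinguished (timer and
handler names are hypothetical; the drift-free behavior is the one the
added documentation describes):

```c
#include <zephyr/kernel.h>

/* Rescheduling a relative timeout from inside the expiry handler runs
 * in sys clock ISR context: the new timeout is measured against the
 * tick that just expired, not against "now", so the period does not
 * accumulate drift.
 */
static void handler(struct k_timer *timer)
{
	k_timer_start(timer, K_MSEC(10), K_NO_WAIT);
}

K_TIMER_DEFINE(my_timer, handler, NULL);

void kick_off(void)
{
	/* From application context the same call is measured from "now". */
	k_timer_start(&my_timer, K_MSEC(10), K_NO_WAIT);
}
```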
Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
Device dependencies are not always required, so make them optional via
CONFIG_DEVICE_DEPS. When enabled, the gen_device_deps script will run so
that dependencies are collected and included in the final image. Related
APIs will also be made available. Since device dependencies are used in
just a few places (power domains), disable the feature by default. When
not enabled, a second linking pass will not be required.
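As an illustration, a hedged sketch of code that only compiles in the
dependency information when the new option is enabled (the visitor body
is made up; device_required_foreach() is the existing dependency
walker):

```c
#include <zephyr/device.h>
#include <zephyr/sys/printk.h>
#include <zephyr/toolchain.h>

/* Visitor invoked for each device the given device depends on. */
static int print_dep(const struct device *dep, void *context)
{
	ARG_UNUSED(context);
	printk("depends on: %s\n", dep->name);
	return 0;
}

void show_deps(const struct device *dev)
{
#ifdef CONFIG_DEVICE_DEPS
	/* Dependency data and related APIs exist only with the option set. */
	(void)device_required_foreach(dev, print_dep, NULL);
#endif
}
```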
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The option can now be set by projects. This change will also allow making
it dependent on a future CONFIG_DEVICE_DEPS option.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename the Kconfig option to be in line with recent renamings in device
handles/dependencies.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename struct device `handles` member to `deps`, in line with previous
renamings in the device API.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This adds a few lines using zephyr_syscall_header() to include
headers containing syscall function prototypes.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Only set a CPU as active (in the PM subsystem) when the CPU is effectively
initialized. The PM subsystem cannot assume that all CPUs were initialized,
because when CONFIG_SMP_BOOT_DELAY is used, CPUs are initialized on demand
by the application.
Note that once CPUs are properly initialized, the subsystem is able to track
their status.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
As discovered by Carlo Caione, the k_thread_join code had a case where
it detected it had been called on a thread already marked _THREAD_DEAD
and exited early. That's not sufficient: the thread state is mutated
by the thread itself on its exit path, so it may still be running!
Just like the code in z_swap(), we need to spin waiting on the other
CPU to write the switch handle before knowing it's safe to return,
otherwise the calling context might (and did) do something like
immediately k_thread_create() a new thread in the "dead" thread's
struct while it was still running on the other core.
There was also a similar case in k_thread_abort() which had the same
issue: it needs to spin waiting on the other CPU to kill the thread
via the same mechanism.
Fixes #58116
Originally-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Andy Ross <andyross@google.com>
The switch_handle field in the thread struct is used as an atomic flag
between CPUs in SMP, and has been known for a long time to technically
require memory barriers for correct operation. We have an API for
that now, so put them in:
* The code immediately before arch_switch() needs a write barrier to
ensure that thread state written by the scheduler is seen to happen
before the outgoing thread is flagged with a valid switch handle.
* The loop in z_sched_switch_spin() needs a read barrier at the end,
to make sure the calling context doesn't load state from before the
other CPU stored the switch handle.
Also, that same spot in switch_spin was spinning with interrupts locked,
which means it needs a call to arch_spin_relax() to avoid an FPU state
deadlock on some architectures.
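A hedged sketch of the resulting spin (simplified; the exact helper and
barrier names are assumed from this series):

```c
/* Wait until the other CPU has finished switching away from `thread`
 * and published its switch handle.
 */
static inline void z_sched_switch_spin(struct k_thread *thread)
{
#ifdef CONFIG_SMP
	volatile void **shp = (void *)&thread->switch_handle;

	while (*shp == NULL) {
		/* Give architectures a chance to break FPU deadlocks */
		arch_spin_relax();
	}
	/* Read barrier: don't load state from before the other CPU
	 * stored the switch handle.
	 */
	barrier_dmem_fence_full();
#endif
}
```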
Signed-off-by: Andy Ross <andyross@google.com>
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.
This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.
Signed-off-by: Andy Ross <andyross@google.com>
Many RTOS applications assume a 1:1 virtual-to-physical address mapping,
so add 1:1 mapping support in z_phys_map() to make it easier to adapt
these applications.
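A hedged usage sketch, assuming the 1:1 request is expressed through a
K_MEM_DIRECT_MAP flag and an illustrative MMIO address:

```c
#include <zephyr/sys/mem_manage.h>

void map_peripheral(void)
{
	uint8_t *virt;

	/* Request a 1:1 (virt == phys) read/write mapping of a 4 KiB
	 * region; 0x40000000 is a made-up physical address.
	 */
	z_phys_map(&virt, 0x40000000UL, 0x1000,
		   K_MEM_PERM_RW | K_MEM_DIRECT_MAP);
}
```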
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Give architectures that need it the ability to perform special checks
while e.g. waiting for a spinlock to become available.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce a new API for barrier operations starting with a general
skeleton and the implementation for barrier_data_memory_fence_full().
Select a built-in or an arch-based implementation according to new
Kconfig symbols CONFIG_BARRIER_OPERATIONS_BUILTIN and
CONFIG_BARRIER_OPERATIONS_ARCH.
The built-in implementation falls back on the compiler built-in
function using __ATOMIC_SEQ_CST, as is already done for the atomic
APIs.
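A hedged sketch of what the built-in variant boils down to (the
internal symbol name is assumed):

```c
#include <zephyr/toolchain.h>

/* CONFIG_BARRIER_OPERATIONS_BUILTIN: let the compiler emit the fence,
 * with the same sequentially consistent ordering the atomic APIs use.
 */
static ALWAYS_INLINE void z_barrier_dmem_fence_full(void)
{
	__atomic_thread_fence(__ATOMIC_SEQ_CST);
}
```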
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
z_page_frame can't be packed on Xtensa due to memory alignment
constraints. When this struct is packed it is 5 bytes long, which
causes a memory alignment problem on Xtensa.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Until now, the iterable sections APIs have been part of the toolchain
(common) headers. They are not strictly related to a toolchain; they
just rely on the linker providing support for sections. Most files relied
on indirect includes to access the API; now it is included as needed.
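A small hedged example of the API after the move (the struct, entries,
and the matching linker-script ITERABLE_SECTION entry are assumed):

```c
#include <zephyr/sys/iterable_sections.h>
#include <zephyr/sys/printk.h>

struct my_entry {
	const char *name;
};

/* Place instances into the iterable section for `struct my_entry`. */
STRUCT_SECTION_ITERABLE(my_entry, entry_a) = { .name = "a" };
STRUCT_SECTION_ITERABLE(my_entry, entry_b) = { .name = "b" };

void list_entries(void)
{
	STRUCT_SECTION_FOREACH(my_entry, entry) {
		printk("%s\n", entry->name);
	}
}
```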
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When a running thread gets aborted asynchronously (this only happens
in SMP contexts, obviously) it gets flagged "aborting", but the actual
abort needs to happen in the thread's own context. For convenience,
this was done in the next_up() routine that selects the next thread to
run at interrupt exit time.
But this check was being done AFTER the next candidate thread was
selected from the run queue. Thread abort can wake up threads blocked
in k_thread_join(), and therefore these weren't seen as runnable
threads, even if they should have been.
Executive summary: if you killed a thread running on another CPU, and
there was another thread joined to the killed thread that should have
run on that CPU, it wouldn't (until it received an interrupt or
otherwise reached a schedule point).
Move the abort check above the run queue inspection and into the
end-of-interrupt processing in z_get_next_switch_handle() (so it's
actually a mild performance boost as it's no longer part of the
cooperative context switch path). Simple fix, subtle bug.
Fixes #58040
Signed-off-by: Andy Ross <andyross@google.com>
The exception handler (arch/x86/core/ia32/excstub.S) may access the
_kernel variable, which will lead to a failure when paging is enabled,
so make this critical variable pinned.
Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
The ACE 2.0 LNL platform has 5 HIFI4 cores. Change the number
of cores to enable the 5th core on the platform.
Signed-off-by: Jaroslaw Stelter <Jaroslaw.Stelter@intel.com>
Without these parentheses, specifying a q_max_msgs of e.g.
`MY_DEFAULT_QUEUESIZE+1` would result in a buffer of size
(1 element + MY_DEFAULT_QUEUESIZE bytes).
This would then lead to an unbounded buffer overflow because the queue
never reaches the exact (offset by MY_DEFAULT_QUEUESIZE bytes)
`buffer_end` and just keeps writing.
Additionally, add asserts to make sure this can't happen again.
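An illustration of the precedence bug with hypothetical macro names
and an 8-byte message size:

```c
#include <zephyr/toolchain.h>

#define MY_DEFAULT_QUEUESIZE 16
#define MSG_SIZE             8

/* Unparenthesized (buggy): the "+ 1" binds to MSG_SIZE, not to the
 * element count, so only MSG_SIZE extra bytes are allocated.
 */
#define BUF_SIZE_BUGGY(max_msgs, msg_size) max_msgs * msg_size
/* Parenthesized (fixed): the whole element count is multiplied. */
#define BUF_SIZE_FIXED(max_msgs, msg_size) ((max_msgs) * (msg_size))

BUILD_ASSERT(BUF_SIZE_BUGGY(MY_DEFAULT_QUEUESIZE + 1, MSG_SIZE) ==
	     MY_DEFAULT_QUEUESIZE + MSG_SIZE);
BUILD_ASSERT(BUF_SIZE_FIXED(MY_DEFAULT_QUEUESIZE + 1, MSG_SIZE) ==
	     (MY_DEFAULT_QUEUESIZE + 1) * MSG_SIZE);
```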
Signed-off-by: Armin Brauns <armin.brauns@embedded-solutions.at>
Use iterable sections to handle devices list. This simplifies devices
implementation by using standard APIs.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
When building sample.minimal.mt-no-preempt-no-timers.arm on arm-clang
we get a link error as z_pm_save_idle_exit expects sys_clock_idle_exit
to be defined.
However the sample sets CONFIG_SYS_CLOCK_EXISTS=n so
sys_clock_idle_exit() will not be defined by any driver. So add proper
ifdef protection in z_pm_save_idle_exit to fix this.
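A hedged sketch of the guarded function (the CONFIG_PM branch is shown
only for context):

```c
void z_pm_save_idle_exit(void)
{
#ifdef CONFIG_PM
	/* Notify the PM subsystem that the CPU is leaving idle. */
	pm_system_resume();
#endif
#ifdef CONFIG_SYS_CLOCK_EXISTS
	/* Only call into the timer driver when one actually exists. */
	sys_clock_idle_exit();
#endif
}
```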
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
When a semaphore is given and there is no thread waiting
for it, do not unconditionally perform a reschedule.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Some devices do not need to perform any initialization, so allow the
init function to be NULL. In this case, the initialization code will
just mark the device as initialized, i.e. ready.
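A hedged example of a device definition that now legitimately passes
NULL as its init function (all names are hypothetical):

```c
#include <zephyr/device.h>

static struct my_data {
	int state;
} data;

static const struct my_config {
	int id;
} config;

/* No init function: the device is simply marked ready during boot. */
DEVICE_DEFINE(my_dev, "MY_DEV", NULL, NULL, &data, &config,
	      POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEVICE, NULL);
```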
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Removes unused absolute symbols that are defined via the
GEN_ABSOLUTE_SYM() macro in the kernel directory.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
As both C and C++ standards require applications running under an OS to
return 'int', adapt that for Zephyr to align with those standards. This also
eliminates errors when building with clang when not using -ffreestanding,
and reduces the need for compiler flags to silence warnings for both clang
and gcc.
Most of these changes were automated using coccinelle with the following
script:
@@
@@
- void
+ int
main(...) {
...
- return;
+ return 0;
...
}
Approximately 40 files had to be edited by hand as coccinelle was unable to
fix them.
Signed-off-by: Keith Packard <keithp@keithp.com>
As both C and C++ standards require applications running under an OS to
return 'int', adapt that for Zephyr to align with those standards. This also
eliminates errors when building with clang when not using -ffreestanding,
and reduces the need for compiler flags to silence warnings for both clang
and gcc.
Signed-off-by: Keith Packard <keithp@keithp.com>
Many areas of Zephyr divide and round up without using the DIV_ROUND_UP
macro. Make use of it, so that we rely on a tested system macro and at
the same time make the code more readable.
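For reference, a small before/after sketch using the existing util.h
macro (function names are illustrative):

```c
#include <stddef.h>
#include <zephyr/sys/util.h>

/* Before: hand-rolled round-up division, repeated across the tree. */
size_t buf_blocks_manual(size_t len, size_t block)
{
	return (len + block - 1) / block;
}

/* After: the shared, tested macro expresses the intent directly. */
size_t buf_blocks(size_t len, size_t block)
{
	return DIV_ROUND_UP(len, block);
}
```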
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The init infrastructure, found in `init.h`, is currently used by:
- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices
They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices; however, the required
function signature takes a `const struct device *dev` as its first
argument. The only reason for that is that the same init machinery is
used by devices, so we have something like:
```c
struct init_entry {
	int (*init)(const struct device *dev);
	/* only set by DEVICE_*, otherwise NULL */
	const struct device *dev;
};
```
As a result, we end up with this weird/ugly pattern:
```c
static int my_init(const struct device *dev)
{
	/* always NULL! add ARG_UNUSED to avoid compiler warning */
	ARG_UNUSED(dev);
	...
}
```
This is really a result of poor internals isolation. This patch proposes
to make init entries more flexible so that they can accept system
initialization calls like this:
```c
static int my_init(void)
{
	...
}
```
This is achieved using a union:
```c
union init_function {
	/* for SYS_INIT, used when init_entry.dev == NULL */
	int (*sys)(void);
	/* for DEVICE*, used when init_entry.dev != NULL */
	int (*dev)(const struct device *dev);
};

struct init_entry {
	/* stores init function (either for SYS_INIT or DEVICE*) */
	union init_function init_fn;
	/* stores device pointer for DEVICE*, NULL for SYS_INIT. Allows
	 * to know which union entry to call.
	 */
	const struct device *dev;
};
```
This solution **does not increase ROM usage**, and makes it possible to
offer clean public APIs for both SYS_INIT and DEVICE*. Note, however, that
the init machinery keeps a coupling with devices.
**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
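A converted call site then looks like this (hypothetical function
name):

```c
#include <zephyr/init.h>

static int my_subsys_init(void)
{
	/* no unused `const struct device *dev` parameter any more */
	return 0;
}

SYS_INIT(my_subsys_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
```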
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
init: convert SYS_INIT functions to the new signature
Conversion scripted using scripts/utils/migrate_sys_init.py.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
manifest: update projects for SYS_INIT changes
Update modules with updated SYS_INIT calls:
- hal_ti
- lvgl
- sof
- TraceRecorderSource
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: devicetree: devices: adjust test
Adjust test according to the recently introduced SYS_INIT
infrastructure.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: kernel: threads: adjust SYS_INIT call
Adjust to the new signature: int (*init_fn)(void);
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Add check to ensure that CONFIG_MP_NUM_CPUS and CONFIG_MP_MAX_NUM_CPUS
are set the same. This will at least cause a build issue for
out-of-tree users.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
All we really want here is to set default parameters. However
k_sched_time_slice_set() also calls z_reset_time_slice(_current)
which expects `_current` to be fully initialized.
Simply initialize `slice_ticks` and `slice_max_prio` with default values
directly. Unfortunately the compiler isn't smart enough to expand
k_ms_to_ticks_ceil32(CONFIG_TIMESLICE_SIZE) to a constant expression
at build time so we must do the conversion by hand (and it shouldn't
overflow due to the nature of the value).
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Slice expirations are now based on the same timeout mechanism as
regular timers which have been recently fixed and proven to work with
single-tick periods.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The reason for arch_num_cpus() is to be able to dynamically adapt to
the actual number of available CPUs at run time.
In the z_sched_init() case, it is not the number of active CPUs that
we need but rather the total number of potential CPUs, and that is
represented by CONFIG_MP_MAX_NUM_CPUS not arch_num_cpus().
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add the `zephyr,pm-device-runtime-auto` flag to `pm.yaml` and
`struct pm_device`.
This flag is intended to signify to the boot system that device runtime
PM should be automatically enabled on the device after the init function
has run.
Only run the `pm_device_runtime_auto_enable` function on a device if
initialisation succeeded. This prevents actions being run on devices
that are not ready.
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Make sliceable() the actual condition for a sliceable thread. Avoid
creating a slice timeout for non-sliceable threads. Always reset
slice_expired even if the next thread is not sliceable. Fold
slice_expired_locked() into z_time_slice() to avoid the hidden
unlock/lock. Change `curr` to `thread` as it is not necessarily
the current thread (yet) that is being set. Make variables static.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Updates events to prevent a timeout from corrupting the list of
threads that need to be woken up.
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Fixes race condition for k_event_post_internal() in an
SMP environment while walking the waitq. Uses z_sched_waitq_walk()
to safely walk the waitq by using a sched_spinlock.
It should be noted that since walking the wait queue is an
operation of indeterminate length, there exists the possibility
that the sched_spinlock (which is a highly used and contended-for
lock) may be locked for an indeterminate amount of time. However,
it is expected that few threads will be waiting on any given kernel
event object, which should ameliorate this risk.
Fixes #54317
Signed-off-by: Aastha Grover <aastha.grover@intel.com>
Moving timeslice events to timeouts isn't quite enough on SMP, as it's
still possible for systems that don't broadcast their timer interrupts
to end up handling an expiration for a foreign CPU. There, we need an
IPI, and a symmetric call to z_time_slice() (which is idempotent and
fast) in the IPI ISR.
Signed-off-by: Andy Ross <andyross@google.com>
Rework the fragile and ad-hoc computation of timeslice expirations
into per-CPU struct _timeout objects with regular callbacks. The
expiration callbacks themselves simply set a per-cpu flag (they might
run on any CPU), which gets checked at the end of the timer ISR on
every CPU.
This simplifies logic and removes a bunch of code. It also fixes at
least three bugs:
1. As @npitre discovered: On SMP, the number of ticks announced on any
given CPU is going to be a subset of all expired ticks. This broke
the accounting of timeslice ticks, and effectively meant that
timeslicing only worked on SMP on systems where one CPU could hog all
the announcements, and only on that CPU.
2. The bootstrap path to arm the timer driver after setting the first
timeout in an empty list couldn't take into account
sys_clock_elapsed() ticks, as it didn't know whether it was being
called underneath an existing announce loop. Now this code is no
longer responsible for knowing anything about time slicing at all.
3. Also on SMP, there was a case where two CPUs timeslicing
simultaneously could stomp on each other's timeouts in
z_set_timeout_expiry(), as neither had a way of knowing what the
other's state was. CPUs could miss their own expiration and have to
wait for the slice expiration on the other CPU. Now, timeouts are
global objects with simple expiration times, and there's no need for
that function at all.
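A hedged sketch of the resulting shape (simplified from the scheduler
code):

```c
static struct _timeout slice_timeouts[CONFIG_MP_MAX_NUM_CPUS];
static bool slice_expired[CONFIG_MP_MAX_NUM_CPUS];

/* Regular timeout callback: it may fire on any CPU, so it only records
 * which CPU's slice ran out; the flag is checked at the end of the
 * timer ISR on every CPU.
 */
static void slice_timeout(struct _timeout *timeout)
{
	int cpu = ARRAY_INDEX(slice_timeouts, timeout);

	slice_expired[cpu] = true;
}
```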
Signed-off-by: Andy Ross <andyross@google.com>
Some of the offset symbols that are derived from the macro
GEN_OFFSET_SYM() are not used anywhere in the Zephyr codebase.
Remove them as part of a cleanup effort.
Instances of an associated GEN_OFFSET_SYM() have also been
removed when the resulting macro is no longer referenced.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Some of the offset symbols generated via the macro GEN_OFFSET_SYM()
are not used anywhere in the Zephyr codebase. Remove them as part of
a cleanup effort.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Commit 3e729b2b1c ("kernel/timer: Correctly clamp period argument")
increased the lower limit to 1 so that it wouldn't conflict with
K_NO_WAIT. But in doing so it enforced a minimum period of 2 ticks.
And the subtraction must obviously be avoided if the period is zero, etc.
Instead of doing this masquerade in k_timer_start(), let's move the
subtraction and clamping in z_timer_expiration_handler() right before
registering a new timeout. It makes the code cleaner, and then it is
possible to have single-tick periods again.
With this, timer_jitter_drift in tests/kernel/timer/timer_behavior does
pass with any CONFIG_SYS_CLOCK_TICKS_PER_SEC value, even when the tick
period is equal to or larger than the specified timer period, a case
which failed the test before.
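A hedged sketch of where the adjustment now lives, inside the
expiration handler right before the next timeout is registered
(details simplified):

```c
static void z_timer_expiration_handler(struct _timeout *t)
{
	struct k_timer *timer = CONTAINER_OF(t, struct k_timer, timeout);

	/* ... expiry accounting and callback invocation elided ... */

	if (!K_TIMEOUT_EQ(timer->period, K_NO_WAIT) &&
	    !K_TIMEOUT_EQ(timer->period, K_FOREVER)) {
		k_timeout_t next = timer->period;

		/* Timeouts registered from this context are measured
		 * from the tick that just expired, so trim one tick off
		 * relative periods (never below zero) instead of
		 * clamping the user-supplied value in k_timer_start().
		 */
		if (Z_TICK_ABS(next.ticks) <= 0) {
			next.ticks = MAX(next.ticks - 1, 0);
		}

		z_add_timeout(&timer->timeout, z_timer_expiration_handler,
			      next);
	}
}
```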
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The call to unschedule_locked() would return true ("successfully
unscheduled") even in the case where the underlying z_abort_timeout()
failed (because the callback was already unpended and
in-progress/complete/about-to-be-run, remember that timeout callbacks
are unsynchronized), leading to state bugs and races against the
callback behavior.
Correctly detect that case and propagate the error to the caller.
Fixes #51872
Signed-off-by: Andy Ross <andyross@google.com>
Fixes sporadic data access violations that were occurring when pipes
were being used from an ISR. The ISR was incorrectly using the pipe
descriptor belonging to the interrupted thread. This led to corrupted
pipe meta-data. The solution proposed here is to perform a run-time
check and use a pipe descriptor on the ISR's stack if called from
an ISR.
For additional information, see:
https://github.com/zephyrproject-rtos/zephyr/issues/52812
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Adds a spin lock/unlock barrier pair after a pipe thread wakes.
After the list of waiting threads is generated, it is possible for
threads on that list to timeout and be removed from the wait queue.
However, since that list was generated before the timeout occurred,
the timed-out thread must wait until the copying is done (the
pipe's spin-lock has been released).
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
By the time the working list of readers/writers is processed, it is
possible that the waiting reader/writer being processed has timed out
and is no longer on the wait queue. As such, we cannot blindly
wake the next thread, as that next thread might not be the thread we
had just been processing.
To address this, the calls to z_sched_wake() have been replaced
with z_unpend_thread() and z_ready_thread() so that a specific
thread can be safely targeted for waking.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Uses the new z_sched_waitq_walk() routine to walk the pipe's wait
queue to build a list of waiting threads that will be used for
the data transfer.
This method is preferred over the previous one as it ensures that
the wait queue is safely traversed.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Adds a routine to safely walk a specified wait queue and invoke a
custom callback function on each waiting thread.
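A hedged sketch of the shape of such a walker (callback type and exact
locking details may differ):

```c
/* Walk every thread pended on `wait_q`, invoking `func` under the
 * scheduler spinlock so the queue cannot change underneath us.
 * A non-zero return from the callback stops the walk early.
 */
int z_sched_waitq_walk(_wait_q_t *wait_q,
		       int (*func)(struct k_thread *thread, void *data),
		       void *data)
{
	struct k_thread *thread;
	int status = 0;
	k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

	_WAIT_Q_FOR_EACH(wait_q, thread) {
		status = func(thread, data);
		if (status != 0) {
			break;
		}
	}

	k_spin_unlock(&sched_spinlock, key);

	return status;
}
```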
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>