Commit Graph

2466 Commits

Author SHA1 Message Date
Ioannis Glaropoulos dd574f6ec7 kernel: stack_sentinel: disable in single-threaded builds
Add a dependency on MULTITHREADING for the
STACK_SENTINEL feature, so it cannot be
enabled in single-thread Zephyr builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-28 10:41:46 -05:00
Daniel Leung dfa4b7e375 kernel: mmu: z_backing_store* to k_mem_paging_backing_store*
These functions are those that need to be implemented by a backing
store outside the kernel. Promote them from z_* so they can be
included in the documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Daniel Leung 31c362d966 kernel: mmu: rename z_eviction* to k_mem_paging_eviction*
These functions and data structures are those that need
to be implemented by the eviction algorithm and application
outside the kernel. Promote them from z_* so they can be
included in the documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Lauren Murphy 4c85b4606b kernel: k_sleep: fix return value for absolute timeout
Fixes calculation of remaining ticks returned from z_tick_sleep
so that it takes absolute timeouts into account.

Fixes #32506

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-05-26 18:11:52 -05:00
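
A hedged illustration of the case this fix addresses: sleeping until an
absolute tick deadline and inspecting the value k_sleep() reports as not
slept. The deadline below is arbitrary, and K_TIMEOUT_ABS_TICKS requires
CONFIG_TIMEOUT_64BIT=y.

```c
#include <kernel.h>

/* Sketch only: k_sleep() with an absolute timeout. With the fix, the
 * returned "time not slept" accounts for the timeout being absolute
 * rather than relative.
 */
void sleep_until_deadline(void)
{
	int64_t deadline = k_uptime_ticks() + k_ms_to_ticks_ceil64(500);
	int32_t not_slept_ms = k_sleep(K_TIMEOUT_ABS_TICKS(deadline));

	printk("remaining sleep time reported: %d ms\n", not_slept_ms);
}
```
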
Ioannis Glaropoulos 4084242a71 kernel: make MULTITHREADING promptless if single-thread not supported
If single-thread builds are not supported by the
architecture, the MULTITHREADING option should be
prompt-less to block any modifications to it. We
also introduce an explicit ARCH-level Kconfig option
that reflects whether the architecture is capable of
single-thread Zephyr builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 11:03:22 -05:00
Flavio Ceolin 97281b3862 pm: device_runtime: get rid of the spinlock
Protect critical sections using the mutex.
The mutex is required to use the condition variable, and since we
need to atomically check the PM state and the workqueue before
waiting on the condition, it is necessary to protect them using
the same mutex.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-26 10:56:55 -04:00
Flavio Ceolin 378a19d2a8 pm: device_runtime: Add helper to wait on async ops
Add a function that properly uses a mutex to check a condition
before waiting on the condition variable.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-26 10:56:55 -04:00
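
A minimal sketch of the pattern described by this commit and the one above
it: check the protected condition under a mutex and wait on the condition
variable while holding that same mutex. The pm_lock/pm_condvar/op_pending
names are illustrative, not the actual device_runtime symbols.

```c
#include <kernel.h>
#include <stdbool.h>

static K_MUTEX_DEFINE(pm_lock);        /* illustrative */
static K_CONDVAR_DEFINE(pm_condvar);   /* illustrative */
static bool op_pending;

/* Wait for a pending async operation to finish. The condition is checked
 * and waited on under the same mutex that the signaling side holds when
 * it updates op_pending, so the check-and-wait is race-free.
 */
void wait_for_async_op(void)
{
	k_mutex_lock(&pm_lock, K_FOREVER);
	while (op_pending) {
		k_condvar_wait(&pm_condvar, &pm_lock, K_FOREVER);
	}
	k_mutex_unlock(&pm_lock);
}
```
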
Maksim Masalski 970820e92d sched: create unique function name
In file include/kernel/thread.h in "struct _thread_base" is a member
called "_wait_q_t *pended_on"
At the same time in file kernel/sched.c is function called
"static _wait_q_t *pended_on()"

The code scanning tool reports a violation (MISRA R5.9) because a
static identifier is reused, since thread.h is included in the
struct.c file.

I think we can rename the function to avoid misreading in the future.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-25 19:06:21 -04:00
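
A hedged sketch of how such a rename can look inside kernel/sched.c
(kernel-internal types, so this only compiles in that context); the new
function name below is illustrative, not necessarily the one chosen.

```c
/* Before: a static function named pended_on() reused the identifier of
 * the _thread_base member pended_on, which MISRA R5.9 flags.
 * After (illustrative): the static helper gets a distinct name.
 */
static _wait_q_t *pended_on_thread(struct k_thread *thread)
{
	__ASSERT_NO_MSG(thread->base.pended_on != NULL);

	return thread->base.pended_on;
}
```
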
Andrzej Głąbek 59b21a29aa kernel: timeout: Fix adding of an absolute timeout
Correct the way the relative ticks value is calculated for an absolute
timeout. Previously, elapsed() was called twice and the returned value
was first subtracted from and then added to the ticks value. It could
happen that the HW counter value read by elapsed() changed between the
two calls to this function. This caused the test_timeout_abs test case
from the timer_api test suite to occasionally fail, e.g. on certain nRF
platforms.

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2021-05-24 23:53:18 -04:00
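
A generic, hedged illustration of the bug class this fixes (it does not
reproduce kernel/timeout.c): when a value is derived from a live hardware
counter, read the counter once and reuse that snapshot so both uses agree.

```c
#include <kernel.h>

void snapshot_example(void)
{
	/* Single read of the cycle counter; both derived values below use
	 * the same snapshot, so they cannot disagree the way two separate
	 * elapsed() calls could.
	 */
	uint32_t now = k_cycle_get_32();
	uint32_t deadline = now + 1000U;
	uint32_t budget = deadline - now;   /* exactly 1000 by construction */

	ARG_UNUSED(budget);
}
```
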
Andy Ross 851d14afc8 kernel/sched: Remove "cooperative scheduling only" special cases
The scheduler has historically had an API where an application can
inform the kernel that it will never create a thread that can be
preempted, and the kernel and architecture layer would use that as an
optimization hint to eliminate some code paths.

Those optimizations have dwindled to almost nothing at this point, and
they're now objectively a smaller impact than the special casing that
was required to handle the idle thread (which, obviously, must always
be preemptible).

Fix this by eliminating the idea of "cooperative only" and ensuring
that there will always be at least one preemptible priority with value
>=0.  CONFIG_NUM_PREEMPT_PRIORITIES now specifies the number of
user-accessible priorities other than the idle thread.

The only remaining workaround is that some older architectures (and
also SPARC) use the CONFIG_PREEMPT_ENABLED=n state as a hint to skip
thread switching on interrupt exit.  So detect exactly those platforms
and implement a minimal workaround in the idle loop (basically "just
call swap()") instead, with a big explanation.

Note that this also fixes a bug in one of the philosophers samples,
where it would ask for 6 cooperative priorities but then use values -7
through -2.  It was assuming the kernel would magically create a
cooperative priority for its idle thread, which wasn't correct even
before.

Fixes #34584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-24 23:38:16 -04:00
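
For reference, a hedged sketch of how the priority macros map onto the
configured ranges after this change; the Kconfig values mentioned in the
comment are illustrative.

```c
#include <kernel.h>

/* With, say, CONFIG_NUM_COOP_PRIORITIES=6 and
 * CONFIG_NUM_PREEMPT_PRIORITIES=1, cooperative priorities span -6..-1 and
 * preemptible ones start at 0; the idle thread no longer relies on an
 * implicit extra cooperative level.
 */
K_THREAD_STACK_DEFINE(worker_stack, 1024);
static struct k_thread worker_thread;

static void worker(void *a, void *b, void *c)
{
	/* ... */
}

void start_worker(void)
{
	k_thread_create(&worker_thread, worker_stack,
			K_THREAD_STACK_SIZEOF(worker_stack), worker,
			NULL, NULL, NULL,
			K_PRIO_PREEMPT(0),   /* highest preemptible priority */
			0, K_NO_WAIT);
}
```
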
Maksim Masalski d6c9d40ee0 userspace: remove dead code
File userspace.c contains dead code in the function char *otype_to_str().
Remove "return NULL" and replace it with "ret = NULL".

Found as a coding guideline violation (MISRA R2.1) by a static
code scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-24 22:35:03 -04:00
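
A hedged sketch of the shape of the fix (the real otype_to_str() switch
differs): with a single exit point, the unreachable return becomes an
assignment to the variable that is returned.

```c
#include <stddef.h>

static const char *otype_to_str_sketch(int otype)
{
	const char *ret;

	switch (otype) {
	case 1:
		ret = "semaphore";
		break;
	default:
		ret = NULL;  /* was: return NULL; flagged as dead code (MISRA R2.1) */
		break;
	}

	return ret;
}
```
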
Armando Visconti 4b4068c948 kernel/device: add arg checking in z_device_ready()
If this call receives an invalid device pointer as argument, it
assumes that the `device` is not ready for usage.

This routine is currently called by two device specific APIs:

    - device_usable_check(const struct device *dev)
    - device_is_ready(const struct device *dev)

The device-specific API documentation claims that these two
routines must be called with a device pointer captured from
DEVICE_DT_GET(), so passing NULL is a violation of that rule.

Nevertheless, it is quite common in drivers to assign NULL to
a device pointer if the corresponding DT property has not been
found (e.g. an unused GPIO interrupt declaration for a given
device instance), and it seems legitimate to interpret this
condition the same as the device not being ready for usage.

Signed-off-by: Armando Visconti <armando.visconti@st.com>
2021-05-18 11:29:46 -05:00
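
A hedged usage example of what this enables: a driver keeping a NULL
pointer for an absent optional DT property can pass it straight to
device_is_ready() and get a plain "not ready" answer. The config struct
and field name are illustrative.

```c
#include <device.h>
#include <errno.h>

struct example_config {
	/* May legitimately be NULL when the optional DT property is absent,
	 * e.g. an unused GPIO interrupt line.
	 */
	const struct device *irq_gpio;
};

static int example_init(const struct example_config *cfg)
{
	/* With this change, a NULL pointer is reported as "not ready"
	 * instead of being an outright API misuse.
	 */
	if (!device_is_ready(cfg->irq_gpio)) {
		return -ENODEV;
	}

	return 0;
}
```
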
Peter Bigot 656c09589a kernel: work: fix race condition with cancel before work runs
The original state management solution involved separate locks for a
work queue and each work item.  To avoid inter-lock dependencies a
window was left between the point where the work item was removed from
the queue (protected by queue lock) and the point where the work item
state was updated to mark the work item running.

This introduced a bug: If a cancellation was issued during this window
it would succeed, and the work item would appear to be idle even
though in fact the work queue thread was about to run it.

Since there is now only one lock, move the work item state updates
into the mutex regions associated with dequeuing the work item and
clearing the work queue busy flag.

Note that removing the window between queue and work mutex regions
eliminates the potential of having a dequeued work item be cancelled
before its QUEUED flag is cleared, simplifying the work item state
update.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-05-18 15:02:08 +02:00
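
A hedged usage sketch of the path involved in this race: submit a work
item, then cancel it and look at the returned busy flags; with the fix,
the flags can no longer claim the item is idle while the queue thread is
about to run it.

```c
#include <kernel.h>

static void my_handler(struct k_work *work)
{
	/* ... */
}

K_WORK_DEFINE(my_work, my_handler);

void submit_then_cancel(void)
{
	k_work_submit(&my_work);

	int busy = k_work_cancel(&my_work);

	if (busy & K_WORK_RUNNING) {
		/* Handler already started; k_work_cancel_sync() would wait
		 * for it to finish.
		 */
	}
}
```
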
Maksim Masalski 929956df70 coding guidelines rule 14_3_j: add explicit case check
Violation of [MISRAC2012-RULE_14_3-j]:
Boolean operations whose results are invariant
shall not be permitted.

There is probably a misprint in that part of the code.
Added an explicit check for the _OBJ_INIT_FALSE case.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-18 08:36:57 -04:00
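
A generic, hedged illustration of the rule and the style of fix (the enum
and logic are hypothetical, not the kernel's object-init code): when only
a known set of values can reach a branch, testing the remaining value
explicitly avoids a boolean expression whose result is invariant.

```c
enum init_check { OBJ_INIT_TRUE, OBJ_INIT_FALSE, OBJ_INIT_ANY };

void mark_initialized(enum init_check init, unsigned char *flags)
{
	if (init == OBJ_INIT_TRUE) {
		*flags |= 1U;
	} else if (init == OBJ_INIT_FALSE) {   /* was a bare else */
		*flags &= (unsigned char)~1U;
	}
	/* OBJ_INIT_ANY: leave the flag untouched. */
}
```
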
Anas Nashif 7b9084cb4f ztest: set thread name to test name
Use the actual test name and not a hardcoded thread name. This is useful
for tracing test cases.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-17 18:45:57 -04:00
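
A hedged sketch of the mechanism: rename the current thread to the running
test's name so trace output identifies it. Requires CONFIG_THREAD_NAME=y;
the function here is illustrative rather than the actual ztest code.

```c
#include <kernel.h>

void label_current_test(const char *test_name)
{
	/* Trace and debug tools now show the test name instead of a
	 * hardcoded thread label.
	 */
	k_thread_name_set(k_current_get(), test_name);
}
```
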
Andy Ross bd077561d0 kernel/swap: Add assertion to catch lock-breaking context switches
Our z_swap() API takes a key returned from arch_irq_lock() and
releases it atomically with the context switch.  Make sure that the
action of the unlocking is to unmask interrupts globally.  If
interrupts would still be masked then that means there is an OUTER
interrupt lock still held, and the code that locked it surely doesn't
expect the thread to be suspended and interrupts unmasked while it's
held!

Unfortunately, this kind of mistake is very easy to make.  We should
catch that with a simple assertion.  This is essentially a crude
Zephyr equivalent of the extremely common "BUG: scheduling while
atomic" error in Linux drivers (just google it).

The one exception made is the circumstance where a thread has already
aborted itself.  At that stage, whatever upthread lock state might
have existed will have already been messed up, so there's no value in
our asserting here.  We can't catch all bugs, and this can actually
happen in error handling and/or test frameworks.

Fixes #33319

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-17 15:27:37 -04:00
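
A hedged, self-contained illustration of the check behind the new
assertion (the assertion itself lives in the kernel's swap path and also
exempts already-aborted threads): arch_irq_unlocked() tells whether
restoring a given irq_lock() key would leave interrupts unmasked, i.e.
whether an outer lock was held.

```c
#include <kernel.h>
#include <stdbool.h>

void outer_lock_check_example(void)
{
	unsigned int key = irq_lock();

	/* If another irq_lock() was already in effect when we locked,
	 * restoring 'key' will NOT unmask interrupts; that is exactly the
	 * situation the new z_swap() assertion catches.
	 */
	bool outer_lock_held = !arch_irq_unlocked(key);

	irq_unlock(key);
	ARG_UNUSED(outer_lock_held);
}
```
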
Daniel Leung 216dc5ddfe kernel: mmu: remove un-needed call to virt_to_bitmap_offset
When marking the reserved region at the end of the virtual address
space, calling virt_to_bitmap_offset() is not needed as we already
know the offset. So remove the call.

Coverity-CID: 235930
Fixes #35160

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-13 09:00:54 -05:00
Daniel Leung 660d1478c6 kernel: init.c: tag source for boot/pinned sections
This adds the tags for functions and variables so they
can be put into boot/pinned sections.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung 1310ad6b0e linker: add bits for pinned regions
This adds the necessary bits for linker scripts and source code
to specify which symbols need to be pinned in memory. This is
needed for demand paging, as some functions and data must reside
in memory all the time and cannot be paged out (e.g. paging,
scheduler, and interrupt routines).

It is up to the arch/SoC/board to define the sections in
their linker scripts, as the pinned section may need special
alignment, which cannot be done in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung d812728ec4 linker: add bits for boot regions
This adds the necessary bits for linker scripts and source code
to specify which symbols are needed for boot process so they
can be grouped together.

One use of this is to group boot-related code and data so they
won't be interleaved with other kernel and application code,
which helps caching.

This is a must for demand paging as some functions and data
must be available during the boot process and before the memory
manager is initialized. During this time, paging cannot be used
so symbols linked in virtual memory space are unavailable.

It is up to the arch/SoC/board to define the sections in
their linker scripts, as the section may need special alignment,
which cannot be done in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
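
A hedged sketch of how code and data can be tagged for these regions,
assuming the __boot_func/__pinned_func/__pinned_bss attribute macros and
the <linker/section_tags.h> header that accompany this work; whether the
tags take effect depends on the arch/SoC/board linker script support
described above.

```c
#include <kernel.h>
#include <linker/section_tags.h>

/* Zero-initialized data that must never be paged out. */
__pinned_bss
static uint32_t page_fault_count;

/* Only needed during early boot; grouped with other boot-time code. */
__boot_func
void early_platform_fixup(void)
{
	/* ... */
}

/* Must stay resident: called from paths that cannot tolerate a page fault. */
__pinned_func
void pager_critical_hook(void)
{
	page_fault_count++;
}
```
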
Carlo Caione f000695243 cache: Rename sys_{dcache,icache}_* to sys_{data,instr}_cache_*
To have a common prefix.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Carlo Caione e2333269ae cache: Introduce external cache controller system support
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level. This
is not always the case, because many SoCs rely on external cache
controllers as peripherals external to the core (for example the PL310
cache controller and the L2Cxxx family). In some cases you also want a
single driver to control a whole set of cache controllers.

Rework the cache code introducing support for external cache
controllers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Anas Nashif 4d994af032 kernel: remove object tracing
Remove this intrusive tracing feature in favor of the new object tracing
using the main tracing feature in Zephyr. See #33603 for the new tracing
coverage for all objects.

This will allow support in more tools and less reliance on GDB for
tracing objects.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 7a646b3f8e Tracing: Work Queue tracing
Add Work tracing, default tracing hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell cae9a905d4 Tracing: Poll API and Work Poll tracing
Add Poll API and Work Poll tracing, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 3a66d6c695 Tracing: Timer tracing
Add Timer tracing, default tracing hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 65b376eb87 Tracing: Memory Slab tracing
Add memory slab tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 80cd9dac22 Tracing: Memory Heap tracing
Add Memory heap tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell fa9e64b304 Tracing: Pipe tracing
Add Pipe tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell d2e7de522d Tracing: Mailbox tracing
Add Mailbox tracing, default tracing hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 9ab447b3de Tracing: Message Queue tracing
Add Message Queue tracing, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 69e8869127 Tracing: Memory Stack tracing
Add memory stack tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell f984823e0d Tracing: Queue tracing
Add Queue tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell f17144349b Tracing: Thread tracing
Add thread tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell b93ff29e4b Tracing: Conditional variable tracing
Add conditional variable tracing hooks, default tracing hooks,
and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell ed6148a841 Tracing: Mutex tracing hooks
Add mutex trace hooks, default mutex trace hooks, and trace hook
documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 82addd6a64 Tracing: Semaphore tracing documentation
Add default semaphore trace hooks and documentation into
tracing.h.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell fcf2fb6320 Tracing: Semaphore tracing
Add semaphore tracing using the new trace macros.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
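
A hedged, simplified sketch of the call-site pattern the tracing commits
above introduce (this is not the in-tree k_sem_take(); the wrapper and
hook placement are illustrative, and the SYS_PORT_TRACING_OBJ_FUNC_*
macros come from <tracing/tracing.h>).

```c
#include <kernel.h>
#include <tracing/tracing.h>

/* Each traced API emits an "enter" hook, a "blocking" hook before it may
 * pend the caller, and an "exit" hook that carries the result.
 */
int traced_sem_take(struct k_sem *sem, k_timeout_t timeout)
{
	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_sem, take, sem, timeout);

	SYS_PORT_TRACING_OBJ_FUNC_BLOCKING(k_sem, take, sem, timeout);

	int ret = k_sem_take(sem, timeout);

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_sem, take, sem, timeout, ret);

	return ret;
}
```
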
Flavio Ceolin 54324fd08e power: device_pm: Use spin lock instead of semaphore
Device PM runtime was using a semaphore to protect critical sections,
but the enable / disable functions were waiting on the semaphore. So,
just replace it with a spin lock.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-07 16:55:31 -04:00
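
A hedged sketch of the replacement pattern: a spin lock guarding a short,
non-blocking critical section; the lock and counter names are
illustrative, not the actual device PM internals.

```c
#include <kernel.h>

static struct k_spinlock pm_state_lock;   /* illustrative */
static int usage_count;                   /* illustrative */

void pm_usage_increment_sketch(void)
{
	k_spinlock_key_t key = k_spin_lock(&pm_state_lock);

	/* Short critical section with no waiting inside, so a spin lock is
	 * appropriate even in contexts where blocking on a semaphore is not.
	 */
	usage_count++;

	k_spin_unlock(&pm_state_lock, key);
}
```
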
Flavio Ceolin 8705c688e2 power: device_pm: Fix concurrence issues
The sync API was using k_poll_signal, and in certain conditions it is
possible for multiple threads to wait on the same signal, leading to
undefined behavior.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-07 16:55:31 -04:00
Daniel Leung c31829074f kernel: mmu: use bitarrays for k_mem_map/k_mem_unmap
This uses bitarrays for allocating and deallocating virtual
addresses with k_mem_map() and k_mem_unmap(). This will
allow us to reuse virtual addresses.

Fixes #28900

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
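
A hedged illustration of the sys_bitarray API this change builds on (one
bit per allocatable unit); the bitarray size and the per-page
interpretation are illustrative.

```c
#include <kernel.h>
#include <sys/bitarray.h>

/* Illustrative: 64 allocatable units, e.g. one bit per page of a region. */
SYS_BITARRAY_DEFINE(region_bitmap, 64);

int reserve_units(size_t count, size_t *offset_out)
{
	/* Finds a contiguous free run of 'count' bits and marks it used. */
	return sys_bitarray_alloc(&region_bitmap, count, offset_out);
}

int release_units(size_t count, size_t offset)
{
	/* Marks the run free again so the range can be reused later. */
	return sys_bitarray_free(&region_bitmap, count, offset);
}
```
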
Daniel Leung c254c58184 kernel: mmu: add k_mem_unmap
This adds k_mem_unmap() so the memory mapped by k_mem_map()
can be freed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
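
A hedged usage sketch pairing k_mem_map() with the new k_mem_unmap(); this
assumes an MMU-enabled configuration, and the size and flags are
illustrative.

```c
#include <kernel.h>
#include <sys/mem_manage.h>

void scratch_mapping_example(void)
{
	size_t size = 4 * CONFIG_MMU_PAGE_SIZE;

	/* Anonymous, zero-filled, read-write mapping. */
	uint8_t *buf = k_mem_map(size, K_MEM_PERM_RW);

	if (buf == NULL) {
		return;
	}

	buf[0] = 0xaa;

	/* New: release the mapping so the virtual range can be reused. */
	k_mem_unmap(buf, size);
}
```
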
Daniel Leung 085d3768e1 kernel: mmu: introduce arch_page_phys_get()
This adds a new function prototype for arch_page_phys_get()
which will be used to translate mapped virtual addresses back
to physical memory addresses. This is needed for the future
k_mem_unmap() function which requires this to find
the corresponding page frame. It is faster to look through
the page tables instead of doing a linear search of the page
frame array.

A weak function is provided in case arch_page_phys_get()
is not implemented at the arch level. It simply goes
through all the page frames and finds the one mapped
to the virtual address.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
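
A hedged, self-contained illustration of the fallback strategy described
above; the structures and helper are hypothetical stand-ins for the
kernel's page frame bookkeeping, not the actual implementation.

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

struct page_frame_sketch {
	void *mapped_virt;   /* NULL if the frame is not mapped */
	uintptr_t phys;
};

/* Linear scan, as the weak fallback does conceptually: slower than a page
 * table walk, but needs no arch support.
 */
int phys_get_fallback(const struct page_frame_sketch *frames, size_t count,
		      void *virt, uintptr_t *phys_out)
{
	for (size_t i = 0; i < count; i++) {
		if (frames[i].mapped_virt == virt) {
			*phys_out = frames[i].phys;
			return 0;
		}
	}

	return -EFAULT;
}
```
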
Daniel Leung fe48f5a920 kernel: mmu: always use before/after guard pages for k_mem_map()
When we start allowing unmapping of memory regions, there is no
exact way to know whether k_mem_map() was called with the guard page
option specified or not. So just unconditionally enable guard pages
on both sides of the memory region to hopefully catch access
violations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung e6df25f68c kernel: mmu: implement z_phys_unmap()
This provides a counterpart to z_phys_map() which can be used
to temporarily map a memory region during the boot process and
subsequently discard the mapping.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
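
A hedged sketch of the boot-time usage this enables, assuming the
z_phys_map(uint8_t **virt, uintptr_t phys, size_t size, uint32_t flags) /
z_phys_unmap(uint8_t *virt, size_t size) signatures; the physical address
is a placeholder.

```c
#include <kernel.h>
#include <sys/mem_manage.h>

void probe_boot_region(void)
{
	uint8_t *virt;

	/* Temporarily map a physical region (placeholder address) so it can
	 * be inspected during early boot.
	 */
	z_phys_map(&virt, (uintptr_t)0xfe000000UL, CONFIG_MMU_PAGE_SIZE,
		   K_MEM_PERM_RW);

	uint8_t id = virt[0];

	ARG_UNUSED(id);

	/* New: discard the temporary mapping once it is no longer needed. */
	z_phys_unmap(virt, CONFIG_MMU_PAGE_SIZE);
}
```
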
Guennadi Liakhovetski d7a3752915 work: remove a statement with no effect
work_timeout() is a function, a statement like "(void)work_timeout;"
has no effect.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-07 12:44:34 -04:00
Gerard Marull-Paretas d31a9be27c pm: device: rename device_pm struct to pm_device
Prefix all PM-related functions/structures with pm.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Gerard Marull-Paretas 2c7b763e47 pm: replace DEVICE_PM_* states with PM_DEVICE_*
Prefix device PM states with PM.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Gerard Marull-Paretas 13f528bc59 kernel: replace power/power.h with pm/pm.h
Replace old header with the new one.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Jennifer Williams ca75bbef3c tests: boot_time: remove all the code and instrumentation feeding into test
Remove the config BOOT_TIME_MEASUREMENT and the corresponding #ifdef'd
code throughout (kernel/init.c, idle.c, core/common.S, reset.S, ...)
which held the extern hooks for z_timestamp_main and z_timestamp_idle
in the removed boot_time test suite.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-05-05 10:41:15 -04:00