Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done for both global and static function names in
kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.
Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.
Various generator scripts have also been updated as well as perf,
linker and usb files. These are
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Straightforward port. Each struct k_queue object gets a spinlock to
control obvious data ownership.
Note that this port actually discovered a preexisting bug: the -ENOMEM
case in queue_insert() was failing to release the lock. But because
the tests that hit that path didn't rely on other threads being
scheduled, they ran to successful completion even with interrupts
disabled. The spinlock API detects that as a recursive lock when
asserts are enabled.
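A minimal sketch of the corrected error path, with names and parameters
loosely modeled on this description rather than the exact kernel code
(z_thread_malloc() comes from kernel_internal.h):

    #include <kernel.h>
    #include <errno.h>
    #include <stdbool.h>

    static struct k_spinlock lock;

    static int insert_sketch(sys_sflist_t *list, void *data, bool alloc)
    {
        k_spinlock_key_t key = k_spin_lock(&lock);
        sys_sfnode_t *node = data;

        if (alloc) {
            node = z_thread_malloc(sizeof(*node));
            if (node == NULL) {
                /* The unlock below is what the -ENOMEM path was missing */
                k_spin_unlock(&lock, key);
                return -ENOMEM;
            }
            sys_sfnode_init(node, 0x1);
        }

        sys_sflist_append(list, node);
        k_spin_unlock(&lock, key);

        return 0;
    }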
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch. The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional irq_lock key.
Just refactoring. No logic changes.
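For reference, the assumed shapes of the four entry points (these
signatures illustrate the naming scheme only and are not verbatim from
the tree):

    /* Spinlock-based variants: atomically release the lock and switch. */
    void _reschedule(struct k_spinlock *lock, k_spinlock_key_t key);
    int _pend_curr(struct k_spinlock *lock, k_spinlock_key_t key,
                   _wait_q_t *wait_q, s32_t timeout);

    /* _irqlock variants: take the key returned by the legacy irq_lock(). */
    void _reschedule_irqlock(u32_t key);
    int _pend_curr_irqlock(u32_t key, _wait_q_t *wait_q, s32_t timeout);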
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These files were using z_thread_malloc() without including
kernel_internal.h. On existing architectures that works due to
transitive includes, but x86_64 has a thinner include layer and
doesn't do it for us. Include the files required for the APIs we use.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
MISRA-C requires that the right-hand operand of the && and || operators
not contain persistent side effects.
MISRA-C rule 13.5
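A generic illustration of the rule (not the actual queue.c change): a
side effect on the right-hand side of && only happens when the left-hand
side is true, which is what the rule forbids.

    #include <stdbool.h>

    static int budget;

    static int consume(void)
    {
        return budget--;            /* persistent side effect */
    }

    /* Non-compliant: consume() runs only when 'enabled' is true. */
    static bool can_run_bad(bool enabled)
    {
        return enabled && (consume() > 0);
    }

    /* Compliant: the side effect no longer sits on the right of &&. */
    static bool can_run_good(bool enabled)
    {
        bool ok = false;

        if (enabled) {
            ok = (consume() > 0);
        }

        return ok;
    }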
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This patch fixes a few issues in queue.c. It also changes
the return type of k_queue_alloc_append and k_queue_alloc_prepend
from int to s32_t.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Make while statements that use pointers explicitly check whether
the value is NULL or not.
The C standard does not guarantee that the null pointer is the same
as a pointer to memory address 0, so it is good practice to always
compare against the NULL macro.
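A before/after sketch of the pattern being changed:

    #include <stddef.h>

    struct node {
        struct node *next;
    };

    void walk(struct node *n)
    {
        /* Before: implicit truth test on a pointer
         *     while (n) { ... }
         * After: explicit comparison against NULL
         */
        while (n != NULL) {
            n = n->next;
        }
    }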
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Make if statements that use pointers explicitly check whether the value
is NULL or not.
The C standard does not guarantee that the null pointer is the same as a
pointer to memory address 0, so it is good practice to always compare
against the NULL macro.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Previously (as introduced in 48fadfe62), if k_poll() waited on a
queue (or subclass like fifo), and wait was cancelled on queue's
side using k_queue_cancel_wait(), k_poll returned -EINTR. But it
did not set the event->state field (to anything other than
K_POLL_STATE_NOT_READY), so in case of waiting on multiple queues,
it was not possible to differentiate which of them was cancelled.
This in particular broke detection of network socket EOF conditions
in POSIX poll() implementation.
This situation is now resolved with the introduction of an explicit
K_POLL_STATE_CANCELLED state, which is now set for a cancelled queue
(the -EINTR return remains the same).
This change also elaborates the docstrings of the functions mentioned,
to document this behavior.
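A hedged sketch of how a caller can now tell the queues apart after
k_poll() returns -EINTR (queue names are illustrative):

    #include <kernel.h>
    #include <errno.h>

    K_FIFO_DEFINE(rx_a);
    K_FIFO_DEFINE(rx_b);

    void wait_two(void)
    {
        struct k_poll_event events[2] = {
            K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_FIFO_DATA_AVAILABLE,
                                            K_POLL_MODE_NOTIFY_ONLY, &rx_a, 0),
            K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_FIFO_DATA_AVAILABLE,
                                            K_POLL_MODE_NOTIFY_ONLY, &rx_b, 0),
        };

        if (k_poll(events, 2, K_FOREVER) == -EINTR) {
            /* Previously both events stayed K_POLL_STATE_NOT_READY; now
             * the cancelled queue is marked explicitly.
             */
            if (events[0].state == K_POLL_STATE_CANCELLED) {
                /* rx_a was cancelled via k_queue_cancel_wait() */
            }
            if (events[1].state == K_POLL_STATE_CANCELLED) {
                /* rx_b was cancelled */
            }
        }
    }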
Fixes: #9032
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
queue_insert() will always return 0 when no memory is allocated, so we
just explicitly mark that we are ignoring the return value in these cases.
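For example (a sketch; the exact parameter list of the static helper is
an assumption based on the description above):

    void k_queue_prepend(struct k_queue *queue, void *data)
    {
        /* Inserting at the head never allocates, so queue_insert() cannot
         * return -ENOMEM here; the cast documents that the return value is
         * deliberately ignored.
         */
        (void)queue_insert(queue, NULL, data, false);
    }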
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
As uncovered by clang, we have some functions that are only used
conditionally, so guard them to make them available only when those
conditions are met.
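The shape of the fix, sketched on the CONFIG_POLL-only helper in queue.c
(the body shown is an approximation, not the verbatim source):

    #ifdef CONFIG_POLL
    /* Only referenced from code that is itself compiled under CONFIG_POLL,
     * so compile it under the same condition to keep clang quiet.
     */
    static void handle_poll_events(struct k_queue *queue, u32_t state)
    {
        _handle_obj_poll_events(&queue->poll_events, state);
    }
    #endif /* CONFIG_POLL */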
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The queue loop when CONFIG_POLL is in use has an inherent race
between the return of k_poll() and the inspection of the list where no
lock can be held. Other contending readers of the same queue can
sneak in and steal the item out of the list before the current thread
gets to the sys_sflist_get() call, and the current loop will (if it
has a timeout) spuriously return NULL before the timeout expires.
It's not even a hard race to exercise. Consider three threads at
different priorities: High (which can be an ISR too), Mid, and Low:
1. Mid and Low both enter k_queue_get() and sleep inside k_poll() on
an empty queue.
2. High comes along and calls k_queue_insert(). The queue code then
wakes up Mid, and reschedules, but because High is still running Mid
doesn't get to run yet.
3. High inserts a SECOND item. The queue then unpends the next thread
in the list (Low), and readies it to run. But as before, it won't
be scheduled yet.
4. Now High sleeps (or if it's an interrupt, exits), and Mid gets to
run. It dequeues and returns the item it was delivered, as expected.
5. But Mid is still running! So it re-enters the loop it's sitting in
and calls k_queue_get() again, which sees and returns the second
item in the queue synchronously. Then it calls it a third time and
goes to sleep because the queue is empty.
6. Finally, Low wakes up to find an empty queue, and returns NULL
despite the fact that the timeout hadn't expired.
The fix is simple enough: check the timeout expiration inside the loop
so we don't return early.
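A hedged sketch of the pattern using public APIs (an approximation of
the idea, not the in-kernel fix): convert the relative timeout to a
deadline once, then recompute the remaining time every time the wait
loop retries.

    #include <kernel.h>

    void *get_with_deadline(struct k_fifo *fifo, s32_t timeout_ms)
    {
        s64_t end = k_uptime_get() + timeout_ms;
        void *data;

        while ((data = k_fifo_get(fifo, K_NO_WAIT)) == NULL) {
            s32_t remaining = timeout_ms;

            if (timeout_ms != K_FOREVER) {
                remaining = (s32_t)(end - k_uptime_get());
                if (remaining <= 0) {
                    return NULL;        /* the timeout really did expire */
                }
            }

            struct k_poll_event event = K_POLL_EVENT_INITIALIZER(
                K_POLL_TYPE_FIFO_DATA_AVAILABLE, K_POLL_MODE_NOTIFY_ONLY,
                fifo);

            if (k_poll(&event, 1, remaining) != 0) {
                return NULL;
            }
            /* Another reader may have raced in between k_poll() returning
             * and our k_fifo_get(); loop and re-check the deadline instead
             * of returning NULL early.
             */
        }

        return data;
    }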
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs. Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages. Now replacement of wait_q with a different
data structure is much cleaner.
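A sketch of the shape of the refactor (field and helper names are
assumptions used for illustration, not the exact contents of the tree):

    #include <misc/dlist.h>
    #include <stdbool.h>

    struct k_thread;

    typedef struct {
        sys_dlist_t waitq;
    } _wait_q_t;

    #define _WAIT_Q_INIT(wait_q) { SYS_DLIST_STATIC_INIT(&(wait_q)->waitq) }

    /* Call sites go through small wrappers instead of raw dlist calls. */
    static inline struct k_thread *_waitq_head(_wait_q_t *w)
    {
        return (struct k_thread *)sys_dlist_peek_head(&w->waitq);
    }

    static inline bool _waitq_is_empty(_wait_q_t *w)
    {
        return sys_dlist_is_empty(&w->waitq);
    }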
Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro. While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The various macros to do checks in system call handlers all
implicitly would generate a kernel oops if a check failed.
This is undesirable for a few reasons:
* System call handlers that acquire resources in the handler
have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
to the caller instead of just killing the calling thread,
even though the base API doesn't do these checks.
These macros now all return a value; if nonzero is returned,
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.
At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.
The macros now use the Z_ notation for private APIs.
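An illustrative sketch of the resulting handler pattern; the handler and
check-macro names below are placeholders, the point being that the check
returns nonzero on failure and K_OOPS() recreates the old oops behavior:

    Z_SYSCALL_HANDLER(k_queue_init, queue_ptr)
    {
        struct k_queue *queue = (struct k_queue *)queue_ptr;

        /* The check itself no longer oopses; K_OOPS() does. A handler
         * that needs cleanup, or wants to return an error to the caller,
         * can test the value instead of wrapping it.
         */
        K_OOPS(Z_SYSCALL_OBJ_NEVER_INIT(queue, K_OBJ_QUEUE));

        k_queue_init(queue);

        return 0;
    }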
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool which is then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.
FIFO/LIFOs are derived from k_queues and have had allocator functions
added.
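A usage sketch of the alloc variants (queue name and data layout are
illustrative):

    #include <kernel.h>
    #include <misc/printk.h>

    K_QUEUE_DEFINE(user_q);

    struct msg {
        /* No reserved first word is needed: the list node lives in a
         * container allocated from the caller's resource pool.
         */
        int payload;
    };

    void producer(void)
    {
        static struct msg m = { .payload = 42 };

        if (k_queue_alloc_append(&user_q, &m) != 0) {
            /* Resource pool exhausted; the container was not allocated. */
        }
    }

    void consumer(void)
    {
        struct msg *m = k_queue_get(&user_q, K_FOREVER);

        /* The container is freed by the kernel when the item is removed. */
        printk("payload %d\n", m->payload);
    }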
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons. The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.
So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout(). (Along with identical changes for
_unpend_first_thread.) This saves code bytes and simplifies the
scheduler surface area for future synchronization work.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions. We can remove the header too.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps. Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This was the only spot where the scheduler-internal
_peek_first_pending_thread() API was used. Given that this kind of
thing is inherently racy (it may not be pending as long as you expect
if a timeout expires, etc...), it would be nice to retire it.
And as it happens all the queue code was using it for was to detect
the case of a non-empty wait_q over which it was looping, which is
trivial to do without API support.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.
Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.
Break out the _Swap definition (only) into a separate header and use
that instead. Cleaner. Seems not to have any more hidden gotchas.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
_Swap() is defined in nano_internal.h. Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file). A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle. Now nothing sees _Swap() defined
anymore. Put nano_internal.h everywhere it's needed.
Our kernel includes remain a big awful yucky mess. This makes things
more correct but no less ugly. Needs cleanup.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When k_poll is being used, k_queue_cancel_wait shall mark the state as
K_POLL_STATE_NOT_READY so other threads will be properly notified with
a NULL pointer return.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
k_queue_get shall never return NULL when the timeout is K_FOREVER, which
could happen when a higher priority thread cancels/takes an item before
the waiting thread runs.
Fixes issue #4358
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
None of the sys_slist_*() functions are threadsafe, and calls to them
must be protected with irq_lock. This is usually done in a wider
caller context, but k_queue_poll() is called with irq_lock already
relinquished, and is thus subject to hard to detect and explain
race conditions, as e.g. was tracked in #4022.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
This is necessary in order for k_queue_get to work properly, since it
is used with buffer pools which might be shared by multiple threads asking
for buffers.
Jira: ZEP-2553
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
This makes use of POLL_EVENT in case k_poll is enabled, which is
preferable over wait_q as it allows objects to be removed from the
data_q at any time.
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?
Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
Currently, a queue/fifo getter chooses how long to wait for an
element. But there are scenarios where the putter knows better;
there should be a way to expire the getter's timeout and make it run
again. The k_queue_cancel_wait() and k_fifo_cancel_wait() functions
do just that. They cause the corresponding *_get() functions to return
a NULL value, as if the timeout had expired on the getter's side (even
with K_FOREVER).
This can be used to signal out of band conditions from putter to
getter, e.g. end of processing, error, configuration change, etc.
A specific event would be communicated to getter by other means
(e.g. using existing shared context structures).
Without this call, achieving the same effect would require e.g.
calling k_fifo_put() with a pointer to a special sentinel memory
structure - such a structure would need to be allocated somewhere
and somehow, and the getter would need to distinguish it from a normal
data item. Having the cancel_wait() functions offers an elegant
alternative. From this perspective, these calls can be seen as
an equivalent to e.g. k_fifo_put(fifo, NULL), except that such a
call won't work in practice.
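A usage sketch (the fifo name is illustrative):

    #include <kernel.h>

    K_FIFO_DEFINE(rx_fifo);

    /* Getter: blocks until data arrives or the putter cancels the wait. */
    void consumer(void)
    {
        void *item = k_fifo_get(&rx_fifo, K_FOREVER);

        if (item == NULL) {
            /* Wait was cancelled with k_fifo_cancel_wait(); the actual
             * condition (EOF, error, reconfiguration, ...) is signalled
             * out of band via shared context.
             */
            return;
        }
        /* process item */
    }

    /* Putter: wake the getter without enqueueing a sentinel item. */
    void producer_done(void)
    {
        k_fifo_cancel_wait(&rx_fifo);
    }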
Change-Id: I47b7f690dc325a80943082bcf5345c41649e7024
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types. This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies. We also convert the PRI printf formatters in the arch
code over to normal formatters.
Jira: ZEP-2051
Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This unifies the k_fifo and k_lifo APIs under k_queue, making it more
flexible regarding where data elements are inserted.
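For example (the queue name is illustrative; at this point the first
word of each item is still reserved for the kernel's list node):

    #include <kernel.h>

    K_QUEUE_DEFINE(work_q);

    struct item {
        void *reserved;     /* first word reserved for the list node */
        int value;
    };

    void demo(void)
    {
        static struct item a = { .value = 1 };
        static struct item b = { .value = 2 };

        k_queue_append(&work_q, &a);    /* FIFO-style: insert at the tail */
        k_queue_prepend(&work_q, &b);   /* LIFO-style: insert at the head */

        /* b is returned first because it was prepended. */
        struct item *head = k_queue_get(&work_q, K_NO_WAIT);

        (void)head;
    }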
Change-Id: Icd6e2f62fc8b374c8273bb763409e9e22c40f9f8
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>