Commit Graph

45 Commits

Kumar Gala a1b77fd589 zephyr: replace zephyr integer types with C99 types
    git grep -l 'u\(8\|16\|32\|64\)_t' | \
        xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
    git grep -l 's\(8\|16\|32\|64\)_t' | \
        xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
Andy Ross 99c2d2d047 kernel/queue: Remove interior use of k_poll()
The k_queue data structure, when CONFIG_POLL was enabled, would
inexplicably use k_poll() as its blocking mechanism instead of the
original wait_q/pend() code.  This was actually racy, see commit
b173e4353f.  The code was structured as a condition variable: using
a spinlock around the queue data before deciding to block.  But unlike
pend_current_thread(), k_poll() cannot atomically release a lock.
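
To illustrate the hazard (a sketch only, not the literal queue code):
a condition variable pattern needs the lock release and the block to
happen as one atomic step, which pend_current_thread() provides and
k_poll() cannot:

    /* atomic: no writer can sneak in between unlock and sleep */
    key = k_spin_lock(&queue->lock);
    if (sys_sflist_is_empty(&queue->data_q)) {
        pend_current_thread(&queue->lock, key, &queue->wait_q, timeout);
    }

    /* racy: an insert or a steal can land between these two calls */
    k_spin_unlock(&queue->lock, key);
    (void)k_poll(&event, 1, timeout);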

A workaround had been in place for this, and then accidentally
reverted (both by me!) because the code looked "wrong".

This is just fragile; there's no reason to have two implementations
of k_queue_get().  Remove.

Note that this also removes a test case in the work_queue test where
(when CONFIG_POLL was enabled, but not otherwise) it was checking for
the ability to immediately cancel a delayed work item that was
submitted with a timeout of K_NO_WAIT (i.e. "queue it immediately").
This DOES NOT work with the original/non-poll queue backend, and has
never been a documented behavior of k_delayed_work_submit_to_queue()
under any circumstances.  I don't know why we were testing this.

Fixes #25904

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-06-03 01:47:41 +02:00
Andy Ross 7832738ae9 kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument.  Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created.  This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.

The existing K_MSEC() et al. macros now return initializers for a
k_timeout_t.

The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
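
For example (illustrative only), code that used to compare a
millisecond timeout numerically now uses the initializer macros and
the equality predicate:

    k_timeout_t timeout = K_MSEC(100);   /* an initializer, not an int */

    if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
        /* block indefinitely */
    }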

Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.

For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided.  When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.

Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions.  These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig.  These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.

k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.

Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate.  Also in queue.c, a loop (used when
POLL was enabled) needlessly tried to retry the k_poll() call after a
spurious failure.  But k_poll() does not fail spuriously, so the loop
was removed.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross 4c2fc2aed7 kernel/queue: Fix SMP race
Calling z_ready_thread() means the thread is now ready and can wake up
at any moment on another CPU.  But we weren't finished setting the
return value!  So the other side could wake up with a spurious "error"
condition if it ran too soon.  Note that on systems with a working
IPI, that wakeup can happen much faster than you might think.
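
A sketch of the ordering (illustrative; the thread variable and the
return value are stand-ins):

    /* wrong: the thread may run on another CPU before the value lands */
    z_ready_thread(thread);
    z_arch_thread_return_value_set(thread, 0);

    /* right: commit the wakeup state first, then make it runnable */
    z_arch_thread_return_value_set(thread, 0);
    z_ready_thread(thread);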

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Anas Nashif 756d8b03e2 kernel: queue: runtime error handling
Runtime error handling for k_queue_append_list and k_queue_merge_slist.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
Andrew Boie 4ad9f687df kernel: rename thread return value functions
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.

z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andy Ross 643701aaf8 kernel: syscalls: Whitespace fixups
The semi-automated API changes weren't checkpatch aware.  Fix up
whitespace warnings that snuck into the previous patches.  Really this
should be squashed, but that's somewhat difficult given the structure
of the series.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Andy Ross 6564974bae userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words.  So
passing wider values requires splitting them into two registers at
call time.  This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.

Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths.  So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.

Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types.  So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*().  The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function.  It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.
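
Roughly, the two-step split looks like this (a sketch using a
hypothetical syscall "my_syscall" with one 64 bit argument;
z_impl_my_syscall() is the assumed impl function):

    /* user-supplied: typesafe, same signature as the impl function */
    static int z_vrfy_my_syscall(u64_t wide)
    {
        /* validate arguments here, then forward */
        return z_impl_my_syscall(wide);
    }

    /* generated: reassemble the wide argument split across two words */
    static uintptr_t z_mrsh_my_syscall(uintptr_t a0, uintptr_t a1)
    {
        u64_t wide = ((u64_t)a1 << 32) | (u64_t)a0;

        return (uintptr_t)z_vrfy_my_syscall(wide);
    }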

This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs.  Future commits will port the less testable code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Anas Nashif 5c0516bce3 cleanup: include/: move misc/sflist.h to sys/sflist.h
move misc/sflist.h to sys/sflist.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Nicolas Pitre 659fa0d57d lifo/fifo: first word is not always first 4 bytes
The first word is used as a pointer, meaning it is 64 bits on 64-bit
systems. To reserve it, it has to be either a pointer, a long, or an
intptr_t, not an int nor a u32_t.
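
For illustration (hypothetical item type, following the documented
"first word reserved" convention for fifo/lifo items):

    struct my_item {
        void *fifo_reserved;   /* first word: pointer-sized everywhere */
        u32_t payload;
    };

    /* wrong on 64-bit targets: reserves only the first 4 bytes */
    struct my_item_bad {
        u32_t fifo_reserved;
        u32_t payload;
    };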

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-26 09:08:42 -04:00
Nicolas Pitre aa9228854f linker generated list: provide an iterator to simplify list access
Given that the section name and boundary symbols can be inferred from
the struct object name, it makes sense to create an iterator that
abstracts away the access details and reduces the possibility of
mistakes.
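
Usage, in sketch form (the macro and helper names here illustrate the
shape of the iterator, not necessarily its exact spelling):

    /* visit every statically defined k_timer without referencing the
     * section boundary symbols directly
     */
    Z_STRUCT_SECTION_FOREACH(k_timer, timer) {
        init_one_timer(timer);   /* hypothetical per-object hook */
    }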

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-06 14:21:32 -07:00
Patrik Flykt 4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done for both global and static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Andy Ross 603ea42764 kernel/queue: Spinlockify
Straightforward port.  Each struct k_queue object gets a spinlock to
control obvious data ownership.

Note that this port actually discovered a preexisting bug: the -ENOMEM
case in queue_insert() was failing to release the lock.  But because
the tests that hit that path didn't rely on other threads being
scheduled, they ran to successful completion even with interrupts
disabled.  The spinlock API detects that as a recursive lock when
asserts are enabled.
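
The bug class, for illustration (a sketch of the pattern, not the
literal diff):

    k_spinlock_key_t key = k_spin_lock(&queue->lock);

    node = z_thread_malloc(size);
    if (node == NULL) {
        /* the preexisting bug: this unlock was missing */
        k_spin_unlock(&queue->lock, key);
        return -ENOMEM;
    }
    /* ... link the node into the queue ... */
    k_spin_unlock(&queue->lock, key);
    return 0;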

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross 4f911e192f kernel: Add missing include
These files were using z_thread_malloc() without including
kernel_internal.h.  On existing architectures that works due to
transitive includes, but x86_64 has a thinner include layer and
doesn't do it for us.  Include the files required for the APIs we use.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Flavio Ceolin 76b3518ce6 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the controlling expression of an if statement
has essentially Boolean type.

MISRA-C rule 14.4
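
For example:

    if (count) {        /* violates 14.4: integer used as a condition */
    }

    if (count != 0) {   /* compliant: essentially Boolean */
    }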

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin a42de6466a kernel: queue: Fix MISRA-C violation
MISRA-C requires that the right-hand operand of the && or || operator
not contain persistent side effects.

MISRA-C rule 13.5
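
For example (sketch): a list operation on the right of && may or may
not execute depending on the left operand, so it is sequenced
explicitly instead:

    /* violates 13.5: sys_sflist_get() has a persistent side effect */
    if ((ret == 0) && (sys_sflist_get(&list) != NULL)) {
    }

    /* compliant: the side effect is no longer short-circuited */
    if (ret == 0) {
        node = sys_sflist_get(&list);
        if (node != NULL) {
        }
    }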

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Adithya Baglody 2a78b8d86f kernel: queue: MISRA C compliance.
This patch fixes a few issues in queue.c. This patch also changes
the return type of k_queue_alloc_append and k_queue_alloc_prepend
from int to s32_t.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-31 08:44:47 -04:00
Flavio Ceolin c806ac3d36 kernel: Compare pointers with NULL in while statements
Make while statements that use pointers explicitly check whether
the value is NULL or not.

The C standard does not say that the null pointer is the same as
a pointer to memory address 0, so it is good practice to always
compare against the NULL macro.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Flavio Ceolin 4218d5f8f0 kernel: Make If statement have essentially Boolean type
Make if statements that use pointers explicitly check whether the
value is NULL or not.

The C standard does not say that the null pointer is the same as a
pointer to memory address 0, so it is good practice to always
compare against the NULL macro.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Paul Sokolovsky 45c0b20470 kernel: k_poll: Introduce separate status for cancelled events
Previously (as introduced in 48fadfe62), if k_poll() waited on a
queue (or a subclass like fifo), and the wait was cancelled on the
queue's side using k_queue_cancel_wait(), k_poll returned -EINTR. But
it did not set the event->state field (to anything other than
K_POLL_STATE_NOT_READY), so when waiting on multiple queues it was
not possible to differentiate which of them was cancelled.

This in particular broke detection of network socket EOF conditions
in POSIX poll() implementation.

This situation is now resolved with the introduction of an explicit
K_POLL_STATE_CANCELLED state, which is now set for a cancelled queue
(the -EINTR return remains the same).
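
Usage sketch (an initialized two-entry events array is assumed):

    ret = k_poll(events, 2, K_FOREVER);
    if (ret == -EINTR) {
        for (int i = 0; i < 2; i++) {
            if (events[i].state == K_POLL_STATE_CANCELLED) {
                /* this queue's wait was cancelled, e.g. socket EOF */
            }
        }
    }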

This change also elaborates the docstrings of the functions
mentioned, to document this behavior.

Fixes: #9032

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-08-30 09:28:29 -04:00
Flavio Ceolin cc74ad0805 kernel: Explicitly ignoring results of queue_insert
queue_insert() always returns 0 when no memory is allocated, so
explicitly mark that we are ignoring the return value in these cases.
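
That is (sketch; the argument list is abbreviated):

    /* insert cannot fail when nothing is allocated: ignored on purpose */
    (void)queue_insert(queue, prev, data, false);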

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Anas Nashif 80e6a978a6 kernel/drivers: fix compile warnings
Clang uncovered some functions that are only used conditionally, so
guard them to make them only available when those conditions are met.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-07-01 22:58:23 +02:00
Andy Ross b173e4353f kernel/queue: Fix spurious NULL exit condition when using timeouts
The queue loop when CONFIG_POLL is in use has an inherent race
between the return of k_poll() and the inspection of the list where no
lock can be held.  Other contending readers of the same queue can
sneak in and steal the item out of the list before the current thread
gets to the sys_sflist_get() call, and the current loop will (if it
has a timeout) spuriously return NULL before the timeout expires.

It's not even a hard race to exercise.  Consider three threads at
different priorities: High (which can be an ISR too), Mid, and Low:

1. Mid and Low both enter k_queue_get() and sleep inside k_poll() on
   an empty queue.

2. High comes along and calls k_queue_insert().  The queue code then
   wakes up Mid, and reschedules, but because High is still running Mid
   doesn't get to run yet.

3. High inserts a SECOND item.  The queue then unpends the next thread
   in the list (Low), and readies it to run.  But as before, it won't
   be scheduled yet.

4. Now High sleeps (or if it's an interrupt, exits), and Mid gets to
   run.  It dequeues the item it was delivered and returns normally.

5. But Mid is still running!  So it re-enters the loop it's sitting in
   and calls k_queue_get() again, which sees and returns the second
   item in the queue synchronously.  Then it calls it a third time and
   goes to sleep because the queue is empty.

6. Finally, Low wakes up to find an empty queue, and returns NULL
   despite the fact that the timeout hadn't expired.

The fix is simple enough: check the timeout expiration inside the loop
so we don't return early.
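
A sketch of the fixed loop (the deadline bookkeeping is illustrative;
"end" and remaining_ticks() are hypothetical helpers):

    while (true) {
        (void)k_poll(&event, 1, timeout);
        item = sys_sflist_get(&queue->data_q);
        if (item != NULL) {
            return item;          /* got one, racing readers or not */
        }
        timeout = remaining_ticks(end);
        if (timeout == 0) {
            return NULL;          /* only now has the timeout expired */
        }
    }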

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-11 17:11:51 -04:00
Andy Ross ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers would all
implicitly generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value; if nonzero is returned, the
check failed.  K_OOPS() now wraps these calls to generate a kernel
oops.
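
For example (sketch; the Z_SYSCALL_* macro names follow the Z_
notation mentioned below, the queue/buffer arguments are stand-ins):

    /* default policy: oops on a failed check */
    K_OOPS(Z_SYSCALL_OBJ(queue, K_OBJ_QUEUE));

    /* a handler holding a resource can clean up instead */
    if (Z_SYSCALL_MEMORY_WRITE(buf, size)) {
        k_free(container);
        return -EFAULT;
    }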

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andrew Boie 2b9b4b2cf7 k_queue: allow user mode access via allocators
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool which is then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.

FIFO/LIFOs are derived from k_queues and have had allocator functions
added.
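
The container approach, in sketch form (struct and field names
illustrative):

    struct alloc_node {
        sys_sfnode_t node;   /* sflist link; flag bit marks "free me" */
        void *data;          /* the user's item, no embedded list word */
    };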

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout().  (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 0447a73f6c kernel: include cleanup
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions.  We can remove the header too.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross e0a572beeb kernel: Refactor, unifying _pend_current_thread() + _Swap() idiom
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps.  Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 345553b19b kernel/queue: Clean up scheduler API usage
This was the only spot where the scheduler-internal
_peek_first_pending_thread() API was used.  Given that this kind of
thing is inherently racy (it may not be pending as long as you expect
if a timeout expires, etc...), it would be nice to retire it.

And as it happens all the queue code was using it for was to detect
the case of a non-empty wait_q over which it was looping, which is
trivial to do without API support.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Luiz Augusto von Dentz 48fadfe623 queue: k_queue_cancel_wait: Fix not interrupting other threads
When k_poll is being used, k_queue_cancel_wait shall mark the state as
K_POLL_STATE_NOT_READY so that other threads get properly notified with
a NULL pointer return.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-10-18 13:02:52 -04:00
Luiz Augusto von Dentz f87c4c6743 queue: k_queue_get: Fix NULL return
k_queue_get shall never return NULL when the timeout is K_FOREVER,
which could happen when a higher priority thread cancels/takes an item
before the waiting thread runs.

Fixes issue #4358

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-10-18 13:02:52 -04:00
Paul Sokolovsky 199d07e655 kernel: queue: k_queue_poll: Fix slist access race condition
The sys_slist_*() functions aren't thread-safe, and calls to them
must be protected with irq_lock. This is usually done in a wider
caller context, but k_queue_poll() is called with irq_lock already
relinquished, and is thus subject to race conditions that are hard to
detect and explain, as e.g. was tracked in #4022.
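
The protected access pattern, for illustration (poll_events as in
struct k_queue with CONFIG_POLL):

    unsigned int key = irq_lock();
    node = sys_slist_get(&queue->poll_events);
    irq_unlock(key);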

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2017-10-17 10:37:47 +02:00
Luiz Augusto von Dentz 7d01c5ecb7 poll: Enable multiple threads to use k_poll in the same object
This is necessary in order for k_queue_get to work properly, since it
is used with buffer pools that may be shared by multiple threads
asking for buffers.

Jira: ZEP-2553

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-25 09:00:46 -04:00
Luiz Augusto von Dentz 84db641de6 queue: Use k_poll if enabled
This makes use of POLL_EVENT when k_poll is enabled, which is
preferable over wait_q as it allows objects to be removed from the
data_q at any time.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-15 08:49:09 -04:00
Anas Nashif 397d29db42 linker: move all linker headers to include/linker
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-06-18 09:24:04 -05:00
Maciek Borzecki 059544d1ae kernel: make sure that CONFIG_OBJECT_TRACING structs are properly ifdef'ed
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?

Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Paul Sokolovsky 3f50707672 kernel: queue, fifo: Add cancel_wait operation.
Currently, a queue/fifo getter chooses how long to wait for an
element. But there are scenarios where the putter knows better, and
there should be a way to expire the getter's timeout to make it run
again. The k_queue_cancel_wait() and k_fifo_cancel_wait() functions
do just that. They cause the corresponding *_get() functions to
return NULL, as if the timeout had expired on the getter's side (even
with K_FOREVER).

This can be used to signal out-of-band conditions from the putter to
the getter, e.g. end of processing, error, configuration change, etc.
A specific event would be communicated to the getter by other means
(e.g. using existing shared context structures).

Without this call, achieving the same effect would require e.g.
calling k_fifo_put() with a pointer to a special sentinel memory
structure - such a structure would need to be allocated somewhere
and somehow, and the getter would need to recognize it from a normal
data item. Having the cancel_wait() functions offers an elegant
alternative. From this perspective, these calls can be seen as
an equivalent of e.g. k_fifo_put(fifo, NULL), except that such a
call won't work in practice.
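
Usage, for illustration (my_fifo assumed to be an initialized
k_fifo):

    /* putter side: wake all current getters with a NULL return */
    k_fifo_cancel_wait(&my_fifo);

    /* getter side: NULL now also means "cancelled", not just timeout */
    void *item = k_fifo_get(&my_fifo, K_FOREVER);
    if (item == NULL) {
        /* out-of-band condition; consult shared state for the cause */
    }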

Change-Id: I47b7f690dc325a80943082bcf5345c41649e7024
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2017-05-10 09:40:33 -04:00
Kumar Gala cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Luiz Augusto von Dentz a7ddb87501 kernel: Add k_queue API
This unifies the k_fifo and k_lifo APIs, making them more flexible
regarding where data elements are inserted.
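
For illustration (item_a/item_b are stand-ins whose first word is
reserved for the queue, per the fifo/lifo convention):

    struct k_queue q;

    k_queue_init(&q);
    k_queue_append(&q, &item_a);    /* FIFO-style: insert at the tail */
    k_queue_prepend(&q, &item_b);   /* LIFO-style: insert at the head */

    void *next = k_queue_get(&q, K_FOREVER);   /* returns &item_b */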

Change-Id: Icd6e2f62fc8b374c8273bb763409e9e22c40f9f8
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-27 21:20:50 +00:00