Commit Graph

55 Commits

Kumar Gala a1b77fd589 zephyr: replace zephyr integer types with C99 types
git grep -l 'u\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
	git grep -l 's\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
Andy Ross a343cf9480 kernel/timer: Handle K_FOREVER in k_timer_start()
The possibility of passing K_FOREVER as the initial duration argument
to k_timer_start() wasn't being handled, with the result that the
computed value became a zero timeout (effectively treating it as
K_NO_WAIT and firing at the next tick).

This is obviously pathological, but it should still do what the code
says it should and wait forever.

Make k_timer_start(..., K_FOREVER, ...) a noop.
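
A minimal sketch of the intended behavior (illustrative, not the literal
diff), assuming the early return sits at the top of k_timer_start()
before any tick math:

    void k_timer_start(struct k_timer *timer, k_timeout_t duration,
                       k_timeout_t period)
    {
            /* An infinite initial duration means the timer should never
             * fire; do nothing rather than converting it to zero ticks.
             */
            if (K_TIMEOUT_EQ(duration, K_FOREVER)) {
                    return;
            }

            /* ... normal duration/period handling and z_add_timeout() ... */
    }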

Fixes #25820

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-05-29 19:59:14 +02:00
Andy Ross 3e729b2b1c kernel/timer: Correctly clamp period argument
The period argument of a k_timer needs an offset of one tick from the
value computed in user code (because periods get reset from within the
ISR, see the comment above this code for an explanation).  When the
computed tick value was 1, it would become 0.  This is actually
perfectly correct as a k_timeout_t to be passed to z_add_timeout().

BUT: to k_timer's API, K_NO_WAIT means "never" (i.e. the same as
K_FOREVER) and not "as soon as possible", so the period timer would
not be reset.  This is sort of a wart, but it's the way the API has
been specified forever.

The upshot is that for the case of calling k_timer_start() with a
minimal period argument (i.e. one that produces "one tick"), the
period would be ignored and the timer would act like a one shot.  Fix
the clamp so it can't produce K_NO_WAIT.

This also adds a filter for absolute timeouts, which (while that's
sort of a pathological usage) were getting that one tick offset when
it wasn't appropriate.
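
A rough sketch of the clamp as described (assumed shape, not the exact
patch): subtract the one-tick ISR-reset offset only for relative
periods, and never let the result collapse to K_NO_WAIT.

    if (!K_TIMEOUT_EQ(period, K_NO_WAIT) &&
        !K_TIMEOUT_EQ(period, K_FOREVER)) {
            /* Offset for the in-ISR reset, but clamp at one tick so a
             * minimal period does not become K_NO_WAIT ("never").
             */
            period.ticks = MAX(period.ticks - 1, 1);
    }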

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-04-24 16:38:33 +02:00
Andy Ross 5a5d3daf6f kernel/timeout: Add timeout remaining/expires APIs
Add tick-based (i.e. precision resistant) inspection APIs for kernel
timeouts visible via k_timer, k_delayed_work and thread timeouts
(i.e. pended/sleeping threads).  These are each available in
"remaining" and "expires" variants returning time values relative to
current time and system start.  All have system calls where applicable
(i.e. everywhere but k_delayed_work, which is not a userspace API).

The pre-existing millisecond "remaining_get()" predicates for timer
and delayed work remain, but are expressed in terms of the newer
calls.
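
A hedged usage example of the tick-based inspection calls (function
names as added by this series; treat the exact spellings as
assumptions):

    void report(struct k_timer *t)
    {
            /* Ticks until expiry, and the absolute uptime tick at which
             * the timer will fire.
             */
            k_ticks_t left = k_timer_remaining_ticks(t);
            k_ticks_t when = k_timer_expires_ticks(t);

            printk("fires in %lld ticks (at tick %lld)\n",
                   (long long)left, (long long)when);
    }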

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross 4c7b77a716 kernel/timeout: Add absolute timeout APIs
Add support for "absolute" timeouts, which are expressed relative to
system uptime instead of deltas from current time.  These allow for
more race-resistant code to be written by allowing application code to
do a single timeout computation, once, and then reuse the timeout
value even if the thread wakes up and needs to suspend again later.
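
A hedged example of the intended usage pattern (K_TIMEOUT_ABS_TICKS and
k_uptime_ticks() are the names this series uses, assumed here; 64-bit
timeouts must be enabled):

    void wait_for_deadline(struct k_sem *sem)
    {
            /* Compute the deadline once, relative to system uptime. */
            k_timeout_t deadline =
                    K_TIMEOUT_ABS_TICKS(k_uptime_ticks() + 100);

            /* Reusing the same deadline after a wakeup does not drift
             * the way a re-computed relative timeout would.
             */
            while (k_sem_take(sem, deadline) == 0) {
                    /* handle the event, then wait again if needed */
            }
    }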

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross 7832738ae9 kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument.  Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created.  This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.

The existing K_MSEC() et al. macros now return initializers for a
k_timeout_t.

The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
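
A small illustration of the predicate in an application-side wrapper
(hypothetical function, shown only to demonstrate K_TIMEOUT_EQ()):

    int my_wait(struct k_sem *sem, k_timeout_t timeout)
    {
            /* The constants are opaque now, so compare with
             * K_TIMEOUT_EQ() rather than == or arithmetic.
             */
            if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
                    /* non-blocking fast path */
                    return k_sem_take(sem, K_NO_WAIT);
            }

            return k_sem_take(sem, timeout);
    }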

Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.

For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided.  When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.

Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions.  These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig.  These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.

k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.

Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate.  Also in queue.c, when POLL was
enabled, a similar loop was needlessly used to try to retry the
k_poll() call after a spurious failure.  But k_poll() does not fail
spuriously, so the loop was removed.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
Andy Ross 8892406c1d kernel/sys_clock.h: Deprecate and convert uses of old conversions
Mark the old time conversion APIs deprecated, leave compatibility
macros in place, and replace all usage with the new API.
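
For reference, a hedged before/after of the conversion helpers (the
k_<unit>_to_<unit>_<rounding><width>() family; exact helper names
should be checked against the new header):

    void conversions(void)
    {
            /* Deprecated style (roughly): ticks = z_ms_to_ticks(50); */

            uint32_t ticks = k_ms_to_ticks_ceil32(50);     /* 50 ms -> ticks  */
            uint64_t ns    = k_ticks_to_ns_floor64(ticks); /* ticks -> ns     */
            uint32_t cyc   = k_ms_to_cyc_ceil32(50);       /* 50 ms -> cycles */

            ARG_UNUSED(ns);
            ARG_UNUSED(cyc);
    }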

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-11-08 11:08:58 +01:00
Andrew Boie 4f77c2ad53 kernel: rename z_arch_ to arch_
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.

This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
Stephanos Ioannidis 2d7460482d headers: Refactor kernel and arch headers.
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.

The refactoring strategy used in this commit is detailed in the issue

This commit introduces the following major changes:

1. Establish a clear boundary between private and public headers by
  removing "kernel/include" and "arch/*/include" from the global
  include paths. Ideally, only kernel/ and arch/*/ source files should
  reference the headers in these directories. If these headers must be
  used by a component, these include paths shall be manually added to
  the CMakeLists.txt file of the component. This is intended to
  discourage applications from including private kernel and arch
  headers either knowingly or unknowingly.

  - kernel/include/ (PRIVATE)
    This directory contains the private headers that provide private
   kernel definitions which should not be visible outside the kernel
   and arch source code. All public kernel definitions must be added
   to an appropriate header located under include/.

  - arch/*/include/ (PRIVATE)
    This directory contains the private headers that provide private
   architecture-specific definitions which should not be visible
   outside the arch and kernel source code. All public architecture-
   specific definitions must be added to an appropriate header located
   under include/arch/*/.

  - include/ AND include/sys/ (PUBLIC)
    This directory contains the public headers that provide public
   kernel definitions which can be referenced by both kernel and
   application code.

  - include/arch/*/ (PUBLIC)
    This directory contains the public headers that provide public
   architecture-specific definitions which can be referenced by both
   kernel and application code.

2. Split arch_interface.h into "kernel-to-arch interface" and "public
  arch interface" divisions.

  - kernel/include/kernel_arch_interface.h
    * provides private "kernel-to-arch interface" definition.
    * includes arch/*/include/kernel_arch_func.h to ensure that the
     interface function implementations are always available.
    * includes sys/arch_interface.h so that public arch interface
     definitions are automatically included when including this file.

  - arch/*/include/kernel_arch_func.h
    * provides architecture-specific "kernel-to-arch interface"
     implementation.
    * only the functions that will be used in kernel and arch source
     files are defined here.

  - include/sys/arch_interface.h
    * provides "public arch interface" definition.
    * includes include/arch/arch_inlines.h to ensure that the
     architecture-specific public inline interface function
     implementations are always available.

  - include/arch/arch_inlines.h
    * includes architecture-specific arch_inlines.h in
     include/arch/*/arch_inline.h.

  - include/arch/*/arch_inline.h
    * provides architecture-specific "public arch interface" inline
     function implementation.
    * supersedes include/sys/arch_inline.h.

3. Refactor kernel and the existing architecture implementations.

  - Remove circular dependency of kernel and arch headers. The
   following general rules should be observed:

    * Never include any private headers from public headers
    * Never include kernel_internal.h in kernel_arch_data.h
    * Always include kernel_arch_data.h from kernel_arch_func.h
    * Never include kernel.h from kernel_structs.h either directly or
     indirectly. Only add the kernel structures that must be referenced
     from public arch headers in this file.

  - Relocate syscall_handler.h to include/ so it can be used in the
   public code. This is necessary because much public user-mode code
   references the functions defined in this header.

  - Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
   necessary to provide architecture-specific thread definition for
   'struct k_thread' in kernel.h.

  - Remove any private header dependencies from public headers using
   the following methods:

    * If the dependency is not required, simply omit it
    * If the dependency is required,
      - Relocate a portion of the required dependencies from the
       private header to an appropriate public header OR
      - Relocate the required private header to make it public.

This commit supersedes #20047, addresses #19666, and fixes #3056.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Andrew Boie 4ad9f687df kernel: rename thread return value functions
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.

z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie e1ec59f9c2 kernel: renamespace z_is_in_isr()
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().

References from test cases changed to k_is_in_isr().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Peter A. Bigot 5639ea07f8 kernel: timeout: remove unused callback parameter from init function
The callback function has been ignored in z_timeout_init() since the
timer rework in fall 2018.  Passing real handlers to it in code is
distracting when they will be overridden by whatever callback is
provided in z_add_timeout().

As this function is an internal API deprecation is not necessary.
Remove the parameter and change all call sites to drop the argument.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-09-28 15:41:18 -04:00
Andy Ross 643701aaf8 kernel: syscalls: Whitespace fixups
The semi-automated API changes weren't checkpatch aware.  Fix up
whitespace warnings that snuck into the previous patches.  Really this
should be squashed, but that's somewhat difficult given the structure
of the series.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Andy Ross 6564974bae userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words.  So
passing wider values requires splitting them into two registers at
call time.  This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.

Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths.  So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.

Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types.  So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*().  The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function.  It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.
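
A hedged sketch of the hand-written verification side under this
scheme, using k_timer_start() as the example (present-day shape; the
marshalling z_mrsh_*() half is generated, not hand-written):

    static inline void z_vrfy_k_timer_start(struct k_timer *timer,
                                            k_timeout_t duration,
                                            k_timeout_t period)
    {
            /* Validate the object pointer, then call the impl function
             * with exactly the same typesafe signature.
             */
            Z_OOPS(Z_SYSCALL_OBJ(timer, K_OBJ_TIMER));
            z_impl_k_timer_start(timer, duration, period);
    }
    #include <syscalls/k_timer_start_mrsh.c>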

This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs.  Future commits will port the less testable code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Nicolas Pitre aa9228854f linker generated list: provide an iterator to simplify list access
Given that the section name and boundary symbols can be inferred from
the struct object name, it makes sense to create an iterator that
abstracts away the access details and reduce the possibility for
mistakes.
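
A hedged usage example of the iterator (the macro name here,
Z_STRUCT_SECTION_FOREACH, is my recollection of what this change added
and should be treated as an assumption, as is the iterable struct):

    /* Visit every instance of a hypothetical iterable struct placed in
     * its linker section, without referencing the start/end boundary
     * symbols directly.
     */
    Z_STRUCT_SECTION_FOREACH(my_struct, item) {
            printk("item at %p\n", item);
    }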

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-06 14:21:32 -07:00
Patrik Flykt 24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.
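
A tiny before/after flavor of the change (illustrative):

    uint32_t remaining = 0U;      /* was: = 0  */

    if (remaining > 0U) {         /* was: > 0  */
            remaining -= 1U;      /* was: -= 1 */
    }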

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Patrik Flykt 4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Andy Ross b29fb220b1 kernel/timer: Spinlockify
Simple global lock around the timer API.  Actually a lot of this usage
was needless vestigial locking around existing scheduler and
timeout APIs that are now internally synchronized.
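
The resulting pattern in the timer code, roughly (a sketch of the
idiom, not the literal diff):

    static struct k_spinlock lock;

    void k_timer_stop(struct k_timer *timer)
    {
            k_spinlock_key_t key = k_spin_lock(&lock);

            /* bookkeeping that used to sit under irq_lock() */

            k_spin_unlock(&lock, key);
    }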

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Peter A. Bigot b4ece0ad44 kernel: timeout: detect inactive timeouts using dnode linked state
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state.  This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.

Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.

Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.
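
A hedged sketch of the predicate's shape (helper name is illustrative):

    /* A timeout is inactive exactly when its dnode is not linked into
     * the timeout queue; no _INACTIVE sentinel in dticks is needed.
     */
    static inline bool timeout_is_inactive(struct _timeout *t)
    {
            return !sys_dnode_is_linked(&t->node);
    }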

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Flavio Ceolin 76b3518ce6 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the if statement has essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin 118715c62d misra: Fixes for MISRA-C rule 8.3
MISRA-C says all declarations of an object or function must use the
same name and qualifiers.

MISRA-C rule 8.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Flavio Ceolin 4b35dd2628 misra: Fixes for MISRA-C rule 8.2
C90 introduced function prototypes, which allow argument types
to be checked against parameter types, though it is not necessary to
specify names for the parameters.  MISRA-C requires names for function
prototype parameters; it claims that names can provide useful
information regarding the function interface.

MISRA-C rule 8.2
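
A generic illustration of the rule (hypothetical function, not from the
tree):

    /* Non-compliant: parameter types only */
    int timer_delay_compute(int, int);

    /* Compliant: names document the interface */
    int timer_delay_compute(int duration_ms, int period_ms);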

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Andy Ross 987c0e5fc1 kernel: New timeout implementation
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version.  The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed (a sketch of the walk appears
after the list below).  Advantages:

* Properly spinlocked and SMP-aware.  The earlier timer implementation
  relied on only CPU 0 doing timeout work, and on an irq_lock() being
  taken before entry (something that was violated in a few spots).
  Now any CPU can wake up for an event (or all of them) and everything
  works correctly.

* The *_thread_timeout() API is now expressible as a clean wrapping
  (just one-liners) around the lower-level interface based on function
  pointer callbacks.  As a result the timeout objects no longer need
  to store backpointers to the thread and wait_q and have shrunk by
  33%.

* MUCH smaller, to the tune of hundreds of lines of code removed.

* Future proof, in that all operations on the queue are now fronted by
  just two entry points (_add_timeout() and z_clock_announce()) which
  can easily be augmented with fancier data structures.
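
A hedged sketch of the delta-list announce walk referenced above (the
first()/remove_timeout() helpers and field names are illustrative, not
the literal implementation):

    void z_clock_announce(int32_t ticks)
    {
            /* Heads store ticks relative to the previous node, so keep
             * popping while the elapsed budget covers the next delta.
             */
            while (first() != NULL && first()->dticks <= ticks) {
                    struct _timeout *t = first();

                    ticks -= t->dticks;
                    remove_timeout(t);   /* unlink from the sorted dlist */
                    t->fn(t);            /* fire the callback            */
            }

            /* Whatever is left shrinks the new head's delta. */
            if (first() != NULL) {
                    first()->dticks -= ticks;
            }
    }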

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 52e444bc05 kernel: Move timeout_remaining API
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).

Move it to where it belongs.  Also have it return ticks instead of ms
to conform to the scheme in the rest of the timeout API.  And rename it to
a more standard zephyr name.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross fe82f1c2af kernel/timeout: Refactor API
Add the callback parameter to add_timeout(), and remove the thread
argument.  Now the "low level" timeout API can be expressed without
reference to threads.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 5d203523b6 kernel/timeout: Eliminate wait_q parameters from API
Now that this is known to be an unused value, remove it from the API.
Note that this caught a few spots where we were passing values (a
non-NULL wait_q with a NULL thread handle) that were always being
ignored before.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Flavio Ceolin 4218d5f8f0 kernel: Make If statement have essentially Boolean type
Make if statements using pointers explicitly check whether the value is
NULL or not.

The C standard does not say that the null pointer is the same as a
pointer to memory address 0, and because of this it is good practice to
always compare against the NULL macro.
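
A tiny flavor of the change (field name illustrative):

    void maybe_fire(struct k_timer *timer)
    {
            /* explicit NULL check instead of implicit pointer truthiness */
            if (timer->expiry_fn != NULL) {
                    timer->expiry_fn(timer);
            }
    }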

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Flavio Ceolin 4d5397bb0a kernel: Ignore _abort_timeout return
Ignore the return value of _abort_timeout when there is nothing to
do, either because the condition that would make it return something
other than 0 was already checked, or because it was called during some
cleanup path.
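
The discard idiom being applied, roughly (hedged example):

    /* The timeout is known to be inactive (or the object is being torn
     * down), so the result is deliberately uninteresting.
     */
    (void)_abort_timeout(&timer->timeout);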

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Flavio Ceolin 1663ca8590 kernel: Ignore _pend_current_thread return in some cases
There are some cases where there is nothing to do with the
_pend_current_thread() return value (that is, the _Swap() return value).

As MISRA-C requires that all non-void functions have their
return value checked, we are explicitly ignoring it when there is
nothing to do.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Flavio Ceolin 0866d18d03 irq: Fix irq_lock api usage
irq_lock() returns an unsigned int, though several places were using a
signed int.  This commit fixes that.

To avoid this error happening again, a coccinelle script was
added that can be used to check for violations.
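
The corrected pattern (irq_lock()/irq_unlock() are the existing APIs;
the surrounding function is illustrative):

    void counter_bump(volatile uint32_t *counter)
    {
            unsigned int key = irq_lock();   /* was declared as plain int */

            (*counter)++;

            irq_unlock(key);
    }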

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Andy Ross ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers all
implicitly would generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value, if nonzero is returned
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout().  (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 0447a73f6c kernel: include cleanup
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions.  We can remove the header too.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross e0a572beeb kernel: Refactor, unifying _pend_current_thread() + _Swap() idiom
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps.  Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Leandro Pereira 6f99bdb02a kernel: Provide only one _SYSCALL_HANDLER() macro
Use some preprocessor trickery to automatically deduce the amount of
arguments for the various _SYSCALL_HANDLERn() macros.  Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.
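
The generic flavor of that preprocessor trick, for readers unfamiliar
with it (a standalone sketch, not the literal Zephyr macros): count the
variadic arguments and paste the count onto a base name to select the
right N-argument variant.

    #define _NARG_(_1, _2, _3, _4, _5, N, ...) N
    #define _NARG(...) _NARG_(__VA_ARGS__, 5, 4, 3, 2, 1)
    #define _CAT_(a, b) a##b
    #define _CAT(a, b) _CAT_(a, b)

    /* _MY_HANDLER(foo, a, b) expands to _MY_HANDLER2(foo, a, b), etc. */
    #define _MY_HANDLER(name, ...) \
            _CAT(_MY_HANDLER, _NARG(__VA_ARGS__))(name, __VA_ARGS__)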

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-10-16 13:42:15 -04:00
Andrew Boie 225e4c0e76 kernel: greatly simplify syscall handlers
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.

- Macros which define the proper prototype based on number of arguments
- "SIMPLE" variants which create handlers that don't need anything
  other than object verification

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie 37ff5a9bc5 kernel: system call handler cleanup
Use new _SYSCALL_OBJ/_SYSCALL_OBJ_INIT macros.

Use new _SYSCALL_MEMORY_READ/_SYSCALL_MEMORY_WRITE macros.

Some non-obvious checks changed to use _SYSCALL_VERIFY_MSG.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie a354d49c4f kernel: convert timer APIs to system calls
k_timer_init() registers callbacks that run in supervisor mode and is
excluded.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 945af95f42 kernel: introduce object validation mechanism
All system calls made from userspace which involve pointers to kernel
objects (including device drivers) will need to have those pointers
validated; userspace should never be able to crash the kernel by passing
it garbage.

The actual validation with _k_object_validate() will be in the system
call receiver code, which doesn't exist yet.

- CONFIG_USERSPACE introduced. We are somewhat far away from having an
  end-to-end implementation, but at least need a Kconfig symbol to
  guard the incoming code with. Formal documentation doesn't exist yet
  either, but will appear later down the road once the implementation is
  mostly finalized.

- In the memory region for RAM, the data section has been moved last,
  past bss and noinit. This ensures that inserting generated tables
  with addresses of kernel objects does not change the addresses of
  those objects (which would make the table invalid)

- The DWARF debug information in the generated ELF binary is parsed to
  fetch the locations of all kernel objects and pass this to gperf to
  create a perfect hash table of their memory addresses.

- The generated gperf code doesn't know that we are exclusively working
  with memory addresses and uses memory inefficiently. A post-processing
  script process_gperf.py adjusts the generated code before it is
  compiled to work with pointer values directly and not strings
  containing them.

- _k_object_init() calls inserted into the init functions for the set of
  kernel object types we are going to support so far

Issue: ZEP-2187
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:33:33 -07:00
Youvedeep Singh d787e3c554 timer: k_timer_start should accept 0 as duration parameter.
k_timer_start(timer, duration, period) is the API used to
start a timer.  Currently the duration parameter accepts
only positive numbers.
But a user may need to do some periodic activity
ASAP and start the timer with a value of 0, so this patch
allows 0 as the minimum value of duration.
With this patch, when the duration value is set to 0, the
timer expiration handler is called directly instead of submitting
the timeout to the timeout queue.
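
A hedged usage example of what this permits (my_expiry_fn is a
hypothetical handler; duration/period were still millisecond integers
at this point in history):

    static void my_expiry_fn(struct k_timer *t)
    {
            /* periodic work */
    }

    K_TIMER_DEFINE(my_timer, my_expiry_fn, NULL);

    void start_it(void)
    {
            /* First expiry runs immediately, then every 100 ms. */
            k_timer_start(&my_timer, 0, 100);
    }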

Jira: ZEP-2497

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-09-06 10:18:39 -07:00
Maciek Borzecki 059544d1ae kernel: make sure that CONFIG_OBJECT_TRACING structs are properly ifdef'ed
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?

Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Maciek Borzecki 4fef76082a kernel: k_timer_init: use NULL when initializing user data
Fixes sparse warning:
<snip>/zephyr/kernel/timer.c:105:28: warning: Using plain integer as NULL pointer
  CC      kernel/timer.o

Change-Id: Ic17a0b976d25079711f10137667148a321c95dbf
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Kumar Gala cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Anas Nashif 4fb12ae988 kernel: k_timer_stop: remove assert when called from an ISR
Change-Id: I596e0323a7aafc9d7f3834a8d1b655ad2540d4ef
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-02-04 19:25:11 +00:00