Commit Graph

57 Commits

Author SHA1 Message Date
Tomasz Bursztyka e18fcbba5a device: Const-ify all device driver instance pointers
Now that the device_api attribute, as well as all the other attributes,
is unmodified at runtime, it is possible to switch all device driver
instances to be constant.

A coccinelle rule is used for this:

@r_const_dev_1
  disable optional_qualifier
@
@@
-struct device *
+const struct device *

@r_const_dev_2
 disable optional_qualifier
@
@@
-struct device * const
+const struct device *

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
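
To illustrate the effect of the change above on application code, a minimal sketch using the public device and GPIO APIs of that era (the "GPIO_0" label and pin number are purely illustrative):

#include <zephyr.h>
#include <device.h>
#include <drivers/gpio.h>

void configure_led(void)
{
	/* Device lookups now hand back a pointer to a const driver
	 * instance, so application code stores it as const as well. */
	const struct device *port = device_get_binding("GPIO_0");

	if (port != NULL) {
		gpio_pin_configure(port, 13, GPIO_OUTPUT_ACTIVE);
	}
}
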
Anas Nashif 390537bf68 tracing: trace mutex/semaphore using dedicated calls
Instead of using generic trace calls, use dedicated functions for
tracing semaphores and mutexes.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif 8aa3dc340d kernel: cleanup header inclusion
Remove unused header inclusion in kernel code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-06-25 16:12:36 -05:00
Anas Nashif 2c5d40437b kernel: logging: convert K_DEBUG to LOG_DBG
Move K_DEBUG to use LOG_DBG instead of plain printk.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-06-25 16:12:36 -05:00
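
As a rough illustration of the target style for the commit above (module name and message are made up), debug output now routes through the logging subsystem instead of plain printk():

#include <logging/log.h>

LOG_MODULE_REGISTER(my_module, LOG_LEVEL_DBG);

void report_result(int ret)
{
	/* Was a K_DEBUG()/printk() style message before. */
	LOG_DBG("operation finished, ret=%d", ret);
}
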
Kumar Gala a1b77fd589 zephyr: replace zephyr integer types with C99 types
git grep -l 'u\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
	git grep -l 's\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
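
The effect of the rename above on a declaration, as a hypothetical before/after pair:

/* Before: Zephyr-specific integer typedefs */
static u32_t tick_count;
static s64_t uptime_ms;

/* After: standard C99 <stdint.h> types */
static uint32_t tick_count;
static int64_t uptime_ms;
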
Andrew Boie 6af9793f0c kernel: don't allow mutex ops in ISRs
Mutex operations check ownership against _current. But in an
ISR, _current is just whatever thread was interrupted when the
ISR fired. Explicitly do not allow this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-28 01:38:05 +02:00
Andy Ross 7832738ae9 kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument.  Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created.  This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.

The existing K_MSEC() et al. macros now return initializers for a
k_timeout_t.

The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.

Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.

For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided.  When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.

Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions.  These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig.  These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.

k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.

Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate.  Also in queue.c, a similar loop (when
POLL was enabled) was needlessly used to try to retry the k_poll()
call after a spurious failure.  But k_poll() does not fail
spuriously, so the loop was removed.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
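
A short usage sketch of the new timeout type, using a semaphore as an example object (names and values are illustrative):

#include <zephyr.h>

K_SEM_DEFINE(my_sem, 0, 1);

void timeout_usage(void)
{
	/* Timeouts are opaque k_timeout_t values built with K_MSEC(),
	 * K_SECONDS(), K_NO_WAIT, K_FOREVER and friends. */
	k_sem_take(&my_sem, K_MSEC(100));

	/* K_NO_WAIT and K_FOREVER are no longer plain integers; compare
	 * them with K_TIMEOUT_EQ(). */
	k_timeout_t t = K_FOREVER;
	if (K_TIMEOUT_EQ(t, K_FOREVER)) {
		/* caller asked to wait indefinitely */
	}

	/* k_sleep() now takes a k_timeout_t; k_msleep() keeps the old
	 * millisecond-based calling convention. */
	k_sleep(K_SECONDS(1));
	k_msleep(500);
}
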
Anas Nashif 73008b427c tracing: move headers under include/tracing
Move tracing.h to include/tracing/ to align with subsystem reorg.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-02-07 15:58:05 -05:00
Anas Nashif 86bb2d06d7 kernel: mutex: add error checking
k_mutex_unlock will now perform error checking and return on failures.

If the current thread does not own the mutex, we will now return -EPERM.
In the unlikely situation where we own a lock and the lock count is
zero, we assert. This is considered undefined behaviour and should not
happen.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
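
A small sketch of how a caller can now observe the error (the mutex name is illustrative):

#include <zephyr.h>

K_MUTEX_DEFINE(my_lock);

void release(void)
{
	/* k_mutex_unlock() now reports misuse instead of silently
	 * proceeding: -EPERM means the calling thread is not the owner. */
	int err = k_mutex_unlock(&my_lock);

	if (err == -EPERM) {
		/* this thread never locked my_lock */
	}
}
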
Steven Wang 9b909f2d69 kernel/mutex: code improvement
Line "new_prio = mutex->owner_orig_prio" is unnecessary. So
remove it.

Signed-off-by: Steven Wang <steven.l.wang@linux.intel.com>
2020-01-07 17:13:37 +01:00
Andy Ross 7022000b5c kernel/mutex: Fix races, make unlock rescheduling
The k_mutex is a priority-inheriting mutex, so on unlock it's possible
that a thread's priority will be lowered.  Make this a reschedule
point so that reasoning about thread priorities is easier (possibly at
the cost of performance): most users are going to expect that the
priority elevation stops at exactly the moment of unlock.

Note that this also reorders the code to fix what appear to be obvious
race conditions.  After the call to z_ready_thread(), that thread may
be run (e.g. by an interrupt preemption or on another SMP core) before
the return value and mutex state had been correctly set.  The spinlock
was also released prematurely.

Fixes #20802

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-12-03 12:33:17 -06:00
Andrew Boie 4f77c2ad53 kernel: rename z_arch_ to arch_
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.

This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
Stephanos Ioannidis 2d7460482d headers: Refactor kernel and arch headers.
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.

The refactoring strategy used in this commit is detailed in the issue

This commit introduces the following major changes:

1. Establish a clear boundary between private and public headers by
  removing "kernel/include" and "arch/*/include" from the global
  include paths. Ideally, only kernel/ and arch/*/ source files should
  reference the headers in these directories. If these headers must be
  used by a component, these include paths shall be manually added to
  the CMakeLists.txt file of the component. This is intended to
  discourage applications from including private kernel and arch
  headers either knowingly or unknowingly.

  - kernel/include/ (PRIVATE)
    This directory contains the private headers that provide private
   kernel definitions which should not be visible outside the kernel
   and arch source code. All public kernel definitions must be added
   to an appropriate header located under include/.

  - arch/*/include/ (PRIVATE)
    This directory contains the private headers that provide private
   architecture-specific definitions which should not be visible
   outside the arch and kernel source code. All public architecture-
   specific definitions must be added to an appropriate header located
   under include/arch/*/.

  - include/ AND include/sys/ (PUBLIC)
    This directory contains the public headers that provide public
   kernel definitions which can be referenced by both kernel and
   application code.

  - include/arch/*/ (PUBLIC)
    This directory contains the public headers that provide public
   architecture-specific definitions which can be referenced by both
   kernel and application code.

2. Split arch_interface.h into "kernel-to-arch interface" and "public
  arch interface" divisions.

  - kernel/include/kernel_arch_interface.h
    * provides private "kernel-to-arch interface" definition.
    * includes arch/*/include/kernel_arch_func.h to ensure that the
     interface function implementations are always available.
    * includes sys/arch_interface.h so that public arch interface
     definitions are automatically included when including this file.

  - arch/*/include/kernel_arch_func.h
    * provides architecture-specific "kernel-to-arch interface"
     implementation.
    * only the functions that will be used in kernel and arch source
     files are defined here.

  - include/sys/arch_interface.h
    * provides "public arch interface" definition.
    * includes include/arch/arch_inlines.h to ensure that the
     architecture-specific public inline interface function
     implementations are always available.

  - include/arch/arch_inlines.h
    * includes architecture-specific arch_inlines.h in
     include/arch/*/arch_inline.h.

  - include/arch/*/arch_inline.h
    * provides architecture-specific "public arch interface" inline
     function implementation.
    * supersedes include/sys/arch_inline.h.

3. Refactor kernel and the existing architecture implementations.

  - Remove circular dependency of kernel and arch headers. The
   following general rules should be observed:

    * Never include any private headers from public headers
    * Never include kernel_internal.h in kernel_arch_data.h
    * Always include kernel_arch_data.h from kernel_arch_func.h
    * Never include kernel.h from kernel_struct.h either directly or
     indirectly. Only add the kernel structures that must be referenced
     from public arch headers in this file.

  - Relocate syscall_handler.h to include/ so it can be used in the
   public code. This is necessary because much user-mode public code
   references the functions defined in this header.

  - Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
   necessary to provide architecture-specific thread definition for
   'struct k_thread' in kernel.h.

  - Remove any private header dependencies from public headers using
   the following methods:

    * If dependency is not required, simply omit
    * If dependency is required,
      - Relocate a portion of the required dependencies from the
       private header to an appropriate public header OR
      - Relocate the required private header to make it public.

This commit supersedes #20047, addresses #19666, and fixes #3056.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Andrew Boie 4ad9f687df kernel: rename thread return value functions
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.

z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Anas Nashif 0bf1f9a408 tracing: add missing end_call for k_mutex_unlock
k_mutex_unlock had no end_call tracing call.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-09-30 10:49:37 -04:00
Andy Ross 6564974bae userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words.  So
passing wider values requires splitting them into two registers at
call time.  This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.

Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths.  So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.

Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types.  So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*().  The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function.  It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value, that code gets automatically generated.

This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs.  Future commits will port the less testable code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
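
For orientation, a sketch of the verification-function shape this introduces, loosely modeled on the mutex syscall of that era (the s32_t timeout reflects the pre-k_timeout_t API; exact details are illustrative):

/* The generated z_mrsh_*() code unpacks (and, for wide arguments,
 * re-joins) the raw syscall words, then calls this hand-written,
 * typesafe verifier, which only validates and delegates. */
static inline int z_vrfy_k_mutex_lock(struct k_mutex *mutex, s32_t timeout)
{
	Z_OOPS(Z_SYSCALL_OBJ(mutex, K_OBJ_MUTEX));
	return z_impl_k_mutex_lock(mutex, timeout);
}
#include <syscalls/k_mutex_lock_mrsh.c>
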
Andy Ross 6f13980fc7 kernel/mutex: Fix locking to be SMP-safe
The mutex locking was written to use k_sched_lock(), which doesn't
work as a synchronization primitive if there is another CPU running
(it prevents the current CPU from preempting the thread; it says
nothing about what the others are doing).

Use the pre-existing spinlock for all synchronization.  One wrinkle is
that the priority code was needing to call z_thread_priority_set(),
which is a rescheduling call that cannot be called with a lock held.
So that got split out with a low level utility that can update the
schedule state but allow the caller to defer yielding until later.

Fixes #17584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-22 17:58:16 -04:00
Anas Nashif ee9dd1a54a cleanup: include/: move misc/dlist.h to sys/dlist.h
move misc/dlist.h to sys/dlist.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 10291a0789 cleanup: include/: move tracing.h to debug/tracing.h
move tracing.h to debug/tracing.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Nicolas Pitre aa9228854f linker generated list: provide an iterator to simplify list access
Given that the section name and boundary symbols can be inferred from
the struct object name, it makes sense to create an iterator that
abstracts away the access details and reduce the possibility for
mistakes.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-06 14:21:32 -07:00
Patrik Flykt 24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
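
A trivial illustration of the convention (the flag values are made up):

#include <stdint.h>

void update_flags(void)
{
	uint32_t flags = 0U;

	/* Literals combined with or compared against unsigned variables
	 * carry an explicit 'U' suffix. */
	if ((flags & 0x01U) != 0U) {
		flags |= 0x80U;
	}
}
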
Daniel Leung 416d94cd30 kernel/mutex: remove object monitoring empty loop macros
There are some remaining code from object monitoring which simply
expands to empty loop macros. Remove them as they are not
functional anyway.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-03-28 08:55:12 -05:00
Patrik Flykt 4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Andy Ross 7df0216d1e kernel/mutex: Spinlockify
Use a subsystem lock, not a per-object lock.  Really we want to lock
at mutex granularity where possible, but (1) that has non-trivial
memory overhead vs. e.g. directly spinning on the mutex state and (2)
the locking in a few places was originally designed to protect access
to the mutex *owner* priority, which is not 1:1 with a single mutex.

Basically the priority-inheriting mutex code will need some rework
before it works as a fine-grained locking abstraction in SMP.

Note that this fixes an invisible bug: with the older code,
k_mutex_unlock() would actually call irq_unlock() twice along the path
where there was a new owner, which is benign on existing architectures
(so long as the key argument is unchanged) but was never guaranteed to
work.  With a spinlock, unlocking an unlocked/unowned lock is a
detectable assertion condition.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Nicolás Bértolo 258fd2dbeb kernel: mutex: delay setting lock_count = 0.
It is necessary to delay setting lock_count = 0 because an unlocking thread
may be swapped out when it calls adjust_owner_prio(). If the thread that starts
running sees lock_count = 0 it will successfully acquire the mutex even though
it is not fully unlocked yet.

Fixes #11798.

Signed-off-by: Nicolás Bértolo <nicolasbertolo@gmail.com>
2018-12-05 11:00:10 +01:00
Flavio Ceolin 4369363f6c kernel: mutex: Change variable declaration
This does not violate any MISRA-C rule, though it seems to trigger a
false positive (rule 9.1) in some static analysis tools. Nevertheless,
it is more readable to declare all variables of the same scope
together.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Adithya Baglody 87e592ebda kernel: mutex.c: MISRA C compliance.
This patch fixes a few MISRA issues present in mutex.c.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-31 08:44:47 -04:00
Flavio Ceolin 6fdc56d286 kernel: Using boolean types for boolean constants
Make boolean expressions use boolean types.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
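
A hypothetical before/after in the spirit of that rule:

#include <stdbool.h>

static bool pending;

void example(int count)
{
	/* Before: pending = 1;  if (count) { ... } */

	/* After: boolean objects take true/false, and controlling
	 * expressions are essentially boolean. */
	pending = true;
	if (count != 0) {
		pending = false;
	}
}
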
Flavio Ceolin 4218d5f8f0 kernel: Make If statement have essentially Boolean type
Make if statements using pointers explicitly check whether the value is
NULL or not.

The C standard does not guarantee that the null pointer is the same as a
pointer to memory address 0, so it is good practice to always compare
against the NULL macro.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
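
A small made-up example of the style change:

#include <stddef.h>

struct item {
	struct item *next;
};

void visit(struct item *it)
{
	/* Before: if (it) { ... } */

	/* After: pointer tests compare explicitly against NULL. */
	if (it != NULL) {
		visit(it->next);
	}
}
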
Anas Nashif b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Flavio Ceolin 0866d18d03 irq: Fix irq_lock api usage
irq_lock returns an unsigned int, though several places were using a
signed int. This commit fixes that.

To avoid this error happening again, a coccinelle script was added
that can be used to check for violations.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
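
A minimal usage sketch showing the corrected type for the lock key:

#include <zephyr.h>

void brief_critical_section(void)
{
	/* irq_lock() returns an unsigned int key; storing it in a
	 * signed int is the pattern this change removes. */
	unsigned int key = irq_lock();

	/* ... short critical section ... */

	irq_unlock(key);
}
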
Andy Ross ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers all
implicitly would generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value, if nonzero is returned
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout().  (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 5792ee6da2 kernel/mutex: Clean up k_mutex_unlock()
Recent changes to the scheduler API mean we can simplify this
further: move the assignment to mutex->owner outside the if(), which
removes the need to have an else clause (which just set that field to
NULL when the new_owner was already NULL); and we can likewise move
the irq_unlock() outside the block.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 0447a73f6c kernel: include cleanup
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions.  We can remove the header too.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross e0a572beeb kernel: Refactor, unifying _pend_current_thread() + _Swap() idiom
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps.  Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Leandro Pereira bf44bacd24 kernel: mutex: Copy assertions to syscall handler
Always ensure that the mutex owner is the current thread and that the
count is sane.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-06 11:52:32 -07:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Ramakrishna Pallala 67426261ad kernel: Remove dead or commented code from k_mutex_lock()
Remove dead code from the k_mutex_lock() function and
also fix a typo in a comment block.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2017-10-24 11:11:00 -07:00
Leandro Pereira 6f99bdb02a kernel: Provide only one _SYSCALL_HANDLER() macro
Use some preprocessor trickery to automatically deduce the amount of
arguments for the various _SYSCALL_HANDLERn() macros.  Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-10-16 13:42:15 -04:00
Andrew Boie 225e4c0e76 kernel: greatly simplify syscall handlers
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.

- Macros which define the proper prototype based on number of arguments
- "SIMPLE" variants which create handlers that don't need anything
  other than object verification

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie 37ff5a9bc5 kernel: system call handler cleanup
Use new _SYSCALL_OBJ/_SYSCALL_OBJ_INIT macros.

Use new _SYSCALL_MEMORY_READ/_SYSCALL_MEMORY_WRITE macros.

Some non-obvious checks changed to use _SYSCALL_VERIFY_MSG.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie 2f7519bfd2 kernel: convert mutex APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 945af95f42 kernel: introduce object validation mechanism
All system calls made from userspace which involve pointers to kernel
objects (including device drivers) will need to have those pointers
validated; userspace should never be able to crash the kernel by passing
it garbage.

The actual validation with _k_object_validate() will be in the system
call receiver code, which doesn't exist yet.

- CONFIG_USERSPACE introduced. We are somewhat far away from having an
  end-to-end implementation, but at least need a Kconfig symbol to
  guard the incoming code with. Formal documentation doesn't exist yet
  either, but will appear later down the road once the implementation is
  mostly finalized.

- In the memory region for RAM, the data section has been moved last,
  past bss and noinit. This ensures that inserting generated tables
  with addresses of kernel objects does not change the addresses of
  those objects (which would make the table invalid)

- The DWARF debug information in the generated ELF binary is parsed to
  fetch the locations of all kernel objects and pass this to gperf to
  create a perfect hash table of their memory addresses.

- The generated gperf code doesn't know that we are exclusively working
  with memory addresses and uses memory inefficiently. A post-processing
  script process_gperf.py adjusts the generated code before it is
  compiled to work with pointer values directly and not strings
  containing them.

- _k_object_init() calls inserted into the init functions for the set of
  kernel object types we are going to support so far

Issue: ZEP-2187
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:33:33 -07:00
Anas Nashif 397d29db42 linker: move all linker headers to include/linker
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-06-18 09:24:04 -05:00
Maciek Borzecki 059544d1ae kernel: make sure that CONFIG_OBJECT_TRACING structs are properly ifdef'ed
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?

Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Kumar Gala cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00