Commit Graph

71 Commits

Author SHA1 Message Date
Andrew Boie 7f4d006959 kernel: fix errno access for user mode
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.

In user mode, we cannot simply read/write the thread struct.
We do not have a thread-local storage mechanism, so for now
use the lowest address of the thread stack to store this
value, since that location is guaranteed to be readable and
writable by a user thread.

The downside of this approach is potential stack corruption
if the stack pointer grows down this far but does not go past
the location, since no fault will be generated in that case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-19 16:44:59 -07:00
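
A minimal sketch of the scheme described above, with illustrative
names (the accessor, stack_info.start and errno_var stand in for
whatever the real code uses):

/* errno expands to a dereference of a per-thread location returned
 * by a kernel accessor (a system call once userspace is involved).
 */
extern int *_get_errno(void);
#define errno (*_get_errno())

int *_get_errno(void)
{
#ifdef CONFIG_USERSPACE
	/* Lowest address of the thread's stack: always readable and
	 * writable by the user thread itself.
	 */
	return (int *)_current->stack_info.start;
#else
	/* Kernel-only builds can keep it in the thread struct. */
	return &_current->errno_var;
#endif
}
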
Ramakrishna Pallala e74d85d816 kernel: thread: Simplify k_thread_foreach conditional inclusion
Simplify the k_thread_foreach API's conditional inclusion by putting
the whole logic under the CONFIG_THREAD_MONITOR config option.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-07-18 15:42:28 -04:00
Spoorthi K 47a9f9a617 kernel: thread: Exclude deprecated function from lcov
Do not consider the deprecated function for code coverage

Signed-off-by: Spoorthi K <spoorthi.k@intel.com>
2018-07-18 13:26:18 -04:00
Andrew Boie 2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
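
For illustration, the entry bookkeeping this commit describes could
look roughly like this (field names are assumptions, not verbatim):

/* Entry information kept directly in the thread object under
 * CONFIG_THREAD_MONITOR, rather than dug out of an arch-specific
 * initial stack frame.
 */
struct __thread_entry {
	k_thread_entry_t pEntry;   /* thread entry point */
	void *parameter1;          /* its three arguments */
	void *parameter2;
	void *parameter3;
};
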
Michael Scott f669a08eea kernel: thread: fix _THREAD_DUMMY check in _check_stack_sentinel()
All other checks of thread_state use a bitwise & operator in case
there are other flags attached to the thread_state.  Let's fix
the only outlier in _check_stack_sentinel() to be the same.

Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
2018-06-01 09:03:48 -04:00
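
Paraphrased (not the literal diff), the change is an equality test
becoming a bit test:

/* Before: misses the dummy flag if any other state bits are set */
if (_current->base.thread_state == _THREAD_DUMMY) {
	return;
}

/* After: consistent with every other thread_state check */
if (_current->base.thread_state & _THREAD_DUMMY) {
	return;
}
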
Andrew Boie 538754cb28 kernel: handle early entropy issues
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 19:38:06 -07:00
Andy Ross 4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
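
Usage sketch (the cycle count is arbitrary): with CONFIG_SCHED_DEADLINE
enabled, a thread declares how soon it needs to run, and the value only
breaks ties between threads of equal priority.

void start_urgent_work(void)
{
	/* Deadline is a delta from "now", measured in hardware cycles. */
	k_thread_deadline_set(k_current_get(), 100000);

	/* ... do the time-sensitive work ... */
}
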
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers would
all implicitly generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value; if nonzero is returned,
the check failed. Z_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
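
A sketch of the resulting handler pattern (the k_sem_give handler is
just an example; names follow the Z_ convention the commit describes):

Z_SYSCALL_HANDLER(k_sem_give, sem)
{
	/* The check macro now returns nonzero on failure; Z_OOPS()
	 * preserves the old "oops on failure" policy, but a handler
	 * could instead clean up and return an error code.
	 */
	Z_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM));
	_impl_k_sem_give((struct k_sem *)sem);
	return 0;
}
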
Andrew Boie 92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
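
A sketch of assigning such a pool (the pool geometry here is arbitrary):

/* Kernel-side allocations made on behalf of this thread come from
 * thread_pool instead of the system heap.
 */
K_MEM_POOL_DEFINE(thread_pool, 64, 1024, 4, 4);

void setup_child(struct k_thread *child)
{
	k_thread_resource_pool_assign(child, &thread_pool);
}
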
Adithya Baglody 5133cf56aa kernel: thread: Move out the function _thread_entry() to lib
_thread_entry() is not really part of the kernel but part of
Zephyr's C runtime support library. Hence, move just the
function to lib/thread_entry.c.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-05-15 17:48:18 +03:00
Ramakrishna Pallala 110b8e42ff kernel: Add k_thread_foreach API
Add k_thread_foreach API to iterate over all the threads in
the system.

This API can be used for debugging threads in a multi-threaded
environment to dump and analyze various thread parameters like
priority, state, stack address, etc.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
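
Usage sketch:

static void dump_thread_info(const struct k_thread *thread, void *user_data)
{
	int *count = user_data;

	printk("thread %p: prio %d\n", thread, thread->base.prio);
	(*count)++;
}

void dump_all_threads(void)
{
	int count = 0;

	k_thread_foreach(dump_thread_info, &count);
	printk("%d threads total\n", count);
}
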
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout(), along with identical changes for
_unpend_first_thread().  This saves code bytes and simplifies the
scheduler surface area for future synchronization work.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
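
In effect (paraphrased, not the literal diff), the common-case helper
now bundles the timeout cancellation:

void _unpend_thread(struct k_thread *thread)
{
	_unpend_thread_no_timeout(thread);
	_abort_thread_timeout(thread);
}
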
Andy Ross 15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Leandro Pereira 541c3cb18b kernel: sched: Fix validation of priority levels
A priority value cannot be simultaneously higher than the maximum
possible value and smaller than the minimum value.  Rewrite the
_VALID_PRIO() macro as a function so that if either of these
invariants is violated, the priority is considered invalid.

Coverity-CID: 182584
Coverity-CID: 182585
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-21 08:39:42 -07:00
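
A minimal sketch of the corrected check (the function name is
illustrative, and the real code has additional special cases):

static inline bool prio_is_valid(int prio)
{
	/* Numerically smaller values are higher priority, so a valid
	 * priority lies between the highest and lowest application
	 * priorities -- it cannot fail both bounds at once.
	 */
	return (prio >= K_HIGHEST_APPLICATION_THREAD_PRIO) &&
	       (prio <= K_LOWEST_APPLICATION_THREAD_PRIO);
}
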
Kumar Gala 79d151f81d kernel: Fix building of k_thread_create
commit ec7ecf7900 moved some code around
such that the total_size variable is used regardless of how
CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT is set.  So move the
declaration of total_size outside of the ifndef block so things build
properly.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-04-10 22:26:01 -04:00
Andrew Boie ec7ecf7900 kernel: restore stack size check
The handler for k_thread_create() wasn't verifying that the
provided stack size actually fits in the requested stack object
on systems that enforce power-of-two size/alignment for stacks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-10 10:58:12 -04:00
Leandro Pereira 1ccd715577 kernel: thread: Consider stack pointer fuzz underflow
When randomizing the stack pointer on thread creation
(CONFIG_STACK_POINTER_RANDOM), the fuzz amount might exceed the stack
size, causing an underflow.

Ensure that this will never underflow by only adjusting the stack size
if there's enough space.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-03 12:32:56 -07:00
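
The guard, roughly (a sketch; random_offset() stands in for whatever
produces the fuzz):

#if CONFIG_STACK_POINTER_RANDOM
	size_t fuzz = random_offset();   /* hypothetical helper */

	/* Only shrink the usable stack if the fuzz actually fits, so
	 * stack_size can never underflow.
	 */
	if (fuzz < stack_size) {
		stack_size -= fuzz;
	}
#endif
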
Andy Ross 85bc0a3fe6 kernel: Cleanup, unify _add_thread_to_ready_q() and _ready_thread()
The scheduler exposed two APIs to do the same thing:
_add_thread_to_ready_q() was a low level primitive that in most cases
was wrapped by _ready_thread(), which also (1) checks that the thread
_is_ready() or exits, (2) flags the thread as "started" to handle the
case of a thread running for the first time out of a waitq timeout,
and (3) signals a logger event.

As it turns out, all existing usage was already checking case #1.
Case #2 can be better handled in the timeout resume path instead of on
every call.  And case #3 was probably wrong to have been skipping
anyway (there were paths that could make a thread runnable without
logging).

Now _add_thread_to_ready_q() is an internal scheduler API, as it
probably always should have been.

This also moves some asserts from the inline _ready_thread() wrapper
to the underlying true function for code size reasons, otherwise the
extra use of the inline added by this patch blows past code size
limits on Quark D2000.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andrew Boie 83752c1cfe kernel: introduce initial stack randomization
This is a component of address space layout randomization that we can
implement even though we have a physical address space.

Support for upward-growing stacks is omitted for now; it's not used
on any of our current or planned architectures.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-03-16 16:25:22 -07:00
Andy Ross 245b54ed56 kernel/include: Missed nano_internal.h -> kernel_internal.h spots
Update header naming given the recent rename

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Ramakrishna Pallala 3f2f1223ac kernel: thread: Remove unused _k_thread_single_start()
Remove unused _k_thread_single_start() as this logic is
now moved to _impl_k_thread_start().

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-02-13 17:26:21 -05:00
Andy Gross 1c047c9bef arm: userspace: Add ARM userspace infrastructure
This patch adds support for userspace on ARM architectures: the
arch-specific calls for transitioning threads to user mode, system
calls, and associated handlers.

Signed-off-by: Andy Gross <andy.gross@linaro.org>
2018-02-13 12:42:37 -08:00
Adithya Baglody 10db82bfed kernel: thread: Repeated thread abort crashes.
When CONFIG_THREAD_MONITOR is enabled, repeated thread abort
calls on a dead thread will cause _thread_monitor_exit() to
crash.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-01-24 18:18:53 +05:30
Anas Nashif 94d034dd5e kernel: support custom k_busy_wait()
Support architectures implementing their own k_busy_wait.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-12-27 14:16:08 -05:00
Anas Nashif fb4eecaf5f kernel: threads: remove thread groups
We removed these features when we moved to the unified kernel. Those
functions existed to support migration from the old kernel and can go
now.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-12-09 08:48:51 -06:00
Andrew Boie a7fedb7073 _setup_new_thread: fix crash on ARM
On arches which have custom logic to do the initial swap into
the main thread, _current may be NULL. This happens when
instantiating the idle and main threads.

If this is the case, skip the checks for memory domain and object
permission inheritance; there is never anything to inherit here.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-13 16:25:40 -08:00
Andrew Boie 0bf9d33602 mem_domain: inherit from parent thread
New threads inherit any memory domain membership held by the
parent thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-08 09:14:52 -08:00
Andrew Boie 818a96d3af userspace: assign thread IDs at build time
Kernel object metadata had an extra data field added recently to
store bounds for stack objects. Use this data field to assign
IDs to thread objects at build time. This has numerous advantages:

* Threads can be granted permissions on kernel objects before the
  thread is initialized. Previously, it was necessary to call
  k_thread_create() with a K_FOREVER delay, assign permissions, then
  start the thread. Permissions are still completely cleared when
  a thread exits.

* No need for runtime logic to manage thread IDs

* Build error if CONFIG_MAX_THREAD_BYTES is set too low

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-03 11:29:23 -07:00
Ramakrishna Pallala 1777c57bec kernel: fix bit clearing logic in _k_thread_group_leave
Fix init_group bit clearing in _k_thread_group_leave().

Fix the _k_object_uninit calling order. Though the order won't
make much difference in this case, it is always good to destroy
or uninitialize in the reverse order of object creation or
initialization.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2017-10-27 10:56:58 -07:00
Andrew Boie e12857aabf kernel: add k_thread_access_grant()
This is a runtime counterpart to K_THREAD_ACCESS_GRANT().
This function takes a thread and a NULL-terminated list of kernel
objects and runs k_object_access_grant() on each of them.
This function doesn't require any special permissions and doesn't
need to become a system call.

__attribute__((sentinel)) was added to warn users if they omit the
required NULL termination.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-18 07:37:38 -07:00
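
Usage sketch (object names borrowed from the static example in the
K_THREAD_ACCESS_GRANT() commit below; they are assumed to exist):

void grant_child_access(k_tid_t child_tid)
{
	/* The object list must end with NULL; __attribute__((sentinel))
	 * makes the compiler warn if it doesn't.
	 */
	k_thread_access_grant(child_tid, &my_sem, &my_mutex, &my_pipe, NULL);
}
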
Andrew Boie 877f82e847 userspace: add K_THREAD_ACCESS_GRANT()
It's possible to declare static threads that start up as K_USER,
but these threads can't do much since they start with permissions on
no kernel objects other than their own thread object.

Rather than do some run-time synchronization to have some other thread
grant the necessary permissions, we introduce macros
to conveniently assign object permissions to these threads when they
are brought up at boot by the kernel. The tables generated here
are constant and live in ROM when possible.

Example usage:

K_THREAD_DEFINE(my_thread, STACK_SIZE, my_thread_entry,
                NULL, NULL, NULL, 0, K_USER, K_NO_WAIT);

K_THREAD_ACCESS_GRANT(my_thread, &my_sem, &my_mutex, &my_pipe);

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-18 07:37:38 -07:00
Andrew Boie c5c104f91e kernel: fix k_thread_stack_t definition
Currently this is defined as a k_thread_stack_t pointer.
However, this isn't correct: stacks are defined as arrays. Extern
references to k_thread_stack_t don't work properly, as the compiler
treats them as pointers to the stack array and not the array itself.

Declaring as an unsized array of k_thread_stack_t doesn't work
well either. The least amount of confusion is to leave out the
pointer/array status completely, use pointers for function prototypes,
and define K_THREAD_STACK_EXTERN() to properly create an extern
reference.

The definitions for all functions and structs that use
k_thread_stack_t need to be updated, but code that uses them should
be unchanged.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-17 08:24:29 -07:00
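
In practice the declarations look roughly like this (sketch):

/* In one C file: define the stack object (an array under the hood). */
K_THREAD_STACK_DEFINE(my_stack, 1024);

/* In a header: reference it without caring whether it is an array or
 * a pointer.
 */
K_THREAD_STACK_EXTERN(my_stack);

/* Function prototypes keep taking a k_thread_stack_t pointer. */
void start_worker(k_thread_stack_t *stack, size_t stack_size);
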
Andrew Boie 662c345cb6 kernel: implement k_thread_create() as a syscall
User threads can only create other nonessential user threads
of equal or lower priority and must have access to the entire
stack area.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 19:02:00 -07:00
Andrew Boie bca15da650 userspace: treat thread stacks as kernel objects
We need to track permissions on stack memory regions like we do
with other kernel objects. We want stacks to live in a memory
area that is outside the scope of memory domain permission
management. We need to be able to track which stacks are in use,
and which stacks may be used by user threads trying to call
k_thread_create().

Some special handling is needed because thread stacks appear as
variously-sized arrays of struct _k_thread_stack_element, which is
just a char. We need the entire array to be considered an object,
but also properly handle arrays of stacks.

Validation of stacks also requires that the bounds of the stack
are not exceeded. Various approaches were considered. Storing
the size in some header region of the stack itself would not allow
the stack to live in 'noinit'. Having a stack object be a data
structure that points to the stack buffer would confound our
current APIs for declaring stacks as arrays or struct members.
In the end, the struct _k_object was extended to store this size.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 19:02:00 -07:00
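
A sketch of the metadata shape this implies (field names and layout are
illustrative; the point is the extra word recording a stack's size):

struct _k_object {
	void *name;                          /* object address */
	u8_t perms[CONFIG_MAX_THREAD_BYTES]; /* per-thread permission bits */
	u8_t type;                           /* K_OBJ_* kind */
	u8_t flags;
	u32_t data;                          /* for stacks: usable size */
};
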
Andrew Boie 04caa679c9 userspace: allow thread IDs to be re-used
It's currently too easy to run out of thread IDs as they
are never re-used on thread exit.

Now the kernel maintains a bitfield of in-use thread IDs,
updated on thread creation and termination. When a thread
exits, the permission bitfield for all kernel objects is
updated to revoke access for that retired thread ID, so that
a new thread re-using that ID will not gain access to objects
that it should not have.

Because of these runtime updates, setting the permission
bitmap to all ones for a "public" object no longer works
properly; a flag is now set for this instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Andrew Boie 885fcd5147 userspace: de-initialize aborted threads
This will allow these thread objects to be re-used.

_mark_thread_as_dead() was removed; it was only being called in one
place.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Leandro Pereira 6f99bdb02a kernel: Provide only one _SYSCALL_HANDLER() macro
Use some preprocessor trickery to automatically deduce the number of
arguments for the various _SYSCALL_HANDLERn() macros.  Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-10-16 13:42:15 -04:00
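
The "trickery" is the usual variadic argument-counting pattern; a
generic sketch (not the literal Zephyr macros):

/* Count the arguments (1..5 here)... */
#define NARGS(...) NARGS_IMPL(__VA_ARGS__, 5, 4, 3, 2, 1)
#define NARGS_IMPL(_1, _2, _3, _4, _5, N, ...) N

/* ...then paste the count onto a base name to pick the per-arity
 * macro, e.g. HANDLER(foo, a, b) expands to HANDLER3(foo, a, b).
 */
#define CAT_(a, b) a##b
#define CAT(a, b) CAT_(a, b)
#define HANDLER(...) CAT(HANDLER, NARGS(__VA_ARGS__))(__VA_ARGS__)
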
Andrew Boie 47f8fd1d4d kernel: add K_INHERIT_PERMS flag
By default, threads are created only having access to their own thread
object and nothing else. This new flag to k_thread_create() gives the
thread access to all objects that the parent had at the time it was
created, with the exception of the parent thread itself.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-13 12:17:13 -07:00
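
Usage sketch (child_entry and the priority are placeholders): the child
starts in user mode and receives every object permission the parent
held at creation time.

K_THREAD_STACK_DEFINE(child_stack, 1024);
static struct k_thread child_thread;

extern void child_entry(void *p1, void *p2, void *p3);  /* placeholder */

void spawn_child(void)
{
	/* K_USER: start in user mode.  K_INHERIT_PERMS: copy the parent's
	 * kernel object permissions (except on the parent thread itself).
	 */
	k_thread_create(&child_thread, child_stack,
			K_THREAD_STACK_SIZEOF(child_stack),
			child_entry, NULL, NULL, NULL,
			K_PRIO_PREEMPT(7), K_USER | K_INHERIT_PERMS,
			K_NO_WAIT);
}
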
Andrew Boie 225e4c0e76 kernel: greatly simplify syscall handlers
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.

- Macros which define the proper prototype based on the number of arguments
- "SIMPLE" variants which create handlers that don't need anything
  other than object verification

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie 37ff5a9bc5 kernel: system call handler cleanup
Use new _SYSCALL_OBJ/_SYSCALL_OBJ_INIT macros.

Use new _SYSCALL_MEMORY_READ/_SYSCALL_MEMORY_WRITE macros.

Some non-obvious checks changed to use _SYSCALL_VERIFY_MSG.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie 468190a795 kernel: convert most thread APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 217017c924 kernel: rename k_object_grant_access()
The Zephyr naming convention is to have the verb last.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-05 12:53:41 -04:00
Andrew Boie 93eb603f48 kernel: expose API when userspace not enabled
We want applications to be able to enable and disable userspace without
changing any code. k_thread_user_mode_enter() now just jumps into the
entry point if CONFIG_USERSPACE is disabled.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-04 13:00:03 -04:00
Andrew Boie 3f091b5dd9 kernel: add common functions for user mode
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 2acfcd6b05 userspace: add thread-level permission tracking
Now creating a thread will assign it a unique, monotonically increasing
id which is used to reference the permission bitfield in the kernel
object metadata.

Stub functions in userspace.c are now implemented.

_new_thread is now wrapped in a common function with pre- and
post-architecture thread initialization tasks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 26d1eb38e6 stack_sentinel: remove check in _new_thread
We already check the stack sentinel for the outgoing thread when we
_Swap(); just leverage that.

The thread state check in _check_stack_sentinel now only exits if the
current thread is a dummy thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:32:00 -07:00
Andrew Boie 9a74a081e5 _thread_entry: don't use _current
The thread may be in user mode when it returns and can't look at
_current. Use k_current_get(), which will be a system call.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:32:00 -07:00