Commit Graph

90 Commits

Author SHA1 Message Date
Andy Ross 5d203523b6 kernel/timeout: Eliminate wait_q parameters from API
Now that this is known to be an unused value, remove it from the API.
Note that this caught a few spots where we were passing values (a
non-NULL wait_q with a NULL thread handle) that were always being
ignored before.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
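
A hedged before/after sketch of the change above, assuming _add_thread_timeout() is one of the affected calls; the exact prototype and type names are an assumption.

    /* before: void _add_thread_timeout(struct k_thread *thread,
     *                                  _wait_q_t *wait_q, s32_t ticks);
     * (the wait_q argument was always ignored)
     */
    void _add_thread_timeout(struct k_thread *thread, s32_t ticks);
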
Andy Ross 25863549be kernel: Remove clock_always_on control from k_busy_wait()
This feature was a useless noop based on mistaken API understanding.

The idea seems to have been that k_busy_wait() included guards to
ensure "clock_always_on" was true duing the loop, presumably because
the original author was afraid that "turning the clock off" would
affect the operation of k_cycle_get_32().

Then later someone came around and "optimized" this for Quark SE,
where the cycle counter is the RTC and unrelated to the timer driver
used by the clock_always_on feature.  (Except even there it presumably
should have been done at the SoC level and not just in the C1000
devboard -- note that Arduino 101 never would have gotten this).

But it was all a mistake: "clock_always_on" has nothing to do with
en/disabling the system cycle timer (which never happens when the
system is active, that's a feature of idle), it's a control over the
delivery of timer interrupts.  And needless to say we don't care about
timer interrupts when we're spinning on a cycle counter.

Yank the whole mess.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 220d4f8347 sys_clock.h: Make "global variable" APIs into proper functions
The existing API defined sys_clock_{hw_cycles,ticks}_per_sec as simple
"variables" to be shared, except that they were only backed by real
storage in certain modes (the HPET driver, basically); everywhere else
they were build-time constants.

Properly, these should be an API defined by the timer driver (which
controls those rates) and consumed by the clock subsystem.  So give
them function syntax as a stepping stone to get there.

Note that this also removes the deprecated variable
_sys_clock_us_per_tick rather than give it the same treatment.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
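
A minimal sketch of the function-syntax accessors described above, assuming a build where the rates are Kconfig constants; the trivial bodies are an illustration, not the driver-provided implementation the commit is working toward.

    static inline int sys_clock_hw_cycles_per_sec(void)
    {
            return CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC;
    }

    static inline int sys_clock_ticks_per_sec(void)
    {
            return CONFIG_SYS_CLOCK_TICKS_PER_SEC;
    }
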
Anas Nashif c77c043071 kernel: remove deprecated k_thread_cancel
Remove deprecated function k_thread_cancel. We now use k_thread_abort.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-10-09 13:58:01 -04:00
Flavio Ceolin 18af4c6299 kernel: Fix overflow test problem introduced in 92ea2f9
The builtin function __builtin_umul_overflow returns a boolean and
should not be checked as an integer.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-04 05:20:29 -07:00
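
A hedged illustration of the corrected usage (variable names are hypothetical): __builtin_umul_overflow() returns true when the multiplication overflows, so its result is tested as a boolean rather than compared with an integer.

    unsigned int total;

    if (__builtin_umul_overflow(count, item_size, &total)) {
            return -EINVAL; /* overflow: reject the request */
    }
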
Flavio Ceolin ea716bf023 kernel: Explicitly comparing pointer with NULL
MISRA-C rule: 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin 92ea2f9189 kernel: Calling Z_SYSCALL_VERIFY_MSG with boolean expressions
Explicitly form a boolean expression when calling the
Z_SYSCALL_VERIFY_MSG macro.

MISRA-C rule: 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin d8837c6888 kernel: Using boolean expression on ASSERT macros
The ASSERT macro expects a boolean expression; make it
explicit.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
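
A small sketch of the pattern shared by the three MISRA-C rule 14.4 commits above (the pointer name is hypothetical):

    /* implicit, non-boolean controlling expression */
    __ASSERT(buf, "buf is NULL");

    /* explicit boolean expression, as these commits make it */
    __ASSERT(buf != NULL, "buf is NULL");
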
Anas Nashif 57554055d2 kernel: add a new API for setting thread names
Add k_thread_name_set() and enable thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-27 08:58:55 +05:30
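
A hedged usage sketch of the new API; the thread, stack, entry function, and name are illustrative, and THREAD_MONITOR must be enabled as noted above.

    K_THREAD_STACK_DEFINE(worker_stack, 1024);
    static struct k_thread worker_thread;

    static void worker_entry(void *p1, void *p2, void *p3)
    {
            /* thread body */
    }

    void start_worker(void)
    {
            k_tid_t tid = k_thread_create(&worker_thread, worker_stack,
                                          K_THREAD_STACK_SIZEOF(worker_stack),
                                          worker_entry, NULL, NULL, NULL,
                                          K_PRIO_PREEMPT(7), 0, K_NO_WAIT);

            k_thread_name_set(tid, "worker");
    }
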
Paul Sokolovsky 2df1829c55 kernel: thread: Typo fixes in comment
Typo fixes in comment to k_thread_foreach().

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-09-26 17:46:23 +05:30
Ioannis Glaropoulos 66192618a7 arch: arm: Minor style and typo fixes in inline comments
Several style and typo fixes in inline comments of arm kernel
files and thread.c.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-09-24 04:56:34 -07:00
Flavio Ceolin c806ac3d36 kernel: Compare pointers with NULL in while statements
Make while statements that use pointers explicitly check whether
the value is NULL or not.

The C standard does not guarantee that the null pointer is the same
as a pointer to memory address 0, so it is good practice to always
compare against the NULL macro.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Flavio Ceolin b3d9202704 kernel: Using boolean constants instead of 0 or 1
MISRA C requires that every controlling expression of an if or while
statement have a boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Flavio Ceolin 8f72f245bd kernel: Explicitly check _abort_thread_timeout
A lot of the time this API is called during some cleanup even if the
timeout was not set, to keep the code simpler. In these cases it is not
necessary to check the return value, so add a cast to acknowledge it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Flavio Ceolin 5884c7f54b kernel: Explicitly ignoring _Swap return
Explicitly ignore the _Swap return value where no handling is
needed or there is nothing to do.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
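
A brief sketch of the acknowledgment pattern used in the two commits above (the surrounding context is hypothetical): the return values are deliberately discarded with a cast.

    (void)_abort_thread_timeout(thread); /* the timeout may not have been set */
    (void)_Swap(key);                    /* nothing to do with the result */
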
Anas Nashif a2248782a2 kernel: event_logger: remove kernel_event_logger
Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
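
A hedged sketch of the hook style this introduces: empty macros by default, which a tracing backend is free to redefine. The exact hook names shown are an assumption.

    #ifndef CONFIG_TRACING
    #define sys_trace_thread_switched_in()
    #define sys_trace_thread_switched_out()
    #define sys_trace_isr_enter()
    #define sys_trace_isr_exit()
    #endif
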
Daniel Leung fc182430c0 kernel: userspace: reserve stack space to store local data
This enables reserving a little space at the top of the stack to store
data local to the thread when CONFIG_USERSPACE is enabled. The first
customer of this is errno.

Note that ARC, due to how it lays out the user stack and
privilege stack, sets the pointer itself rather than
relying on the common code path.

Fixes: #9067

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2018-08-17 09:40:52 -07:00
Flavio Ceolin 0866d18d03 irq: Fix irq_lock api usage
irq_lock() returns an unsigned int, yet several places were using a
signed int. This commit fixes that.

In order to avoid this error happening again, a coccinelle script was
added that can be used to check for violations.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
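
The corrected pattern: irq_lock() returns an unsigned int key, which is stored in an unsigned variable and handed back to irq_unlock().

    unsigned int key; /* not int */

    key = irq_lock();
    /* critical section */
    irq_unlock(key);
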
Andrew Boie 7f4d006959 kernel: fix errno access for user mode
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.

In user mode, we cannot simply read/write the thread struct.
We do not have a thread-local storage mechanism, so for now
use the lowest address of the thread stack to store this
value, since this is guaranteed to be read/writable by
a user thread.

The downside of this approach is potential stack corruption
if the stack pointer goes down this far but does not exceed
the location, since a fault won't be generated in this case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-19 16:44:59 -07:00
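
A heavily hedged sketch of the mechanism described above; the accessor name is hypothetical. The idea is that errno expands to a dereference of a per-thread pointer, which for user threads refers to the lowest (always accessible) address of the thread stack.

    /* hypothetical accessor; the real kernel function name may differ */
    int *_thread_errno_ptr(void);

    #define errno (*_thread_errno_ptr())
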
Ramakrishna Pallala e74d85d816 kernel: thread: Simplify k_thread_foreach conditional inclusion
Simplify k_thread_foreach API conditional inclusion by putting
the whole logic under CONFIG_THREAD_MONITOR config option.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-07-18 15:42:28 -04:00
Spoorthi K 47a9f9a617 kernel: thread: Exclude deprecated function from lcov
Do not consider deprecated function for code coverage

Signed-off-by: Spoorthi K <spoorthi.k@intel.com>
2018-07-18 13:26:18 -04:00
Andrew Boie 2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
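
A sketch of the idea, with field names as assumptions: the entry function and its arguments are stored directly in struct k_thread when CONFIG_THREAD_MONITOR is enabled, instead of being recovered from an arch-specific initial stack layout.

    struct __thread_entry {
            k_thread_entry_t pEntry;
            void *parameter1;
            void *parameter2;
            void *parameter3;
    };

    /* inside struct k_thread */
    #if defined(CONFIG_THREAD_MONITOR)
            struct __thread_entry entry;
            struct k_thread *next_thread;
    #endif
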
Michael Scott f669a08eea kernel: thread: fix _THREAD_DUMMY check in _check_stack_sentinel()
All other checks of thread_state use a bitwise & operator in case
there are other flags attached to the thread_state.  Let's fix
the only outlier in _check_stack_sentinel() to be the same.

Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
2018-06-01 09:03:48 -04:00
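
A small sketch of the fix (the helper is hypothetical): test the bit instead of comparing the whole thread_state for equality, since other state flags may be set at the same time.

    static bool is_dummy_thread(struct k_thread *thread)
    {
            /* was: thread->base.thread_state == _THREAD_DUMMY */
            return (thread->base.thread_state & _THREAD_DUMMY) != 0;
    }
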
Andrew Boie 538754cb28 kernel: handle early entropy issues
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 19:38:06 -07:00
Andy Ross 4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
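
A hedged usage sketch (thread handles and numbers are illustrative): the deadline is a delta from "now", and it only breaks ties between threads of otherwise equal priority.

    /* both threads run at the same priority; thread_a's earlier deadline
     * makes the scheduler prefer it
     */
    k_thread_deadline_set(thread_a, 1000);
    k_thread_deadline_set(thread_b, 5000);
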
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers all
implicitly would generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value, if nonzero is returned
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
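
A hedged handler fragment following the description above; the macro spellings follow the commit text and may differ from the tree, and the object checked is illustrative. A failed check now returns nonzero, so the handler chooses the policy; wrapping it in K_OOPS() keeps the old oops-on-failure behavior.

    /* old policy, preserved explicitly */
    K_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM));

    /* alternative now possible: propagate an error to the caller */
    if (Z_SYSCALL_OBJ(sem, K_OBJ_SEM) != 0) {
            return -EINVAL;
    }
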
Andrew Boie 92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
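
A hedged sketch of how a pool gets associated with a thread; the pool geometry and thread handle are illustrative.

    K_MEM_POOL_DEFINE(thread_pool, 64, 1024, 4, 4);

    /* allocations the kernel makes on behalf of this thread now come
     * from thread_pool instead of the system heap
     */
    k_thread_resource_pool_assign(child_tid, &thread_pool);
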
Adithya Baglody 5133cf56aa kernel: thread: Move out the function _thread_entry() to lib
_thread_entry() is not really a part of the kernel but a part of
Zephyr's C runtime support library. Hence, move just the
function to lib/thread_entry.c.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-05-15 17:48:18 +03:00
Ramakrishna Pallala 110b8e42ff kernel: Add k_thread_foreach API
Add k_thread_foreach API to iterate over all the threads in
the system.

This API can be used for debugging threads in a multi-threaded
environment, to dump and analyze various thread parameters like
priority, state, stack address, etc.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
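
A small usage sketch (the callback body is illustrative):

    static void thread_dump_cb(const struct k_thread *thread, void *user_data)
    {
            ARG_UNUSED(user_data);
            printk("thread %p: prio %d\n", (void *)thread, thread->base.prio);
    }

    void dump_all_threads(void)
    {
            k_thread_foreach(thread_dump_cb, NULL);
    }
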
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout().  (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Leandro Pereira 541c3cb18b kernel: sched: Fix validation of priority levels
A priority value cannot be simultaneously higher than the maximum
possible value and smaller than the minimum value.  Rewrite the
_VALID_PRIO() macro as a function so that if either of these
checks fails, the priority is considered invalid.

Coverity-CID: 182584
Coverity-CID: 182585
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-21 08:39:42 -07:00
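
A simplified, hedged sketch of the corrected check, using the public application priority limits (the real function may also special-case the idle thread): failing either bound on its own makes the priority invalid.

    static bool is_valid_prio(int prio)
    {
            /* numerically lower values are higher priority */
            if (prio < K_HIGHEST_APPLICATION_THREAD_PRIO) {
                    return false;
            }
            if (prio > K_LOWEST_APPLICATION_THREAD_PRIO) {
                    return false;
            }
            return true;
    }
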
Kumar Gala 79d151f81d kernel: Fix building of k_thread_create
commit ec7ecf7900 moved some code around
such that the total_size variable is used regardless of how
CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT is set.  So move the
declaration of total_size outside of the ifndef block so things build
properly.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-04-10 22:26:01 -04:00
Andrew Boie ec7ecf7900 kernel: restore stack size check
The handler for k_thread_create() wasn't verifying that the
provided stack size actually fits in the requested stack object
on systems that enforce power-of-two size/alignment for stacks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-10 10:58:12 -04:00
Leandro Pereira 1ccd715577 kernel: thread: Consider stack pointer fuzz underflow
When randomizing the stack pointer on thread creation
(CONFIG_STACK_POINTER_RANDOM), the fuzz amount might exceed the stack
size, causing an underflow.

Ensure that this will never underflow by only adjusting the stack size
if there's enough space.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-03 12:32:56 -07:00
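
A hedged sketch of the guard (variable names are illustrative): the random adjustment is applied only when it actually fits within the stack.

    size_t fuzz = sys_rand32_get() % CONFIG_STACK_POINTER_RANDOM;

    if (fuzz < stack_size) {
            stack_size -= fuzz; /* cannot underflow */
    }
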
Andy Ross 85bc0a3fe6 kernel: Cleanup, unify _add_thread_to_ready_q() and _ready_thread()
The scheduler exposed two APIs to do the same thing:
_add_thread_to_ready_q() was a low level primitive that in most cases
was wrapped by _ready_thread(), which also (1) checks that the thread
_is_ready() or exits, (2) flags the thread as "started" to handle the
case of a thread running for the first time out of a waitq timeout,
and (3) signals a logger event.

As it turns out, all existing usage was already checking case #1.
Case #2 can be better handled in the timeout resume path instead of on
every call.  And case #3 was probably wrong to have been skipping
anyway (there were paths that could make a thread runnable without
logging).

Now _add_thread_to_ready_q() is an internal scheduler API, as it
probably always should have been.

This also moves some asserts from the inline _ready_thread() wrapper
to the underlying true function for code size reasons, otherwise the
extra use of the inline added by this patch blows past code size
limits on Quark D2000.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andrew Boie 83752c1cfe kernel: introduce initial stack randomization
This is a component of address space layout randomization that we can
implement even though we have a physical address space.

Support for upward-growing stacks is omitted for now; it's not
currently used on any of our current or planned architectures.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-03-16 16:25:22 -07:00
Andy Ross 245b54ed56 kernel/include: Missed nano_internal.h -> kernel_internal.h spots
Update heading naming given recent rename

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Ramakrishna Pallala 3f2f1223ac kernel: thread: Remove unused _k_thread_single_start()
Remove unused _k_thread_single_start() as this logic is
now moved to _impl_k_thread_start().

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-02-13 17:26:21 -05:00
Andy Gross 1c047c9bef arm: userspace: Add ARM userspace infrastructure
This patch adds support for userspace on ARM architectures:
arch-specific calls for transitioning threads to user mode,
system calls, and associated handlers.

Signed-off-by: Andy Gross <andy.gross@linaro.org>
2018-02-13 12:42:37 -08:00
Adithya Baglody 10db82bfed kernel: thread: Repeated thread abort crashes.
When CONFIG_THREAD_MONITOR is enabled, repeated thread abort
calls on a dead thread will cause _thread_monitor_exit to
crash.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-01-24 18:18:53 +05:30
Anas Nashif 94d034dd5e kernel: support custom k_busy_wait()
Support architectures implementing their own k_busy_wait.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-12-27 14:16:08 -05:00
Anas Nashif fb4eecaf5f kernel: threads: remove thread groups
We removed this feature when we moved to the unified kernel. Those
functions existed to support migration from the old kernel and can go
now.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-12-09 08:48:51 -06:00
Andrew Boie a7fedb7073 _setup_new_thread: fix crash on ARM
On arches which have custom logic to do the initial swap into
the main thread, _current may be NULL. This happens when
instantiating the idle and main threads.

If this is the case, skip checks for memory domain and object
permission inheritance; in this case there is never anything to
inherit.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-13 16:25:40 -08:00
Andrew Boie 0bf9d33602 mem_domain: inherit from parent thread
New threads inherit any memory domain membership held by the
parent thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-08 09:14:52 -08:00
Andrew Boie 818a96d3af userspace: assign thread IDs at build time
Kernel object metadata had an extra data field added recently to
store bounds for stack objects. Use this data field to assign
IDs to thread objects at build time. This has numerous advantages:

* Threads can be granted permissions on kernel objects before the
  thread is initialized. Previously, it was necessary to call
  k_thread_create() with a K_FOREVER delay, assign permissions, then
  start the thread. Permissions are still completely cleared when
  a thread exits.

* No need for runtime logic to manage thread IDs

* Build error if CONFIG_MAX_THREAD_BYTES is set too low

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-03 11:29:23 -07:00
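
For reference, a hedged sketch of the K_FOREVER workaround this change makes unnecessary (the thread, stack, semaphore, and entry function are hypothetical and assumed defined); it still works, but permissions can now also be granted before the thread is ever initialized.

    K_THREAD_STACK_DEFINE(child_stack, 1024);
    static struct k_thread child_thread;
    K_SEM_DEFINE(child_sem, 0, 1);

    void spawn_child(void)
    {
            /* create suspended, grant access, then start */
            k_tid_t tid = k_thread_create(&child_thread, child_stack,
                                          K_THREAD_STACK_SIZEOF(child_stack),
                                          child_entry, NULL, NULL, NULL,
                                          K_PRIO_PREEMPT(5), K_USER, K_FOREVER);

            k_object_access_grant(&child_sem, tid);
            k_thread_start(tid);
    }
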