Commit Graph

16 Commits

Author SHA1 Message Date
Marek Pieta e87193896a subsys: debug: tracing: Fix thread tracing
This change fixes an issue with thread execution tracing.

Signed-off-by: Marek Pieta <Marek.Pieta@nordicsemi.no>
2018-10-29 22:09:12 -04:00
Andy Ross 9098a45c84 kernel: New timeslicing implementation
Instead of checking every time we hit the low-level context switch
path to see if the new thread has a "partner" with which it needs to
share time, just run the slice timer always and reset it from the
scheduler at the points where it has already decided a switch needs to
happen.  In TICKLESS_KERNEL situations, we pay the cost of extra timer
interrupts at roughly 10 Hz, which is low (note also that this
kind of regular wakeup architecture is required on SMP anyway so the
scheduler can "notice" threads scheduled by other CPUs).  Advantages:

1. Much simpler logic.  Significantly smaller code.  No variance or
   dependence on tickless modes or timer driver (beyond setting a
   simple timeout).

2. No arch-specific assembly integration with _Swap() needed

3. Better performance on many workloads, as the accounting now happens
   at most once per timer interrupt (~5 Hz) and on true rescheduling,
   not on every unrelated context switch and interrupt return.

4. It's SMP-safe.  The previous scheme kept the slice ticks as a
   global variable, which was an unnoticed bug.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
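
The approach above can be pictured with a small, self-contained C
sketch (all names are invented for illustration; this is not the
Zephyr implementation): the slice timeout stays armed at all times,
and only the scheduler's own decision points re-arm it, keeping the
low-level context-switch path untouched.

    /* Illustrative sketch only: a slice timer that is always armed and is
     * re-armed solely from scheduler decisions.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define SLICE_TICKS 10U         /* slice length in ticks (example value) */

    static uint64_t now_ticks;      /* stand-in for the system tick counter */
    static uint64_t slice_deadline; /* absolute tick at which the slice ends */

    /* Called wherever the scheduler has already decided a switch happens. */
    static void reset_time_slice(void)
    {
        slice_deadline = now_ticks + SLICE_TICKS;
    }

    /* Checked from the (always running) timer interrupt. */
    static int slice_expired(void)
    {
        return now_ticks >= slice_deadline;
    }

    int main(void)
    {
        reset_time_slice();
        for (now_ticks = 0; now_ticks < 25; now_ticks++) {
            if (slice_expired()) {
                printf("tick %llu: slice expired, rotate current thread\n",
                       (unsigned long long)now_ticks);
                reset_time_slice();   /* the scheduler decision re-arms it */
            }
        }
        return 0;
    }
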
Flavio Ceolin a7fffa9e00 headers: Fix headers guards
Any word starting with an underscore followed by an uppercase letter
or a second underscore is a reserved identifier according to C99.

We have *many* violations in Zephyr's code; this commit tackles only
the violations caused by header guards. It also takes the opportunity
to normalize them, using the filename in uppercase and replacing the
dot with an underscore, e.g. file.h -> FILE_H.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-17 15:49:26 -04:00
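
For reference, the normalized guard style described above (filename in
uppercase, dot replaced by an underscore, no leading underscore) looks
like this for a hypothetical file.h:

    /* file.h -- the guard is derived from the filename and carries no
     * leading underscore, so it cannot collide with identifiers that
     * C99 reserves for the implementation.
     */
    #ifndef FILE_H
    #define FILE_H

    int file_do_something(void);   /* example declaration */

    #endif /* FILE_H */
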
Flavio Ceolin 8a9ba10c2c kernel: swap: Fix __swap signature
The __swap function was returning -EAGAIN in some cases, though its
return value was declared as unsigned int.

This commit changes the function to return int, since it can return a
negative value and its return value was already being propagated as int.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
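
A minimal illustration of the bug class this fixes (hypothetical
function names, not the kernel's __swap): returning -EAGAIN through an
unsigned return type converts it to a huge positive value, so a '< 0'
error check at the caller can never fire.

    #include <errno.h>
    #include <stdio.h>

    /* Wrong: the negative error code converts to a large unsigned value. */
    static unsigned int swap_unsigned(void)
    {
        return -EAGAIN;
    }

    /* Right: a negative error code survives as a negative int. */
    static int swap_signed(void)
    {
        return -EAGAIN;
    }

    int main(void)
    {
        printf("unsigned return: %u  (a '< 0' check can never be true)\n",
               swap_unsigned());
        printf("signed return:   %d\n", swap_signed());
        return 0;
    }
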
Anas Nashif 483910ab4b systemview: add support natively using tracing hooks
Add needed hooks as a subsystem that can be enabled in any application.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif a2248782a2 kernel: event_logger: remove kernel_event_logger
Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
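
The general shape of such a hook layer can be sketched as follows
(illustrative names, not the actual Zephyr tracing API): the kernel
calls macros that default to no-ops, and a tracing backend enables
itself by providing real definitions, so untraced builds pay nothing.

    /* Illustrative sketch of generic tracing hooks (names invented).
     * Build with -DUSE_PRINTF_TRACING to enable the example backend.
     */
    #include <stdio.h>

    struct thread { const char *name; };

    /* Default hook: a no-op, so there is no overhead without a backend. */
    #define TRACE_THREAD_SWITCHED_IN(t) do { (void)(t); } while (0)

    /* A backend replaces the default definition. */
    #ifdef USE_PRINTF_TRACING
    #undef TRACE_THREAD_SWITCHED_IN
    #define TRACE_THREAD_SWITCHED_IN(t) printf("switched in: %s\n", (t)->name)
    #endif

    static void switch_to(struct thread *t)
    {
        /* ... the context switch itself would happen here ... */
        TRACE_THREAD_SWITCHED_IN(t);
    }

    int main(void)
    {
        struct thread worker = { "worker" };

        switch_to(&worker);
        return 0;
    }
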
Adithya Baglody bb918d85f8 tests: benchmarks: timing_info: Enable benchmarks for xtensa.
This patch provides the support needed to get timing-related
information from Xtensa-based SoCs.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-08-20 06:51:25 -07:00
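
On Xtensa cores configured with the CCOUNT special register, a cycle
timestamp for this kind of benchmark is typically read with an rsr
instruction, roughly as in the sketch below (an illustration of the
general technique, not the code added by this patch; it assumes a core
that provides CCOUNT):

    #include <stdint.h>

    /* Read the Xtensa cycle counter (CCOUNT special register). */
    static inline uint32_t read_ccount(void)
    {
        uint32_t cycles;

        __asm__ volatile ("rsr %0, ccount" : "=r"(cycles));
        return cycles;
    }

    /* Example use: measure the cycle cost of a code region. */
    static uint32_t measure_cycles(void (*fn)(void))
    {
        uint32_t start = read_ccount();

        fn();
        return read_ccount() - start;
    }
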
Andy Ross eace1df539 kernel/sched: Fix SMP scheduling
Recent changes post-scheduler-rewrite broke scheduling on SMP:

The "preempt_ok" feature added to isolate preemption points wasn't
honored in SMP mode.  Fix this by adding a "swap_ok" field to the CPU
record (not the thread), which is set at the same point from within
update_cache().

The "queued" flag wasn't being maintained correctly when swapping away
from _current (it was added back to the queue, but the flag wasn't
set).

Abstract out a "should_preempt()" predicate so SMP and uniprocessor
paths share the same logic, which is distressingly subtle.

There were two places where _Swap() was predicated on
_get_next_ready_thread() != _current.  That's no longer a benign
optimization in SMP, where the former function REMOVES the next thread
from the queue.  Just call _Swap() directly in SMP, which has a
unified C implementation that does this test already.  Don't change
other architectures in case it exposes bugs with _Swap() switching
back to the same thread (it should work, I just don't want to break
anything).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-31 14:02:03 -04:00
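
The shared predicate reads roughly like the sketch below (field and
helper names are invented for illustration, not the actual Zephyr
code): both the SMP and uniprocessor paths ask one function whether
the incoming thread may displace _current.

    #include <stdbool.h>

    struct thread {
        int  prio;          /* lower value = higher priority */
        bool coop;          /* cooperative threads are never preempted */
    };

    struct cpu {
        struct thread *current;
        bool swap_ok;       /* set by the scheduler at valid preemption points */
    };

    bool should_preempt(const struct cpu *cpu, const struct thread *incoming,
                        bool preempt_ok)
    {
        /* Preemption must be allowed either by the call site or by the
         * per-CPU flag recorded when the scheduler last ran.
         */
        if (!preempt_ok && !cpu->swap_ok) {
            return false;
        }

        /* Never preempt a cooperative thread. */
        if (cpu->current->coop) {
            return false;
        }

        /* Switch only for a strictly higher-priority incoming thread. */
        return incoming->prio < cpu->current->prio;
    }
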
Andy Ross 1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
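
The build-time choice above trades a linear scan against a balanced
tree.  A minimal sketch of the "dumb" variant (invented names, not the
kernel's data structures) is just a walk of a singly linked list of
ready threads:

    #include <stddef.h>
    #include <stdio.h>

    struct thread {
        const char *name;
        int prio;                   /* lower value = higher priority */
        struct thread *next;        /* ready-queue link */
    };

    static struct thread *ready_q;  /* head of the ready list */

    static void queue_thread(struct thread *t)
    {
        t->next = ready_q;
        ready_q = t;
    }

    /* Linear scan: small and fast for a few threads; a balanced tree
     * scales better with many threads and priority levels.
     */
    static struct thread *next_ready_thread(void)
    {
        struct thread *best = NULL;

        for (struct thread *t = ready_q; t != NULL; t = t->next) {
            if (best == NULL || t->prio < best->prio) {
                best = t;
            }
        }
        return best;
    }

    int main(void)
    {
        struct thread logger = { "logger", 7, NULL };
        struct thread sensor = { "sensor", 2, NULL };

        queue_thread(&logger);
        queue_thread(&sensor);
        printf("next thread: %s\n", next_ready_thread()->name); /* "sensor" */
        return 0;
    }
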
Andy Ross c0ba11b281 kernel: Don't _arch_switch() to yourself
The SMP testing missed the case where _Swap() decides to return back
into the _current.  Obviously there is no valid switch handle for the
running thread into which we can restore, and everything blows up.
(What happened is that the new scheduler code opened up a spot where
k_thread_priority_set() does a _reschedule() unconditionally and
doesn't check whether it's needed, as the old code did.)

But that isn't incorrect!  It's entirely possible that _Swap() may
find that no thread is runnable except _current (due, for example, to
another CPU racing the other thread you expected to run off to sleep).
Don't blow up; check for this case and return as a no-op.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
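
The fix amounts to an early-out when the scheduler hands back the
thread that is already running; a schematic version (invented names,
external helpers only declared) looks like this:

    /* Illustrative sketch: _Swap()-style logic must tolerate being asked
     * to switch to the thread that is already running.
     */
    struct thread;

    extern struct thread *current_thread(void);
    extern struct thread *pick_next_thread(void);
    extern void arch_switch(struct thread *to);

    int do_swap(void)
    {
        struct thread *next = pick_next_thread();

        if (next == current_thread()) {
            /* Nothing else is runnable.  There is no valid switch handle
             * for the running thread, so switching would blow up; return
             * as a no-op instead.
             */
            return 0;
        }

        arch_switch(next);
        return 0;
    }
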
Andy Ross 15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
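
The per-thread recursion counter can be pictured with the sketch below
(invented names; the interrupt masking itself is omitted for brevity):
each thread carries its own count, and only the outermost lock and
unlock touch the global spinlock, which avoids the false "already
held" conclusion that a single global counter produces on SMP.

    #include <stdatomic.h>

    struct thread {
        unsigned int irq_lock_count;    /* recursion depth, per thread */
    };

    extern struct thread *current_thread(void);

    static atomic_flag global_lock = ATOMIC_FLAG_INIT;

    void smp_irq_lock(void)
    {
        struct thread *t = current_thread();

        if (t->irq_lock_count++ == 0) {
            /* Outermost acquisition: take the global spinlock. */
            while (atomic_flag_test_and_set_explicit(&global_lock,
                                                     memory_order_acquire)) {
                /* spin */
            }
        }
    }

    void smp_irq_unlock(void)
    {
        struct thread *t = current_thread();

        if (--t->irq_lock_count == 0) {
            /* Outermost release: drop the global spinlock. */
            atomic_flag_clear_explicit(&global_lock, memory_order_release);
        }
    }
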
Andy Ross 28192fd8ea kernel/kswap.h: Hook event logger from switch-based _Swap
The new generic _Swap() forgot the event logger hook

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2724fd11cb kernel: SMP-aware scheduler
The scheduler needs a few tweaks to work in SMP mode:

1. The "cache" field just doesn't work.  With more than one CPU,
   caching the highest priority thread isn't useful as you may need N
   of them at any given time before another thread is returned to the
   scheduler.  You could recalculate it at every change, but that
   provides no performance benefit.  Remove.

2. The "bitmask" designed to prevent the need to individually check
   priorities is likewise dropped.  This could work, but in fact on
   our only current SMP system and with current K_NUM_PRIORITIES
   values it provides no real benefit.

3. The individual threads now have a "current cpu" and "active" flag
   so that the choice of the next thread to run can correctly skip
   threads that are active on other CPUs.

The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.

Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API.  This should be a very
early candidate for lock granularity attention!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
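
The per-thread "current cpu" and "active" fields change the selection
loop roughly as in this sketch (invented names, not the kernel's
internals): the best-priority ready thread that is not already running
on another CPU wins.

    #include <stdbool.h>
    #include <stddef.h>

    struct thread {
        int prio;                   /* lower value = higher priority */
        bool active;                /* currently running on some CPU */
        int cpu;                    /* which CPU, when active */
        struct thread *next;        /* ready-queue link */
    };

    struct thread *next_ready_thread(struct thread *ready_q, int this_cpu)
    {
        struct thread *best = NULL;

        for (struct thread *t = ready_q; t != NULL; t = t->next) {
            /* Skip threads that are active on a different CPU right now. */
            if (t->active && t->cpu != this_cpu) {
                continue;
            }
            if (best == NULL || t->prio < best->prio) {
                best = t;
            }
        }
        return best;
    }
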
Andy Ross e694656345 kernel: Move per-cpu _kernel_t fields into separate struct
When in SMP mode, the nested/irq_stack/current fields are specific to
the current CPU and not to the kernel as a whole, so we need an array
of these.  Place them in a _cpu_t struct and implement a
_arch_curr_cpu() function to retrieve the pointer.

When not in SMP mode, the original per-kernel fields are unioned with
the first _cpu_t record.  This permits compatibility with legacy
assembly on other platforms.  Long term, all users, including
uniprocessor architectures, should be updated to use the new scheme.

Fundamentally this is just renaming: the structure layout and runtime
code do not change on any existing platforms and won't until someone
defines a second CPU.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
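
Schematically (simplified, invented field layout, not the actual
Zephyr structures), the arrangement looks like the struct below: the
per-CPU fields move into a _cpu_t record, and on non-SMP builds the
legacy names occupy the same offsets via a union, so existing assembly
keeps working.

    #include <stdint.h>

    struct k_thread;                /* opaque here */

    typedef struct _cpu {
        uint32_t nested;            /* interrupt nesting count */
        char *irq_stack;            /* interrupt stack pointer */
        struct k_thread *current;   /* thread running on this CPU */
    } _cpu_t;

    struct _kernel {
        union {
            /* Legacy view: same offsets the old assembly expects. */
            struct {
                uint32_t nested;
                char *irq_stack;
                struct k_thread *current;
            };
            /* New view: an array of per-CPU records. */
            _cpu_t cpus[1];
        };
    };

    /* Hypothetical accessor; an SMP build would index by the running CPU. */
    static inline _cpu_t *arch_curr_cpu(struct _kernel *k)
    {
        return &k->cpus[0];
    }
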
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed _Swap defined, because breaking
an include cycle meant _Swap was no longer pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
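
The pattern is the usual one of carving out a minimal header that
other headers can include without dragging in the rest of the
dependency chain; a generic sketch with hypothetical names:

    /* example_kswap.h -- a minimal header exposing only the swap
     * primitive, shown to illustrate breaking an include cycle.
     */
    #ifndef EXAMPLE_KSWAP_H
    #define EXAMPLE_KSWAP_H

    struct k_thread;    /* forward declaration avoids the thread headers */

    /* Declaration only; the definition lives in the kernel sources.
     * Headers that previously included a large internal header just to
     * see this symbol can include this file instead.
     */
    int example_swap(unsigned int key);

    #endif /* EXAMPLE_KSWAP_H */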