zephyr/tests/benchmarks/sched
Gerard Marull-Paretas 79e6b0e0f6 includes: prefer <zephyr/kernel.h> over <zephyr/zephyr.h>
As of today, <zephyr/zephyr.h> is 100% equivalent to <zephyr/kernel.h>.
This patch therefore proposes including <zephyr/kernel.h> instead of
<zephyr/zephyr.h>, since it makes it clear that you are including the
kernel APIs and (probably) nothing else. <zephyr/zephyr.h> sounds like
a catch-all header, which may be confusing. Most applications need to
include a bunch of other headers to compile anyway, e.g. driver headers
or subsystem headers such as BT, logging, etc.
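
In practice this just means spelling out the kernel header (plus any
subsystem headers) explicitly. A minimal sketch, assuming an
application that needs nothing beyond the kernel APIs:

    /* Preferred: pull in the kernel APIs explicitly. */
    #include <zephyr/kernel.h>

    /* Subsystem headers are added as needed, e.g.: */
    /* #include <zephyr/logging/log.h> */

    void main(void)
    {
            printk("kernel APIs only\n");
    }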

The idea of a catch-all header in Zephyr is probably not feasible
anyway. The reason is that Zephyr is not a library, as e.g. `libpython`
is. Zephyr provides many utilities nowadays: a kernel, drivers,
subsystems, etc., and the list will likely keep growing. A catch-all
header would be massive and difficult to keep up to date. It is also
likely that an application will only build a small subset of Zephyr.
Note that subsystem-level headers may still use a catch-all approach to
make things easier, though.

NOTE: This patch is **NOT** removing the header, just removing its
usage in-tree. I'd advocate for its deprecation (i.e. adding a
#warning to it), but I understand many people will have concerns.
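
Such a deprecation could be a one-line change along these lines (an
illustrative sketch only, not part of this patch):

    /* include/zephyr/zephyr.h */
    #warning "<zephyr/zephyr.h> is deprecated, include <zephyr/kernel.h> instead"
    #include <zephyr/kernel.h>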

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-09-05 16:31:47 +02:00
src
CMakeLists.txt
README.rst
prj.conf
testcase.yaml

README.rst

Scheduler Microbenchmark
########################

This is a scheduler microbenchmark, designed to measure the minimum
latencies (not scaling performance) of specific low-level scheduling
primitives, independent of overhead from application or API
abstractions.  It works very simply: a main thread creates a "partner"
thread at a higher priority; the partner then sleeps using
_pend_curr_irqlock().
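
The setup looks roughly like this (an illustrative sketch, not the
actual src/main.c; the underscore-prefixed calls are kernel-internal
primitives whose exact signatures vary between releases)::

    K_THREAD_STACK_DEFINE(partner_stack, 1024);
    static struct k_thread partner_thread;
    static _wait_q_t waitq;

    static void partner_fn(void *p1, void *p2, void *p3)
    {
        while (true) {
            /* sleep on the wait queue until made ready again */
            _pend_curr_irqlock(irq_lock(), &waitq, K_FOREVER);
        }
    }

    /* Main thread: spawn the partner at a higher (numerically lower)
     * priority; it runs first and pends on the wait queue.
     */
    k_thread_create(&partner_thread, partner_stack,
                    K_THREAD_STACK_SIZEOF(partner_stack),
                    partner_fn, NULL, NULL, NULL,
                    K_PRIO_COOP(0), 0, K_NO_WAIT);

From this initial state: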

1. The main thread calls _unpend_first_thread()
2. The main thread calls _ready_thread()
3. The main thread calls k_yield()
   (the kernel switches to the partner thread)
4. The partner thread then runs and calls _pend_curr_irqlock() again
   (the kernel switches to the main thread)
5. The main thread returns from k_yield()

It then iterates this cycle many times, reporting the timestamp
latencies between each numbered step and for the whole cycle, plus a
running average over all cycles run.
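
A condensed sketch of the main thread's side of the loop, with
timestamps taken from the hardware cycle counter (again illustrative;
N_RUNS is a hypothetical constant, and the underscore-prefixed calls
are kernel internals)::

    uint64_t total = 0;

    for (uint32_t i = 1; i <= N_RUNS; i++) {
        uint32_t t0 = k_cycle_get_32();
        struct k_thread *th = _unpend_first_thread(&waitq);  /* step 1 */
        uint32_t t1 = k_cycle_get_32();
        _ready_thread(th);                                   /* step 2 */
        uint32_t t2 = k_cycle_get_32();
        k_yield();                  /* steps 3-5: partner runs, re-pends */
        uint32_t t3 = k_cycle_get_32();

        total += t3 - t0;
        printk("unpend %u ready %u yield %u cycle %u avg %u\n",
               t1 - t0, t2 - t1, t3 - t2, t3 - t0,
               (uint32_t)(total / i));
    }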