Commit Graph

37 Commits

Author SHA1 Message Date
Andy Ross 4f911e192f kernel: Add missing include
These files were using z_thread_malloc() without including
kernel_internal.h.  On existing architectures that works due to
transitive includes, but x86_64 has a thinner include layer and
doesn't do it for us.  Include the files required for the APIs we use.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Flavio Ceolin 76b3518ce6 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the controlling expression of an if statement
have essentially Boolean type.

MISRA-C rule 14.4
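
An illustrative sketch of the kind of change (not taken from the actual
diff; the names are made up):

    int pending = get_pending();  /* hypothetical */

    /* before: relies on implicit conversion to boolean */
    if (pending) {
            handle_pending();
    }

    /* after: the controlling expression is essentially Boolean */
    if (pending != 0) {
            handle_pending();
    }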

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin ac14685211 kernel: Delimiting the scope of some variables
According to MISRA-C, an object should be defined at block scope if
it is used by only a single function.

MISRA-C rule 8.9
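
A minimal sketch of the pattern (names are illustrative, not from the
actual diff):

    /* before: file-scope object used only by process_events() */
    static int event_count;

    void process_events(void)
    {
            event_count++;
    }

    /* after: scope limited to the single function that uses it */
    void process_events(void)
    {
            static int event_count;

            event_count++;
    }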

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Mark Ruvald Pedersen d67096da05 portability: Avoid void* arithmetic, which is a GNU extension
Under GNU C, sizeof(void) = 1. This commit merely makes the byte type an
explicit u8.

Pointer arithmetic over void types is:
 * A GNU C extension
 * Not supported by Clang
 * Illegal across all ISO C standards

See also: https://gcc.gnu.org/onlinedocs/gcc/Pointer-Arith.html
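
An illustrative sketch of the kind of change (the names are made up):

    void *buf = get_rx_buffer();   /* hypothetical */

    /* before: arithmetic on void *, a GNU extension */
    void *next = buf + offset;

    /* after: arithmetic on an explicit byte-sized type */
    u8_t *next = (u8_t *)buf + offset;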

Signed-off-by: Mark Ruvald Pedersen <mped@oticon.com>
2018-09-28 07:57:28 +05:30
Flavio Ceolin ea716bf023 kernel: Explicitly comparing pointer with NULL
MISRA-C rule: 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin c806ac3d36 kernel: Compare pointers with NULL in while statements
Make while statements that test pointers explicitly check whether
the value is NULL or not.

The C standard does not say that the null pointer is the same as a
pointer to memory address 0, so it is good practice to always compare
against the NULL macro.
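
For example (a sketch; the helper names are made up):

    struct k_thread *waiter;

    /* before: the pointer is tested implicitly */
    while ((waiter = dequeue_waiter(&wait_q))) {
            ready_thread(waiter);
    }

    /* after: explicit comparison against NULL */
    while ((waiter = dequeue_waiter(&wait_q)) != NULL) {
            ready_thread(waiter);
    }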

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Flavio Ceolin 4218d5f8f0 kernel: Make if statements have essentially Boolean type
Make if statements that test pointers explicitly check whether the value
is NULL or not.

The C standard does not say that the null pointer is the same as a
pointer to memory address 0, so it is good practice to always compare
against the NULL macro.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Flavio Ceolin 0a4478434e kernel: Checking function returns
Check the return value of some scattered functions across the kernel.
MISRA-C requires that all non-void functions have their return value
checked; in some cases, though, there is nothing to do beyond
acknowledging it.
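
For example (a sketch):

    /* before: the return value is silently dropped */
    k_sem_take(&sem, K_FOREVER);

    /* after: explicitly acknowledge that the result is ignored;
     * with K_FOREVER the call cannot fail anyway
     */
    (void)k_sem_take(&sem, K_FOREVER);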

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Flavio Ceolin 1663ca8590 kernel: Ignore _pend_current_thread return in some cases
There are some cases in which there is nothing to do with the
_pend_current_thread() return value (which is the _Swap() return value).

As MISRA-C requires that all non-void functions have their
return value checked, we are explicitly ignoring it when there is
nothing to do.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Daniel Leung 1c6d202ee8 kernel: pipes: fix k_pipe_block_put() when not enough space
If k_pipe_block_put() is called and the pipe does not have enough
space to accommodate all the data in the memory pool, the subsequent
get operation will cause a CPU fault. The CPU fault is caused by
the timeout struct in the dummy thread not being initialized and
thus the scheduler will read bad memory. After fixing this,
another issue came up where the get operation would stall with
k_pipe_block_put() in the same situation. This is due to the async
descriptor not being set up correctly, so fix this too.

This was discovered when debugging #9273.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2018-08-29 15:57:28 -04:00
Andy Ross 3ce9c84ba8 kernel: Wait queues aren't dlists anymore
These assertions snuck through in crossed pull requests.  There's a
specific API for _wait_q_t now; you can't hit the list directly
(because it might be a tree).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross 1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacing wait_q with a different
data structure is much cleaner.
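
A rough sketch of the resulting shape (simplified; not the exact
definitions in the tree):

    /* _wait_q_t is an opaque struct wrapping the list ... */
    typedef struct {
            sys_dlist_t waitq;
    } _wait_q_t;

    /* ... and callers use small wrappers instead of raw dlist calls */
    static inline void _waitq_init(_wait_q_t *w)
    {
            sys_dlist_init(&w->waitq);
    }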

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers would all
implicitly generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value; if nonzero is returned,
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.
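
In a handler this looks roughly like the following (a sketch; treat the
macro names as placeholders for the ones actually introduced):

    /* the check macro itself only reports failure... */
    if (Z_SYSCALL_VERIFY_MSG(len <= MAX_LEN, "len too large") != 0) {
            /* free anything acquired so far, or return an error code */
            return -EINVAL;
    }

    /* ...and the old "oops on failure" policy is an explicit wrapper */
    K_OOPS(Z_SYSCALL_VERIFY_MSG(len <= MAX_LEN, "len too large"));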

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andrew Boie 44fe81228d kernel: pipes: add k_pipe_alloc_init()
User mode can't be trusted to provide the kernel buffers for
internal use. The syscall for k_pipe_init() has been removed
in favor of a new API to draw the buffer memory from the
calling thread's resource pool.

K_PIPE_DEFINE() now properly places the allocated buffer in
kernel memory.
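
A sketch of the resulting usage (assuming the call takes the pipe object
and the desired buffer size and returns 0 on success):

    struct k_pipe pipe;

    /* buffer comes from the calling thread's resource pool */
    int rc = k_pipe_alloc_init(&pipe, 256);

    if (rc != 0) {
            /* the resource pool could not satisfy the request */
    }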

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout() (along with identical changes for
_unpend_first_thread()). This saves code bytes and simplifies the
scheduler surface area for future synchronization work.
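
In effect (a simplified sketch of the resulting relationship):

    void _unpend_thread(struct k_thread *thread)
    {
            _unpend_thread_no_timeout(thread);
            _abort_thread_timeout(thread);
    }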

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 0447a73f6c kernel: include cleanup
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions.  We can remove the header too.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross e0a572beeb kernel: Refactor, unifying _pend_current_thread() + _Swap() idiom
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps.  Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
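
The pattern being folded together looks roughly like this (a sketch; the
exact argument lists differ):

    /* before: pend the current thread, then swap, as two steps */
    _pend_thread(_current, &obj->wait_q, timeout);
    return _Swap(key);

    /* after: one scheduler call that pends and swaps */
    return _pend_current_thread(key, &obj->wait_q, timeout);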

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...
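
The typical call-site change is roughly (sketch):

    /* before: policy open-coded at each IPC call site */
    if (_must_switch_threads() && !_is_in_isr()) {
            _Swap(key);
    } else {
            irq_unlock(key);
    }

    /* after */
    _reschedule_noyield(key);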

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Leandro Pereira a1ae8453f7 kernel: Name of static functions should not begin with an underscore
Names that begin with an underscore are reserved by the C standard.
This patch does not change names of functions defined and implemented
in header files.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-03-10 08:39:10 -05:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Leandro Pereira 6f99bdb02a kernel: Provide only one _SYSCALL_HANDLER() macro
Use some preprocessor trickery to automatically deduce the number of
arguments for the various _SYSCALL_HANDLERn() macros.  Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.
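
The trickery is the usual argument-counting pattern, roughly
(simplified; not the actual macro bodies):

    /* expands to the name in the slot matching the argument count */
    #define _SYSCALL_PICK(_1, _2, _3, _4, NAME, ...) NAME

    /* dispatch to _SYSCALL_HANDLER0..3 based on how many arguments
     * follow the handler name
     */
    #define _SYSCALL_HANDLER(...) \
            _SYSCALL_PICK(__VA_ARGS__, _SYSCALL_HANDLER3, \
                          _SYSCALL_HANDLER2, _SYSCALL_HANDLER1, \
                          _SYSCALL_HANDLER0)(__VA_ARGS__)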

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-10-16 13:42:15 -04:00
Andrew Boie 225e4c0e76 kernel: greatly simplify syscall handlers
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.

- Macros which define the proper prototype based on number of arguments
- "SIMPLE" variants which create handlers that don't need anything
  other than object verification

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie 37ff5a9bc5 kernel: system call handler cleanup
Use new _SYSCALL_OBJ/_SYSCALL_OBJ_INIT macros.

Use new _SYSCALL_MEMORY_READ/_SYSCALL_MEMORY_WRITE macros.

Some non-obvious checks changed to use _SYSCALL_VERIFY_MSG.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie b9a0578777 kernel: convert pipe APIs to system calls
k_pipe_block_put() will be done in another patch; we need to design
handling for the k_mem_block object.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 945af95f42 kernel: introduce object validation mechanism
All system calls made from userspace which involve pointers to kernel
objects (including device drivers) will need to have those pointers
validated; userspace should never be able to crash the kernel by passing
it garbage.

The actual validation with _k_object_validate() will be in the system
call receiver code, which doesn't exist yet.
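
Conceptually, the receiver-side check will look something like this (a
hypothetical sketch; the signature and error handling are not defined
by this patch):

    /* reject the syscall if the pointer is not a known, initialized
     * kernel object of the expected type
     */
    if (_k_object_validate(pipe, K_OBJ_PIPE) != 0) {
            /* refuse the request rather than crash the kernel */
    }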

- CONFIG_USERSPACE introduced. We are somewhat far away from having an
  end-to-end implementation, but at least need a Kconfig symbol to
  guard the incoming code with. Formal documentation doesn't exist yet
  either, but will appear later down the road once the implementation is
  mostly finalized.

- In the memory region for RAM, the data section has been moved last,
  past bss and noinit. This ensures that inserting generated tables
  with addresses of kernel objects does not change the addresses of
  those objects (which would make the table invalid)

- The DWARF debug information in the generated ELF binary is parsed to
  fetch the locations of all kernel objects and pass this to gperf to
  create a perfect hash table of their memory addresses.

- The generated gperf code doesn't know that we are exclusively working
  with memory addresses and uses memory inefficiently. A post-processing
  script process_gperf.py adjusts the generated code before it is
  compiled to work with pointer values directly and not strings
  containing them.

- _k_object_init() calls inserted into the init functions for the set of
  kernel object types we are going to support so far

Issue: ZEP-2187
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:33:33 -07:00
Anas Nashif 397d29db42 linker: move all linker headers to include/linker
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-06-18 09:24:04 -05:00
Maciek Borzecki 059544d1ae kernel: make sure that CONFIG_OBJECT_TRACING structs are properly ifdef'ed
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?
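
The fix is to guard the definitions the same way the declarations are
guarded, e.g. (sketch for kernel/timer.c):

    #ifdef CONFIG_OBJECT_TRACING
    struct k_timer *_trace_list_k_timer;
    #endif /* CONFIG_OBJECT_TRACING */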

Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Andy Ross 73cb9586ce k_mem_pool: Complete rework
This patch amounts to a mostly complete rewrite of the k_mem_pool
allocator, which had been the source of historical complaints vs. the
one easily available in newlib.  The basic design of the allocator is
unchanged (it's still a 4-way buddy allocator), but the implementation
has made different choices throughout.  Major changes:

Space efficiency: The old implementation required ~2.66 bytes per
"smallest block" in overhead, plus 16 bytes per log4 "level" of the
allocation tree, plus a global tracking struct of 32 bytes and a very
surprising 12 byte overhead (in struct k_mem_block) per active
allocation on top of the returned data pointer.  This new allocator
uses a simple bit array as the only per-block storage and places the
free list into the freed blocks themselves, requiring only ~1.33 bits
per smallest block, 12 bytes per level, 32 bytes globally, and only 4
bytes of per-allocation bookkeeping.  And it puts more of the generated
tree into BSS, slightly reducing binary sizes for non-trivial pool
sizes (even as the code size itself has increased a tiny bit).

IRQ safe: atomic operations on the store have been cut down to be at
most "4 bit sets and dlist operations" (i.e. a few dozen
instructions), reducing latency significantly and allowing us to lock
against interrupts cleanly from all APIs.  Allocations and frees can
be done from ISRs now without limitation (well, obviously you can't
sleep, so "timeout" must be K_NO_WAIT).

Deterministic performance: there is no more "defragmentation" step
that must be manually managed.  Block coalescing is done synchronously
at free time and takes constant time (strictly log4(num_levels)), as
the detection of four free "partner bits" is just a simple shift and
mask operation.
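
The "partner bits" detection is cheap precisely because it is just a
shift and a mask over the per-level bit array, roughly (illustrative
only, not the real helper):

    /* true if every block in the 4-block group containing block i is free */
    static bool partners_free(const u32_t *bits, int i)
    {
            int base = i & ~0x3;              /* first block of the group */
            u32_t mask = 0xfU << (base & 31); /* its four bits in the word */

            return (bits[base / 32] & mask) == mask;
    }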

Cleaner behavior with odd sizes: The old code assumed that the
specified maximum size would be a power of four multiple of the
minimum size, making use of non-standard buffer sizes problematic.
This implementation re-aligns the sub-blocks at each level and can
handle situations where alignment restrictions mean fewer than 4x will
be available.  If you want precise layout control, you can still
specify the sizes rigorously.  It just doesn't break if you don't.

More portable: the original implementation made use of GNU assembler
macros embedded inline within C __asm__ statements.  Not all
toolchains are actually backed by a GNU assembler even when they
support the GNU assembly syntax.  This is pure C, albeit with some
hairy macros to expand the compile-time-computed values.

Related changes that had to be rolled into this patch for bisectability:

* The new allocator has a firm minimum block size of 8 bytes (to store
  the dlist_node_t).  It will "work" with smaller requested min_size
  values, but obviously makes no firm promises about layout or how
  many will be available.  Unfortunately many of the tests were
  written with very small 4-byte minimum sizes and to assume exactly
  how many they could allocate.  Bump the sizes to match the allocator
  minimum.

* The mbox and pipes API made use of the internals of k_mem_block and
  had to be ported to the new scheme.  Blocks no longer store a
  backpointer to the pool that allocated them (it's an integer ID in a
  bitfield), so if you want to "nullify" them you have to use the
  data pointer.

* test_mbox_api had a bug where it was prematurely freeing k_mem_blocks
  that it sent through the mailbox.  This worked in the old allocator
  because the memory wouldn't be touched when freed, but now we stuff
  list pointers in there and the bug was exposed.

* Remove test_mpool_options: the options (related to defragmentation
  behavior) tested no longer exist.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2017-05-13 14:39:41 -04:00
Kumar Gala cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.
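
For example (illustrative):

    /* before: C99 <stdint.h> names */
    uint32_t flags;
    int64_t uptime;

    /* after: Zephyr's integer types */
    u32_t flags;
    s64_t uptime;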

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Benjamin Walsh a8978aba8f kernel: rename thread states symbols
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.

Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
David B. Kinder ac74d8b652 license: Replace Apache boilerplate with SPDX tag
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.
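
The replacement header looks like this (example):

    /*
     * Copyright (c) 2016 Intel Corporation
     *
     * SPDX-License-Identifier: Apache-2.0
     */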

Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.

Manually updated subsys/logging/sys_log.c, which had a malformed
header in the original file.  Also cleaned up several cases that already
had an SPDX tag, where we either got a duplicate or missed updating it.

Jira: ZEP-1457

Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-01-19 03:50:58 +00:00
Benjamin Walsh f955476559 kernel/arch: optimize memory use of some thread fields
Some thread fields were 32 bits wide even though they do not come close
to using that full range of values. They are changed to 8-bit fields
instead.

- prio can fit in one byte, limiting the priorities range to -128 to 127

- recursive scheduler locking can be limited to 255; a rollover results
  most probably from a logic error

- flags are split into execution flags and thread states; 8 bits is
  enough for each of them currently, with at worst two states and four
  flags to spare (on x86, on other archs, there are six flags to spare)

Doing this saves 8 bytes per stack. It also sets up an upcoming
enhancement for checking whether the current thread is preemptible on
interrupt exit.
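
Roughly, in struct terms (a sketch; not the exact field names):

    /* before: 32 bits each */
    int32_t prio;
    uint32_t sched_locked;
    uint32_t flags;

    /* after: 8 bits each, with flags split in two */
    int8_t prio;            /* priorities limited to -128..127 */
    uint8_t sched_locked;   /* recursion count capped at 255 */
    uint8_t execution_flags;
    uint8_t thread_state;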

Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:24 +00:00
Anas Nashif 2f203c2f92 tracing: rename CONFIG_DEBUG_TRACING_KERNEL_OBJECTS
Use a short name for this option CONFIG_OBJECT_TRACING.

Change-Id: Id27de7ef9ca299492b6b7d2324d9f5bcf8059a31
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Anas Nashif d687a95611 kernel: move kernel code to kernel/ directly
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.

Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00