The possibility of passing K_FOREVER as the initial duration argument
to k_timer_start() wasn't being handled, with the result that the
computed value became a zero timeout (effectively treating it as
K_NO_WAIT and firing at the next tick).
This is obviously pathological, but it should still do what the code
says it should and wait forever.
Make k_timer_start(..., K_FOREVER, ...) a noop.
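A sketch of the guard this implies (illustrative, not necessarily the
literal diff; it assumes the k_timeout_t API from this series):
    void z_impl_k_timer_start(struct k_timer *timer, k_timeout_t duration,
                              k_timeout_t period)
    {
        /* A K_FOREVER duration means the timer should never fire. */
        if (K_TIMEOUT_EQ(duration, K_FOREVER)) {
            return;
        }
        /* ... existing duration/period handling continues here ... */
    }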
Fixes #25820
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The period argument of a k_timer needs an offset of one tick from the
value computed in user code (because periods get reset from within the
ISR, see the comment above this code for an explanation). When the
computed tick value was 1, it would become 0. This is actually
perfectly correct as a k_timeout_t to be passed to z_add_timeout().
BUT: to k_timer's API, K_NO_WAIT means "never" (i.e. the same as
K_FOREVER) and not "as soon as possible", so the period timer would
not be reset. This is sort of a wart, but it's the way the API has
been specified forever.
The upshot is that for the case of calling k_timer_start() with a
minimal period argument (i.e. one that produces "one tick"), the
period would be ignored and the timer would act like a one shot. Fix
the clamp so it can't produce K_NO_WAIT.
This also adds a filter for absolute timeouts, which (while that's
sort of a pathological usage) were getting that one tick offset when
it wasn't appropriate.
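A sketch of the adjusted clamp (names follow the conventions in this
series; this is illustrative rather than the literal diff):
    /* Apply the one-tick period offset only to relative, nonzero,
     * non-FOREVER periods, and never let the clamp collapse the
     * value to K_NO_WAIT (which k_timer treats as "never"). */
    if (!K_TIMEOUT_EQ(period, K_FOREVER) && period.ticks != 0 &&
        Z_TICK_ABS(period.ticks) < 0) {
        period.ticks = MAX(period.ticks - 1, 1);
    }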
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add tick-based (i.e. precision resistant) inspection APIs for kernel
timeouts visible via k_timer, k_delayed_work and thread timeouts
(i.e. pended/sleeping threads). These are each available in
"remaining" and "expires" variants returning time values relative to
current time and system start. All have system calls where applicable
(i.e. everywhere but k_delayed_work, which is not a userspace API).
The pre-existing millisecond "remaining_get()" predicates for timer
and delayed work remain, but are expressed in terms of the newer
calls.
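For illustration, the k_timer flavor of the new calls can be used like
this (assuming a timer that has already been started):
    struct k_timer my_timer;
    /* Ticks until the timer fires, relative to now. */
    k_ticks_t left = k_timer_remaining_ticks(&my_timer);
    /* Tick count, relative to system start, at which it will fire. */
    k_ticks_t when = k_timer_expires_ticks(&my_timer);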
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add support for "absolute" timeouts, which are expressed relative to
system uptime instead of deltas from current time. These allow for
more race-resistant code to be written by allowing application code to
do a single timeout computation, once, and then reuse the timeout
value even if the thread wakes up and needs to suspend again later.
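As an illustration (assuming the K_TIMEOUT_ABS_TICKS() constructor and
a semaphore initialized elsewhere), application code can compute one
deadline and keep waiting on it:
    struct k_sem my_sem;
    /* Compute the absolute deadline once... */
    k_timeout_t deadline = K_TIMEOUT_ABS_TICKS(k_uptime_ticks() + 100);
    /* ...then reuse it across wakeups: each wait ends at the same
     * absolute instant instead of drifting by a fresh delta. */
    while (k_sem_take(&my_sem, deadline) == 0) {
        /* handle the event, then wait again until the same deadline */
    }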
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument. Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created. This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.
The existing K_MSEC() et al. macros now return initializers for a
k_timeout_t.
The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.
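For example, code that used to compare a timeout against the integer
constants would now use the predicate (illustrative):
    /* Old style (no longer valid once timeouts are a struct):
     *   if (timeout == K_FOREVER) { ... }
     */
    if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
        /* block indefinitely */
    } else if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
        /* don't block at all */
    }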
Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.
For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided. When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.
Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions. These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig. These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.
k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.
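Illustration of the two call styles:
    k_sleep(K_MSEC(100));   /* takes a k_timeout_t */
    k_msleep(100);          /* millisecond convenience wrapper */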
Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate. Also in queue.c, a similar loop (when
POLL was enabled) was needlessly used to retry the
k_poll() call after a spurious failure. But k_poll() does not fail
spuriously, so the loop was removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Mark the old time conversion APIs deprecated, leave compatibility
macros in place, and replace all usage with the new API.
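As an example of the direction of the change (the exact legacy helper
names vary by call site), conversions now go through the unified
k_<from>_to_<to>_<rounding>() family:
    uint32_t ms = 250;
    uint32_t ticks = k_ms_to_ticks_ceil32(ms);   /* ms -> ticks, round up */
    uint64_t us = k_ticks_to_us_floor64(ticks);  /* ticks -> us, round down */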
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.
This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.
The refactoring strategy used in this commit is detailed in the issue.
This commit introduces the following major changes:
1. Establish a clear boundary between private and public headers by
removing "kernel/include" and "arch/*/include" from the global
include paths. Ideally, only kernel/ and arch/*/ source files should
reference the headers in these directories. If these headers must be
used by a component, these include paths shall be manually added to
the CMakeLists.txt file of the component. This is intended to
discourage applications from including private kernel and arch
headers either knowingly or unknowingly.
- kernel/include/ (PRIVATE)
This directory contains the private headers that provide private
kernel definitions which should not be visible outside the kernel
and arch source code. All public kernel definitions must be added
to an appropriate header located under include/.
- arch/*/include/ (PRIVATE)
This directory contains the private headers that provide private
architecture-specific definitions which should not be visible
outside the arch and kernel source code. All public architecture-
specific definitions must be added to an appropriate header located
under include/arch/*/.
- include/ AND include/sys/ (PUBLIC)
This directory contains the public headers that provide public
kernel definitions which can be referenced by both kernel and
application code.
- include/arch/*/ (PUBLIC)
This directory contains the public headers that provide public
architecture-specific definitions which can be referenced by both
kernel and application code.
2. Split arch_interface.h into "kernel-to-arch interface" and "public
arch interface" divisions.
- kernel/include/kernel_arch_interface.h
* provides private "kernel-to-arch interface" definition.
* includes arch/*/include/kernel_arch_func.h to ensure that the
interface function implementations are always available.
* includes sys/arch_interface.h so that public arch interface
definitions are automatically included when including this file.
- arch/*/include/kernel_arch_func.h
* provides architecture-specific "kernel-to-arch interface"
implementation.
* only the functions that will be used in kernel and arch source
files are defined here.
- include/sys/arch_interface.h
* provides "public arch interface" definition.
* includes include/arch/arch_inlines.h to ensure that the
architecture-specific public inline interface function
implementations are always available.
- include/arch/arch_inlines.h
* includes architecture-specific arch_inlines.h in
include/arch/*/arch_inline.h.
- include/arch/*/arch_inline.h
* provides architecture-specific "public arch interface" inline
function implementation.
* supersedes include/sys/arch_inline.h.
3. Refactor kernel and the existing architecture implementations.
- Remove circular dependency of kernel and arch headers. The
following general rules should be observed:
* Never include any private headers from public headers
* Never include kernel_internal.h in kernel_arch_data.h
* Always include kernel_arch_data.h from kernel_arch_func.h
* Never include kernel.h from kernel_struct.h either directly or
indirectly. Only add the kernel structures that must be referenced
from public arch headers in this file.
- Relocate syscall_handler.h to include/ so it can be used in the
public code. This is necessary because a lot of user-mode public code
references the functions defined in this header.
- Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
necessary to provide architecture-specific thread definition for
'struct k_thread' in kernel.h.
- Remove any private header dependencies from public headers using
the following methods:
* If the dependency is not required, simply omit it
* If dependency is required,
- Relocate a portion of the required dependencies from the
private header to an appropriate public header OR
- Relocate the required private header to make it public.
This commit supersedes #20047, addresses #19666, and fixes#3056.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.
z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().
References from test cases changed to k_is_in_isr().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The callback function has been ignored in z_timeout_init() since the
timer rework in fall 2018. Passing real handlers to it in code is
distracting when they will be overridden by whatever callback is
provided in z_add_timeout().
As this function is an internal API, deprecation is not necessary.
Remove the parameter and change all call sites to drop the argument.
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
The semi-automated API changes weren't checkpatch aware. Fix up
whitespace warnings that snuck into the previous patches. Really this
should be squashed, but that's somewhat difficult given the structure
of the series.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.
This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
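For reference, a hand-written verification function under this scheme
ends up looking roughly like this (sketch based on the description
above, using k_sem_give as the example):
    /* The verifier has the same signature as the impl function; the
     * generated z_mrsh_k_sem_give() unmarshalling code calls it after
     * reassembling any wide arguments from their register pairs. */
    static inline void z_vrfy_k_sem_give(struct k_sem *sem)
    {
        Z_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM));
        z_impl_k_sem_give(sem);
    }
    #include <syscalls/k_sem_give_mrsh.c>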
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Given that the section name and boundary symbols can be inferred from
the struct object name, it makes sense to create an iterator that
abstracts away the access details and reduce the possibility for
mistakes.
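The intent is an iterator of roughly this shape (hypothetical names;
the macro and object type here are assumptions for illustration):
    /* Visit every statically-defined 'struct foo' placed in its
     * dedicated iterable section, without open-coding the section
     * boundary symbols. */
    Z_STRUCT_SECTION_FOREACH(foo, f) {
        handle_foo(f);   /* hypothetical per-object handler */
    }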
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.
Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.
Various generator scripts have also been updated as well as perf,
linker and usb files. These are
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Simple global lock around the timer API. Actually much of this usage
was needless vestigial locking around existing scheduler and
timeout APIs that are now internally synchronized.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch. The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.
Just refactoring. No logic changes.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state. This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.
Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.
Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.
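A sketch of that predicate, assuming the timeout's list node is the
'node' member and using the sys_dnode linked-state helper (the
function name here is illustrative):
    static inline bool timeout_is_inactive(const struct _timeout *to)
    {
        /* Inactive exactly when not linked into the timeout queue;
         * no sentinel dticks value is needed. */
        return !sys_dnode_is_linked(&to->node);
    }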
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
MISRA-C says all declarations of an object or function must use the
same name and qualifiers.
MISRA-C rule 8.3
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
C90 introduced function prototypes, which allow argument types to be
checked against parameter types, though it is not necessary to specify
names for the parameters. MISRA-C requires names for function
prototype parameters; it claims that names can provide useful
information regarding the function interface.
MISRA-C rule 8.2
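For example (illustrative prototypes):
    /* Non-compliant under rule 8.2: the parameter has no name. */
    int _abort_timeout(struct _timeout *);
    /* Compliant: the parameter is named, documenting the interface. */
    int _abort_timeout(struct _timeout *to);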
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version. The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed. Advantages:
* Properly spinlocked and SMP-aware. The earlier timer implementation
relied on only CPU 0 doing timeout work, and on an irq_lock() being
taken before entry (something that was violated in a few spots).
Now any CPU can wake up for an event (or all of them) and everything
works correctly.
* The *_thread_timeout() API is now expressible as a clean wrapping
(just one liners) around the lower-level interface based on function
pointer callbacks. As a result the timeout objects no longer need
to store backpointers to the thread and wait_q and have shrunk by
33%.
* MUCH smaller, to the tune of hundreds of lines of code removed.
* Future proof, in that all operations on the queue are now fronted by
just two entry points (_add_timeout() and z_clock_announce()) which
can easily be augmented with fancier data structures.
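For reference, the delta-list walk described above looks roughly like
this (a conceptual sketch with hypothetical first()/dequeue() helpers,
not the literal z_clock_announce() code):
    struct _timeout *first(void);       /* hypothetical: queue head    */
    void dequeue(struct _timeout *t);   /* hypothetical: unlink head   */
    /* Each queued timeout stores dticks relative to its predecessor,
     * so announcing N elapsed ticks only touches the expired heads. */
    void announce_sketch(s32_t ticks)
    {
        while (first() != NULL && first()->dticks <= ticks) {
            struct _timeout *t = first();
            ticks -= t->dticks;
            dequeue(t);
            t->fn(t);                   /* expiration callback */
        }
        if (first() != NULL) {
            first()->dticks -= ticks;   /* charge the remainder */
        }
    }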
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).
Move it to where it belongs. Also have it return ticks instead of ms
to conform to the scheme in the rest of the timeout API. And rename
it to a more standard Zephyr name.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add the callback parameter to add_timeout(), and remove the thread
argument. Now the "low level" timeout API can be expressed without
reference to threads.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that this is known to be an unused value, remove it from the API.
Note that this caught a few spots where we were passing values (a
non-NULL wait_q with a NULL thread handle) that were always being
ignored before.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Make if statements that test pointers explicitly check whether the
value is NULL or not.
The C standard does not say that the null pointer is the same as a
pointer to memory address 0, and because of this it is good practice
to always compare against the NULL macro.
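Illustration of the pattern being changed (the field here is just an
example):
    /* Before: relies on implicit pointer-to-boolean conversion. */
    if (!timeout->fn) {
        return;
    }
    /* After: explicit comparison against NULL. */
    if (timeout->fn == NULL) {
        return;
    }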
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Ignore the return value of _abort_timeout when there is nothing to
do, either because the condition for returning something other than 0
was already checked, or because it was called during some cleanup.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
There are some cases where there is nothing to do with the
_pend_current_thread() return value (that is, the _Swap() return
value). As MISRA-C requires that all non-void functions have their
return value checked, we explicitly ignore it when there is nothing
to do.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
irq_lock() returns an unsigned int, though several places were using
a signed int. This commit fixes that.
In order to avoid this error happening again, a coccinelle script was
added and can be used to check for violations.
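The corrected pattern, using the standard kernel locking API:
    unsigned int key = irq_lock();   /* was: int key = irq_lock(); */
    /* ... critical section ... */
    irq_unlock(key);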
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs. Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages. Now replacement of wait_q with a different
data structure is much cleaner.
Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro. While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.
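A sketch of the opaque wrapper (field and initializer names here are
illustrative, not necessarily the exact ones in the tree):
    /* _wait_q_t is now a real struct rather than a typedef for
     * sys_dlist_t, so callers must go through the wrapper API and the
     * underlying data structure can be swapped out later. */
    typedef struct {
        sys_dlist_t waitq;
    } _wait_q_t;
    #define _WAIT_Q_INIT(wait_q) \
        { .waitq = SYS_DLIST_STATIC_INIT(&(wait_q)->waitq) }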
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The various macros to do checks in system call handlers all
implicitly would generate a kernel oops if a check failed.
This is undesirable for a few reasons:
* System call handlers that acquire resources in the handler
have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
to the caller instead of just killing the calling thread,
even though the base API doesn't do these checks.
These macros now all return a value, if nonzero is returned
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.
At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.
The macros now use the Z_ notation for private APIs.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons. The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.
So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout(). (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions. We can remove the header too.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps. Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might effect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.
Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.
Break out the _Swap definition (only) into a separate header and use
that instead. Cleaner. Seems not to have any more hidden gotchas.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
_Swap() is defined in nano_internal.h. Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file). A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle. Now nothing sees _Swap() defined
anymore. Put nano_internal.h everywhere it's needed.
Our kernel includes remain a big awful yucky mess. This makes things
more correct but no less ugly. Needs cleanup.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Use some preprocessor trickery to automatically deduce the number of
arguments for the various _SYSCALL_HANDLERn() macros. Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.
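The underlying trick is the usual variadic argument-counting idiom; a
generic illustration (these names are made up, not the actual Zephyr
macros):
    /* Count up to five arguments by letting the padding literals shift
     * into the 'N' slot, then paste the count onto a base name. */
    #define DEMO_NARG_(_1, _2, _3, _4, _5, N, ...) N
    #define DEMO_NARG(...) DEMO_NARG_(__VA_ARGS__, 5, 4, 3, 2, 1, 0)
    #define DEMO_CAT_(a, b) a##b
    #define DEMO_CAT(a, b)  DEMO_CAT_(a, b)
    /* DEMO_HANDLER(x)    expands via DEMO_HANDLER1(x)
     * DEMO_HANDLER(x, y) expands via DEMO_HANDLER2(x, y) */
    #define DEMO_HANDLER(...) \
        DEMO_CAT(DEMO_HANDLER, DEMO_NARG(__VA_ARGS__))(__VA_ARGS__)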
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.
- Macros which define the proper prototype based on number of arguments
- "SIMPLE" variants which create handlers that don't need anything
other than object verification
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Use new _SYSCALL_OBJ/_SYSCALL_OBJ_INIT macros.
Use new _SYSCALL_MEMORY_READ/_SYSCALL_MEMORY_WRITE macros.
Some non-obvious checks changed to use _SYSCALL_VERIFY_MSG.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
All system calls made from userspace which involve pointers to kernel
objects (including device drivers) will need to have those pointers
validated; userspace should never be able to crash the kernel by passing
it garbage.
The actual validation with _k_object_validate() will be in the system
call receiver code, which doesn't exist yet.
- CONFIG_USERSPACE introduced. We are somewhat far away from having an
end-to-end implementation, but at least need a Kconfig symbol to
guard the incoming code with. Formal documentation doesn't exist yet
either, but will appear later down the road once the implementation is
mostly finalized.
- In the memory region for RAM, the data section has been moved last,
past bss and noinit. This ensures that inserting generated tables
with addresses of kernel objects does not change the addresses of
those objects (which would make the table invalid)
- The DWARF debug information in the generated ELF binary is parsed to
fetch the locations of all kernel objects and pass this to gperf to
create a perfect hash table of their memory addresses.
- The generated gperf code doesn't know that we are exclusively working
with memory addresses and uses memory inefficiently. A post-processing
script process_gperf.py adjusts the generated code before it is
compiled to work with pointer values directly and not strings
containing them.
- _k_object_init() calls inserted into the init functions for the set of
kernel object types we are going to support so far
Issue: ZEP-2187
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_timer_start(timer, duration, period) is the API used to start a
timer. Currently the duration parameter accepts only positive
numbers.
But a user may need to do some periodic activity as soon as possible
and start the timer with a duration of 0, so this patch allows 0 as
the minimum value of duration.
With this patch, when the duration value is 0 the timer expiration
handler is called directly instead of submitting the timeout into the
timeout queue.
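The core of the change, simplified (a sketch of the described behavior
rather than the exact diff):
    /* In k_timer_start(): a zero duration means "fire now", so invoke
     * the expiry function directly instead of queueing the first
     * expiration. */
    if (duration == 0 && timer->expiry_fn != NULL) {
        timer->expiry_fn(timer);
    }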
Jira: ZEP-2497
Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?
Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types. This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies. We also convert the PRI printf formatters in the arch
code over to normal formatters.
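The shape of the conversion, for a typical line (illustrative):
    /* Before:
     *   uint32_t count = 0U;
     *   printk("count: %" PRIu32 "\n", count);
     */
    u32_t count = 0U;
    printk("count: %u\n", count);   /* plain formatter, no PRIu32 */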
Jira: ZEP-2051
Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>