This test was just wrong. If the current thread did not race with any
others during the allocation process, then the result will be false
because that was already detected earlier in the function. If we did race,
then sure: it might be true now if someone snuck in and freed a block.
But so what? We already have the block we want to break. The
behavior in the code as written was to early-exit from the break loop,
returning a buffer that was larger than the one requested (though
otherwise benign -- we wouldn't leak, just waste memory). No idea
what I was thinking.
Thanks to Du Quanwen for the diagnosis.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Subsequent patches will set this guard page as unmapped,
triggering a page fault on access. If this is due to
stack overflow, a double fault will be triggered,
which we are now capable of handling with a switch to
a known good stack.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Adds common and Kinetis-specific adc device tree properties, and updates
all Kinetis SoC and board dts files to include adc nodes.
Jira: ZEP-1396
Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
Introduce a configurable boot delay option (defaulting to none) that
happens right after printing a boot delay banner, before calling
main() in kernel/init.c:_main(), before taking timestamps for _main()
and once all the infrastructure is in place. Also move the boot banner
to happen after this delay.
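A sketch of the result in kernel/init.c:_main(), assuming a Kconfig
symbol CONFIG_BOOT_DELAY holding the delay in milliseconds (names here
are illustrative, not necessarily the final ones):

#if defined(CONFIG_BOOT_DELAY) && (CONFIG_BOOT_DELAY > 0)
    printk("***** delaying boot %dms (per build configuration) *****\n",
           CONFIG_BOOT_DELAY);
    k_busy_wait(CONFIG_BOOT_DELAY * USEC_PER_MSEC);
#endif
    PRINT_BOOT_BANNER();    /* banner now prints after the delay */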
The rationale for this is that some boards boot so fast that they
print test case output to the serial port before the system monitoring
that port is able to read from it.
This happens on MCUs whose serial port is embedded in a USB connection
that is also used to power the MCU board. When powering it on by
powering the USB port, it takes the host system some time to detect
the USB connection, enumerate the serial port, configure it, and
start reading from it. By then, the MCU may have already printed its
output to the serial port.
While it is possible to press a reset button manually, on automated
setups this adds a lot of overhead and cabling, or modifications to
the MCU, that are easier (and cheaper) to avoid with this delay. Other
options (like using a separate serial line) might not be possible or
would add a lot of cabling and cost, plus extra build configuration.
Change-Id: I2f4d1ba356de6cefa19b4ef5c9f19f87885d4dfd
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
Commit 1bc2fdc70 ("dts: arm: STM32 boards use DT to configure I2C")
added a new Kconfig option, HAS_DTS_I2C, which should be set when the
target supports configuration of I2C peripherals via Device Tree.
Currently, STM32 targets select this. However, the fact that
HAS_DTS_I2C has no default causes a prompt when building Zephyr
on other targets with DTS. To avoid this and allow builds to complete
as usual, have HAS_DTS_I2C default to n.
Signed-off-by: Marti Bolivar <marti.bolivar@linaro.org>
Upcoming memory protection features will be placing some additional
constraints on kernel objects:
- They need to reside in memory owned by the kernel and not the
application
- Certain kernel object validation schemes will require some run-time
initialization of all kernel objects before they can be used.
Per Ben these initializer macros were never intended to be public. It is
not forbidden to use them, but doing so requires care: the memory being
initialized must reside in kernel space, and extra runtime
initialization steps may need to be performed before they are fully
usable as kernel objects. In particular, kernel subsystems or drivers
whose objects are already in kernel memory may still need to use these
macros if they define kernel objects as members of a larger data
structure.
It is intended that application developers instead use the
K_<object>_DEFINE macros, which will automatically put the object in
the right memory and add it to a section which can be iterated over at
boot to complete initialization.
There was no K_WORK_DEFINE() macro for creating struct k_work
objects; this is now added.
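For example, an application can now simply write:

static void my_work_handler(struct k_work *work)
{
    /* process the work item */
}

K_WORK_DEFINE(my_work, my_work_handler);

instead of declaring a struct k_work and initializing it by hand.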
k_poll_event and k_poll_signal are intended to be instantiated from
application memory and have not been changed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Configure I2C using DT for the following STM32 boards:
disco_l475_iot1
nucleo_f401re
96b_carbon
olimexino_stm32
Signed-off-by: Yannis Damigos <giannis.damigos@gmail.com>
Applications will have their own BSS and data sections which
will need to be additionally copied.
This covers the common C implementation of these functions.
Arches which implement their own optimized versions will need
to be updated.
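The common C implementation is essentially the following (linker
symbol and Kconfig names follow the usual Zephyr conventions and are
illustrative here):

#include <string.h>

extern char __bss_start[], __bss_end[];
extern char __app_bss_start[], __app_bss_end[];

void _bss_zero(void)
{
    (void)memset(__bss_start, 0, __bss_end - __bss_start);
#ifdef CONFIG_APPLICATION_MEMORY
    /* applications now get their own BSS, zeroed as well */
    (void)memset(__app_bss_start, 0, __app_bss_end - __app_bss_start);
#endif
}

_data_copy() is extended the same way for the application data section.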
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
1. Changed _tsc_read() to k_cycle_get_32(). Thus reading the
time stamp will be agnostic of the architecture used.
2. Changed the variable names from *_tsc to *_time_stamp.
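Measurements then take the form (a sketch; variable names illustrative):

u32_t time_stamp = k_cycle_get_32();
/* ... code under measurement ... */
u32_t cycles_spent = k_cycle_get_32() - time_stamp;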
JIRA: ZEP-1426
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
The existing __stack decorator is not flexible enough for upcoming
thread stack memory protection scenarios. Wrap the entire thing in
a declaration macro abstraction instead, which can be implemented
on a per-arch or per-SOC basis.
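Assuming the macro takes the K_THREAD_STACK_DEFINE() shape (the name
here is an assumption), a stack declaration changes from:

static char __stack my_stack[STACK_SIZE];

to:

K_THREAD_STACK_DEFINE(my_stack, STACK_SIZE);

which each arch or SoC can expand to place the stack in the proper
section with the proper alignment.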
Issue: ZEP-2185
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
One of the stack sentinel policies was to check the sentinel
any time a cooperative context switch is done (i.e., _Swap is
called).
This was done by adding a hook to _check_stack_sentinel in
every arch's __swap function.
This way is cleaner: we just have the hook in one inline
function rather than implementing it in several different assembly
dialects.
The check upon interrupt is now made unconditionally rather than
checking whether we are calling __swap, since the check is now only
done on a cooperative _Swap(). The interrupt is always serviced
first.
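In sketch form, the one inline function now looks like:

static inline unsigned int _Swap(unsigned int key)
{
    _check_stack_sentinel();
    return __swap(key);
}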
Issue: ZEP-2244
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The kernel tracks time slice usage with the _time_slice_elapsed global.
Every time the timer interrupt goes off and the timer driver calls
_nano_sys_clock_tick_announce() with the elapsed time, this is added to
_time_slice_elapsed. If it exceeds the total time slice, the thread is
moved to the back of the queue for that priority level and
_time_slice_elapsed is reset to zero.
In a non-tickless kernel, this is the only time _time_slice_elapsed is
reset. If a thread uses up a partial time slice and then cooperatively
switches to another thread, the next thread will inherit the remaining
time slice, preventing it from running as long as it ought to.
There does exist code to properly reset the elapsed count, but it was
only compiled in a tickless kernel. Now it is built any time
CONFIG_TIMESLICING is enabled.
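The reset itself is trivial; schematically (helper name hypothetical):

#ifdef CONFIG_TIMESLICING
static void _reset_time_slice(void)
{
    _time_slice_elapsed = 0;
}
#endif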
Issue: ZEP-2107
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Fixes sparse warning:
<snip>/zephyr/kernel/sched.c:368:6: warning: symbol '_dump_ready_q' was not declared. Should it be static?
Change-Id: I156e89f1d74178bbd99cc25e532da544c7ebee60
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
Fixes sparse warnings:
<snip>/zephyr/kernel/timer.c:15:16: warning: symbol '_trace_list_k_timer' was not declared. Should it be static?
<snip>/zephyr/kernel/sem.c:32:14: warning: symbol '_trace_list_k_sem' was not declared. Should it be static?
<snip>/zephyr/kernel/stack.c:24:16: warning: symbol '_trace_list_k_stack' was not declared. Should it be static?
<snip>/zephyr/kernel/queue.c:27:16: warning: symbol '_trace_list_k_queue' was not declared. Should it be static?
<snip>/zephyr/kernel/pipes.c:40:15: warning: symbol '_trace_list_k_pipe' was not declared. Should it be static?
<snip>/zephyr/kernel/mutex.c:46:16: warning: symbol '_trace_list_k_mutex' was not declared. Should it be static?
<snip>/zephyr/kernel/msg_q.c:26:15: warning: symbol '_trace_list_k_msgq' was not declared. Should it be static?
<snip>/zephyr/kernel/mem_slab.c:20:19: warning: symbol '_trace_list_k_mem_slab' was not declared. Should it be static?
<snip>/zephyr/kernel/mailbox.c:53:15: warning: symbol '_trace_list_k_mbox' was not declared. Should it be static?
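The fix is the same in each case, e.g. for timer.c (declaration shown
schematically):

-struct k_timer *_trace_list_k_timer;
+static struct k_timer *_trace_list_k_timer;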
Change-Id: I42d55aea9855b9c1dd560852ca033c9a19f1ac21
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
Fixes sparse warning:
CHECK <snip>/zephyr/kernel/thread.c
<snip>/zephyr/kernel/thread.c:184:20: error: symbol '_thread_entry' redeclared with different type (originally declared at <snip>/zephyr/kernel/include/nano_internal.h:43) - different modifiers
CC kernel/thread.o
Change-Id: I2223493cdf97c811c661773f8fd430e6c00cbaa0
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
This places a sentinel value at the lowest 4 bytes of a stack
memory region and checks it at various intervals, including when
servicing interrupts or context switching.
This is implemented on all arches except ARC, which supports stack
bounds checking directly in hardware.
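Schematically (the sentinel value and field names are illustrative):

#define STACK_SENTINEL 0xF0F0F0F0

void _check_stack_sentinel(void)
{
    u32_t *p = (u32_t *)_current->stack_info.start;

    if (*p != STACK_SENTINEL) {
        /* corrupted: report a fatal stack overflow */
    }
}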
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The initial dummy thread context used for the initial __swap to
the main thread at early kernel initialization was not marked as a dummy
thread as it ought to be.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This patch amounts to a mostly complete rewrite of the k_mem_pool
allocator, which had been the source of historical complaints vs. the
one easily available in newlib. The basic design of the allocator is
unchanged (it's still a 4-way buddy allocator), but the implementation
has made different choices throughout. Major changes:
Space efficiency: The old implementation required ~2.66 bytes per
"smallest block" in overhead, plus 16 bytes per log4 "level" of the
allocation tree, plus a global tracking struct of 32 bytes and a very
surprising 12 byte overhead (in struct k_mem_block) per active
allocation on top of the returned data pointer. This new allocator
uses a simple bit array as the only per-block storage and places the
free list into the freed blocks themselves, requiring only ~1.33 bits
per smallest block, 12 bytes per level, 32 bytes globally and only 4
bytes of per-allocation bookkeeping. And it puts more of the generated
tree into BSS, slightly reducing binary sizes for non-trivial pool
sizes (even as the code size itself has increased a tiny bit).
IRQ safe: atomic operations on the store have been cut down to be at
most "4 bit sets and dlist operations" (i.e. a few dozen
instructions), reducing latency significantly and allowing us to lock
against interrupts cleanly from all APIs. Allocations and frees can
be done from ISRs now without limitation (well, obviously you can't
sleep, so "timeout" must be K_NO_WAIT).
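For example, from an ISR (my_pool and size assumed defined elsewhere):

struct k_mem_block block;

if (k_mem_pool_alloc(&my_pool, &block, size, K_NO_WAIT) == 0) {
    /* use block.data ... */
    k_mem_pool_free(&block);
}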
Deterministic performance: there is no more "defragmentation" step
that must be manually managed. Block coalescing is done synchronously
at free time and takes constant time (strictly log4(num_levels)), as
the detection of four free "partner bits" is just a simple shift and
mask operation.
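Schematically, for a block whose index in the bit array is bit
(within one 32-bit word; illustrative only):

u32_t mask = 0xf << (bit & ~3);  /* the four buddies sharing a parent */

if ((bits & mask) == mask) {
    /* all four partners free: coalesce into the parent block */
}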
Cleaner behavior with odd sizes: the old code assumed that the
specified maximum size would be a power of four multiple of the
minimum size, making use of non-standard buffer sizes problematic.
This implementation re-aligns the sub-blocks at each level and can
handle situations where alignment restrictions mean fewer than 4x will
be available. If you want precise layout control, you can still
specify the sizes rigorously. It just doesn't break if you don't.
More portable: the original implementation made use of GNU assembler
macros embedded inline within C __asm__ statements. Not all
toolchains are actually backed by a GNU assembler even when they
support the GNU assembly syntax. This is pure C, albeit with some
hairy macros to expand the compile-time-computed values.
Related changes that had to be rolled into this patch for bisectability:
* The new allocator has a firm minimum block size of 8 bytes (to store
the dlist_node_t). It will "work" with smaller requested min_size
values, but obviously makes no firm promises about layout or how
many will be available. Unfortunately many of the tests were
written with very small 4-byte minimum sizes and to assume exactly
how many they could allocate. Bump the sizes to match the allocator
minimum.
* The mbox and pipes API made use of the internals of k_mem_block and
had to be ported to the new scheme. Blocks no longer store a
backpointer to the pool that allocated them (it's an integer ID in a
bitfield), so if you want to "nullify" them you have to use the
data pointer.
* test_mbox_api had a bug where it was prematurely freeing k_mem_blocks
that it sent through the mailbox. This worked in the old allocator
because the memory wouldn't be touched when freed, but now we stuff
list pointers in there and the bug was exposed.
* Remove test_mpool_options: the options (related to defragmentation
behavior) tested no longer exist.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Unlike k_thread_spawn(), the struct k_thread can live anywhere and not
in the thread's stack region. This will be useful for memory protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.
This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.
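A sketch of the resulting usage, assuming the public API takes a
k_thread_create()-style shape (treat the name and exact signature as
assumptions):

struct k_thread my_thread_data;  /* can live anywhere, e.g. kernel memory */
K_THREAD_STACK_DEFINE(my_stack, 1024);

k_tid_t tid = k_thread_create(&my_thread_data, my_stack, 1024,
                              entry_fn, NULL, NULL, NULL,
                              5, 0, K_NO_WAIT);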
By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.
Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Newlib names this function __errno(), so if we want Zephyr to work
with Newlib seamlessly, it's better to just follow Newlib's naming
convention for Zephyr's own minimal libc.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Currently, a queue/fifo getter chooses how long to wait for an
element. But there are scenarios where the putter knows better, and
there should be a way to expire the getter's timeout to make it run
again. The k_queue_cancel_wait() and k_fifo_cancel_wait() functions
do just that. They cause the corresponding *_get() functions to return
NULL, as if the timeout expired on the getter's side (even with
K_FOREVER).
This can be used to signal out of band conditions from putter to
getter, e.g. end of processing, error, configuration change, etc.
A specific event would be communicated to getter by other means
(e.g. using existing shared context structures).
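Usage sketch:

/* getter side */
void *item = k_fifo_get(&my_fifo, K_FOREVER);
if (item == NULL) {
    /* putter called k_fifo_cancel_wait(): out-of-band condition */
}

/* putter side */
k_fifo_cancel_wait(&my_fifo);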
Without this call, achieving the same effect would require e.g.
calling k_fifo_put() with a pointer to a special sentinel memory
structure - such a structure would need to be allocated somewhere
and somehow, and the getter would need to distinguish it from normal
data items. Having cancel_wait() functions offers an elegant
alternative. From this perspective, these calls can be seen as
an equivalent to e.g. k_fifo_put(fifo, NULL), except that such a
call won't work in practice.
Change-Id: I47b7f690dc325a80943082bcf5345c41649e7024
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Fix misspellings in Kconfig files that would show up in configuration
documentation and screens.
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Adds event based scheduling logic to the kernel. Updates
management of timeouts, timers, idling etc. based on
time tracked at events rather than periodic ticks. Provides
interfaces for timers to announce and get next timer expiry
based on kernel scheduling decisions involving time slicing
of threads, timeouts and idling. Uses wall time units instead
of ticks in all scheduling activities.
The implementation involves changes in the following areas
1. Management of time in wall units like ms/us instead of ticks
The existing implementation already had an option to configure the
number of ticks in a second. The new implementation builds on
top of that feature and provides an option to set the size of the
scheduling granularity to milliseconds or microseconds. This
allows most of the current implementation to be reused. Due to
this re-use and co-existence with the tick based kernel, the names
of variables may contain the word "tick". However, in the
tickless kernel implementation, it represents the currently
configured time unit, which would be milliseconds or
microseconds. The APIs that take time as a parameter are not
impacted and they continue to pass time in milliseconds.
2. Timers would not be programmed in periodic mode
generating ticks. Instead they would be programmed in one
shot mode to generate events at the time the kernel scheduler
needs to gain control for its scheduling activities like
timers, timeouts, time slicing, idling etc.
3. The scheduler provides interfaces that the timer drivers
use to announce elapsed time and get the next time the scheduler
needs a timer event. It is possible that the scheduler may not
need another timer event, in which case the system would wait
for a non-timer event to wake it up if it is idling.
4. New APIs are defined to be implemented by timer drivers. Also
they need to handle timer events differently. These changes
have been done in the HPET timer driver. In future, other timers
that support the tickless kernel should implement these APIs as
well. These APIs re-program the timer, and update and announce
elapsed time (see the sketch after this list).
5. The Philosopher and timer_api applications have been enabled to
test the tickless kernel. Separate configuration files are created
which define the necessary CONFIG flags. Run these apps using the
following command:
make pristine && make BOARD=qemu_x86 CONF_FILE=prj_tickless.conf qemu
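The sketch referenced in point 4 above; the rough shape of the
scheduler/timer-driver interface (names are assumptions based on the
description, not necessarily the final ones):

/* timer driver -> scheduler: announce elapsed time */
void _nano_sys_clock_tick_announce(s32_t delta);

/* scheduler -> timer driver: when must the next timer event fire? */
s32_t _get_next_timeout_expiry(void);

/* timer driver: program a one-shot event that far in the future */
void _set_time(u32_t time);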
Jira: ZEP-339 ZEP-1946 ZEP-948
Change-Id: I7d950c31bf1ff929a9066fad42c2f0559a2e5983
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Future tickless kernel patches will be inserting some
code before the call to swap. To enable this, this patch creates
a macro named as the current _Swap, which first calls
the tickless kernel code and then calls the real __swap().
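Roughly (hook name hypothetical):

#define _Swap(key) (_tickless_pre_swap_hook(), __swap(key))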
Jira: ZEP-339
Change-Id: Id778bfcee4f88982c958fcf22d7f04deb4bd572f
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Historically, space for struct k_thread was always carved out of the
thread's stack region. However, we want more control on where this data
will reside; in memory protection scenarios the stack may only be used
for actual stack data and nothing else.
On some platforms (particularly ARM), including kernel_arch_data.h from
the toplevel kernel.h exposes intractable circular dependency issues.
We create a new per-arch header "kernel_arch_thread.h" with very limited
scope; it only defines the three data structures necessary to instantiate
the arch-specific bits of a struct k_thread.
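Each arch's kernel_arch_thread.h then only provides, in sketch form
(struct names follow common Zephyr conventions; treat them as
assumptions):

struct _caller_saved {
    /* often empty: volatile registers live on the stack */
};

struct _callee_saved {
    u32_t sp;    /* stack pointer plus arch callee-saved registers */
};

struct _thread_arch {
    u32_t flags; /* miscellaneous per-arch thread state */
};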
Change-Id: I3a55b4ed4270512e58cf671f327bb033ad7f4a4f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This patch adds the stack information into the k_thread data structure.
The information will be set when creating a new thread (_new_thread)
and will be used by the scheduling process.
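In sketch form (field names assumed):

struct _thread_stack_info {
    u32_t start;    /* stack start address */
    u32_t size;     /* stack size in bytes */
};

/* new member of struct k_thread: */
struct _thread_stack_info stack_info;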
Change-Id: Ibe79fe92a9ef8bce27bf8616d8e0c878508c267d
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
This adds a new event type to the kernel event logger that tracks
thread-related events: being added to the ready queue, pending a
thread, and exiting a thread.
It's the only event type that contains "subevents" and thus has a
non-void parameter in its respective _sys_k_event_logger_*()
function. Luckily, unlike other events (such as IRQs
and thread switching), these functions are called from
platform-agnostic places, so there's no need to worry about changing
the assembly guts.
This is the first patch in a series adding support for better real-time
profiling of Zephyr applications.
Jira: ZEP-1463
Change-Id: I6d63607ba347f7a9cac3d016fef8f5a0a830e267
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Unlike assertions, these APIs are active at all times. The kernel will
treat these errors in the same way as fatal CPU exceptions. Ultimately,
the policy of what to do with these errors is implemented in
_SysFatalErrorHandler.
If the architecture supports it, a real CPU exception can be triggered
which will provide a complete register dump and PC value when the
problem occurs. This will provide more helpful information than a fake
exception stack frame (_default_esf) passed to the arch-specific exception
handling code.
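Assuming the APIs take the k_oops()/k_panic() form (names are an
assumption here), usage looks like:

if (buf == NULL) {
    /* unrecoverable: treated like a fatal CPU exception */
    k_panic();
}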
Issue: ZEP-843
Change-Id: I8f136905c05bb84772e1c5ed53b8e920d24eb6fd
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
For exceptions where we are just going to abort the current thread, we
need to exit handler mode properly so that PendSV can run and perform a
context switch. For ARM architecture this means that the fatal error
handling code path can indeed return if we 1) were in handler mode and
2) only wished to abort the current thread.
Fixes a very long-standing bug where a thread that generates an
exception, and should only abort the thread, instead takes down the
entire system.
Issue: ZEP-2052
Change-Id: Ib356a34a6fda2e0f8aff39c4b3270efceb81e54d
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We do the same thing on all arches right now for thread_monitor_init,
so let's put it in a common place. This also should fix an issue on
xtensa when the thread monitor is enabled (reference to
_nanokernel.threads).
Change-Id: If2f26c1578aa1f18565a530de4880ae7bd5a0da2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
We do a bit of the same stuff on all the arches to set up a new
thread. So let's put that code in a common place to unify it for
everyone and reduce some duplicated code.
Change-Id: Ic04121bfd6846aece16aa7ffd4382bdcdb6136e3
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
There are a few places where we used a naked unsigned type; let's be
explicit and make it 'unsigned int'.
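E.g. (illustrative):

-unsigned prio;
+unsigned int prio;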
Change-Id: I33fcbdec4a6a1c0b1a2defb9a5844d282d02d80e
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types. This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies. We also convert the PRI printf formatters in the arch
code over to normal formatters.
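The conversion is mechanical, e.g.:

-uint32_t ticks;
+u32_t ticks;

-printk("tick count: %" PRIu32 "\n", ticks);
+printk("tick count: %u\n", ticks);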
Jira: ZEP-2051
Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This is a start to move away from the C99 {u}int{8,16,32,64}_t types to
Zephyr defined u{8,16,32,64}_t and s{8,16,32,64}_t. This allows Zephyr
to define the sized types in a consistent manner across all the
architectures we support and not conflict with what various compilers
and libc might do with regards to the C99 types.
We introduce <zephyr/types.h> as part of this and have it include
<stdint.h> for now until we transition all the code away from the C99
types.
We go with u{8,16,32,64}_t and s{8,16,32,64}_t as there are some
existing variables defined u8 & u16 as well as to be consistent with
Zephyr naming conventions.
Jira: ZEP-2051
Change-Id: I451fed0623b029d65866622e478225dfab2c0ca8
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This reverts commit 7b9dc107a8.
We revert this as we intend to move away from {u}int{8,16,32,64}_t types
to our own internal types for sized variables so we shouldn't need the
PRI macros anymore.
Change-Id: I1d9d797fee47ca266867ae65656c150f8fe2adb2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>