MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer to it exposed in
a kernel header, such that it can be added to a user thread's memory
domain configuration if necessary.
MPU devices that don't have these restrictions allocate
the heap as normal.
In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.
Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.
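For illustration, a hedged sketch of what an application might do,
assuming the API returns the base pointer and size through
out-parameters (app_domain is a hypothetical k_mem_domain):

  void *base;
  size_t size;
  static struct k_mem_partition heap_part;

  z_newlib_get_heap_bounds(&base, &size);
  heap_part.start = (u32_t)base;
  heap_part.size = size;
  heap_part.attr = K_MEM_PARTITION_P_RW_U_RW;

  /* grant user threads in this domain access to the heap */
  k_mem_domain_add_partition(&app_domain, &heap_part);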
On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.
Fixes: #6814
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Newlib uses any RAM between _end and the bounds of physical
RAM for the _sbrk() heap. Set up a user-writable region
so that this works properly on x86.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Calling the POSIX exit() function in Zephyr with newlib leads to
printing "exit" to stdout, followed by an infinite loop. That message
was printed without a newline though, leading to confusing artifacts
in the console output.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
This trades a little over 40 bytes (on x86) of text for a lot of
savings in rodata. This is accomplished by using bitfields to pack the
field name length, offset, alignment, and the type tag into a single
32-bit unsigned integer instead of scattering this information into
four different integers.
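A minimal sketch of the packing idea (the field widths here are
illustrative, not necessarily the ones used in the patch):

  struct packed_descr {
      u32_t field_name_len : 7;
      u32_t offset         : 15;
      u32_t align_shift    : 3;
      u32_t type           : 7;
  };  /* 7 + 15 + 3 + 7 = 32 bits: one word instead of four */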
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Fix potential overflow of an integer expression by changing the
variable type to s64_t.
CID: 185275
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
The POSIX layer had a simple ready_one_thread() utility. Move this to
the scheduler API (with a prepended underscore -- it's an internal
API) so that it can be synchronized along with the rest of the
scheduler.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons. The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.
So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout(). (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.
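Conceptually, the new default simply wraps the old behavior plus the
timeout abort; a sketch (the body is illustrative):

  void _unpend_thread(struct k_thread *thread)
  {
      _unpend_thread_no_timeout(thread);
      _abort_thread_timeout(thread);
  }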
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Originally, pthread_cond_signal() was written to yield even in
circumstances where the current thread is at a cooperative priority
and would not expect to be context-switched out until it blocks. This
makes sense, as in most cases you want the newly signaled thread to
get a chance to run as soon as possible.
On further reflection (and also because it complicates the scheduler),
I think that's wrong. The point of cooperative scheduling is that it
allows the cooperative code to make synchronization assumptions about
exactly when it might yield to other threads, and having arbitrary
APIs be "preemption points" like this complicates that analysis
significantly.
Use _reschedule() like other code does.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions. We can remove the header too.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps. Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
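A sketch of the unified pattern, assuming a combined helper along
these lines (the exact name and signature are assumptions):

  static inline int _pend_current_thread(unsigned int key,
                                         _wait_q_t *wait_q,
                                         s32_t timeout)
  {
      _pend_thread(_current, wait_q, timeout);
      return _Swap(key);  /* pend and swap in one step */
  }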
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
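A sketch of the intended split (the bodies are illustrative, not the
literal patch):

  void _reschedule_noyield(int key)
  {
      /* switch only if preemption is legal for the current thread */
      if (!_is_in_isr() && _must_switch_threads()) {
          _Swap(key);
      } else {
          irq_unlock(key);
      }
  }

  void _reschedule_yield(int key)
  {
      /* swap unconditionally, even at cooperative priority */
      if (!_is_in_isr()) {
          _Swap(key);
      } else {
          irq_unlock(key);
      }
  }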
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The compiler can remove the NULL check since the dereference happens
before it (and assume that the pointer is always valid).
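An illustrative (hypothetical) instance of the pattern being fixed;
the cure is to perform the check before the first dereference:

  int get_name_len(const struct foo *ptr)
  {
      int len = strlen(ptr->name);  /* dereference happens first...   */

      if (ptr == NULL) {            /* ...so GCC may elide this check */
          return -EINVAL;
      }

      return len;
  }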
Coverity-Id: 185281
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Returns true if the specified node is in the tree. Allows the tree to
be used for "set" style semantics along with a lessthan_fn that simply
compares the nodes by their address.
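A hedged usage sketch of the set-style idiom (exact signatures are
assumptions):

  static bool node_lessthan(struct rbnode *a, struct rbnode *b)
  {
      return a < b;  /* compare nodes by address */
  }

  struct rbtree set = { .lessthan_fn = node_lessthan };

  if (!rb_contains(&set, &item.node)) {
      rb_insert(&set, &item.node);
  }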
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
A balanced tree implementation for Zephyr as we grow into bigger
regimes where simpler data structures aren't appropriate.
This implements an intrusive balanced tree that guarantees O(log2(N))
runtime for all operations and amortized O(1) behavior for creation
and destruction of whole trees. The algorithms and naming are
conventional per existing academic and didactic implementations, cf.:
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
The implementation is size-optimized to prioritize runtime memory
usage. The data structure is intrusive, which is to say the struct
rbnode handle is intended to be placed in a containing struct, the
same way as other such structures (e.g. Zephyr's dlist), and requires
no data pointer to be stored in the node. The color bit is unioned with
a pointer (fairly common for such libraries). Most notably, there is
no "parent" pointer stored in the node, the upper structure of the
tree being generated dynamically via a stack as the tree is recursed.
So the overall memory overhead of a node is just two pointers,
identical with a doubly-linked list.
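A sketch of the intrusive usage described above, mirroring the dlist
pattern (the names here are hypothetical):

  struct my_item {
      struct rbnode node;  /* two pointers of overhead, no data ptr */
      int key;
  };

  static bool item_lessthan(struct rbnode *a, struct rbnode *b)
  {
      return CONTAINER_OF(a, struct my_item, node)->key <
             CONTAINER_OF(b, struct my_item, node)->key;
  }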
Code size above dlist is about 2-2.5k on most architectures, which is
significant by Zephyr standards but probably still worthwhile in many
situations.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Move posix layer from 'kernel' to 'lib' folder as it is not
a core kernel feature.
Fixed posix header file dependencies as part of the move and also
removed newlib-related macros from posix headers.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
We would like to offer the capability to have memory pool heap data
structures that are usable from user mode threads. The current
k_mem_pool implementation uses IRQ locking and system-wide membership
lists that make it incompatible with user mode constraints.
However, much of the existing memory pool code can be abstracted to some
common functions that are used by both k_mem_pool and the new
sys_mem_pool implementations.
The sys_mem_pool implementation has the following differences:
* The alloc/free APIs work directly with pointers, no internal memory
block structures are exposed to the end user. A pointer to the source
pool is provided for allocation, but freeing memory just requires the
pointer and nothing else.
* k_mem_pool uses IRQ locks and requires very fine-grained locking in
order to not affect system latency. sys_mem_pools just use a semaphore
to protect the pool data structures at the API level, since there aren't
implications for system responsiveness with this kind of concurrency
control.
* sys_mem_pools do not support the notion of timeouts for requesting
memory.
* sys_mem_pools are specified at compile time with macros, just like
kernel memory pools; see the usage sketch below. Alternative forms of
specification at runtime will be a later enhancement.
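A hedged usage sketch of the pointer-based API (my_pool is assumed to
have been defined at compile time with the sys_mem_pool macros, and
the function names follow the sys_mem_pool naming described above):

  void *block = sys_mem_pool_alloc(&my_pool, 200);

  if (block != NULL) {
      /* ... use the memory ... */
      sys_mem_pool_free(block);  /* just the pointer, nothing else */
  }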
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
As per the Apache v2 License, state the changes made to the original
code in the modified versions of the files.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Since base64 is such a simple and commonly used feature, it makes no
sense to build the whole of mbedTLS for it. Instead take the
implementation that comes with mbedTLS and import it as a native library
outside of ext/ for all to use directly.
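A hedged usage sketch (the header name and the mbedTLS-derived
signature are assumptions):

  #include <base64.h>

  u8_t buf[16];
  size_t olen;

  if (base64_encode(buf, sizeof(buf), &olen,
                    (const u8_t *)"abc", 3) == 0) {
      /* buf now holds "YWJj", olen == 4 */
  }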
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
* ring_buffer is in lib, so move the Kconfig out of the kernel.
* move one Kconfig used for json to lib/Kconfig alongside other
Kconfigs.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This is a minor change that makes the data pointer const and changes
the length to a size_t to match the other CRC functions.
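Illustratively, the prototype changes along these lines (both forms
here are assumptions, not the literal patch):

  /* before: u16_t crc16_ccitt(u8_t *data, int len); */
  u16_t crc16_ccitt(const u8_t *data, size_t len);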
Signed-off-by: Michael Hope <mlhx@google.com>
We want to support other toolchains not based on GCC, so the variable
name is confusing; use ZEPHYR_TOOLCHAIN_VARIANT instead.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The existing version of crc16_ccitt() is actually CRC-16/AUG-CCITT
and gives different results from Linux, Contiki, and the CRC unit in
the SAM0 SoC. This version matches Linux.
Note that this is an incompatible API change.
Signed-off-by: Michael Hope <mlhx@google.com>
Enable stdio to work by default if Newlib is used as libc - it's a
reasonable expectation that if a full-fledged libc (like Newlib) is
selected, then printf() works out of the box.
Fixes: #5566
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Add an abs() function to the minimal libc. This is present in
NEWLIB_LIBC, but adding it here avoids creating a dependency on
NEWLIB_LIBC.
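A minimal sketch of the addition, assuming the usual one-liner:

  int abs(int i)
  {
      return (i < 0) ? -i : i;
  }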
Signed-off-by: Vincent Veron <vincent.veron@st.com>
CROSS_COMPILE is a KBuild feature that was dropped during the CMake
migration. It is now re-introduced. Documentation for it is still
lacking, but at least it now behaves as expected.
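A hedged usage sketch (the variant value and prefix path are
illustrative assumptions):

$ export ZEPHYR_TOOLCHAIN_VARIANT=cross-compile
$ export CROSS_COMPILE=/opt/toolchain/bin/arm-none-eabi-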
Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
This code is commonly used in the Linux kernel for reporting a
retryable error like a failed CRC. The name and value are already
present in Linux and newlib.
Signed-off-by: Michael Hope <mlhx@google.com>
When building a native application, we use the host-provided libc, so do
not build minimal libc or newlib.
Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Introducing CMake is an important step in a larger effort to make
Zephyr easy to use for application developers working on different
platforms with different development environment needs.
Simplified, this change retains Kconfig as-is, and replaces all
Makefiles with CMakeLists.txt. The DSL-like Make language that KBuild
offers is replaced by a set of CMake extensions. These extensions have
either provided simple one-to-one translations of KBuild features or
introduced new concepts that replace KBuild concepts.
This is a breaking change for existing test infrastructure and build
scripts that are maintained out-of-tree. But for FW itself, no porting
should be necessary.
For users that just want to continue their work with minimal
disruption the following should suffice:
Install CMake 3.8.2+
Port any out-of-tree Makefiles to CMake.
Learn the absolute minimum about the new command line interface:
$ cd samples/hello_world
$ mkdir build && cd build
$ cmake -DBOARD=nrf52_pca10040 ..
$ make
PR: zephyrproject-rtos#4692
docs: http://docs.zephyrproject.org/getting_started/getting_started.html
Signed-off-by: Sebastian Boe <sebastian.boe@nordicsemi.no>
The C11 standard requires this. From 7.2 "Diagnostics <assert.h>"
paragraph 1:
> The header <assert.h> defines the assert and static_assert macros...
paragraph 3:
> The macro
> static_assert
> expands to _Static_assert.
Since static_assert is a keyword in C++11, don't define it if C++.
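A sketch of the resulting definition (the guard form is an
assumption):

  #ifndef __cplusplus
  #define static_assert _Static_assert
  #endif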
Signed-off-by: Thiago Macieira <thiago.macieira@intel.com>
The C standard requires assert() to be a void expression, so you
could write something like:
return assert(x), x;
From the C11 standard (7.2 Diagnostics <assert.h>):
> If NDEBUG is defined as a macro name at the point in the source file
> where <assert.h> is included, the assert macro is defined simply as
> #define assert(ignore) ((void)0)
Signed-off-by: Thiago Macieira <thiago.macieira@intel.com>
This was causing an unaligned pointer read on some architectures,
leading to crashes. This could be alternatively solved by rounding
the size to the nearest power of 2, but this wouldn't work with
packed structs.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
This appears to be a bug in GCC: when an anonymous union contains
anonymous structs, GCC issues a warning that a field in one of the
anonymous structs has not been initialized. Fix by making the
structs not anonymous.
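A hypothetical reduction of the workaround - the inner structs get
names so GCC no longer issues the spurious warning:

  union {
      struct { int a; int b; } header;  /* was anonymous */
      struct { int c; } payload;        /* was anonymous */
  } u = { .header = { 1, 2 } };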
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
append_bytes_to_buf() already writes a NUL byte; no need to call
append_bytes() again with "" and size 1.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>