Both variables were used interchangeably (with the same value)
throughout the CMake files, and per the discussion in the GH issue,
ZEPHYR_BASE is the preferred one.
Also add a comment explaining one vs. the other.
Tested by building hello_world for several boards and verifying there are no errors.
Fixes #7173.
Signed-off-by: Alex Tereschenko <alext.mkrs@gmail.com>
Use k_uptime_get() to compute both tv_sec and tv_nsec members
of the timespec structure.
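As a rough sketch of the idea (the helper name and the use of plain
literals are illustrative, not the exact patch):

    #include <zephyr.h>
    #include <time.h>

    /* Hedged sketch: derive a struct timespec from the kernel's
     * millisecond uptime counter.
     */
    static void uptime_to_timespec(struct timespec *ts)
    {
        s64_t elapsed_msecs = k_uptime_get();   /* ms since boot */

        ts->tv_sec  = (time_t)(elapsed_msecs / 1000);
        ts->tv_nsec = (long)((elapsed_msecs % 1000) * 1000000); /* ms -> ns */
    }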
Fixes #8009
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
Make sure the name string is NULL-terminated in readdir().
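The defensive pattern, sketched (the buffer and field names are
illustrative, not the actual readdir() code):

    #include <string.h>

    /* Hedged sketch: copy a possibly-unterminated source name into a
     * fixed-size destination and guarantee NUL termination, since
     * strncpy() alone may leave the buffer unterminated.
     */
    static void copy_entry_name(char *dst, size_t dst_size, const char *src)
    {
        strncpy(dst, src, dst_size - 1);
        dst[dst_size - 1] = '\0';
    }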
CID: 186037
Fixes Issue #7733
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
The pthread mutex changes went in with an adaptation to build with the
new wait queue API, but they did it by using the old dlist hooks
directly through typecasting and union assignment. That... is sort of
the opposite of the intent of having the new API abstracted. The
pthread code worked, but failed once wait queues (on x86) stopped
being dlists.
Simple fix once I saw the problem, anyway.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This constant should be defined in limits.h. Define it in limits.h in
the minimal libc, and use the definition found in newlib's includes.
Values in the newlib includes range from 1024 to 4096.
The rationale is that all code should use the same value; having
buffers specified with different sizes will lead to interoperability
problems and out-of-bounds array writes.
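For illustration only (the commit does not name the constant here;
PATH_MAX and the value below are assumptions), the minimal-libc side
could look like:

    /* Hypothetical sketch: define the limit once in the minimal libc's
     * limits.h so every consumer sizes its buffers identically; builds
     * against newlib simply pick up the toolchain's own definition.
     */
    #ifndef PATH_MAX
    #define PATH_MAX 256   /* illustrative value, not the one chosen */
    #endif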
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Add IEEE 1003.1 POSIX-style file system API support.
These APIs will internally use the corresponding Zephyr
file system APIs.
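As a rough sketch of the wrapping idea (the descriptor table, its
size, and the flag handling are illustrative simplifications, not the
actual layer):

    #include <fs.h>
    #include <errno.h>
    #include <stdbool.h>

    struct posix_fs_desc {
        struct fs_file_t file;
        bool used;
    };

    static struct posix_fs_desc ptable[8];   /* illustrative table */

    /* Hedged sketch: a POSIX-style open() that forwards to fs_open(). */
    int open(const char *name, int flags)
    {
        (void)flags;   /* flag translation omitted in this sketch */

        for (int fd = 0; fd < 8; fd++) {
            if (!ptable[fd].used) {
                int rc = fs_open(&ptable[fd].file, name);

                if (rc < 0) {
                    errno = -rc;
                    return -1;
                }
                ptable[fd].used = true;
                return fd;
            }
        }

        errno = ENFILE;
        return -1;
    }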
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs. Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages. Now replacement of wait_q with a different
data structure is much cleaner.
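The shape of the change, roughly (the accessor names and the cast are
illustrative, not the exact wrapper API):

    #include <misc/dlist.h>

    struct k_thread;

    /* Hedged sketch: the wait queue becomes a real struct wrapping the
     * dlist, and callers go through small accessors instead of the
     * dlist API, so the backing store can change later.
     */
    typedef struct {
        sys_dlist_t waitq;   /* current backing store, hidden from callers */
    } _wait_q_t;

    static inline void _waitq_init(_wait_q_t *w)
    {
        sys_dlist_init(&w->waitq);
    }

    static inline struct k_thread *_waitq_head(_wait_q_t *w)
    {
        /* relies on the queue node sitting at the start of the thread */
        return (struct k_thread *)sys_dlist_peek_head(&w->waitq);
    }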
Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro. While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
pthread_attr_init() should not return EBUSY per the POSIX spec,
so fix this by returning ENOMEM if the attr pointer is NULL.
Also fix the attribute initialization logic by copying
init_pthread_attrs to the attr.
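A minimal sketch of the corrected behaviour (init_pthread_attrs is
the default template named above; everything else here is
illustrative):

    #include <pthread.h>
    #include <string.h>
    #include <errno.h>

    extern const pthread_attr_t init_pthread_attrs;   /* default template */

    /* Hedged sketch: reject a NULL attr with ENOMEM (not EBUSY) and
     * initialize the object by copying the defaults into it.
     */
    int pthread_attr_init(pthread_attr_t *attr)
    {
        if (attr == NULL) {
            return ENOMEM;
        }

        memcpy(attr, &init_pthread_attrs, sizeof(pthread_attr_t));

        return 0;
    }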
Fixes Issue #7480
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
Fix potential overflow of an integer expression by changing the
variable type to s64_t.
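The bug class, as an illustrative sketch (this is not the code
Coverity flagged):

    #include <zephyr.h>

    /* Hedged sketch: with a 32-bit 'int ms', the multiply below would
     * overflow before the result is widened; declaring the variable as
     * s64_t keeps the whole expression 64-bit.
     */
    static s64_t uptime_ns(void)
    {
        s64_t ms = k_uptime_get();   /* previously a narrower type */

        return ms * 1000000;         /* 64-bit multiply, no overflow */
    }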
CID: 185275
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
The POSIX layer had a simple ready_one_thread() utility. Move this to
the scheduler API (with a prepended underscore -- it's an internal
API) so that it can be synchronized along with the rest of the
scheduler.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons. The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.
So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout() (along with identical changes for
_unpend_first_thread). This saves code bytes and simplifies the
scheduler surface area for future synchronization work.
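Conceptually (the include and body are a sketch, not the exact diff):

    #include <ksched.h>   /* internal scheduler declarations */

    /* Hedged sketch: the default unpend now also kills any pending
     * timeout, and the bare dequeue survives under the _no_timeout name.
     */
    void _unpend_thread(struct k_thread *thread)
    {
        _unpend_thread_no_timeout(thread);   /* remove from its wait queue */
        _abort_thread_timeout(thread);       /* the step callers always added */
    }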
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Originally, pthread_cond_signal() was written to yield even in
circumstances where the current thread is at a cooperative priority
and would not expect to be context-switched out until it blocks. This
makes sense, as in most cases you want the newly signaled thread to
get a chance to run as soon as possible.
On further reflection (and also because it complicates the scheduler),
I think that's wrong. The point of cooperative scheduling is that it
allows the cooperative code to make synchronization assumptions about
exactly when it might yield to other threads, and having arbitrary
APIs be "preemption points" like this complicates that analysis
significantly.
Use _reschedule() like other code does.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions. We can remove the header too.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps. Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
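Roughly (the exact signature is an assumption; the point is that the
_Swap() return value, e.g. -EAGAIN on timeout, flows back to the
caller):

    #include <ksched.h>   /* internal scheduler declarations */

    /* Hedged sketch: pend the current thread and swap away in one step. */
    static inline int _pend_current_thread(int key, _wait_q_t *wait_q,
                                           s32_t timeout)
    {
        _pend_thread(_current, wait_q, timeout);   /* queue + arm the timeout */
        return _Swap(key);                         /* key comes from irq_lock() */
    }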
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has been
split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first); the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
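In sketch form (bodies are illustrative, not the exact
implementation; both variants consume the irq_lock() key):

    #include <ksched.h>   /* internal scheduler declarations */

    /* Hedged sketch: only switch when the current (possibly cooperative)
     * thread actually has to give way.
     */
    void _reschedule_noyield(int key)
    {
        if (!_is_in_isr() && _must_switch_threads()) {
            _Swap(key);
        } else {
            irq_unlock(key);
        }
    }

    /* Hedged sketch: switch whenever a context switch is legal at all. */
    void _reschedule_yield(int key)
    {
        if (!_is_in_isr()) {
            _Swap(key);
        } else {
            irq_unlock(key);
        }
    }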
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The compiler can remove the NULL check, since the dereference happens
before it (and so the compiler may assume the pointer is always valid).
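The pattern, sketched with illustrative names (not the code Coverity
flagged):

    #include <stddef.h>

    struct obj {
        int refcount;
    };

    /* Hedged sketch: the dereference on the first line lets the compiler
     * assume 'obj' is non-NULL, so the later check may be removed.
     */
    int obj_release_buggy(struct obj *obj)
    {
        int refs = obj->refcount;   /* dereference happens first... */

        if (obj == NULL) {          /* ...so this check can be elided */
            return -1;
        }

        return refs - 1;
    }

    /* Fixed form: test the pointer before any dereference. */
    int obj_release(struct obj *obj)
    {
        if (obj == NULL) {
            return -1;
        }

        return obj->refcount - 1;
    }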
Coverity-Id: 185281
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Move the POSIX layer from the 'kernel' to the 'lib' folder, as it is
not a core kernel feature.
Fixed POSIX header file dependencies as part of the move and
also removed NEWLIBC-related macros from the POSIX headers.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>