Names that begin with an underscore are reserved by the C standard.
This patch does not change names of functions defined and implemented
in header files.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
pthread_setschedparam() uses k_thread_priority_set()
to set the pthread priority. An error in the argument
passed to k_thread_priority_set() meant the correct
priority was not set. Correct this error.
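In sketch form, the intended call sequence is the following, where
posix_to_zephyr_priority() stands in for whatever conversion helper
is used (the exact names may differ) and param/policy/pthread are the
function's parameters:

    /* Convert the POSIX priority first, then hand the *converted*
     * value to k_thread_priority_set(); passing the raw POSIX
     * priority is the bug being fixed here.
     */
    int new_prio = posix_to_zephyr_priority(param->sched_priority,
                                            policy);

    k_thread_priority_set((k_tid_t)pthread, new_prio);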
Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
timer_gettime() internally uses k_timer_remaining_get()
to get the time remaining before expiry. The time unit for
k_timer_remaining_get() is msec, not ticks.
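The conversion in timer_gettime() then looks roughly like this
(a sketch; the ztimer member name is illustrative):

    u32_t remaining = k_timer_remaining_get(&timer->ztimer); /* ms */

    value->it_value.tv_sec = remaining / MSEC_PER_SEC;
    value->it_value.tv_nsec = (remaining % MSEC_PER_SEC) *
                              USEC_PER_MSEC * NSEC_PER_USEC;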
Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
_sem_give_non_preemptible() is non-preemptible, so there is no need
to move the thread to the ready queue for any real use case. Remove
the old code. This is also not a public API.
Signed-off-by: Punit Vara <punit.vara@intel.com>
Commit 08de658eb ("kernel: mem_domain: Check for overlapping regions
when considering W^X") introduced some compile issues on various
platforms.
The k_mem_partition_attr_t member is attr, not attrs. Also, fix an
issue where sane_partition_domain needs a pointer to a partition.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
During system initialization, the global static variable (local to
mem_domain.c) is initialized with the maximum number of partitions
per domain. This variable is of u8_t type.
Assertions throughout the code will check ranges and test for overflow
by relying on implicit type conversion.
Use a u8_t instead of a u32_t to avoid doubts. Also, reorder the
k_mem_partition struct to remove the alignment hole created by reducing
sizeof(num_partitions).
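The reordering matters because member order determines padding; a
contrived illustration (not the actual struct):

    struct bad {            /* on a 32-bit target: */
        u8_t count;         /* 1 byte + 3 bytes padding */
        void *table;        /* 4 bytes */
        u8_t flags;         /* 1 byte + 3 bytes tail pad = 12 total */
    };

    struct good {
        void *table;        /* 4 bytes */
        u8_t count;         /* 1 byte */
        u8_t flags;         /* 1 byte + 2 bytes tail pad = 8 total */
    };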
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Multiple partitions can be added to a domain, and if they overlap,
they can have different attributes. The previous check only verified
W^X for each individual partition, which is insufficient: overlapping
partitions could each individually satisfy W^X, yet the overlapping
memory region would end up both writable and executable.
The check is quite "heavyweight", as it is an O(n^2) operation. The
number of partitions per domain is small on most devices, so this
isn't an issue. CONFIG_EXECUTE_XOR_WRITE is still an optional
feature.
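Conceptually the check is the following pairwise scan (a simplified
sketch; the is_writable()/is_executable() helpers are illustrative,
not the actual code):

    static bool overlaps(u32_t s1, u32_t sz1, u32_t s2, u32_t sz2)
    {
        return s1 < (s2 + sz2) && s2 < (s1 + sz1);
    }

    /* O(n^2): reject the domain if any two overlapping partitions
     * combine to yield a region both writable and executable.
     */
    for (i = 0; i < num_parts; i++) {
        for (j = i + 1; j < num_parts; j++) {
            bool write = is_writable(parts[i].attr) ||
                         is_writable(parts[j].attr);
            bool exec = is_executable(parts[i].attr) ||
                        is_executable(parts[j].attr);

            if (write && exec &&
                overlaps(parts[i].start, parts[i].size,
                         parts[j].start, parts[j].size)) {
                return false;
            }
        }
    }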
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
This patch does the following:
1. Set the default scheduling policy to SCHED_RR only when
   preemption is enabled.
2. Make the default priority in the attr object equivalent to
   K_LOWEST_APPLICATION_THREAD_PRIO. The POSIX priority
   corresponding to K_LOWEST_APPLICATION_THREAD_PRIO is 1.
Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
As per IEEE 1003.1, POSIX APIs should return ERROR_CODE on error,
but these currently return -ERROR_CODE instead. Fix the return
values.
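For example, on an invalid argument the functions now do:

    return EINVAL;    /* POSIX: positive error code */

instead of:

    return -EINVAL;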
Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
The attributes are a u32_t only on ARM and ARC; on x86, it's something
else entirely. Use the proper type to avoid attributes being
truncated.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Traditionally, k_thread_abort() of the current thread has done a
synchronous _Swap() to the new context. For this reason, doing it
from an ISR has never worked portably (some architectures can do it,
some can't).
But on Xtensa/asm2, exception handlers now run in interrupt context
and it's a very reasonable requirement for them to abort the excepting
thread.
So simply don't swap, but do the rest of the bookkeeping and return
to the calling context. As a side effect, it's now possible to
terminate threads from interrupts, even if they have been
interrupted.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In SMP, the system timer is used for timeslicing on auxiliary CPUs,
but the base system timekeeping via _nano_sys_clock_tick_announce() is
still done on CPU0 only (because the framework isn't prepared for
asynchronous notification yet). Skip processing on CPU1+.
Also, due to a hardware interaction* that is difficult to work around,
timer initialization on the auxiliary CPUs is done at the very end of
the CPU bringup, just before the swap into the scheduler. An
smp_timer_init() API has been added for this purpose.
* On ESP-32, enabling the timer seems to result in a near-synchronous
interrupt being delivered despite my best attempts to keep it
masked, then blowing things up because the CPU record isn't set up
to handle it yet.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that all the pieces are in place, enable SMP for real:
Initialize the CPU records, launch the CPUs at the end of kernel
initialization, have them wait for a flag to release them into the
scheduler, then enter into the runnable threads via _Swap().
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
A pure timer-based idle won't work well in SMP. Without an IPI to
wake up idle CPUs out of the scheduler they will sleep far too long
and the main CPU will do all the scheduling of wake-up-and-sleep
processes. Instead just have the auxiliary CPUs do a traditional
busy-wait scheduler in their idle loop.
We will need to revisit an architecture that allows both
wait-for-timer-interrupt idle and SMP.
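The auxiliary-CPU idle path then amounts to something like this
(a sketch; assumes the per-CPU record carries an id field, and the
real loop may differ):

    if (_arch_curr_cpu()->id > 0) {
        /* Busy-wait "idle": keep re-entering the scheduler rather
         * than waiting for a timer interrupt.
         */
        while (true) {
            k_yield();
        }
    }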
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The scheduler needs a few tweaks to work in SMP mode:
1. The "cache" field just doesn't work. With more than one CPU,
caching the highest priority thread isn't useful as you may need N
of them at any given time before another thread is returned to the
scheduler. You could recalculate it at every change, but that
provides no performance benefit. Remove.
2. The "bitmask" designed to prevent the need to individually check
priorities is likewise dropped. This could work, but in fact on
our only current SMP system and with current K_NUM_PRIORITIES
values it provides no real benefit.
3. The individual threads now have a "current cpu" and "active" flag
so that the choice of the next thread to run can correctly skip
threads that are active on other CPUs.
The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.
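In sketch form, the SMP _get_next_ready_thread() reduces to a linear
scan that skips threads already running elsewhere (helper names are
illustrative, not the actual code):

    struct k_thread *best = NULL;

    for (t = first_ready_thread(); t != NULL;
         t = next_ready_thread(t)) {
        if (t->base.active) {
            continue;    /* already running on another CPU */
        }
        if (best == NULL || t->base.prio < best->base.prio) {
            best = t;    /* lower prio value == higher priority */
        }
    }

    return best;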
Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API. This should be a very
early candidate for lock granularity attention!
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In SMP mode, the idea of a single "IRQ lock" goes away. Long term,
all usage needs to migrate to spinlocks (which become simple IRQ locks
in the uniprocessor case). For the near term, we can ease the
migration (at the expense of performance) by providing a compatibility
implementation around a single global lock.
Note that one complication is that the older lock was recursive, while
spinlocks will deadlock if you try to lock them twice. So we
implement a simple "count" semantic to handle multiple locks.
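The shape of the compatibility layer is roughly the following (a
simplified sketch; note the count must live with the lock owner,
e.g. in the thread, not in a truly global variable):

    static atomic_t global_lock;

    unsigned int irq_lock(void)
    {
        unsigned int key = _arch_irq_lock();

        if (_current->base.global_lock_count == 0) {
            while (!atomic_cas(&global_lock, 0, 1)) {
                /* spin: another CPU holds the lock */
            }
        }
        _current->base.global_lock_count++;

        return key;
    }

    void irq_unlock(unsigned int key)
    {
        if (--_current->base.global_lock_count == 0) {
            atomic_set(&global_lock, 0);
        }
        _arch_irq_unlock(key);
    }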
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Simple implementation that caps at 4 CPUs. Long term we should use
some linker magic to define as many as needed and loop over them
without needlessly increasing data or code size for the tracking.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When in SMP mode, the nested/irq_stack/current fields are specific to
the current CPU and not to the kernel as a whole, so we need an array
of these. Place them in a _cpu_t struct and implement a
_arch_curr_cpu() function to retrieve the pointer.
When not in SMP mode, the first CPU's fields are defined in a union
with the first _cpu_t record. This permits compatibility with legacy
assembly on other platforms. Long term, all users, including
uniprocessor architectures, should be updated to use the new scheme.
Fundamentally this is just renaming: the structure layout and runtime
code do not change on any existing platforms and won't until someone
defines a second CPU.
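Roughly, the new layout looks like this (a sketch; member types are
abbreviated and the real definitions live in kernel_structs.h):

    typedef struct _cpu {
        u32_t nested;                /* IRQ nesting count */
        char *irq_stack;             /* interrupt stack pointer */
        struct k_thread *current;    /* currently scheduled thread */
    } _cpu_t;

    struct _kernel {
    #ifdef CONFIG_SMP
        struct _cpu cpus[CONFIG_MP_NUM_CPUS];
    #else
        /* Uniprocessor: legacy field names alias the first record */
        union {
            struct _cpu cpus[1];
            struct {
                u32_t nested;
                char *irq_stack;
                struct k_thread *current;
            };
        };
    #endif
        /* ... */
    };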
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.
Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.
Break out the _Swap definition (only) into a separate header and use
that instead. Cleaner. Seems not to have any more hidden gotchas.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Simply define the Kconfig variables in this patch so they can be used
in later patches. Define MP_NUM_CPUS correctly on esp32. No code
changes.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When using _arch_switch() context switching, the thread return value
is a generic hook and not provided by the architecture.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The existing __swap() mechanism is too high level for some
applications because of its scheduler-awareness. This introduces a
new _arch_switch() mechanism, which is a simpler primitive that looks
like:
void _arch_switch(void *handle, void **old_handle_out);
The new thread handle (typically just a stack pointer) is specified
explicitly instead of being picked up from the scheduler by
per-architecture code, and on return the "old" thread handle that got
switched out is returned through the pointer.
The new primitive (currently available only on xtensa) is selected
when CONFIG_USE_SWITCH is "y". A new C _Swap() implementation based
on this primitive is then added which operates compatibly.
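The C _Swap() built on top is, in sketch form (assuming switch_handle
and swap_retval members in the thread struct):

    static inline int _Swap(unsigned int key)
    {
        struct k_thread *old_thread = _current;
        struct k_thread *new_thread = _get_next_ready_thread();

        if (new_thread != old_thread) {
            _current = new_thread;
            _arch_switch(new_thread->switch_handle,
                         &old_thread->switch_handle);
        }

        irq_unlock(key);

        return _current->swap_retval;
    }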
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
K_NUM_PRIORITIES and K_NUM_PRIO_BITMAPS were defined in
nano_internal.h, but used in only a handful of places. Move to
kernel_structs.h (somewhat higher up in the hierarchy) to help with
include file cycle-breaking. Arguably they are a better fit there
anyway.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
_Swap() is defined in nano_internal.h. Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file). A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle. Now nothing sees _Swap() defined
anymore. Put nano_internal.h everywhere it's needed.
Our kernel includes remain a big awful yucky mess. This makes things
more correct but no less ugly. Needs cleanup.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Fix Kconfig help sections and add spacing to be consistent across all
Kconfig files. In a previous run we missed a few.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Split the search into two loops: in the common scenario, where device
names are stored in ROM (and are referenced by the user with CONFIG_*
macros), only cheap pointer comparisons will be performed.
Reserve string comparisons for a fallback second pass.
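In sketch form (assuming the usual __device_init_start /
__device_init_end linker section bounds):

    struct device *device_get_binding(const char *name)
    {
        struct device *dev;

        /* Pass 1: names referenced via CONFIG_* macros live in ROM,
         * so a cheap pointer comparison suffices.
         */
        for (dev = __device_init_start;
             dev != __device_init_end; dev++) {
            if (dev->config->name == name) {
                return dev;
            }
        }

        /* Pass 2: fall back to full string comparison. */
        for (dev = __device_init_start;
             dev != __device_init_end; dev++) {
            if (strcmp(name, dev->config->name) == 0) {
                return dev;
            }
        }

        return NULL;
    }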
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Instead of composing expressions with a logical AND, break them into
multiple assertions. Smaller assertions are easier to read. While at
it, compare pointers against the NULL value, and numbers against 0
instead of relying on implicit conversion to boolean-ish values.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
Without the parentheses, the code was asserting this expression:
start + (size > start)
where it should instead be:
(start + size) > start
as a quick sanity check when adding these two unsigned values.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
As discovered in https://github.com/zephyrproject-rtos/zephyr/issues/5952
...a duplicate call to k_delayed_work_submit_to_queue() on a work item
whose timeout had expired but which had not yet executed (i.e. it was
pending in the queue for the active work queue thread) would fail,
because the cancellation step wouldn't clear the PENDING bit, causing
the resubmission to see the object in an invalid state. Trivially
fixed by adding a bit clear.
It also turns out that the behavior of the code doesn't match the
docs, which state that a PENDING work item is not supposed to be
cancelled at all. Fix the docs to remove that.
And on yet further review, it turns out that there's no way to make a
test like the one in the linked bug threadsafe. The work queue does
no synchronization by design, so if the user code does no external
synchronization it might very well clobber the running handler. Added
a sentence to the docs to reflect this gotcha.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove unused _k_thread_single_start() as this logic is
now moved to _impl_k_thread_start().
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
This patch adds support for userspace on ARM architectures:
arch-specific calls for transitioning threads to user mode, system
calls, and the associated handlers.
Signed-off-by: Andy Gross <andy.gross@linaro.org>
As per the current policy of requiring supervisor mode to register
callbacks, dma_config() is omitted.
A note is added about checking the channel ID for start/stop; current
implementations already do this, but it is best to document it
explicitly.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Rename the nano_internal.h to kernel_internal.h and modify the
header file name accordingly wherever it is used.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
This change handles the case where the handle_timeouts function is
called after a number of ticks greater than the first timeout delta
of the _timeout_q list. In the current implementation, if this case
occurs, the delta_ticks_from_prev field becomes negative after the
number of ticks is subtracted, and the first timeout is never
processed. It is therefore necessary to handle this case and prevent
delta_ticks_from_prev from becoming negative. Moreover, the lag
produced by the initial delay must also be applied to the following
timeouts, by walking the list until the lag is entirely consumed.
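The tick consumption then becomes a walk of the delta list, clamping
each node at zero and carrying the remainder forward (a sketch; the
list-traversal helpers are illustrative):

    struct _timeout *t = first_timeout();

    while (t != NULL && ticks > 0) {
        if (t->delta_ticks_from_prev >= ticks) {
            t->delta_ticks_from_prev -= ticks;
            ticks = 0;
        } else {
            /* Consume this node entirely; apply the remaining lag
             * to the next timeout in the list.
             */
            ticks -= t->delta_ticks_from_prev;
            t->delta_ticks_from_prev = 0;
            t = next_timeout(t);
        }
    }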
Fixes #5401
Signed-off-by: Holman Greenhand <greenhandholman@gmail.com>
When CONFIG_THREAD_MONITOR is enabled, repeated thread abort calls
on a dead thread will cause _thread_monitor_exit() to crash.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
We don't need to store the full k_mem_block, rather just the
k_mem_block_id. In effect, this saves 4 bytes of memory per allocated
memory chunk. Also take advantage of the newly introduced
k_mem_pool_free_id API here.
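After the change, k_malloc()/k_free() look roughly like this (a
sketch; the heap pool reference is illustrative):

    void *k_malloc(size_t size)
    {
        struct k_mem_block block;

        /* Reserve room for a hidden k_mem_block_id descriptor. */
        size += sizeof(struct k_mem_block_id);
        if (k_mem_pool_alloc(&heap_mem_pool, &block, size,
                             K_NO_WAIT) != 0) {
            return NULL;
        }

        /* Stash only the block ID at the start of the block. */
        memcpy(block.data, &block.id, sizeof(struct k_mem_block_id));

        return (char *)block.data + sizeof(struct k_mem_block_id);
    }

    void k_free(void *ptr)
    {
        if (ptr != NULL) {
            ptr = (char *)ptr - sizeof(struct k_mem_block_id);
            k_mem_pool_free_id((struct k_mem_block_id *)ptr);
        }
    }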
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
The k_mem_pool_free API has no use for the full k_mem_block struct. In
particular, it only needs the k_mem_block_id. Introduce a new API
which takes only this essential struct. This paves the way to
simplify & improve the k_malloc/k_free implementation a bit.
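The old API can then be a thin wrapper (a sketch, assuming
k_mem_block embeds its ID as an 'id' member):

    void k_mem_pool_free(struct k_mem_block *block)
    {
        k_mem_pool_free_id(&block->id);
    }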
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>