Prepares the kernel event logger APIs for inclusion in the
API guide. Also corrects a couple of other issues:
* Gets rid of obsolete thread monitor code.
* Renames "timer_func" global variable to "_sys_k_timer_func"
to align it with kernel naming conventions.
Change-Id: I93d403f83ae44ff45dda489c2ead7bfec6ce1fa3
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Event logger APIs still express timeout delays in ticks;
need to convert to milliseconds when using unified kernel APIs.
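A minimal sketch of the needed conversion (assuming the sys_clock_ticks_per_sec
rate and MSEC_PER_SEC constant used elsewhere in the kernel):

    int32_t timeout_ms = ticks * MSEC_PER_SEC / sys_clock_ticks_per_sec;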
Change-Id: I5fab66be660621cd2029417eaff3758e3ef4ba2c
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
When moving the arch-specific thread structure to the arch-agnostic one, some
field accesses in K_DEBUG statements were missed, since those statements are
turned off by default.
Change-Id: Ife0f49b8185a0db468deab73555f7034f20ca3e8
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Prio should be an int, since priority values are small integers, not a
fixed-size int32_t; this aligns it with the prio parameters of the other APIs.
Stack size should be a size_t.
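As an illustration, the spawn prototype after this change would look roughly
as follows (a sketch only; the parameter list is approximated from the unified
kernel headers of this era, with types taken from kernel.h):

    k_tid_t k_thread_spawn(char *stack, size_t stack_size,
                           void (*entry)(void *, void *, void *),
                           void *p1, void *p2, void *p3,
                           int prio, uint32_t options, int32_t delay);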
Change-Id: Id29751b86c4ad7a7c2a7ffe446c2a96ae83c77bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
NANOKERNEL is obsolete, and this kernel service was still using it, causing
deprecation warnings. Move it to POST_KERNEL.
Change-Id: I17fabd080645f93a8599f4ea25da844e1ec5f4bb
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Replaces confusing (and excessively long) configuration option
names with more intuitive names. Also enhances the description
of each option to clarify its use.
Change-Id: If4d4541407627482b1e90302cfc9df3bc8130d44
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
SYS_DLIST_FOR_EACH_NODE() is marked as unsafe when an item is removed
from the list while looping over it. This is not true per se, since the
item, when removed, keeps its next and prev pointers intact; however, it
is true if the item is then put into a list, be it a different one or
the same one. To handle that case, SYS_DLIST_FOR_EACH_NODE_SAFE() must be
used.
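As a minimal sketch (the pending/ready list names are made up for
illustration), the safe form allows a node to be removed and re-inserted
elsewhere while iterating:

    sys_dnode_t *node, *next;

    SYS_DLIST_FOR_EACH_NODE_SAFE(&pending, node, next) {
        sys_dlist_remove(node);
        sys_dlist_append(&ready, node);  /* re-insertion is now safe */
    }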
_mbox_message_put() can remove items from the rx queue and then put them
in the ready queue: this would cause the loop to start processing other
ready threads as if they were still items in the rx queue.
k_mbox_get() also removes items from the tx queue, but does not appear to
add them to another list; however, it now uses the safe version as well,
since that is the proper usage.
Change-Id: Ieccbff238fc8a036c0d53d873eaaf55f4f5a14af
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
There was a lot of duplication between architectures for the definition
of threads and the "nanokernel" guts. These have been consolidated.
Now, a common file kernel/unified/include/kernel_structs.h holds the
common definitions. Architectures provide two files to complement it:
kernel_arch_data.h and kernel_arch_func.h. The first one contains at
least the struct _thread_arch and struct _kernel_arch data structures,
as well as the struct _callee_saved and struct _caller_saved register
layouts. The second file contains anything that needs what is provided
by the common stuff in kernel_structs.h. Those two files are only meant
to be included in kernel_structs.h in very specific locations.
The thread data structure has been separated into three major parts:
common struct _thread_base and struct k_thread, and arch-specific struct
_thread_arch. The first and third ones are included in the second.
The struct s_NANO data structure has been split into two: common struct
_kernel and arch-specific struct _kernel_arch. The latter is included in
the former.
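Sketched out (member lists elided; shown only to illustrate the composition
described above):

    struct k_thread {
        struct _thread_base base;   /* common, defined in kernel_structs.h */
        /* ... other common thread fields ... */
        struct _thread_arch arch;   /* provided by kernel_arch_data.h */
    };

    struct _kernel {
        /* ... common kernel state ... */
        struct _kernel_arch arch;   /* provided by kernel_arch_data.h */
    };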
Offsets files have also changed: nano_offsets.h has been renamed
kernel_offsets.h and is still included by the arch-specific offsets.c.
Also, since the thread and kernel data structures are now made of
sub-structures, offsets have to be added to make up the full offset.
Some of these additions have been consolidated in shorter symbols,
available from kernel/unified/include/offsets_short.h, which includes an
arch-specific offsets_arch_short.h. Most of the code now includes
offsets_short.h instead of offsets.h.
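For example, a consolidated short symbol might be built up as follows (the
exact symbol names here are assumptions, shown only to illustrate the
pattern of adding a sub-structure offset to a field offset):

    #define _thread_offset_to_sched_locked \
        (___thread_t_base_OFFSET + ___thread_base_t_sched_locked_OFFSET)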
Change-Id: I084645cb7e6db8db69aeaaf162963fe157045d5a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The unified kernel is now the only supported kernel, so this
option is unnecessary. Eliminating this option also enables
the removal of some legacy code that is no longer required.
Change-Id: Ibfc339d643c8de16a2ed2009c9b468848b8b4972
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
k_alert_init() needs to set the "flags" field of its associated
work item to zero, indicating that the work item has not yet
been submitted to the system workqueue. Using the standard work
item initializer macro ensures this is done correctly.
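A minimal sketch of the idea (the work_item field and _alert_deliver handler
names are assumptions, not the exact kernel code):

    /* reset the embedded work item so it is marked as not yet submitted */
    alert->work_item = (struct k_work)K_WORK_INITIALIZER(_alert_deliver);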
Change-Id: I0001a5920f20fb1d8dc182191e6a549c5bf89be5
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
The API to disable _sys_soc_resume notification is currently
called _sys_soc_disable_wake_event_notification. This is
misleading because the ISR from which _sys_soc_resume is called
could belong to a different, higher-priority interrupt that
occurred before interrupts were enabled. More accurately, it is a
notification of exit from kernel idling after PM operations.
Jira: ZEP-1271
Change-Id: I83747f2cacac1bc17f135d12f4aa4478970fc02d
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
The _sys_soc_resume hook is overloaded to handle two different
scenarios. It is primarily called to notify exit from kernel idling
after PM operations, but it is also used to notify exit from deep sleep.
This is confusing and also makes the implementation of the
hook function difficult because of the very different conditions
involved in the two use cases. Further, users may not require
either or both use cases, depending on their custom boot flow and
power state handling. To simplify, create a separate hook for
deep sleep exit notification, and use the existing one to
only notify kernel idling exit after PM operations.
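Sketched out, the split leaves two hooks along these lines (the deep sleep
hook name is an assumption based on this description):

    void _sys_soc_resume(void)
    {
        /* notification of exit from kernel idling after PM operations */
    }

    void _sys_soc_resume_from_deep_sleep(void)
    {
        /* notification of exit from deep sleep; restore devices/context */
    }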
Jira: ZEP-1256
Change-Id: I96350199a0fd37f16590c8ee5302a94a3d71b8ba
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
There was no check to see if the current context was running in an ISR when
deciding whether to do a context switch or not.
Change-Id: Ib9c426de8c0893b3d9383290bb59f6e0e41e9f52
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Useful for finding out whether the current thread is protected against
preemption when relying on non-preemption to protect data structures.
Change-Id: Ib545a3609af3646ba49eeeb5a2c50dc51af010d4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Oversight. These functions are used extensively in the kernel guts, but
are also supposed to be an API.
k_sched_lock used to be implemented as a static inline. However, until
the header files are cleaned up and everything, including applications,
gets access to the kernel internal data structures, it must be
implemented as a function. To reduce the cost to the internals of the
kernel, the new internal _sched_lock() contains the same implementation,
but is inlined.
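A minimal sketch of the split (the bookkeeping and field path shown are
assumptions, for illustration only):

    static inline void _sched_lock(void)
    {
        __ASSERT(!_is_in_isr(), "");
        ++_current->base.sched_locked;   /* exact bookkeeping simplified */
    }

    void k_sched_lock(void)
    {
        _sched_lock();
    }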
Change-Id: If2f61d7714f87d81ddbeed69fedd111b8ce01376
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
An application-supplied main() routine is now considered to be
essential to system operation. Thus, if main() experiences an
error that aborts the main thread, a fatal system error is raised.
Note: If main() completes its work and does a standard return-
to-caller, the main thread terminates normally.
Change-Id: Icc9499f13578299244a856a246ad2a7d34a72f54
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
A thread defined via a legacy MDEF that belongs to the FPU or
SSE task group must set the thread option bits for FP or SSE
register use prior to being spawned.
If this is not done, and the kernel is configured for SSE support,
the kernel will auto-enable the thread's use of floating point
so that the thread saves SSE register context info even if it
belongs to just the FPU task group, which could cause the thread
to overflow its stack.
Note that this change only increases footprint for x86-based
applications that enable floating point register sharing.
Change-Id: Idfe4d20bcd7bc42b4cee6ac40ad7987e2a45ccf6
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
PRIMARY, SECONDARY, NANOKERNEL, MICROKERNEL init levels are now
deprecated.
New init levels introduced: PRE_KERNEL_1, PRE_KERNEL_2, POST_KERNEL
to replace them.
Most existing code has instances of PRIMARY replaced with PRE_KERNEL_1
and SECONDARY with POST_KERNEL, since SECONDARY had a longstanding bug:
the documentation specified that SECONDARY ran before the kernel started
up, but it actually ran afterwards.
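For example, a typical migration looks like this (my_driver_init is a
hypothetical init function):

    /* was:
     * SYS_INIT(my_driver_init, SECONDARY, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
     */
    SYS_INIT(my_driver_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);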
Change-Id: I771bc634e9caf7f17dbf214a270bc9967eed7d32
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Verify that thread priorities are within bounds when starting a new
thread and when changing the priority of a thread.
Change-Id: I007b3b249e4b80235b6439cbee44cad2f31973bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Since lower-numbered thread priorities are higher, the code can be
misleading when comparing priorities and often requires the same type of
comments. Instead, use utility inline functions that do the
comparisons.
_is_prio_higher already existed, but add comparisons for "lower than",
"higher than or equal to" and "lower than or equal to".
Change-Id: I8b58fe9a3dd0eb70e224e970fe851a2575ad468b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
- Add missing irq_lock() before invoking power management.
- Only yield if the idle thread is a coop thread (in coop-only
configurations).
Change-Id: I030795e782590b3023f1d7883bbd058da2c45f4f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Add an assertion against unlocking a mutex that is not locked.
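A minimal sketch of the check (the lock_count field name is an assumption):

    __ASSERT(mutex->lock_count > 0U, "mutex %p is not locked", mutex);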
Change-Id: I1032fb904e364015b486502c035529c8fe31de7a
Signed-off-by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
include/ will be cleaned up in a subsequent patch.
Change-Id: If3609f5fc8562ec4a6fec4592aefeec155599cfb
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Making a reference to the common work queue code should not necessarily
drag in the system workqueue, since it is possible to use a workqueue
that is not the system workqueue. This is done by moving the system
workqueue into its own code module.
Moving the system workqueue to its own code module allows removing the
NANO_WORKQUEUE and SYSTEM_WORKQUEUE kconfig options, and compiling the
common workqueue code and system workqueue all the time. They are only
linked into the final image if a reference to them exists, the same as
the other kernel modules.
Change-Id: I6f48d2542bda24f4702e7c2e317818dd082b3c11
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
It is now possible to specify the expiry and stop functions
of a statically-defined timer, just as can be done for a
dynamically-defined timer.
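For example (my_expiry_fn and my_stop_fn are hypothetical application
callbacks):

    K_TIMER_DEFINE(my_timer, my_expiry_fn, my_stop_fn);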
[Part of fix to ZEP-1186]
Change-Id: Ibb9096f3fdafdc6c904184587f86ecd52accdd66
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Adds a standard prefix to the symbolic option that flags a thread
as essential to system operation.
Change-Id: Ia904a81ce343fdd1cd44caaaeae641d822777f9b
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Port the power management test app to use unified kernel.
Change-Id: I2f10748be5ca7d9792f6e97c35f5f2aabab769e7
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
QMSI 1.3 natively supports restoring the SoC and peripherals
after sleep.
The Zephyr Power Management shim layer is updated
in order to support QMSI functions.
The following functions have been added:
void _sys_soc_set_power_state(enum power_state);
void _sys_soc_power_state_post_ops(void);
In order to fully support deep sleep, the function
_sys_soc_set_power_state now supports saving and
restoring CPU context and returning to the application.
The _sys_soc_set_power_state function also abstracts
QMSI CPU states and enables the application to choose
between the C1, C2, or C2LP states.
The QMSI power states are mapped as follows:
SYS_SOC_POWER_STATE_CPU_LPS -> power_cpu_c2lp
SYS_SOC_POWER_STATE_CPU_LPS_1 -> power_cpu_c2
SYS_SOC_POWER_STATE_CPU_LPS_2 -> power_cpu_c1
SYS_SOC_POWER_STATE_DEEP_SLEEP -> power_soc_deep_sleep
SYS_SOC_POWER_STATE_DEEP_SLEEP_1 -> power_soc_sleep
The following functions have been removed:
void _sys_soc_set_power_policy(uint32_t pm_policy);
int _sys_soc_get_power_policy(void);
FUNC_NORETURN void _sys_soc_put_deep_sleep(void);
void _sys_soc_put_low_power_state(void);
void _sys_soc_deep_sleep_post_ops(void);
Those changes are propagated to the samples.
All calls to QMSI are removed.
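As an illustrative use (a sketch only, using the function prototypes and
state names listed above):

    /* request deep sleep; execution resumes here after wake-up */
    _sys_soc_set_power_state(SYS_SOC_POWER_STATE_DEEP_SLEEP);

    /* perform any post-wake operations before continuing */
    _sys_soc_power_state_post_ops();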
Jira: ZEP-1045, ZEP-993, ZEP-1047
Change-Id: I26822727985b63be0a310cc3590a3e71b8e72c8c
Signed-off-by: Julien Delayen <julien.delayen@intel.com>
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
Defines an object tracing list for each kernel object type
that supports object tracing, and ensures that both statically
and dynamically defined objects are added to the appropriate list.
Ensure that each static kernel object is grouped together with
the other static objects of the same type. Revise the initialization
function for each kernel type (or create it, if needed) so that
each static object is added to the object tracing list for its
associated type.
Note 1: Threads are handled a bit differently than other kernel
object types. A statically-defined thread is added to the thread
list when the thread is started, not when the kernel initializes.
Also, a thread is removed from the thread list when the thread
terminates or aborts, unlike other types of kernel objects which
are never removed from an object tracing list. (Such support would
require the creation of APIs to "uninitialize" the kernel object.)
Note 2: The list head variables for all kernel object types
are now explicitly defined. However, the list head variable for
the ring buffer type continues to be implicitly defined for the
time being, since it isn't considered to be a core kernel object
type.
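A minimal sketch of the pattern for one type (the list head and macro names
are assumptions, shown only to illustrate the list head plus the init-time
hookup):

    struct k_sem *_trace_list_k_sem;           /* list head for semaphores */

    void k_sem_init(struct k_sem *sem, unsigned int initial_count,
                    unsigned int limit)
    {
        /* ... normal semaphore initialization ... */
        SYS_TRACING_OBJ_INIT(k_sem, sem);      /* add to the tracing list */
    }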
Change-Id: Ie24d41023e05b3598dc6b344e6871a9692bba02d
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
Allows event objects to pend signals cumulatively by using the
semaphore in a non-binary way.
Jira: ZEP-928
Change-Id: I3ce8a075ef89309118596ec5781c15d4f3289d34
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Enables boot time timestamps for the unified kernel.
Also splits the source code into microkernel and nanokernel versions
instead of having common code. Not only does this make the code for
each project easier to read, but it also easily allows the nanokernel
version to link against the correct version of main().
Change-Id: Ie0afa2272c3347ebdacc0e3daeebbfe9583fe596
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
Before, the kernel would run the main() function twice; first
as an entry in k_task_list, and then again from _main(). The
_main() invocation would be using a potentially insufficient stack
size.
Now, if an MDEF file declares a main() thread, it is invoked from
_main(), honoring the desired priority and stack size.
Issue: ZEP-1145
Change-Id: I1abf38fc038e270059589b11d96fae1b3f265208
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Event is such an overloaded and generic term (event logger, *kernel*
event logger, "protocol" events in other subsystems, etc.) that it is
confusing as the name of an object. Events are kinda like signals, but not
exactly, so we chose not to name them 'signals' to prevent further
confusion. "Alerts" felt like a good fit, since they are used to "alert"
an application that something of significance should be addressed and
because an "alert handler" can be proactively registered with an alert.
Change-Id: Ibfeb5eaf0e6e62702ac3fec281d17f8a63145fa1
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This better aligns with the actual functionality of the object.
Change-Id: I70abf54f994e92abd7367251089ea4f735d273fe
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Fix errors in thread rescheduling:
Fix the Fast IRQ exit routine error where it rescheduled threads if
(prio >= 0) || (sched_locked == 0) || (next_thread == _current),
while the correct condition for thread rescheduling is
(prio >= 0) && (sched_locked == 0) && (next_thread != _current).
Fix the regular IRQ exit routine error where it rescheduled
threads when (next_thread == _current) instead of
(next_thread != _current).
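As a sketch of the corrected decision (the field paths are assumptions, for
illustration only):

    /* switch only if the current thread is preemptible, the scheduler
     * is not locked, and a different thread is ready to run
     */
    if ((_current->base.prio >= 0) &&
        (_current->base.sched_locked == 0) &&
        (next_thread != _current)) {
        /* perform the context switch */
    }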
Increased IDLE_STACK_SIZE for the ARC architecture to hold saved
registers.
Change-Id: I1d87a968e231e13822844b7564567e6ca310cde2
Signed-off-by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
They were the same, so standardize on the lowercase one.
Change-Id: I8bca080e45f3e0970697d4451e468b9081f96f5f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>