/*
 * Copyright (c) 2016 Wind River Systems, Inc.
 *
 * SPDX-License-Identifier: Apache-2.0
 */

/**
 * @file
 * @brief ARM Cortex-M k_thread_abort() routine
 *
 * The ARM Cortex-M architecture provides its own k_thread_abort() to deal
 * with different CPU modes (handler vs thread) when a thread aborts. When its
 * entry point returns or when it aborts itself, the CPU is in thread mode and
 * must call _Swap() (which triggers a service call), but when in handler
 * mode, the CPU must exit handler mode to cause the context switch, and thus
 * must queue the PendSV exception.
 */

#include <kernel.h>
#include <kernel_structs.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <ksched.h>
#include <kswap.h>
#include <wait_q.h>
#include <misc/__assert.h>

extern void _k_thread_single_abort(struct k_thread *thread);

void _impl_k_thread_abort(k_tid_t thread)
{
	unsigned int key;

	key = irq_lock();

	__ASSERT(!(thread->base.user_options & K_ESSENTIAL),
		 "essential thread aborted");

	_k_thread_single_abort(thread);
	_thread_monitor_exit(thread);
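
	/* If the thread aborted itself, we cannot simply return to the
	 * caller: a context switch has to happen right away.
	 */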
	if (_current == thread) {
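		/* VECTACTIVE == 0: the CPU is in thread mode, so _Swap()
		 * (which triggers a service call) can be invoked directly.
		 */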
		if ((SCB->ICSR & SCB_ICSR_VECTACTIVE_Msk) == 0) {
			_Swap(key);
			CODE_UNREACHABLE;
		} else {
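			/* Handler mode: a direct swap is not possible, so
			 * pend the PendSV exception; the context switch
			 * happens when the current exception exits.
			 */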
			SCB->ICSR |= SCB_ICSR_PENDSVSET_Msk;
		}
	}

	/* The abort handler might have altered the ready queue; reschedule
	 * without yielding, respecting the cooperative priority of the
	 * current thread.
	 */
	_reschedule_noyield(key);
}
|