Using the ts/tick conversion functions provided in clock.h
We do this because we want to speed up the time calculation, so change:
clock_time2ticks(), clock_ticks2time(), clock_timespec_add(),
clock_timespec_compare(), clock_timespec_subtract()... into macros.
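For illustration only, a minimal sketch of the idea (MY_NSEC_PER_TICK and MY_TIMESPEC_TO_TICKS are hypothetical names, not the actual clock.h definitions); a macro removes the function-call overhead on this hot path:

#include <time.h>

/* Hypothetical tick period; the real value comes from the configured
 * system timer frequency.
 */

#define MY_NSEC_PER_TICK   10000000            /* 100Hz system tick */
#define MY_NSEC_PER_SEC    1000000000

/* Convert a struct timespec to ticks without a function call */

#define MY_TIMESPEC_TO_TICKS(ts) \
  ((ts)->tv_sec * (MY_NSEC_PER_SEC / MY_NSEC_PER_TICK) + \
   (ts)->tv_nsec / MY_NSEC_PER_TICK)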
Signed-off-by: ligd <liguiding1@xiaomi.com>
Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
Since pthread_mutex is implemented with a semaphore, it is impossible to see in ps who holds the lock and is causing the wait.
Replace the sem-based implementation with a mutex implementation to solve the above problem.
Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
In order to ensure that a detached thread obtains the correct return
value from pthread_join()/pthread_cancel(), the detached thread
creates a joininfo structure to save the detached status after the
thread is destroyed. If there are too many detached threads in the
process group, the joininfo structures consume too much memory,
which is not friendly to embedded MCU devices.
This commit keeps the semantics introduced in #11898 but will no longer
save joininfo for detached threads, to avoid wasting memory.
Signed-off-by: chao an <anchao@lixiang.com>
1. Add support for joining the main task:
| #include <pthread.h>
| #include <stdio.h>
| #include <unistd.h>
|
| static pthread_t self;
|
| static void *join_task(void *arg)
| {
|   int ret;
|
|   ret = pthread_join(self, NULL); <--- /* Fix: Task could not be joined */
|   printf("pthread_join() returned %d\n", ret);
|   return NULL;
| }
|
| int main(int argc, char *argv[])
| {
|   pthread_t thread;
|
|   self = pthread_self();
|
|   pthread_create(&thread, NULL, join_task, NULL);
|   sleep(1);
|
|   pthread_exit(NULL);
|   return 0;
| }
2. Detaching an active thread will not allocate an additional join structure; it will just update the task flag.
3. Remove the return-value waiting lock logic (data_sem);
the return value will be stored in the waiting tcb.
4. Revise the return value of pthread_join() to be consistent with Linux,
e.g.:
Joining a detached and canceled thread should return EINVAL, not ESRCH
(a minimal sketch follows the quoted spec below).
https://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_join.html
[EINVAL]
The value specified by thread does not refer to a joinable thread.
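A minimal sketch of the expected behavior (assumed test flow, not taken from an actual test suite):

#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <unistd.h>

static void *worker(void *arg)
{
  (void)arg;
  pause();                        /* Wait here until canceled */
  return NULL;
}

int main(void)
{
  pthread_t tid;

  pthread_create(&tid, NULL, worker, NULL);
  pthread_detach(tid);            /* The thread is no longer joinable */
  pthread_cancel(tid);

  /* Joining a detached (and canceled) thread is expected to fail with
   * EINVAL rather than ESRCH.
   */

  assert(pthread_join(tid, NULL) == EINVAL);
  return 0;
}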
NOTE:
This PR will not increase stack usage, but struct tcb_s will grow by 32 bytes.
Signed-off-by: chao an <anchao@lixiang.com>
Move the task group into task_tcb_s to avoid accessing the allocator and improve performance.
For task termination, the time consumption is reduced by ~2us (Tricore TC397 @ 300MHz):
15.97(us) -> 13.55(us)
Signed-off-by: chao an <anchao@lixiang.com>
This moves task / thread cancel point logic from the NuttX kernel into
libc, while the data needed by the cancel point logic is moved to TLS.
The change is an enabler to move user-space APIs to libc as well, for
a coherent user/kernel separation.
Fix the ltp pthread_cond_wait_1 test failure: the child should inherit the parent's
priority by default.
nsh> ltp_pthread_cond_wait_1
Error: the policy or priority not correct
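For reference, a minimal sketch of what the test expects (assumed flow, not the LTP source): a child created with default attributes should report the same scheduling policy and priority as its parent.

#include <assert.h>
#include <pthread.h>
#include <sched.h>

static int parent_policy;
static struct sched_param parent_param;

static void *child(void *arg)
{
  struct sched_param param;
  int policy;

  (void)arg;
  pthread_getschedparam(pthread_self(), &policy, &param);
  assert(policy == parent_policy);
  assert(param.sched_priority == parent_param.sched_priority);
  return NULL;
}

int main(void)
{
  pthread_t tid;

  pthread_getschedparam(pthread_self(), &parent_policy, &parent_param);
  pthread_create(&tid, NULL, child, NULL);      /* Default attributes */
  pthread_join(tid, NULL);
  return 0;
}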
Signed-off-by: yangyalei <yangyalei@xiaomi.com>
1. Update all CMakeLists.txt to adapt to new layout
2. Fix cmake build break
3. Update all new file license
4. Fully compatible with the current compilation environment (use configure.sh or cmake as you choose)
------------------
How to test
From within nuttx/. Configure:
cmake -B build -DBOARD_CONFIG=sim/nsh -GNinja
cmake -B build -DBOARD_CONFIG=sim:nsh -GNinja
cmake -B build -DBOARD_CONFIG=sabre-6quad/smp -GNinja
cmake -B build -DBOARD_CONFIG=lm3s6965-ek/qemu-flat -GNinja
(or full path in custom board) :
cmake -B build -DBOARD_CONFIG=$PWD/boards/sim/sim/sim/configs/nsh -GNinja
This uses the Ninja generator (install with: sudo apt install ninja-build). To build:
$ cmake --build build
menuconfig:
$ cmake --build build -t menuconfig
--------------------------
2. cmake/build: reformat the cmake style by cmake-format
https://github.com/cheshirekow/cmake_format
$ pip install cmakelang
$ for i in `find -name CMakeLists.txt`;do cmake-format $i -o $i;done
$ for i in `find -name '*.cmake'`;do cmake-format $i -o $i;done
Co-authored-by: Matias N <matias@protobits.dev>
Signed-off-by: chao an <anchao@xiaomi.com>
A normal user task that calls pthread_exit() will crash at a DEBUGASSERT,
because pthread_exit() was limited to user pthread tasks.
test case:
#include <nuttx/compiler.h>
#include <pthread.h>

int main(int argc, FAR char *argv[])
{
  pthread_exit(NULL);
  return 0;
}
>> test
>> echo $?
>> 0
Signed-off-by: fangxinyong <fangxinyong@xiaomi.com>
pthread_cond_wait() should be an atomic operation with respect to the mutex lock/unlock.
Since the sched_lock() was wrongly deleted in a previous commit,
a context switch could occur after the mutex was unlocked:
--------------------------------------------------------------------
Task1(Priority 100)              |   Task2(Priority 101)
                                 |
pthread_mutex_lock(mutex);       |
  |                              |
pthread_cond_wait(cond, mutex)   |
  |                              |
  |                              |
  |->enter_critical_section()    |
  |->pthread_mutex_give(mutex)  ---->  pthread_mutex_lock(mutex);   // context switch to high priority task
  |                              |     pthread_cond_signal(cond);   // signal before wait
  |                             <----  pthread_mutex_unlock(mutex); // switch back to original task
  |->pthread_sem_take(cond->sem) |     // try to wait for the signal, Deadlock.
  |->leave_critical_section()    |
  |                              |
  |->pthread_mutex_take(mutex)   |
  |                              |
pthread_mutex_lock(mutex);       |
---------------------------------------------------------------------
This PR brings back sched_lock()/sched_unlock() to prevent the context switch and ensure atomicity.
Signed-off-by: chao an <anchao@xiaomi.com>
Resolving the issue with the ltp_interfaces_pthread_join_6_2 test case.
In SMP mode, the pthread may still be in the process of exiting when
pthread_join returns, and calling pthread_join again at this time will
result in an error. The error code returned should be ESRCH.
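A minimal sketch of the scenario (assumed flow, not the LTP source): once a thread has been joined, its id no longer refers to a live thread, so a second join should report ESRCH even if the exit is still completing on another CPU.

#include <assert.h>
#include <errno.h>
#include <pthread.h>

static void *worker(void *arg)
{
  (void)arg;
  return NULL;
}

int main(void)
{
  pthread_t tid;

  pthread_create(&tid, NULL, worker, NULL);
  assert(pthread_join(tid, NULL) == 0);         /* First join succeeds */
  assert(pthread_join(tid, NULL) == ESRCH);     /* Second join: ESRCH */
  return 0;
}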
Signed-off-by: zhangyuan21 <zhangyuan21@xiaomi.com>
Use PTHREAD_CLEANUP_STACKSIZE to enable or disable the pthread_cleanup_push() and pthread_cleanup_pop() interfaces.
Reasons: (1) same approach as TLS_TASK_NELEM; (2) there is no need to use two variables.
Signed-off-by: yanghuatao <yanghuatao@xiaomi.com>
This is preparation to use kernel stack for everything when the user
process enters the kernel. Now the user stack is in use when the user
process runs a system call, which might not be the safest option.
https://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_detach.html
If an implementation detects that the value specified by the thread argument
to pthread_detach() does not refer to a joinable thread, it is recommended
that the function should fail and report an [EINVAL] error.
If an implementation detects use of a thread ID after the end of its lifetime,
it is recommended that the function should fail and report an [ESRCH] error.
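A minimal sketch of the two cases above (assumed flow, illustrative only):

#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <unistd.h>

static void *worker(void *arg)
{
  (void)arg;
  sleep(2);                       /* Keep the thread alive for a while */
  return NULL;
}

int main(void)
{
  pthread_t tid;

  /* Detaching a thread that is no longer joinable -> EINVAL */

  pthread_create(&tid, NULL, worker, NULL);
  assert(pthread_detach(tid) == 0);
  assert(pthread_detach(tid) == EINVAL);

  /* Using a thread id after the end of its lifetime -> ESRCH */

  pthread_create(&tid, NULL, worker, NULL);
  pthread_join(tid, NULL);
  assert(pthread_detach(tid) == ESRCH);
  return 0;
}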
Signed-off-by: zhangyuan21 <zhangyuan21@xiaomi.com>
The current implementation requires the use of enter_critical_section, so the source code needs to be moved to kernel space
Signed-off-by: hujun5 <hujun5@xiaomi.com>
pthread_cond_wait() can be preempted after releasing the lock, and sched_lock() cannot lock out threads on other CPUs, so use enter_critical_section() instead.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Refer to issue #8867 for details and rationale.
Convert sigset_t to an array type so that more than 32 signals can be supported.
Why not use a uint64_t?
- Using a uint32_t is more flexible if we decide to increase the number of signals beyond 64.
- 64-bit accesses are not atomic, at least not on 32-bit ARMv7-M and similar.
- Keeping the base type as uint32_t does not introduce additional overhead due to padding to achieve 64-bit alignment of uint64_t.
- Some architectures still supported by NuttX do not support uint64_t types.
Increased the number of signals to 64. This matches Linux. This will support all signals defined by Linux and also 32 real-time signals (also like Linux).
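For illustration, a minimal sketch of an array-based set built from 32-bit words (hypothetical names, not the actual NuttX definitions):

#include <stdint.h>

#define MY_NSIG           64                    /* Total number of signals */
#define MY_SIGSET_NELEM   (MY_NSIG / 32)        /* 32-bit words per set */

typedef struct
{
  uint32_t sig[MY_SIGSET_NELEM];                /* Two words cover 64 signals */
} my_sigset_t;

/* Add signal number signo (1..MY_NSIG) to the set */

static inline void my_sigaddset(my_sigset_t *set, int signo)
{
  set->sig[(signo - 1) >> 5] |= (uint32_t)1 << ((signo - 1) & 31);
}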
This is a work in progress; a draft PR that you are encouraged to comment on.
Remove calls to the userspace API exit() from the kernel. The problem
with doing such calls is that the exit functions are called with kernel
mode privileges which is a big security no-no.
There is currently a big problem in the address environment handling:
the address environment is released too soon when the process is
exiting. The current MMU mappings will always be the exiting process's, which means
the system needs them AT LEAST until the next context switch happens. If
the next thread is a kernel thread, the address environment is needed for
longer.
Kernel threads "lend" the address environment of the previous user process.
This is beneficial in two ways:
- The kernel processes do not need an allocated address environment
- When a context switch happens from user -> kernel or kernel -> kernel,
the TLB does not need to be flushed. This must be done only when
changing to a different user address environment.
Another issue is when a new process is created; the address environment
of the new process must be temporarily instantiated by up_addrenv_select().
However, the system scheduler does not know that the process has a different
address environment to its own and when / if a context restore happens, the
wrong MMU page directory is restored and the process will either crash or
do something horribly wrong.
The following changes are needed to fix the issues (a simplified sketch follows the list):
- Add mm_curr which is the current address environment of the process
- Add a reference counter to safeguard the address environment
- Whenever an address environment is mapped to the MMU, its reference counter
  is incremented
- Whenever an address environment is unmapped from the MMU, its reference
  counter is decremented and tested. If there are no more references, the
  address environment is dropped and its memory released as well
- To limit the context switch delay, the address environment is freed in
a separate low priority clean-up thread (LPWORK)
- When a process temporarily instantiates another process's address
environment, the scheduler will now know of this and will restore the
correct mappings to MMU
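A simplified sketch of the reference counting described above (hypothetical names, not the actual NuttX interfaces):

#include <stdint.h>
#include <stdlib.h>

struct my_addrenv
{
  uint16_t refs;                  /* MMU mappings referencing this env */
};

/* Called whenever the address environment is mapped to the MMU */

static void my_addrenv_take(struct my_addrenv *env)
{
  env->refs++;
}

/* Called whenever the address environment is unmapped from the MMU.
 * Dropping the last reference releases the memory; in the real system
 * this is deferred to the low-priority work queue (LPWORK) to keep
 * the context switch path short.
 */

static void my_addrenv_drop(struct my_addrenv *env)
{
  if (--env->refs == 0)
    {
      free(env);                  /* Deferred to LPWORK in practice */
    }
}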
Why is this not causing more noticeable issues? The problem only happens
under the aforementioned special conditions, and only if a context switch or
IRQ occurs during this time.