/*
 * Copyright (c) 2024, Tenstorrent AI ULC
 *
 * SPDX-License-Identifier: Apache-2.0
 */
posix: eventfd: revise locking, signaling, and allocation

TL;DR - a complete rewrite.

Previously, the prototypical `eventfd()` usage (one thread performing a
blocking `read()`, followed by another thread performing a `write()`) would
deadlock Zephyr. This shortcoming has existed in Zephyr's `eventfd()`
implementation from the start, and the suggested workaround was to use
`poll()`. However, that is not sufficient for integrating third-party
libraries that may rely on proper `eventfd()` blocking operations such as
`eventfd_read()` and `eventfd_write()`.

The culprit was the per-fdtable-entry `struct k_mutex`. Here we perform a
minor revision of the locking strategy and employ `k_condvar_broadcast()`
and `k_condvar_wait()` to signal and wait on the holder of a given
`struct k_mutex`. It is important to note, however, that the primary means
of synchronizing the eventfd state is actually the eventfd spinlock. The
fdtable mutex and condition variable are mainly used for blocking I/O
(read, write, close) and are not used in the code path of non-blocking
reads.

The `wait_q` and `k_poll_signal` entries were removed from `struct eventfd`
as they were unnecessary.

Additionally, switch to using a bitarray, which is likely faster than a
linear search when allocating and deallocating eventfd resources.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
#include <zephyr/posix/sys/eventfd.h>
#include <zephyr/zvfs/eventfd.h>
int eventfd(unsigned int initval, int flags)
{
	return zvfs_eventfd(initval, flags);
}
int eventfd_read(int fd, eventfd_t *value)
{
	return zvfs_eventfd_read(fd, value);
}
int eventfd_write(int fd, eventfd_t value)
{
	return zvfs_eventfd_write(fd, value);
}