This patch fixes a userspace header conflict. Architecture-related definitions and APIs should not be exposed to userspace.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
When we do not drop the notifier from g_notifier_pending, we need an
isolated dq entry for this queue; otherwise the queued work_s's dq entry
may be modified by the work queue, breaking the chain of
g_notifier_pending.
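A rough sketch of the resulting layout (the type and field names here are
illustrative assumptions, not the exact kwork_notifier.c definitions):

  #include <nuttx/queue.h>
  #include <nuttx/wqueue.h>

  struct notifier_entry_s      /* Hypothetical entry name */
  {
    struct work_s work;        /* Reused by the work queue: its queue
                                * entry is rewritten when the work is
                                * scheduled, so it cannot double as the
                                * g_notifier_pending link. */
    dq_entry_t pending;        /* Isolated entry used only to chain the
                                * notifier into g_notifier_pending. */

    /* ... notification info, key, etc. ... */
  };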
Signed-off-by: Zhe Weng <wengzhe@xiaomi.com>
Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.
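As an illustration, the identifier is a single machine-readable comment at
the top of each source file (Apache-2.0 is shown here only as an example;
the actual identifier depends on the file's license):

  /* SPDX-License-Identifier: Apache-2.0 */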
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
Resolution of Issue 619 will require multiple steps. This is part of the first step in that resolution: every call to nxsem_wait_uninterruptible() must handle its return value properly. This commit covers only the files under graphics/, mm/, net/, sched/, and wireless/bluetooth.
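The required pattern is roughly the following (a minimal sketch; the
semaphore and function names are hypothetical):

  #include <nuttx/semaphore.h>

  static sem_t g_some_sem;     /* Hypothetical semaphore */

  static int wait_for_resource(void)
  {
    int ret;

    /* nxsem_wait_uninterruptible() returns a negated errno value on
     * failure (e.g. -ECANCELED); it must not be ignored.
     */

    ret = nxsem_wait_uninterruptible(&g_some_sem);
    if (ret < 0)
      {
        return ret;            /* Propagate the error to the caller */
      }

    /* ... the protected resource is now held ... */

    return 0;
  }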
Still to do: Files under fs/, drivers/, and arch/. The last is 116 files and will take some effort.
Fix kworker dqueue corruption when there is garbage in the work->worker handler
Change-Id: Ice5e66e8ed080a7539d5fe02f946a66794cbda7d
Signed-off-by: chao.an <anchao@xiaomi.com>
* Simplify EINTR/ECANCEL error handling
1. Add semaphore uninterruptible wait function
2. Replace the semaphore wait loop with a single uninterruptible wait (see the sketch after this list)
3. Replace all sem_xxx with nxsem_xxx
* Unify the void cast usage
1. Remove the void cast for function calls because many places ignore the returned value without a cast
2. Replace the void cast for variables with the UNUSED macro
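A before/after sketch of the change (hedged: a generic semaphore is used
here rather than any specific call site, and header locations are
approximate):

  #include <nuttx/config.h>

  #include <errno.h>
  #include <semaphore.h>

  #include <nuttx/semaphore.h>

  static sem_t g_sem;

  static void example(void)
  {
    int ret;

    /* Before: an interruptible wait retried in a loop on EINTR */

    do
      {
        ret = sem_wait(&g_sem);
      }
    while (ret < 0 && errno == EINTR);

    /* After: a single uninterruptible wait (returns a negated errno) */

    ret = nxsem_wait_uninterruptible(&g_sem);

    /* Where a result is deliberately ignored, UNUSED() replaces a bare
     * (void) cast.
     */

    UNUSED(ret);
  }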
Author: Gregory Nutt <gnutt@nuttx.org>
Run all .h and .c files modified in the last PR through nxstyle.
Author: Xiang Xiao <xiaoxiang@xiaomi.com>
Net cleanup (#17)
* Fix the semaphore usage issue found in tcp/udp
1. The counting semaphore must have priority inheritance disabled (see the sketch after this list)
2. Loop again if net_lockedwait returns -EINTR
3. Call nxsem_trywait to avoid the race condition
4. Call nxsem_post instead of sem_post
* Put the work notifier into a free list to avoid heap fragmentation in the long run. Since the allocation strategy is encapsulated internally, we can even refine the implementation later.
* The network stack shouldn't allocate memory in the poll implementation, to avoid heap fragmentation in the long run. Other modifications include:
1. Select MM_IOB automatically since the ICMP[v6] socket can't work without the read-ahead buffer
2. Remove the net lock since xxx_callback_free already does the same thing
3. TCP/UDP poll should work even if the read-ahead buffer isn't enabled at all
* Add the NET_ prefix to the UDP_NOTIFIER and TCP_NOTIFIER options to align with the other UDP/TCP option naming conventions
* Remove the unused _SF_[IDLE|ACCEPT|SEND|RECV|MASK] flags: there is code to set/clear these flags, but nobody checks them.
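A sketch of the semaphore rules from the first bullet above (illustrative
only: the names are not the actual tcp/udp code, and net_lockedwait() is
the net-layer wait helper referenced above):

  #include <errno.h>

  #include <nuttx/semaphore.h>
  #include <nuttx/net/net.h>

  static sem_t g_event_sem;

  static void event_sem_init(void)
  {
    /* A counting semaphore used purely for signaling must not use
     * priority inheritance.
     */

    nxsem_init(&g_event_sem, 0, 0);
    nxsem_set_protocol(&g_event_sem, SEM_PRIO_NONE);
  }

  static int event_wait(void)
  {
    int ret;

    /* Loop again if the locked wait is interrupted by a signal */

    do
      {
        ret = net_lockedwait(&g_event_sem);
      }
    while (ret == -EINTR);

    return ret;
  }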
sched/wqueue/kwork_notifier.c: Redesign some data structures. struct work_s must appear at the beginning of the notifier entry structure because it contains the work queue indices. This solves a hardfault issue.
net/tcp/tcp_netpoll.c: tcp_iob_work() needs to free the allocated argument when it is finished.
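The pattern is roughly the following (a hedged sketch; the argument type
shown is hypothetical, not the actual tcp_netpoll.c structure):

  #include <nuttx/config.h>

  #include <nuttx/kmalloc.h>
  #include <nuttx/wqueue.h>

  struct tcp_poll_arg_s        /* Hypothetical argument type */
  {
    int placeholder;
  };

  static void tcp_iob_work(FAR void *arg)
  {
    FAR struct tcp_poll_arg_s *pinfo = (FAR struct tcp_poll_arg_s *)arg;

    /* ... perform the deferred poll processing using pinfo ... */

    /* The argument was allocated by the code that queued this work, so
     * the worker must free it when it is finished.
     */

    kmm_free(pinfo);
  }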
net/tcp/tcp_send_buffered.c: Extend psock_tcp_cansend() so that it also requires that at least one IOB is available.
mm/iob: iob_navail() was returning the number of free IOB chain queue entries, not the number of free IOBs. It was completely misnamed.
net/tcp/tcp_netpoll.c: Add logic to receive notifications when IOBs are freed (needs CONFIG_NET_TCP_WRITE_BUFFERS and CONFIG_IOB_NOTIFIER). At present this does nothing, because the logic in psock_tcp_cansend() does not check for the availability of IOBs. That will change.
Squashed commit of the following:
Fix up some final compile issues.
net/netdev: Convert the network down notification logic to use the new wqueue-based notification facility.
net/udp: Convert the UDP readahead notification logic to use the new wqueue-based notification facility.
net/tcp: Convert the TCP readahead notification logic to use the new wqueue-based notification facility.
mm/iob: Convert the IOB notification logic to use the new wqueue-based notification facility.
sched/wqueue: Signals are not a good IPC for supporting the target poll functionality, for several reasons: the limited amount of data that can be passed with a signal, and the fact that in the protected and kernel build modes, having user threads execute signal handlers out of protected kernel memory is problematic. Instead, convert the same logic to perform the notifications via a function callback on the high-priority work queue.
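A minimal sketch of the callback-based approach (the function and variable
names are illustrative, not the actual notifier implementation):

  #include <nuttx/config.h>
  #include <nuttx/wqueue.h>

  static struct work_s g_notify_work;

  /* Runs on the high-priority work queue when the event is signaled */

  static void notify_worker(FAR void *arg)
  {
    /* ... deliver the notification to the registered consumer ... */
  }

  /* Called from the event source instead of raising a signal */

  static void signal_event(void)
  {
    work_queue(HPWORK, &g_notify_work, notify_worker, NULL, 0);
  }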