Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
Normally, `SO_ERROR` for disconnection events is set by `tcp_poll_eventhandler`, but when the socket is closed before poll, we should also set `SO_ERROR`.
On Linux, `tcp_poll` returns an `EPOLLERR` event when `sk->sk_err` holds a value, but doesn't let `poll` fail (doesn't set `errno`). https://github.com/torvalds/linux/blob/v6.5/net/ipv4/tcp.c#L594-L596
Note: `sk->sk_err` can be retrieved via the `SO_ERROR` socket option on Linux, so `POLLERR` always comes together with a pending `SO_ERROR`.
Common libraries like curl may try to read `SO_ERROR` on `POLLERR`.
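For reference, a minimal sketch of that consumer-side pattern (plain POSIX, not code from this patch):

#include <poll.h>
#include <sys/socket.h>

/* On POLLERR, read the pending error via SO_ERROR instead of
 * expecting poll() itself to fail with errno set.
 */

static int wait_and_get_so_error(int sockfd)
{
  struct pollfd pfd;
  int soerr = 0;
  socklen_t len = sizeof(soerr);

  pfd.fd      = sockfd;
  pfd.events  = POLLIN;
  pfd.revents = 0;

  if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLERR) != 0)
    {
      getsockopt(sockfd, SOL_SOCKET, SO_ERROR, &soerr, &len);
    }

  return soerr;
}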
Signed-off-by: Zhe Weng <wengzhe@xiaomi.com>
When a poll operation is performed while the TCP connection state is TCP_ALLOCATED, the eventset (POLLERR|POLLHUP) is returned, causing libuv's poll_multiple_handles test to fail.
Verification: Use the libuv test case `uv_run_tests poll_multiple_handles`.
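A minimal sketch of the guard this implies in tcp_pollsetup() (flag macros and field names assumed from recent NuttX headers; not the literal patch):

  /* Only report POLLERR|POLLHUP for a connection that progressed past
   * allocation; a freshly allocated, never-used conn is not an error
   * or hangup condition.
   */

  if (!_SS_ISCONNECTED(conn->sconn.s_flags) &&
      !_SS_ISLISTENING(conn->sconn.s_flags) &&
      conn->tcpstateflags != TCP_ALLOCATED)
    {
      fds->revents |= (POLLERR | POLLHUP);
    }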
Signed-off-by: liqinhui <liqinhui@xiaomi.com>
Net poll teardown is not protected by the net lock. If the conn is
released before teardown, an assertion failure is triggered in the
free dev callback. This patch adds the net lock around net poll
teardown to fix the race condition:
nuttx/libs/libc/assert/lib_assert.c:36
nuttx/net/devif/devif_callback.c:85
nuttx/net/tcp/tcp_netpoll.c:405
nuttx/fs/vfs/fs_poll.c:244
nuttx/fs/vfs/fs_poll.c:500
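A minimal sketch of the shape of the fix (struct and helper names follow net/tcp/tcp_netpoll.c; abbreviated, not the literal patch):

static int tcp_pollteardown(FAR struct socket *psock,
                            FAR struct pollfd *fds)
{
  FAR struct tcp_conn_s *conn = psock->s_conn;
  FAR struct tcp_poll_s *info = fds->priv;

  /* Hold the net lock across teardown so the conn cannot be released
   * (and its callback list freed) underneath us.
   */

  net_lock();
  tcp_callback_free(conn, info->cb);
  net_unlock();

  fds->priv = NULL;
  return OK;
}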
Signed-off-by: chao.an <anchao@xiaomi.com>
Do not bother to preserve segment boundaries in the tcp
readahead queues.
* Avoid wasting the tail IOB space for each segment.
Instead, pack the newly received data into the tail space
of the last IOB. Also, advertise the tail space as
a part of the window.
* Use IOB chain directly. Eliminate IOB queue overhead.
* Allow accepting only a part of a segment.
* This change improves memory efficiency.
And, probably more importantly, it allows less-confusing
recv window advertisement behavior.
Previously, even when we advertised an N-byte window,
we often couldn't actually accept N bytes. Depending on
the segment sizes and IOB configurations, this was causing
segment drops.
Also, the previous code moved the right edge of the
window back and forth too often, even when nothing in
the system was competing for the IOBs. Shrinking the
window that way is a well-known recipe for confusing
the peer stack.
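A minimal sketch of the tail-packing idea (IOB field names from include/nuttx/mm/iob.h; the helper and its exact bookkeeping are hypothetical, not the actual patch):

#include <string.h>
#include <nuttx/mm/iob.h>

/* Hypothetical helper: append newly received bytes into the unused
 * tail space of the last IOB in the read-ahead chain, so segment
 * boundaries are not preserved and no tail space is wasted.  Returns
 * the number of bytes actually packed.
 */

static uint16_t readahead_pack(FAR struct iob_s *head,
                               FAR const uint8_t *src, uint16_t len)
{
  FAR struct iob_s *tail = head;
  unsigned int space;
  uint16_t ncopied = 0;

  /* Walk to the last IOB in the chain */

  while (tail->io_flink != NULL)
    {
      tail = tail->io_flink;
    }

  /* Fill the remaining tail space; this same space can also be
   * advertised as part of the receive window.
   */

  space = CONFIG_IOB_BUFSIZE - (tail->io_offset + tail->io_len);
  if (space > 0)
    {
      ncopied = len < space ? len : (uint16_t)space;
      memcpy(&tail->io_data[tail->io_offset + tail->io_len],
             src, ncopied);

      tail->io_len    += ncopied;
      head->io_pktlen += ncopied;
    }

  return ncopied;
}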
It's enough to check buffer availability in the net event handler.
Change-Id: I2d7c7a03675cf6eff6ffb42a81b7c7245253e92c
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
Here is the mailing-list thread about why it is better to remove the option:
https://groups.google.com/forum/#!topic/nuttx/AaNkS7oU6R0
Change-Id: Ib66c037752149ad4b2787ef447f966c77aa12aad
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
* fix ieee802154/ieee802154_input.c:179:7: error: too few arguments to function 'iob_free'
iob_free(container->ic_iob);
^~~~~~~~
ieee802154/ieee802154_input.c:180:7: error: too many arguments to function 'ieee802154_container_free'
ieee802154_container_free(container, IOBUSER_NET_SOCK_IEEE802154);
^~~~~~~~~~~~~~~~~~~~~~~~~
* fix udp/udp_netpoll.c:327:10: warning: 'ret' may be used uninitialized in this function [-Wmaybe-uninitialized]
* fix local/local_netpoll.c:154:15: warning: implicit declaration of function 'nxsem_post'; did you mean 'sem_post'? [-Wimplicit-function-declaration]
nxsem_post(fds->sem);
^~~~~~~~~~
sem_post
Author: Gregory Nutt <gnutt@nuttx.org>
Run all .h and .c files modified in the last PR through nxstyle.
Author: Xiang Xiao <xiaoxiang@xiaomi.com>
Net cleanup (#17)
* Fix the semaphore usage issue found in tcp/udp (see the sketch after this list):
1. The counting semaphore needs priority inheritance disabled
2. Loop again if net_lockedwait returns -EINTR
3. Call nxsem_trywait to avoid the race condition
4. Call nxsem_post instead of sem_post
* Put the work notifier into a free list to avoid heap fragmentation in the long run. Since the allocation strategy is encapsulated internally, we can even refine the implementation later.
* The network stack shouldn't allocate memory in the poll implementation, to avoid heap fragmentation in the long run. Other modifications include:
1. Select MM_IOB automatically since the ICMP[v6] socket can't work without the read-ahead buffer
2. Remove the net lock since xxx_callback_free already does the same thing
3. TCP/UDP poll should work even if the read-ahead buffer isn't enabled at all
* Add NET_ prefix for UDP_NOTIFIER and TCP_NOTIFIER option to align with other UDP/TCP option convention
* Remove the unused _SF_[IDLE|ACCEPT|SEND|RECV|MASK] flags since there is code to set/clear these flags, but nobody checks them.
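A minimal sketch of the corrected semaphore pattern from the first item (illustrative only; header names assumed):

#include <nuttx/semaphore.h>
#include <nuttx/net/net.h>

static sem_t g_waitsem;

static void wait_setup(void)
{
  nxsem_init(&g_waitsem, 0, 0);

  /* 1. A semaphore used for counting must disable priority
   *    inheritance.
   */

  nxsem_setprotocol(&g_waitsem, SEM_PRIO_NONE);
}

static int wait_for_event(void)
{
  int ret;

  /* 2. Loop again if net_lockedwait() is interrupted by a signal */

  do
    {
      ret = net_lockedwait(&g_waitsem);
    }
  while (ret == -EINTR);

  return ret;
}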
sched/wqueue/kwork_notifier.c: Redesign some data structures. struct work_s must appear at the beginning of the notifier entry structure. That is because it contains the work queue indices. This solves a hardfault issue.
net/tcp/tcp_netpoll.c: tcp_iob_work() needs to free the allocated argument when it is finished.
net/tcp/tcp_send_buffered.c: Extend psock_tcp_cansend() so that it also requires that at least one IOB is available.
mm/iob: iob_navail() was returning the number of free IOB chain queue entries, not the number of free IOBs. Completely misnamed.
net/tcp/tcp_netpoll.c: Add logic to receive notifications when IOBs are freed (needs CONFIG_NET_TCP_WRITE_BUFFERS and CONFIG_IOB_NOTIFIER). At present, this does nothing because the logic in psock_tcp_cansend() does not check for the availability of IOBs. That will change.
sched/semaphore: Add nxsem_post() which is identical to sem_post() except that it never modifies the errno variable. Changed all references to sem_post in the OS to nxsem_post().
sched/semaphore: Add nxsem_destroy() which is identical to sem_destroy() except that it never modifies the errno variable. Changed all references to sem_destroy() in the OS to nxsem_destroy().
libc/semaphore and sched/semaphore: Add nxsem_getprotocol() and nxsem_setprotocol(), which are identical to sem_getprotocol() and sem_setprotocol() except that they never modify the errno variable. Changed all references to sem_setprotocol() in the OS to nxsem_setprotocol(). sem_getprotocol() was not used in the OS.
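A minimal sketch of the difference between the two interfaces (illustrative usage only):

#include <errno.h>
#include <semaphore.h>
#include <nuttx/semaphore.h>

/* Application code: sem_post() is POSIX; on failure it returns -1 and
 * sets the errno variable.
 */

static int app_post(FAR sem_t *sem)
{
  return sem_post(sem) < 0 ? -errno : 0;
}

/* OS-internal code: nxsem_post() returns 0 or a negated errno value
 * and never modifies errno.
 */

static int os_post(FAR sem_t *sem)
{
  return nxsem_post(sem);
}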
When a poll requesting POLLOUT happens, the poll should return
immediately if a write will not block. This change adds that, as
opposed to the old behaviour of blocking until a timer from the
Ethernet driver eventually triggers the poll to complete.
This is only implemented for buffered TCP. Unbuffered TCP should
behave as before.
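A minimal sketch of the setup-time check (psock_tcp_cansend() is the predicate from the earlier notifier work; the surrounding tcp_pollsetup() plumbing is omitted):

  /* If a write would not block right now, complete the POLLOUT
   * request immediately instead of waiting for the Ethernet driver's
   * poll timer.
   */

  if ((fds->events & POLLOUT) != 0 && psock_tcp_cansend(psock) == OK)
    {
      fds->revents |= POLLOUT;
      nxsem_post(fds->sem);
    }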