After this, RISC-V fully supports the kmap interface.
Due to the current design limitation of having only a single L2 table
per process, the kernel kmap area cannot be mapped via any user page
directory, as those do not contain the page tables to address that range.
So a "kernel address environment" is added, which can do the mapping. The
mapping is reflected in every process because only the root page directory
(L1) is copied to users, which means every change to the L2 / L3 tables
will be seen by every user.
Mapping a physical page to a kernel virtual page is very simple and does
not need the kernel vma list; just take the kernel-addressable virtual
address for the page.
is_kmap_vaddr is added and used to test whether a given (v)addr is actually
inside the kernel map area. This gives a speed optimization for kmm_unmap,
as it is no longer necessary to take the mm_map_lock just to check whether
such a mapping exists; if the address is not within the kmap area, it
cannot be in the list either.
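A minimal sketch of such a check follows; the symbol names and range
bounds here are placeholders for illustration, not the actual ones used
by the RISC-V port:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical bounds of the kernel kmap area; the real port defines
     * its own symbols for this range.
     */

    #define KMAP_START 0x0000004000000000ull
    #define KMAP_END   0x0000004040000000ull

    /* Return true if vaddr falls inside the kernel kmap area.  Addresses
     * outside this range cannot have a kmap entry, so kmm_unmap can skip
     * taking mm_map_lock for them.
     */

    static inline bool is_kmap_vaddr(uintptr_t vaddr)
    {
      return vaddr >= KMAP_START && vaddr < KMAP_END;
    }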
User pages are mapped from the currently active address environment. If
the process is running on a borrowed address environment, then the
mapping should be created from there.
This happens during (new) process creation only.
The RNDIS header length is 36 bytes, the L2 header is 14 bytes, the IPv6 header is 40 bytes, and the TCP header is 56 bytes when the SACK option count is 4 (the default max_ofosegs is 4), so the iob bufsize should be no smaller than the total we need.
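A rough sketch of the accounting; the macro names are illustrative only,
not the driver's actual Kconfig symbols:

    /* Illustrative header accounting only. */

    #define RNDIS_HDR_LEN   36   /* RNDIS packet message header       */
    #define ETH_HDR_LEN     14   /* L2 (Ethernet) header              */
    #define IPV6_HDR_LEN    40   /* IPv6 header                       */
    #define TCP_HDR_LEN     56   /* TCP header with 4 SACK blocks     */

    /* 36 + 14 + 40 + 56 = 146 bytes of headers before any payload, so
     * the IOB buffer size must be at least this large.
     */

    #define MIN_IOB_BUFSIZE (RNDIS_HDR_LEN + ETH_HDR_LEN + \
                             IPV6_HDR_LEN + TCP_HDR_LEN)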
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
Modification based on the Open Group's description of shmget: "When the shared memory segment is created, it shall be initialized with all zero values."
Link to documentation page for shmget: https://pubs.opengroup.org/onlinepubs/9699919799/
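A small usage example of the guaranteed behavior; this is plain POSIX
shmget/shmat usage, not NuttX-specific code:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
      /* Create a new 4 KiB segment; per POSIX it must come back zero-filled. */

      int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
      if (shmid < 0)
        {
          perror("shmget");
          return 1;
        }

      char *mem = shmat(shmid, NULL, 0);
      if (mem == (char *)-1)
        {
          perror("shmat");
          return 1;
        }

      char zeros[4096];
      memset(zeros, 0, sizeof(zeros));
      printf("zero-initialized: %s\n",
             memcmp(mem, zeros, sizeof(zeros)) == 0 ? "yes" : "no");

      shmdt(mem);
      shmctl(shmid, IPC_RMID, NULL);
      return 0;
    }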
If we apply `iob_reserve` to an IOB with `io_offset != 0`, `head->io_pktlen` and `iob->io_len` end up with wrong values, because we only need to trim `offset - iob->io_offset`.
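A simplified sketch of the intended adjustment; the structure and function
here are hypothetical illustrations, not the actual iob_reserve() code:

    /* Hypothetical, reduced IOB used only for this illustration. */

    struct fake_iob
    {
      unsigned int io_len;     /* Payload bytes in this buffer        */
      unsigned int io_offset;  /* Data offset from start of buffer    */
      unsigned int io_pktlen;  /* Total packet length (head IOB only) */
    };

    /* If the IOB already carries a non-zero io_offset, only the difference
     * between the requested offset and the existing one may be trimmed;
     * trimming the full offset would shrink io_len and io_pktlen too much.
     */

    static void reserve_sketch(struct fake_iob *head, unsigned int offset)
    {
      unsigned int trim = offset - head->io_offset;  /* not "offset" itself */

      head->io_offset  = offset;
      head->io_len    -= trim;
      head->io_pktlen -= trim;
    }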
Signed-off-by: Zhe Weng <wengzhe@xiaomi.com>
If there are two throttled waiters, then when iob_free occurs one of them
is awakened to execute iob_alloc_committed, but that execution fails; the
semaphore is posted again at this point, so the other waiter is awakened.
After the other waiting thread wakes up, the same step repeats. Both
threads remain in the critical-section state and cannot switch to other
threads, so the CPU stays busy until the timeout.
Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
If a chunk allocation fails and malloc, on failure, dumps all memory,
it will cause a deadlock in multiple_mempool_info.
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
Without a limit on the memory map count, mappings can exhaust memory and
hang the system. This fix also makes the LTP POSIX case mmap/24-1.c pass.
Signed-off-by: fangxinyong <fangxinyong@xiaomi.com>
When other code uses the NuttX memory manager by calling the mm_xx APIs
directly, it is better to let that code control whether to dump or panic
when malloc fails.
Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>
circbuf_peekat should copy valid content from the specified position
of the buffer, so skip the area from this position to the tail.
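A generic sketch of peeking at a position in a ring buffer; the layout and
function below are hypothetical, not NuttX's actual circbuf implementation:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical ring buffer used only for this illustration. */

    struct ring
    {
      char  *base;   /* Backing storage                     */
      size_t size;   /* Capacity in bytes                   */
      size_t head;   /* Absolute write position (monotonic) */
      size_t tail;   /* Absolute read position (monotonic)  */
    };

    /* Peek up to "len" bytes starting at absolute position "pos" without
     * consuming them.  The region between the read position and "pos" is
     * skipped, not copied.
     */

    static size_t ring_peekat(struct ring *r, size_t pos,
                              void *dst, size_t len)
    {
      if (pos < r->tail || pos >= r->head)
        {
          return 0;                     /* Position not inside valid data */
        }

      size_t avail = r->head - pos;     /* Valid bytes from pos to head   */
      size_t n     = len < avail ? len : avail;
      size_t off   = pos % r->size;     /* Physical offset of pos         */
      size_t first = r->size - off;     /* Bytes before wrap-around       */

      if (first > n)
        {
          first = n;
        }

      memcpy(dst, r->base + off, first);
      memcpy((char *)dst + first, r->base, n - first);
      return n;
    }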
Signed-off-by: dongjiuzhu1 <dongjiuzhu1@xiaomi.com>
1. command "memdump leak" can dump the leacked memory node;
2. fix the leak memory stat bug in memory manager;
Signed-off-by: wangbowen6 <wangbowen6@xiaomi.com>