Commit Graph

676 Commits

Author SHA1 Message Date
chao an feb6ede434 sched/cpu: replace up_cpu_index() with this_cpu()
In SMP mode, up_cpu_index() and this_cpu() are the same: both return the index of the physical core.
In AMP mode, up_cpu_index() still returns the index of the physical core, while this_cpu() always returns 0:

| #ifdef CONFIG_SMP
| #  define this_cpu()             up_cpu_index()
| #elif defined(CONFIG_AMP)
| #  define this_cpu()             (0)
| #else
| #  define this_cpu()             (0)
| #endif

Signed-off-by: chao an <anchao@lixiang.com>
2024-03-21 18:52:35 +08:00
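A minimal sketch of the kind of call site this change affects; the per-CPU counter, NUM_CPUS, and the function are hypothetical, and the this_cpu()/up_cpu_index() declarations quoted above are assumed to be in scope:

| #define NUM_CPUS 4                 /* hypothetical logical CPU count */
|
| static int g_percpu_counter[NUM_CPUS];
|
| void counter_increment(void)
| {
|   /* Before: g_percpu_counter[up_cpu_index()]++;
|    * On an AMP build, up_cpu_index() may return a non-zero physical core
|    * index even though only one logical CPU is visible here.
|    */
|
|   g_percpu_counter[this_cpu()]++;  /* valid index in both SMP and AMP */
| }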
wangmingrong d2fd043575 mm: Use macros instead of memory to fill labels
Signed-off-by: wangmingrong <wangmingrong@xiaomi.com>
2024-03-14 22:48:19 +08:00
Yanfeng Liu 813f67f93b mm_heap/mm.h: revising comments
Revise the comments to be in line with the code.

Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
2024-03-12 19:44:48 +08:00
Yanfeng Liu 5ac401d941 mm/kconfig: fix typo in MM_DEFAULT_ALIGNMENT
This fixes minor typo in MM_DEFAULT_ALIGNMENT

Signed-off-by: Yanfeng Liu <yfliu2008@qq.com>
2024-03-11 13:12:34 +08:00
Masayuki Ishikawa e67d32a5ba Revert "fix variable set but not used"
This reverts commit d2d93ba58c.
2024-02-21 21:29:48 -08:00
yinshengkai d2d93ba58c fix variable set but not used
These variables trigger "variable 'ret' set but not used" warnings under some configurations.

Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
2024-02-21 13:28:20 -03:00
chao an 39a0e6fa74 toolchain/lto: enable lto flags only on GNU toolchain
Some commercial customized toolchains do not support these options

Signed-off-by: chao an <anchao@lixiang.com>
2024-02-18 00:47:53 -08:00
chao an d133de10e6 compiler/tasking: fix NULL pointer reference
ctc E246: ["map/mm_map.c" 67/41] left side of '.' or '->' is not struct or union
ctc E260: ["map/mm_map.c" 67/25] not an lvalue
ctc E246: ["map/mm_map.c" 80/3] left side of '.' or '->' is not struct or union
ctc E260: ["map/mm_map.c" 80/3] not an lvalue

Signed-off-by: chao an <anchao@lixiang.com>
2024-01-31 05:02:56 -08:00
yinshengkai 9852428953 fs: procfs add poll support
Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
2023-12-26 19:23:13 -08:00
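A usage sketch of what this enables, assuming a procfs file such as /proc/meminfo is mounted; the path and timeout are illustrative only:

| #include <poll.h>
| #include <fcntl.h>
| #include <unistd.h>
|
| void wait_on_procfs(void)
| {
|   struct pollfd fds[1];
|
|   fds[0].fd = open("/proc/meminfo", O_RDONLY);
|   fds[0].events = POLLIN;
|
|   if (fds[0].fd >= 0)
|     {
|       poll(fds, 1, 1000);   /* wait up to one second for readability */
|       close(fds[0].fd);
|     }
| }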
Nathan Hartman 3ed629274e mm: Fix some typos 2023-11-24 09:57:10 -08:00
Ville Juven 8a2b83c482 mm/kmap: Finalize kmap implementation for RISC-V
After this, RISC-V fully supports the kmap interface.

Due to the current design limitations of having only a single L2 table
per process, the kernel kmap area cannot be mapped via any user page
directory, as they do not contain the page tables to address that range.

So a "kernel address environment" is added, which can do the mapping. The
mapping is reflected to every process as only the root page directory (L1)
is copied to users, which means every change to L2 / L3 tables will be
seen by every user.
2023-11-23 16:38:41 -08:00
Xu Xingliang 6e7115ca09 mm: free delay list when exceeding specified count
Signed-off-by: Xu Xingliang <xuxingliang@xiaomi.com>
2023-11-15 12:05:20 +01:00
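A sketch of the policy with hypothetical names (DELAY_FREE_THRESHOLD stands in for the configured count): delayed frees are counted, and the list is flushed once the count crosses the threshold instead of growing without bound:

| #define DELAY_FREE_THRESHOLD 64    /* hypothetical stand-in for the Kconfig value */
|
| struct delay_node_s
| {
|   struct delay_node_s *next;
| };
|
| static struct delay_node_s *g_delaylist;
| static int g_delaycount;
|
| static void delay_free(struct delay_node_s *node, void (*do_free)(void *))
| {
|   node->next  = g_delaylist;
|   g_delaylist = node;
|
|   if (++g_delaycount > DELAY_FREE_THRESHOLD)
|     {
|       while (g_delaylist != 0)
|         {
|           struct delay_node_s *tmp = g_delaylist;
|
|           g_delaylist = tmp->next;
|           do_free(tmp);
|         }
|
|       g_delaycount = 0;
|     }
| }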
Xiang Xiao 0fbeea64d5 mm: Remove mm_spinlock
since disabling interrupts is enough to protect the per-CPU delay list

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2023-11-12 11:27:38 +08:00
yinshengkai bb5b5420ae mm: record the maximum system memory usage
Add the usmblks field to mallinfo to record the maximum space allocated historically in the system

https://man7.org/linux/man-pages/man3/mallinfo.3.html#:~:text=mmap(2).-,usmblks,-This%20field%20is

Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
2023-11-09 09:08:49 +08:00
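A small usage sketch reading the new field; it assumes struct mallinfo is exposed via <malloc.h> as described on the linked man page:

| #include <malloc.h>
| #include <stdio.h>
|
| void print_peak_usage(void)
| {
|   struct mallinfo info = mallinfo();
|
|   /* usmblks: historical peak of total allocated space */
|
|   printf("peak heap usage: %d bytes\n", info.usmblks);
| }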
Ville Juven 8a2d3958e9 mm/kmap: Fix bad dependency on ARCH_VMA_MAPPING
kmap does not need ARCH_VMA_MAPPING, so remove the dependency.
2023-11-02 15:10:57 +02:00
Xiang Xiao 0c805ca0a9 mm: Change global spinlock to per heap
Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2023-10-30 11:18:34 +02:00
Xiang Xiao 9e4a1be8d4 mm/iob: Replace the critical section with spin lock
Based on the discussion: https://github.com/apache/nuttx/issues/10981

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2023-10-30 11:18:34 +02:00
Xiang Xiao d5d4006c6b mm/gran: Replace the critical section with spin lock
Based on the discussion: https://github.com/apache/nuttx/issues/10981

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2023-10-30 11:18:34 +02:00
Xiang Xiao 08bae13624 mm/tlfs: Replace the critical section with spin lock
Based on the discussion: https://github.com/apache/nuttx/issues/10981

Signed-off-by: Xiang Xiao <xiaoxiang@xiaomi.com>
2023-10-30 11:18:34 +02:00
raiden00pl 5b87fdfb9d Documentation: remove all migrated READMEs 2023-10-29 21:03:54 -03:00
ligd fe03ce5fbe mm: use spin_lock_irqxx() consistently when operating on the delay list
Align with free_delaylist().

Signed-off-by: ligd <liguiding1@xiaomi.com>
2023-10-27 22:27:15 +08:00
TaiJuWu 51eaa59cc0 Replace enter_critical_section with spin_irqsave
Based on the discussion: https://github.com/apache/nuttx/issues/10981

Signed-off-by: TaiJuWu <tjwu1217@gmail.com>
2023-10-21 11:00:07 +08:00
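A minimal sketch of the replacement pattern discussed in the linked issue; the lock and the protected state are hypothetical:

| #include <nuttx/spinlock.h>
|
| static spinlock_t g_some_lock = SP_UNLOCKED;
|
| void protected_update(void)
| {
|   /* Before:
|    *   irqstate_t flags = enter_critical_section();
|    *   ... update shared state ...
|    *   leave_critical_section(flags);
|    */
|
|   irqstate_t flags = spin_lock_irqsave(&g_some_lock);
|
|   /* ... update shared state ... */
|
|   spin_unlock_irqrestore(&g_some_lock, flags);
| }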
Ville Juven 7c893ddfc2 kmm_map: Add function to map a single page of kernel memory
Mapping a physical page to a kernel virtual page is very simple and does
not need the kernel vma list; just get the kernel-addressable virtual
address for the page.
2023-10-09 18:59:43 +03:00
Ville Juven 04bfaf3a55 kmm_map.c: Remember to free the temporary 'pages' variable
The temp variable was freed only in error cases, but it needs to be freed
in the happy case as well.
2023-10-09 18:59:43 +03:00
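The shape of the fix in a hedged sketch (the helper below is hypothetical, not the real kmm_map code): the scratch array is freed on every path, not only on error:

| #include <nuttx/kmalloc.h>
| #include <errno.h>
| #include <stddef.h>
|
| int map_pages_sketch(size_t npages)
| {
|   FAR void **pages = kmm_zalloc(npages * sizeof(*pages));
|   int ret = 0;
|
|   if (pages == NULL)
|     {
|       return -ENOMEM;
|     }
|
|   /* ... populate and use the page list ... */
|
|   kmm_free(pages);   /* previously only the error path freed this */
|   return ret;
| }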
Ville Juven 5dd0474269 kmm_map.c: Add way to test if addr is within kmap area or not
is_kmap_vaddr is added and used to test that a given (v)addr is actually
inside the kernel map area. This gives a speed optimization for kmm_unmap,
as it is no longer necessary to take the mm_map_lock to check if such a
mapping exists; obviously if the address is not within the kmap area, it
won't be in the list either.
2023-10-09 18:59:43 +03:00
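A sketch of the fast-path check, with hypothetical KMAP_START/KMAP_SIZE bounds standing in for the real kmap area configuration:

| #include <stdbool.h>
| #include <stdint.h>
|
| #define KMAP_START 0x40000000ul    /* hypothetical kmap base */
| #define KMAP_SIZE  0x00100000ul    /* hypothetical kmap length */
|
| static bool is_kmap_vaddr(uintptr_t vaddr)
| {
|   return vaddr >= KMAP_START && vaddr < KMAP_START + KMAP_SIZE;
| }

kmm_unmap() can then return early without taking mm_map_lock whenever this check fails, since an address outside the kmap area cannot be in the kmap list.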
Ville Juven a444435f8b kmm_map.c: Fix user page mapping
User pages are mapped from the currently active address environment. If
the process is running on a borrowed address environment, then the
mapping should be created from there.

This happens during (new) process creation only.
2023-10-09 18:59:43 +03:00
Ville Juven b2f9926a1b kmm_map.c: Fix call to gran_alloc
Provide the handle to gran_alloc, not pointer to handle.
2023-10-09 18:59:43 +03:00
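A sketch of the corrected call using the gran_alloc() signature from <nuttx/mm/gran.h>; the handle variable is hypothetical:

| #include <nuttx/mm/gran.h>
| #include <stddef.h>
|
| static GRAN_HANDLE g_kmap_handle;
|
| FAR void *alloc_kmap_va(size_t size)
| {
|   /* Before (bug): gran_alloc(&g_kmap_handle, size); */
|
|   return gran_alloc(g_kmap_handle, size);
| }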
Ville Juven c57c11c516 mm/kmap: Fix bug in kmm_unmap
Searching the current process's mappings for kmappings is quite futile;
do the search in the kernel's mappings instead.
2023-09-29 21:06:16 +08:00
Ville Juven 51f8611fc0 mm/kmap: Change kmm_user_map to kmm_map_user
Naming consistency wrt kmm_map_user_page
2023-09-29 21:06:16 +08:00
anjiahao 5d6f6f2d47 mm: fix warning
mm_heap/mm_initialize.c:69:11: warning: array subscript -1 is outside array bounds of 'void[2147483647]' [-Warray-bounds]
   69 |       node->pid = MM_BACKTRACE_MEMPOOL_PID;
      |           ^~
mm_heap/mm_initialize.c:64:9: note: at offset -16 into object of size [0, 2147483647] allocated by 'mm_memalign'
   64 |   ret = mm_memalign(arg, alignment, size);

Signed-off-by: anjiahao <anjiahao@xiaomi.com>
2023-09-26 14:12:59 +08:00
ligd 13f0051747 mm: rewrite the memdump code for better readability
Signed-off-by: ligd <liguiding1@xiaomi.com>
2023-09-24 10:39:18 +08:00
ligd cb94f825af kasan: add builtin_return_address(0) to kasan
Signed-off-by: ligd <liguiding1@xiaomi.com>
2023-09-24 03:48:39 +08:00
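A minimal illustration of the builtin this commit uses; the report structure and helper are hypothetical:

| struct report_info_s
| {
|   void *return_address;            /* who called into the checker */
| };
|
| static void record_caller(struct report_info_s *info)
| {
|   /* GCC/Clang builtin: the address this function will return to */
|
|   info->return_address = __builtin_return_address(0);
| }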
zhanghongyu 13f59128fd iob: ensure the iob bufsize is sufficient to hold all L2/L3/L4 headers
The RNDIS header length is 36 bytes, the L2 header is 14, the IPv6 header is 40, and the TCP header is 56 when the SACK option count is 4 (the default max_ofosegs is 4), so the iob bufsize must be at least as large as the total header space we need.

Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2023-09-20 14:35:22 +08:00
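The arithmetic behind the limit: 36 (RNDIS) + 14 (L2) + 40 (IPv6) + 56 (TCP with 4 SACK blocks) = 146 bytes. A hypothetical compile-time guard could express it like this (the header macros are illustrative; only CONFIG_IOB_BUFSIZE is real):

| #include <nuttx/config.h>
| #include <assert.h>
|
| #define HDR_RNDIS 36
| #define HDR_L2    14
| #define HDR_IPV6  40
| #define HDR_TCP   56   /* with 4 SACK blocks */
|
| static_assert(CONFIG_IOB_BUFSIZE >= HDR_RNDIS + HDR_L2 + HDR_IPV6 + HDR_TCP,
|               "CONFIG_IOB_BUFSIZE too small for the L2/L3/L4 headers");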
Petro Karashchenko 1c2931222c mm/mm_heap: add parentheses in macro definition
Signed-off-by: Petro Karashchenko <petro.karashchenko@gmail.com>
2023-09-16 14:17:47 +08:00
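A generic illustration of why the parentheses matter; the macro names are hypothetical, not the ones touched by the commit:

| #define BAD_ALIGN_MASK(a)  a - 1        /* expands without grouping */
| #define GOOD_ALIGN_MASK(a) ((a) - 1)    /* fully parenthesized */
|
| /* With a = 8:
|  *   ~BAD_ALIGN_MASK(8)  expands to ~8 - 1   == -10  (wrong)
|  *   ~GOOD_ALIGN_MASK(8) expands to ~(8 - 1) == ~7   (the intended mask)
|  */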
Petro Karashchenko 440be65010 spinlock: use spinlock API instead of direct assignment/compare
Signed-off-by: Petro Karashchenko <petro.karashchenko@gmail.com>
2023-09-16 14:17:47 +08:00
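A sketch of the pattern change, with a hypothetical lock: the spinlock API provides the atomic test-and-set and memory barriers that a plain assignment or compare does not:

| #include <nuttx/spinlock.h>
|
| static spinlock_t g_lock = SP_UNLOCKED;
|
| void take_and_release(void)
| {
|   /* Before (non-portable):
|    *   while (g_lock == SP_LOCKED);
|    *   g_lock = SP_LOCKED;
|    *   ...
|    *   g_lock = SP_UNLOCKED;
|    */
|
|   spin_lock(&g_lock);
|
|   /* ... critical region ... */
|
|   spin_unlock(&g_lock);
| }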
Xuxingliang b9e59ad0a7 mm: add macros to check whether a node/prenode is free
Signed-off-by: Xuxingliang <xuxingliang@xiaomi.com>
2023-09-12 22:09:36 +08:00
Xuxingliang c9f33b7ee5 mm: use unified naming style for macros
Make all macro names start with the prefix MM_.

Signed-off-by: Xuxingliang <xuxingliang@xiaomi.com>
2023-09-12 22:09:36 +08:00
dongjiuzhu1 36e3d32740 mm/heap: add coloration after free to detect use-after-free issues
Signed-off-by: dongjiuzhu1 <dongjiuzhu1@xiaomi.com>
2023-09-11 15:28:34 -04:00
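A generic sketch of the technique with a hypothetical poison value and helper: filling freed blocks with a known pattern makes a later use-after-free read stand out as an obviously bogus value:

| #include <string.h>
| #include <stddef.h>
|
| #define MM_FREE_POISON 0x55        /* hypothetical fill byte */
|
| static void poison_freed_block(void *mem, size_t nbytes)
| {
|   memset(mem, MM_FREE_POISON, nbytes);
| }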
Stuart Ianna 531e5c2011 mm/shm/shmget: Zero allocated shared memory pages when created.
Modification based on the Open Group's description of shmget: "When the shared memory segment is created, it shall be initialized with all zero values."

Link to documentation page for shmget: https://pubs.opengroup.org/onlinepubs/9699919799/
2023-09-11 16:38:37 +08:00
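A usage sketch of the guaranteed zero fill; the key, size, and permissions are arbitrary:

| #include <sys/ipc.h>
| #include <sys/shm.h>
| #include <assert.h>
| #include <stddef.h>
|
| void shm_zero_check(void)
| {
|   int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0666);
|   unsigned char *p = (unsigned char *)shmat(shmid, NULL, 0);
|
|   /* A newly created segment must read back as all zeroes */
|
|   assert(p != (void *)-1 && p[0] == 0);
|
|   shmdt(p);
|   shmctl(shmid, IPC_RMID, NULL);
| }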
xuxin19 5b6488f09f cmake: complete missing changes from the CMake rework for mm
Signed-off-by: xuxin19 <xuxin19@xiaomi.com>
2023-09-08 21:20:16 +03:00
Xu Xingliang a2df576ecf kasan: add option to disable read/write checks
Signed-off-by: Xu Xingliang <xuxingliang@xiaomi.com>
2023-09-07 00:41:43 +08:00
chao an 664927c86e mm/alloc: remove all unnecessary casts for alloc
Fix minor style issues and remove unnecessary casts.

Signed-off-by: chao an <anchao@xiaomi.com>
2023-08-30 14:34:20 +08:00
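The style point in generic form: in C the void pointer returned by the allocator converts implicitly, so the cast adds nothing (the function below is illustrative):

| #include <stdlib.h>
|
| char *make_buffer(size_t len)
| {
|   /* Before: char *buf = (char *)malloc(len); */
|
|   char *buf = malloc(len);
|   return buf;
| }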
Zhe Weng d44e19d115 mm/iob: Add support for increasing length in iob_update_pktlen
Signed-off-by: Zhe Weng <wengzhe@xiaomi.com>
2023-08-22 16:34:21 +09:00
Zhe Weng e2c9aa6588 mm/iob: Fix IOB length in `iob_reserve`
If we apply `iob_reserve` on an IOB with `io_offset != 0`, `head->io_pktlen` and `iob->io_len` will end up with wrong values, because we only need to trim `offset - iob->io_offset`.

Signed-off-by: Zhe Weng <wengzhe@xiaomi.com>
2023-08-22 09:09:21 +08:00
zhanghongyu 1d6b9d3e98 iob: add elapsed-time calculation to iob_allocwait
Correct the timeout calculation.

Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2023-08-19 21:22:09 +08:00
zhanghongyu 69667e2c73 iob: support negative offsets in iob_clone_partial
Make the function handle negative offset cases correctly.

Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2023-08-19 01:41:29 +08:00
anjiahao 4c39cdce09 mempool: Use default alignment inside of blockalign
This can reduce memory usage.

Signed-off-by: anjiahao <anjiahao@xiaomi.com>
2023-08-19 01:29:14 +08:00
anjiahao e053dcc9f4 mempool: add a double-free check to mempool free
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
2023-08-19 01:22:58 +08:00
zhanghongyu 0216224260 iob_alloc: change sem_post to count++
If there are two throttled waiters, then when iob_free occurs one of them
is awakened to execute iob_alloc_committed, but that execution fails; the
semaphore is posted at this point, so the other waiter is awakened and
repeats the same step. Both threads stay in the critical section and
cannot switch to other threads, so the CPU stays busy until the timeout.

Signed-off-by: zhanghongyu <zhanghongyu@xiaomi.com>
2023-08-11 08:54:05 -06:00
anjiahao 6eabacf304 mempool: change mutex to rmutex to avoid deadlock
If allocating a chunk fails and the malloc failure path then dumps all
memory, it will deadlock in multiple_mempool_info.

Signed-off-by: anjiahao <anjiahao@xiaomi.com>
2023-08-10 13:43:32 +02:00
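A minimal sketch of why a recursive mutex helps here, with hypothetical names: the dump path runs while the pool lock is already held, so a plain mutex would deadlock on the second lock attempt:

| #include <nuttx/mutex.h>
| #include <stddef.h>
|
| static rmutex_t g_pool_lock = NXRMUTEX_INITIALIZER;
|
| static void dump_pool_info(void)
| {
|   nxrmutex_lock(&g_pool_lock);     /* re-entry: fine for rmutex, deadlock for a plain mutex */
|
|   /* ... walk and print pool state ... */
|
|   nxrmutex_unlock(&g_pool_lock);
| }
|
| static FAR void *pool_alloc_or_dump(void)
| {
|   FAR void *blk = NULL;            /* pretend the allocation failed */
|
|   nxrmutex_lock(&g_pool_lock);
|
|   if (blk == NULL)
|     {
|       dump_pool_info();            /* called with the lock still held */
|     }
|
|   nxrmutex_unlock(&g_pool_lock);
|   return blk;
| }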