Commit Graph

107 Commits

Author SHA1 Message Date
Nicolas Pitre 1e4fd23e58 kernel: mmu: install demand mappings for the on-demand linker sections
This sets initial unpaged mappings for __ondemand_func code and
__ondemand_rodata variables. To achieve this, we have to augment the
backing store API.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-10 17:17:30 -04:00
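A minimal sketch of the intended usage, assuming the __ondemand_func attribute matches the section name above:

    #include <zephyr/kernel.h>

    /* Placed in the __ondemand_func section: the body stays in the
     * backing store and is paged in on first call. */
    static void __ondemand_func seldom_used_feature(void)
    {
        printk("paged in on demand\n");
    }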
Nicolas Pitre 6b3fff3a2f kernel: mmu: make demand paging work on SMP
This is the minimum for demand paging to work on SMP systems.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-10 11:44:16 +02:00
Nicolas Pitre c692136f21 mmu: introduce k_mem_update_flags()
It is sometimes necessary to modify/update memory permissions on some
pages, especially with LLEXT where some allocated segments have to be
executable.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-09-06 11:25:54 -04:00
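A minimal sketch, assuming the signature k_mem_update_flags(addr, size, flags) and the existing K_MEM_PERM_EXEC flag:

    #include <zephyr/kernel/mm.h>

    /* Make a freshly written, page-aligned code segment executable,
     * e.g. for an LLEXT text segment. */
    int make_segment_executable(void *seg, size_t size)
    {
        return k_mem_update_flags(seg, size, K_MEM_PERM_EXEC);
    }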
Nicolas Pitre c9aa98ebc0 kernel: mmu: support for on-demand mappings
This provides memory mappings with the ability to be initialized in their
paged-out state and be paged in on demand. This is especially nice for
anonymous memory mappings as they no longer have to allocate all memory
at mem_map time. This also allows for file mappings to be implemented by
simply providing backing store location tokens.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-08-26 17:25:41 -04:00
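A sketch of a lazily populated anonymous mapping, assuming a K_MEM_MAP_UNPAGED flag is how this series exposes the paged-out initial state:

    #include <zephyr/kernel/mm.h>

    /* The region starts fully paged out; page frames are only consumed
     * when pages are first touched. */
    void *lazy_region(size_t size)
    {
        return k_mem_map(size, K_MEM_PERM_RW | K_MEM_MAP_UNPAGED);
    }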
Pisit Sawangvonganan ef639efb38 style: kernel: comply with MISRA C:2012 Rule 15.6
Add missing braces to comply with MISRA C:2012 Rule 15.6, also
following Zephyr's style guidelines.

Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
2024-08-20 10:33:51 +02:00
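For illustration, the brace style the rule requires:

    /* Compliant: the body is a compound statement even when it is a
     * single statement. */
    int check(int ret)
    {
        if (ret != 0) {
            return ret;
        }
        return 0;
    }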
Nicolas Pitre 2a41d85709 k_mem_map: move some assertions to unconditional checks
Dynamic memory allocation depends on various factors and many are
only known at runtime depending on application inputs.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-08-19 15:17:48 -04:00
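A sketch of the pattern (the k_mem_map() usage is illustrative): input-dependent conditions become real checks instead of __ASSERT(), which compiles out when asserts are disabled:

    #include <zephyr/kernel/mm.h>

    void *checked_map(size_t size)
    {
        if (size == 0) {
            return NULL;    /* unconditional check, kept in all builds */
        }
        return k_mem_map(size, K_MEM_PERM_RW);
    }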
Pisit Sawangvonganan 5ed3cd4bc9 kernel: fix typo
Utilize a code spell-checking tool to scan for and correct spelling errors
in all files within the `kernel` directory.

Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
2024-07-08 15:51:37 +02:00
Nicolas Pitre 6a3aa3b04e demand_paging: add frame tracking functions to eviction algorithms
Let eviction algorithms be notified when a given page frame:

- should be considered as possible candidate

- should no longer be considered as candidate

- has just been marked as "accessed"

The NRU algorithm is unchanged so it implements those as empty stubs.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-14 18:58:02 -04:00
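A sketch of the resulting stubs; the hook names and signatures are assumed from the description above:

    #include <zephyr/kernel.h>
    #include <zephyr/kernel/mm/demand_paging.h>

    void k_mem_paging_eviction_add(struct k_mem_page_frame *pf)
    {
        ARG_UNUSED(pf);     /* frame becomes an eviction candidate */
    }

    void k_mem_paging_eviction_remove(struct k_mem_page_frame *pf)
    {
        ARG_UNUSED(pf);     /* frame is no longer a candidate */
    }

    void k_mem_paging_eviction_accessed(struct k_mem_page_frame *pf)
    {
        ARG_UNUSED(pf);     /* frame was just marked "accessed" */
    }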
Nicolas Pitre 0313091287 kernel: mmu: make k_mem_unmap() work with demand paging
If a page is paged out, or is paged in but inaccessible for the purpose
of tracking the "accessed" flag, then k_mem_unmap() may fail. Add the
code needed to support those cases.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-13 09:22:39 +02:00
Daniel Leung 564ca11631 kernel: mm: rename z_page_fault() to k_mem_page_fault()
This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung 54af5dda84 kernel: mm: rename z_page_frame_* to k_mem_page_frame_*
Any demand paging and page frame related bits are also
renamed.

This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung 01682756b6 kernel: mm: rename Z_VM_RESERVED to K_MEM_VM_RESERVED
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung a459cdf51e kernel: mm: rename Z_FREE_VM_START to K_MEM_VM_FREE_START
This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung b2784c9145 kernel: mm: rename Z_KERNEL_VIRT_* to K_MEM_KERNEL_VIRT_*
Renames:
  Z_KERNEL_VIRT_START to K_MEM_KERNEL_VIRT_START
  Z_KERNEL_VIRT_SIZE to K_MEM_KERNEL_VIRT_SIZE
  Z_KERNEL_VIRT_END to K_MEM_KERNEL_VIRT_END

This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung 03eded1ed6 kernel: mm: rename Z_VIRT_RAM_* to K_MEM_VIRT_RAM_*
Renames:
  Z_VIRT_RAM_START to K_MEM_VIRT_RAM_START
  Z_VIRT_RAM_SIZE to K_MEM_VIRT_RAM_SIZE
  Z_VIRT_RAM_END to K_MEM_VIRT_RAM_END

This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung 3fd66de508 kernel: mm: rename Z_PHYS_RAM_* to K_MEM_PHYS_RAM_*
Renames:
  Z_PHYS_RAM_START to K_MEM_PHYS_RAM_START
  Z_PHYS_RAM_SIZE to K_MEM_PHYS_RAM_SIZE
  Z_PHYS_RAM_END to K_MEM_PHYS_RAM_END

This is part of a series to move memory management related
stuff from Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung def364ab08 kernel: mm: rename Z_BOOT_* to K_MEM_BOOT_*
Rename Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
K_MEM_BOOT_VIRT_TO_PHYS() and K_MEM_BOOT_PHYS_TO_VIRT()
respectively. This is part of a series to move memory management
functions away from the Z_ namespace and into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung 552e29790d kernel: mm: rename z_phys_un/map to k_mem_*_phys_bare
This renames z_phys_map() and z_phys_unmap() to
k_mem_map_phys_bare() and k_mem_unmap_phys_bare()
respectively. This is part of the series to move memory
management functions away from the z_ namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Daniel Leung 9f9dd264d8 kernel: mm: rename k_mem_un/map_impl to k_mem_*_phys_guard
The internal functions k_mem_map_impl() and k_mem_unmap_impl()
are renamed to k_mem_map_phys_guard() and
k_mem_unmap_phys_guard() respectively to better clarify
their usage.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
frei tycho 4c2938a295 kernel: added missing parentheses
- added missing parentheses around macro argument expansion

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-06-07 12:59:46 +02:00
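The fix pattern, shown with a hypothetical macro:

    #include <stdint.h>

    /* Parenthesized parameter: the expansion keeps the intended
     * precedence even when 'addr' is an expression like 'base + off'. */
    #define PAGE_OFFSET(addr) ((uintptr_t)(addr) & (CONFIG_MMU_PAGE_SIZE - 1))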
Nicolas Pitre e9a47d932c kernel: mmu: shrink and align struct z_page_frame
The struct z_page_frame is marked __packed to avoid extra padding, as
such padding may represent significant memory waste when lots of page
frames are used. However, this is a bad strategy.

The code contained this somewhat dubious comment and code in
free_page_frame_list_put():

	/* The structure is packed, which ensures that this is true */
	void *node = pf;
	sys_slist_append(&free_page_frame_list, node);

This is bad for many reasons:

- type checking is completely bypassed;

- if the sys_snode_t node member is no longer located at the front of
  struct z_page_frame then the code will still compile and possibly run
  but be broken with memory corruption as a likely outcome;

- the sys_slist_append() code is completely unaware of the packed
  attribute which breaks architectures with alignment restrictions.

Let's improve code efficiency as well as memory usage by removing the
packed attribute and manually packing the flags in the unused virtual
address bits. This way the page frame array remains naturally aligned,
data access becomes optimal and the actual array size gets even smaller.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
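A sketch of the resulting scheme, with illustrative names: mapped virtual addresses are page aligned, so the low bits of the address field can carry the flags and no packed attribute is needed:

    #include <stdbool.h>
    #include <stdint.h>

    struct page_frame {
        uintptr_t va_and_flags;  /* page-aligned VA | flag bits */
    };

    #define PF_FLAGS_MASK ((uintptr_t)CONFIG_MMU_PAGE_SIZE - 1)

    static inline void *pf_to_virt(const struct page_frame *pf)
    {
        return (void *)(pf->va_and_flags & ~PF_FLAGS_MASK);
    }

    static inline bool pf_flag_is_set(const struct page_frame *pf,
                                      uintptr_t flag)
    {
        return (pf->va_and_flags & flag) != 0U;
    }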
Nicolas Pitre 57305971d1 kernel: mmu: abstract access to page frame flags and address
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
Hess Nathan 6d417d52c2 coding guidelines: comply with MISRA Rule 12.1.
Added parentheses to remove ambiguities about evaluation order.

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-12 13:37:27 -04:00
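An illustration of the rule:

    #include <stdbool.h>
    #include <stdint.h>

    /* Parenthesized operands leave no ambiguity about precedence. */
    static bool in_bounds(uintptr_t addr, uintptr_t start, uintptr_t end)
    {
        return (addr >= start) && (addr < end);
    }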
Pieter De Gendt f147a5fec2 spelling: Replace occurrences of "iff" with "if and only if"
Spell checking tools do not recognize "iff", replace with "if and only if".
See https://en.wikipedia.org/wiki/If_and_only_if

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
2024-05-06 14:58:08 +01:00
Daniel Leung 04c5632bd4 kernel: mm: introduce k_mem_phys_map()/_unmap()
This is similar to k_mem_map()/_unmap(), but instead of using
anonymous memory, the provided physical region is mapped
into virtual addresses. In addition to simply mapping
physical to virtual addresses, the mapping also adds two
guard pages before and after the virtual region to catch
buffer under-/over-flows.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
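A sketch of mapping a device region this way; the physical base, size, and flags here are assumptions:

    #include <zephyr/kernel/mm.h>

    #define DEV_PHYS_BASE 0x40000000UL  /* hypothetical device registers */
    #define DEV_REG_SIZE  0x1000UL

    volatile uint8_t *map_device_regs(void)
    {
        return (volatile uint8_t *)k_mem_phys_map(DEV_PHYS_BASE,
                                                  DEV_REG_SIZE,
                                                  K_MEM_PERM_RW |
                                                  K_MEM_CACHE_NONE);
    }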
Simon Hein bcd1d19322 kernel: add closing comments to config endifs
Add a closing comment to each endif naming the configuration
option to which the endif belongs, to make the code clearer
when the configs need adapting.

Signed-off-by: Simon Hein <Shein@baumer.com>
2024-03-25 18:03:31 -04:00
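The pattern in question:

    #ifdef CONFIG_DEMAND_PAGING
    /* ... demand paging specific code ... */
    #endif /* CONFIG_DEMAND_PAGING */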
Flavio Ceolin e2f3840380 kernel: Fix possible overflow in k_mem_map
k_mem_map() additionally allocates two guard pages that are not mapped.
These pages are not accounted for when checking the provided size;
when they are added, an overflow can occur and the resulting mapping
is incorrect.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-03-22 15:58:33 -04:00
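A sketch of overflow-safe accounting for the two guard pages, using size_add_overflow() from <zephyr/sys/math_extras.h>; whether the actual fix uses this helper is an assumption:

    #include <zephyr/sys/math_extras.h>

    static inline bool total_map_size(size_t size, size_t *total)
    {
        /* true means overflow: size plus two guard pages wraps around */
        return size_add_overflow(size, 2 * CONFIG_MMU_PAGE_SIZE, total);
    }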
Daniel Leung fa561ccd59 kernel: mmu: no need to expose z_free_page_count
z_free_page_count is only used in one file, so there is
no need to expose it, even to other part of kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-12 18:46:21 +00:00
Daniel Leung 22447c9736 kernel: mmu: fix typo K_DIRECT_MAP to K_MEM_DIRECT_MAP
Fix typo in comment to reflect the actual macro named
K_MEM_DIRECT_MAP.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-08 08:31:15 -05:00
Peter Mitsis 91a6af3e8f kernel: mmu: Fix static analysis issue
Instead of performing a set of relative address comparisons using
pointers of type 'uint8_t *', we leverage the existing IN_RANGE()
macro and perform the comparisons with 'uintptr_t'.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-11-28 16:44:16 -05:00
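The resulting comparison style, with illustrative bounds:

    #include <stdbool.h>
    #include <stdint.h>
    #include <zephyr/sys/util.h>

    static bool addr_in_region(uintptr_t addr, uintptr_t start, size_t size)
    {
        /* IN_RANGE() takes inclusive bounds, hence the - 1 */
        return IN_RANGE(addr, start, start + size - 1U);
    }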
Daniel Leung 40ba4015e3 kernel: mm: only include demand_paging.h if needed
This moves the inclusion of demand_paging.h out of kernel/mm.h,
so that users of demand paging APIs must include the header
explicitly. Since the main user is the kernel itself, we can be
more disciplined about header inclusion.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-23 10:01:45 +01:00
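After this change, demand paging users pull in the header explicitly:

    #include <zephyr/kernel/mm/demand_paging.h>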
Anas Nashif 4e396174ce kernel: move syscall_handler.h to internal include directory
Move the syscall_handler.h header, which is used internally only, to a
dedicated internal folder that should not be used outside of Zephyr.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Flavio Ceolin a36c016dff kernel: mmu: Fix possible null de-reference
virt_page_phys_get() can be called with a NULL phys parameter when
the intention is just to check whether a virtual address is mapped.

This function is generally overridden by an arch API that checks
whether phys is NULL before using it, but this default implementation
doesn't.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-10-30 09:23:23 -04:00
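A sketch of the fix pattern, with a hypothetical lookup helper:

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    extern bool lookup_mapping(void *virt, uintptr_t *pa); /* hypothetical */

    int virt_page_phys_get(void *virt, uintptr_t *phys)
    {
        uintptr_t pa;

        if (!lookup_mapping(virt, &pa)) {
            return -EFAULT;
        }
        if (phys != NULL) {
            *phys = pa;     /* NULL phys means "just check the mapping" */
        }
        return 0;
    }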
Daniel Leung b8e0de2ad0 kernel: mmu: fix bitmap set and clear under direct map
When CONFIG_KERNEL_DIRECT_MAP is enabled, the region to be mapped
or unmapped can be outside of the virtual memory space, wholly
within it, or partially overlapping it. Additional processing is
needed to make sure we only manipulate the bits within
the bitmap, in other words, only the pages represented by
the bitmap.

Fixes #59549

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-15 16:30:55 -04:00
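A sketch of the clamping, with hypothetical region variables and bitmap helper:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <zephyr/sys/util.h>

    extern uintptr_t virt_region_start;  /* hypothetical */
    extern size_t virt_region_size;      /* hypothetical */
    extern void bitmap_set_range(uintptr_t start, size_t len,
                                 bool set);  /* hypothetical */

    static void bitmap_update_clamped(uintptr_t addr, size_t size, bool set)
    {
        /* keep only the part of [addr, addr + size) inside the region */
        uintptr_t start = MAX(addr, virt_region_start);
        uintptr_t end = MIN(addr + size,
                            virt_region_start + virt_region_size);

        if (start < end) {
            bitmap_set_range(start, end - start, set);
        }
    }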
Hou Zhiqiang 41520e8b5b kernel: mmu: add direct-map support in z_phys_map()
Many RTOS applications assume a 1:1 mapping between virtual and
physical addresses, so add 1:1 mapping support to z_phys_map()
to ease adapting these applications.

Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2023-05-26 13:50:35 -04:00
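A sketch of requesting an identity mapping, assuming the K_MEM_DIRECT_MAP flag and the header location of the time:

    #include <zephyr/sys/mem_manage.h>

    void map_identity(void)
    {
        uint8_t *virt;

        z_phys_map(&virt, 0x80000000UL, 0x1000,
                   K_MEM_PERM_RW | K_MEM_DIRECT_MAP);
        /* with direct map, virt == (uint8_t *)0x80000000UL */
    }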
Daniel Leung 100eacca07 kernel: mmu: fix potential running out of virtual memory space
In z_phys_unmap(), the call to virt_region_free() is not using the
aligned virtual address and size. This can result in freeing a
smaller region than was allocated, given that the inputs to
z_phys_unmap() may not be aligned. So use the already calculated
aligned virtual address and size as input to virt_region_free().

Note that the assertion and if-block in virt_region_free(), which
check whether the to-be-unmapped region is within the virtual
memory region, need to be trimmed by one byte at the end.
They check against the region's end address, but (start + size)
is one byte past the end. So subtract one.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-11-17 15:56:04 +00:00
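The boundary arithmetic in question, for illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* (start + size) points one byte past the region, so the comparison
     * against the inclusive end address must subtract one. */
    static bool region_within(uintptr_t start, size_t size,
                              uintptr_t region_start, uintptr_t region_end)
    {
        return (start >= region_start) &&
               ((start + size - 1U) <= region_end);
    }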
Simon Hein 02cfbfea51 kernel: comply to coding guidelines MISRA C:2012 Rule 14.4
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)

Use `bool' instead of `int' to represent Boolean values.
Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.

This commit is a subset of the original commit:
5d02614e34

Signed-off-by: Simon Hein <SHein@baumer.com>
2022-07-21 06:16:16 -04:00
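The conversions, illustrated:

    #include <stdbool.h>

    static void drain(int count)
    {
        bool done = false;          /* 'bool' instead of 'int' */

        do {
            done = (count-- <= 0);  /* comparison with zero, not an
                                     * implicit integer test */
        } while (!done);            /* controlling expression is Boolean */
    }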
Johann Fischer 3c971307dc arch/kernel/soc/samples: use unsigned int for irq_lock()
irq_lock() returns an unsigned integer key.
Generated by spatch using semantic patch
scripts/coccinelle/irq_lock.cocci

Signed-off-by: Johann Fischer <johann.fischer@nordicsemi.no>
2022-07-14 14:37:13 -05:00
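The corrected pattern:

    #include <zephyr/kernel.h>

    void critical_section(void)
    {
        unsigned int key = irq_lock();  /* key is unsigned int, not int */

        /* ... protected work ... */
        irq_unlock(key);
    }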
Abramo Bagnara ad8778d019 coding guidelines: comply with MISRA C:2012 Rule 4.1
MISRA C:2012 Rule 4.1 (Octal and hexadecimal escape sequences shall be
terminated.)

Use string literal concatenation to properly terminate hexadecimal
escape sequences.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
Signed-off-by: Simon Hein <SHein@baumer.com>
2022-06-30 19:51:59 -04:00
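For illustration:

    /* Without concatenation, "\x12abc" would parse as a single, overlong
     * hex escape; splitting the literal terminates it after 0x12. */
    static const char seq[] = "\x12" "abc";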
Gerard Marull-Paretas cffefc818d kernel: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 09:26:20 +02:00
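The migration pattern:

    #include <zephyr/kernel.h>   /* was: #include <kernel.h> */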
Daniel Leung b46916484a kernel: mmu: add a log line for z_phys_unmap
This adds a LOG_DBG() line for z_phys_unmap which mirrors
what is in z_phys_map(). This also fixes a warning from
Clang about a variable being set but never used (addr_offset).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-02-24 08:38:38 -06:00
Carles Cufi 2000cdf6dc kernel: mmu: Fix access to unpacked member inside packed struct
The following warning is triggered by GCC when
-Waddress-of-packed-member is enabled:

/home/carles/src/zephyr/zephyr/kernel/mmu.c: In function
'free_page_frame_list_put':
/home/carles/src/zephyr/zephyr/kernel/mmu.c:383:42: warning: taking
address of packed member of 'struct z_page_frame' may result in an
unaligned pointer value [-Waddress-of-packed-member]
  383 |  sys_slist_append(&free_page_frame_list, &pf->node);

This is due to the fact that sys_snode_t node is an unpacked structure
inside a packed z_page_frame structure, so that the alignment of the
former cannot be ensured if placed inside the latter.

Given that alignment of z_page_frame is ensured by the code, silence the
compiler by going through an intermediate variable.

More info in #16587.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-12-10 14:08:59 +01:00
Daniel Leung fff9180614 kernel: mmu: make virtual region bitmap static
The virtual region bitmap bitarray struct is only used within
the source file, so it can be declared static.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-24 14:22:23 -05:00
Lixin Guo 6aa783ed6a kernel: mmu: exclude some funcs from coverage
page_frame_dump() and z_page_frames_dump() are used for
debug printing, so there is no need to include them in coverage.
The __weak function is also excluded, since every test overrides it.

Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
2021-11-12 11:57:50 -05:00
Neil Armstrong 2f359aeacf mmu: fix virt_region_alloc() unused region free when aligned
In the case where the aligned memory range sits at the top of the
allocated memory range, freeing the zero-sized unused memory at the
top will trigger an assert in the virt_region_free() call, since
vaddr could be equal to Z_VIRT_REGION_END_ADDR.

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-13 06:24:56 -04:00
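A sketch of the guard, with a hypothetical prototype for the internal free routine:

    #include <stddef.h>
    #include <stdint.h>

    extern void virt_region_free(void *vaddr,
                                 size_t size);  /* hypothetical signature */

    static void free_top_unused(uintptr_t unused_start, size_t unused_size)
    {
        if (unused_size > 0) {  /* skip the zero-sized case entirely */
            virt_region_free((void *)unused_start, unused_size);
        }
    }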
Neil Armstrong 7830f87ccd mmu: get virtual alignment from physical address
On ARM64 platforms, when mapping multiple memory zones whose sizes
are not a multiple of the L2 block size (2 MiB), all the following
mappings will probably use L3 tables, and a huge mapping will consume
all available L3 tables.

In order to reduce usage of L3 tables, this introduces a new
arch_virt_region_align() optional architecture specific
call to eventually return a more optimal virtual address
alignment than the default MMU_PAGE_SIZE.

This alignment is used in virt_region_alloc() by:
- requesting more pages in virt_region_bitmap to make sure we request
  up to the possible aligned virtual address
- freeing the supplementary pages used for alignment

Suggested-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-11 21:00:28 -04:00
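A sketch of such a hook for a 2 MiB L2 block size; the signature is assumed from the description above:

    #include <stddef.h>
    #include <stdint.h>

    #define SZ_2M (2UL * 1024UL * 1024UL)

    uintptr_t arch_virt_region_align(uintptr_t phys, size_t size)
    {
        uintptr_t align = CONFIG_MMU_PAGE_SIZE;

        /* Prefer 2 MiB alignment when the region could use L2 blocks. */
        if ((size >= SZ_2M) && ((phys & (SZ_2M - 1)) == 0)) {
            align = SZ_2M;
        }
        return align;
    }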
Stefan Eicher fbe7d7297e comments: minor typo fixes
Fixing minor typos in comments

Signed-off-by: Stefan Eicher <stefan.eicher@ypsomed.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-10-05 07:18:13 -04:00
Daniel Leung 2dfae4a0f7 kernel: demand_paging: allow reserving page frames
This adds a kconfig to allow reserving a number of page frames
which do not count towards free memory. This is to ensure that
there are enough page frames available for paging code and data.
Otherwise, it would be possible to exhaust all page frames via
anonymous memory mappings.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung e88afd2c37 kernel: mmu: pin/unpin boot sections during boot process
During the boot process, the boot sections need to be pinned in
memory to prevent them from being paged out (to avoid
pages being paged out and immediately paged in again).
Once the boot process is complete (just before calling main()),
the boot sections can be unpinned so the memory can be
used by demand paging for paging in data pages.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung f32ea4433c kernel: demand_paging: clear BSS after paging is initialized
If the BSS section is not present in memory at boot, it will not
have been cleared, as its data pages are not in physical memory.
Manipulating those pages would result in page faults.
In this scenario, zeroing BSS can only be done once the paging
mechanism has been initialized. So do it there.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
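A sketch of the ordering, with hypothetical init helpers:

    extern void backing_store_init(void);      /* hypothetical */
    extern void eviction_algorithm_init(void); /* hypothetical */
    extern void z_bss_zero(void);              /* kernel-internal BSS clear */

    void paging_subsystem_init(void)
    {
        backing_store_init();
        eviction_algorithm_init();
        z_bss_zero();   /* safe now: faults on BSS pages can be serviced */
    }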