Commit Graph

4530 Commits

Nicolas Pitre 88477906f0 arm64: hold curr_cpu instance in tpidrro_el0
Let's fully exploit tpidrro_el0 by storing in it the current CPU's
struct _cpu instance alongside the userspace mode flag bit. This
greatly simplifies the code needed to get at the cpu structure, and
this paves the way to much simpler multi-cluster support, as there
is no longer any need to decode MPIDR all the time.

The same code is used in the !SMP case as there are benefits there too
such as avoiding the literal pool, and it looks cleaner.

The tpidrro_el0 value is no longer stored in the exception stack frame.
Instead, we simply restore the user mode flag based on the SPSR value.
This way, more flag bits could be used independently in the future.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-14 15:06:21 -04:00
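A minimal sketch of the access pattern this enables (not the in-tree code; the helper name and the assumption that bit 0 is the user-mode flag bit are illustrative):

#include <stdint.h>

struct _cpu;

static inline struct _cpu *curr_cpu_sketch(void)
{
        uint64_t val;

        /* tpidrro_el0 holds the struct _cpu pointer plus the flag bit */
        __asm__ volatile("mrs %0, tpidrro_el0" : "=r" (val));
        return (struct _cpu *)(val & ~(uint64_t)1);
}
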
Jaxson Han f249544f48 arch: arm64: Add MPU drivers to the build system
When ARM_MPU is defined, the MPU drivers will be built into the final
zephyr target.

Signed-off-by: Haibo Xu <haibo.xu@arm.com>
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-04-13 07:47:44 -04:00
Jaxson Han 30ed92c218 arch: arm64: Armv8-R AArch64 MPU implementation
Armv8-R AArch64 MPU can support a maximum of 16 memory regions, and the
actual region number can be retrieved from the system register (MPUIR)
during MPU initialization.
The current MPU driver only supports EL1.

Signed-off-by: Haibo Xu <haibo.xu@arm.com>
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-04-13 07:47:44 -04:00
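A hedged sketch of reading the implemented region count at init time, assuming MPUIR_EL1 carries the region count in its low byte and the assembler accepts the register name:

#include <stdint.h>

static inline unsigned int mpu_region_count_sketch(void)
{
        uint64_t mpuir;

        __asm__ volatile("mrs %0, mpuir_el1" : "=r" (mpuir));
        return (unsigned int)(mpuir & 0xff);    /* number of MPU regions */
}
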
Jaxson Han ad1da08f4f arch: arm64: Add Cortex-R82 config
Add Cortex-R82 config to support the Cortex-R82 processor.
Introduce the new CPU_CORTEX_R_AARCH64 config for the Cortex-R 64-bit
processor.

Since the current CPU_CORTEX_R config has already been bound to
AArch32 in many test cases, we add a new CPU_AARCH64_CORTEX_R
to distinguish it from the Cortex-R 32-bit processor.
We do not use CPU_CORTEX_R64 because that name would be ambiguous
with processor names like Cortex-R82.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-04-13 07:47:44 -04:00
Evgeniy Paltsev d4081fd07f ARC: allow configuring RGF_NUM_BANKS only if ARC_FIRQ is enabled
As of today we use the second register bank only if fast interrupts are
enabled. So don't show the 'number of register bank' configuration
option when fast interrupts are disabled, to avoid user confusion.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-04-13 06:59:20 -04:00
Nicolas Pitre 790794ce84 arm64: improve CONFIG_MAX_XLAT_TABLES default value
The typical number of needed translation tables depends on memory
domain usage and userspace support, but also on the virtual address
space width due to the number of translation levels involved.
Reflect that in the default value.

Also fix a related comment where values were off by 1.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-12 22:13:38 -04:00
Nicolas Pitre fc8c53ff0e arm64: a few alignment fixes
The structure for the arm64_cpu_init array has to carry the cache
alignment on the whole structure and not on some internal padding
to achieve the desired effect.

And align struct __esf to a 16-byte boundary which will also align
its size accordingly. This structure is allocated on the stack on
exception entry and the ABI-prescribed 16-byte stack alignment
should be preserved.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-12 11:47:41 -04:00
Krzysztof Chruscinski 8bee027ec4 arch: arm: Unconditionally compile IRQ_ZERO_LATENCY flag
The flag was present only when ZLI was enabled. That resulted in additional
ifdefs being needed whenever code supports both ZLI and non-ZLI modes.

Removed the ifdefs and added a build assert to IRQ connections so that
compilation fails if IRQ_ZERO_LATENCY is set but ZLI is disabled. Additional
cleanup resulted from removing the ifdefs.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-12 07:33:27 -04:00
Flavio Ceolin 9285b4c6bf arch: nios2: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
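For illustration of the rule (this is not the actual diff from this series), a minimal sketch of a 10.4 violation and a compliant form:

#include <stdint.h>

void essential_type_example(uint32_t limit)
{
        /* Non-compliant (mixed essential types, what the rule flags):
         *     for (int32_t i = 0; i < limit; i++) { ... }
         * Compliant: both operands share the unsigned essential type.
         */
        for (uint32_t i = 0U; i < limit; i++) {
        }
}
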
Flavio Ceolin b7d04487e1 arch: riscv: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin bfd9d0069b arch: sparc: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin 49f0c74a9e arch: common: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin 4f5460ad6a arch: arm: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin abb1bbe6b1 arch: xtensa: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin 4b55ee27d4 arch: arc: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin 03544f0b77 arch: x86: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Ioannis Glaropoulos d307bd2fdd arm: add note explaining why Hard ABI is disabled for tfm builds
Add a note in the Kconfig help text that explains why Hard ABI
is not possible on builds with TF-M.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-04-09 11:48:55 -05:00
Øyvind Rønningstad 80a351e22d arch: arm: Disallow FP_HARDABI when building with TFM
When building with TFM, the app is linked with libraries built by the
TFM build system. TFM is always built with -msoft-float which is
equivalent to -mfloat-abi=soft. FP_HARDABI adds -mfloat-abi=hard
which gives errors when linking with the libs from TFM since they are
built with a different ABI.

Fixes https://github.com/zephyrproject-rtos/zephyr/issues/33956

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-04-09 11:48:55 -05:00
Nicolas Pitre 69a0fd3a6a aarch64: smp: make the cross-CPU swap_ptables call use its own IPI
Let's disentangle this from arch_sched_ipi() with an SGI for its
own purpose.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-09 11:55:13 -04:00
Nicolas Pitre 28dc807f50 aarch64: mmu: don't touch the lock before the MMU is on
We can't do atomic memory operations before the MMU is on. Let's create
a code path to set up MMU page tables without any lock. There are
obviously no concurrency issues at this stage.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-09 11:55:00 -04:00
Carlo Caione 9dd2731d15 aarch64: Remove comparison with GIC-specific intid
GIC_INTID_SPURIOUS is a GIC-specific intid so it's not valid for custom
interrupt controllers. Rework the logic a bit by comparing the intid
against the maximum possible intid instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:28:21 -04:00
Carlo Caione 23a0c8c2ec aarch64: Do not save garbage on the stack
No need to save useless values on the stack.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:28:21 -04:00
Carlo Caione 64dfa69681 aarch64: Remove useless _curr_cpu struct
Currently _curr_cpu is only used by the get_cpu macro to quickly access
the cpu struct. This is not really necessary because we can access the
struct by directly referencing &(_kernel.cpus[cpu_num]) in assembly.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:10:10 -04:00
Jiafei Pan 56db1ee66d arch: arm64: add 40-bit physical and virtual address support
Add support for a 40-bit physical and virtual address space.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-09 13:25:15 +02:00
Kumar Gala be0a19757c riscv: MTVAL CSR not supported on OpenISA RV32M1
Don't report MTVAL on the OpenISA RV32M1 SoC as this CSR isn't
supported on the SoC.

Fixes: #34014

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-04-08 14:22:54 +02:00
Daniel Leung 09e8db3d68 kernel: enable using timing subsys to collect paging histograms
This adds bits to the paging timing histogram collection routines
so they can use timing functions to collect execution time data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung 1eba3545c1 x86: timing: allow userspace to convert cycles to ns
The variable tsc_freq is not accessible in user threads,
thus preventing them from converting cycles to ns.
So make tsc_freq available globally in the default memory
domain so the conversion is possible.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
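A hedged sketch of the conversion this makes possible from user threads; tsc_freq is the variable named above, the helper name is illustrative:

#include <stdint.h>

static inline uint64_t cycles_to_ns_sketch(uint64_t cycles, uint64_t tsc_freq)
{
        return (cycles * UINT64_C(1000000000)) / tsc_freq;
}
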
Daniel Leung 8eea5119d7 kernel: mmu: demand paging execution time histogram
This adds the bits to record execution time of eviction selection,
and backing store page-in/page-out in histograms.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung ae86519819 kernel: mmu: collect more demand paging statistics
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.

Also extends this to gather per-thread demand paging
statistics.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Flavio Ceolin 85b2bd63c1 arch: x86: Fix 14.4 guideline violation
The controlling expression of an if statement has to be an
essentially boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-06 10:25:24 -04:00
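For illustration of the rule (not the actual diff), a minimal sketch showing non-boolean controlling expressions and their explicit, compliant forms:

#include <stddef.h>

void boolean_controlling_expr_example(const char *p, int count)
{
        /* Non-compliant: if (p) { ... } and if (count) { ... } */

        /* Compliant: the comparisons are explicit. */
        if (p != NULL) {
        }
        if (count != 0) {
        }
}
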
Flavio Ceolin 95cd021cea arch: arm: Fix 14.4 guideline violation
The controlling expression of an if statement has to be an
essentially boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-06 10:25:24 -04:00
Daniel Leung 64e99dfcf6 xtensa: change CONFIG_ATOMIC_OPERATIONS_ARCH to imply
Xtensa cores are highly configurable, so each SoC may not have
the instructions needed for the hardware-assisted atomic
operations. So instead of selecting the arch-specific atomic
operations kconfig, do an "imply" instead, so that SoC or board
configs can disable this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-02 07:23:33 -04:00
Nicolas Pitre d18ab4af49 arm64: get rid of the mmu directory
Turns out that we could flatten the tree further as there are not
that many files to warrant a whole directory for this.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-01 15:47:04 -05:00
Anas Nashif 0630452890 x86: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif 25c87db860 kernel/arch: cleanup function definitions
Make identifiers used in the declaration and definition identical. This
is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Carlo Caione a43f3bade8 arm/arm64: Fix misc and trivials for ARM/ARM64 split
Fix the header guards, comments, github labeler, CODEOWNERS and
MAINTAINERS files.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00
Carlo Caione 3539c2fbb3 arm/arm64: Make ARM64 a standalone architecture
Split ARM and ARM64 architectures.

Details:

- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
  (arch/arm64 and include/arch/arm64)
- AArch64 boards and SoC are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved to the
  boards/bcm_vk/viper directory

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00
Daniel Leung 7a27509d6f x86: gen_mmu: allow script to take extra arguments
This extends the cmake build script to take in extra arguments
for gen_mmu.py.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung 4b477a9864 x86: mmu: allow copying page directory entries with large pages
This changes the assert when a large page is encountered into
copying the page directory entry to the new page directory.
This is needed when a large page entry is generated by
gen_mmu.py. Note that this still asserts when there are large
page entries at a higher level.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung 51263f73aa x86: gen_mmu: allow specifying extra mappings
This extends gen_mmu.py to accept additional mappings passed via
command line.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung 0886a73df8 x86: gen_mmu: fail if reserved page table space is too small
This makes the gen_mmu.py script error out if the reserved space
for the page table in zephyr_prebuilt.elf is not large enough to
accommodate the generated page table. Let's catch this at build time
instead of getting mysterious hangs when loading the page table at boot.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung 3ebcd8307e x86: mmu: add kconfig CONFIG_X86_EXTRA_PAGE_TABLE_PAGES
The whole page table is pre-allocated at build time and is
dependent on the range of address space. This kconfig allows
reserving extra pages (of size CONFIG_MMU_PAGE_SIZE) to
the page table so that gen_mmu.py can make use of these
extra pages.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Flavio Ceolin 3a04cc2210 riscv: core: Remove invalid comparison
An unsigned int will never be less than 0.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-26 07:13:13 -04:00
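An illustrative sketch of the kind of check involved (the function is hypothetical, not the removed code):

#include <stdbool.h>

static bool irq_is_valid(unsigned int irq, unsigned int max_irq)
{
        /* "irq < 0" would always be false for an unsigned type and is the
         * kind of comparison this commit removes; only the upper bound
         * check is meaningful. */
        return irq < max_irq;
}
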
Martin Åberg 83f733ce59 SPARC: improve fatal log
The fatal log now contains
- Trap type in human readable representation
- Integer registers visible to the program when trap was taken
- Special register values such as PC and PSR
- Backtrace with PC and SP

If CONFIG_EXTRA_EXCEPTION_INFO is enabled, then all the above is
logged. If not, only the special registers are logged.

The format is inspired by the GRMON debug monitor and TSIM simulator.
A quick guide on how to use the values is in fatal.c.

It now looks like this:

E: tt = 0x02, illegal_instruction
E:
E:       INS        LOCALS     OUTS       GLOBALS
E:   0:  00000000   f3900fc0   40007c50   00000000
E:   1:  00000000   40004bf0   40008d30   40008c00
E:   2:  00000000   40004bf4   40008000   00000003
E:   3:  40009158   00000000   40009000   00000002
E:   4:  40008fa8   40003c00   40008fa8   00000008
E:   5:  40009000   f3400fc0   00000000   00000080
E:   6:  4000a1f8   40000050   4000a190   00000000
E:   7:  40002308   00000000   40001fb8   000000c1
E:
E: psr: f30000c7   wim: 00000008   tbr: 40000020   y: 00000000
E:  pc: 4000a1f4   npc: 4000a1f8
E:
E:       pc         sp
E:  #0   4000a1f4   4000a190
E:  #1   40002308   4000a1f8
E:  #2   40003b24   4000a258

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
Martin Åberg c2b1e8d2f5 SPARC: implement ARCH_EXCEPT()
Introduce a new software trap 15 which is generated by the
ARCH_EXCEPT() function macro.

The handler for this software trap calls z_sparc_fatal_error() and
finally z_fatal_error() with "reason" and ESF as arguments.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
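A hedged sketch of what an ARCH_EXCEPT() built on software trap 15 could look like; the register used to pass the reason code (%g1 here) is an assumption:

#define ARCH_EXCEPT_SKETCH(reason)                                      \
do {                                                                    \
        register unsigned int r __asm__("g1") = (reason);              \
        __asm__ volatile("ta 15" : : "r" (r) : "memory");              \
} while (0)
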
Martin Åberg 9da5a786a1 SPARC: catch unexpected software traps
Unexpected software traps ("ta" instruction) are now handled by the
fatal exception handler and eventually end up in z_fatal_error().

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
Kumar Gala 520ebe4d76 arch: arm: remove compat headers
These compat headers have been moved since at least v2.4.0 release so we
can now remove them.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-25 16:40:25 +01:00
Katsuhiro Suzuki 19db485737 kernel: arch: use ENOTSUP instead of ENOSYS in k_float_disable()
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
Katsuhiro Suzuki 59903e2934 kernel: arch: introduce k_float_enable()
This patch introduces a new API to enable the FPU for a thread. It is
the counterpart of the existing k_float_disable() API. Also add an empty
arch_float_enable() to each architecture that has arch_float_disable().
The arc and riscv already implement arch_float_enable(), so I do not
touch those implementations.

Motivation: the current Zephyr implementation does not allow using the
FPU on the main thread and other system threads such as the work queue.
Users need to create another thread with K_FP_REGS for floating-point
programs. Users can use the FPU more easily if they can enable it on
running threads.

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
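A usage sketch for the new API (K_FP_REGS is only available on architectures with FPU support; error handling kept minimal):

#include <kernel.h>

void enable_fpu_on_current_thread(void)
{
        int ret = k_float_enable(k_current_get(), K_FP_REGS);

        if (ret != 0) {
                /* e.g. -ENOTSUP when the arch does not implement it */
        }
}
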
Carlo Caione 807991e15f AArch64: Do not use CONFIG_GEN_PRIV_STACKS
We are setting CONFIG_GEN_PRIV_STACKS when AArch64 actually uses a
statically allocated privileged stack.

This error was not captured by the tests because we only verify whether
a read/write to a privileged stack is failing, but it can fail for a lot
of reasons including when the pointer to the privileged stack is not
initialized at all, like in this case.

With this patch we deselect CONFIG_GEN_PRIV_STACKS and we fix the
mem_protect/userspace test to correctly probe the privileged stack.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-25 07:23:19 -04:00
Eugeniy Paltsev 1b41da2630 ARC: Kconfig: rename CPU_ARCV2 option to ISA_ARCV2
* Rename CPU_ARCV2 to ISA_ARCV2. That helps avoid a conflict between
  CPU family naming and ISA naming, and aligns this option
  with other ARC OSS projects.

* Generalize ARCV2 check to ARC check where it is required.

NOTE: we add ISA_ARCV2 option in a choice list as a preparation
for ISA_ARCV3 addition.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-03-25 07:23:02 -04:00
Eugeniy Paltsev 8311d27afc ARC: Kconfig: cleanup CPU_ARCEM / CPU_ARCHS options usage
Don't allow the user to choose the CPU_ARCEM / CPU_ARCHS options,
but select them when an exact CPU type (i.e. EM4 / EM6 / HS3X, etc.)
is chosen.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-03-25 07:23:02 -04:00
Kumar Gala 95e4b3eb2c arch: arm: Add initial support for Cortex-M55 Core
Add initial support for the Cortex-M55 Core which is an implementation
of the Armv8.1-M mainline architecture and includes support for the
M‑profile Vector Extension (MVE).

The support is based on the Cortex-M33 support that already exists in
Zephyr.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-23 13:13:32 -05:00
Jim Shu 8db3683820 arch: riscv: improve exception messages
Add exception descriptions of mcause id 6~15. Also print mtval CSR for
memory access fault & illegal instruction exceptions.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-03-22 15:47:09 -04:00
Watson Zeng 0da8ec70dc arch: arc: enable divide zero exception
STATUS32.DZ (bit 13) is the EV_DivZero exception enable bit, and it's
not enabled by default. We need to set it explicitly to enable the
divide-by-zero exception at early boot and in each thread's setup.

The DZ bit is ignored on write and reads as zero when there is no
hardware division configured. So we can simply set the DZ bit even if
there is no hardware division configured.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-03-19 13:56:59 -04:00
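A minimal sketch of the bit being set (the macro and helper names are illustrative; only the bit position comes from the text above):

#include <stdint.h>

#define STATUS32_DZ_SKETCH (1U << 13)   /* EV_DivZero enable bit */

static inline uint32_t status32_with_div_zero(uint32_t status32)
{
        return status32 | STATUS32_DZ_SKETCH;
}
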
Anas Nashif fe0872c0ab clocks: rename z_tick_get -> sys_clock_tick_get
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Anas Nashif 771cc9705c clock: z_clock_isr -> sys_clock_isr
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Carlo Caione f3d11cccf4 aarch64: userspace: Enable userspace
Add ARCH_HAS_USERSPACE to enable userspace.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione 2936998591 aarch64: GCC10: Add -mno-outline-atomics
GCC10 introduced by default calls to out-of-line helpers to implement
atomic operations with the '-moutline-atomics' option. This is breaking
several tests because the embedded calls are trying to access the
zephyr_data region from userspace, which is declared as MT_P_RW_U_NA,
triggering a memory fault.

Since there is currently no support for MT_P_RW_U_RO (and probably never
will be), disable the out-of-line helpers by disabling the GCC option.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione 8cbd9c7d8e aarch64: userspace: Add missing entries in vector table
To support exceptions taken in EL0.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione 1347fdbca7 aarch64: userspace: Increase KOBJECT_TEXT_AREA
This is needed to have some tests run successfully.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Nicolas Pitre 2b5b054b0b aarch64: userspace: bump the global number of available page tables
Each memory domain requires a few pages for itself.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione b52f769908 aarch64: mmu: Fix MMU permissions for zephyr code and data
User threads still need to access the code and the RO data. Fix the
permissions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Nicolas Pitre a74f378cdc aarch64: mmu: apply domain switching on all CPUs if SMP
It is apparently possible for one CPU to change the memory domain
of a thread already being executed on another CPU.

All CPUs must ensure they're using the appropriate mapping after a
thread is newly added to a domain.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione ec70b2bc7a aarch64: userspace: Add support for page tables swapping
Introduce the necessary routines to have the user thread stack correctly
mapped and the functions to swap page tables on context switch.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Kumar Gala 7d35a8c93d kernel: remove arch_mem_domain_destroy
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed.  So remove
arch_mem_domain_destroy as well.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-18 16:30:47 +01:00
Carles Cufi 59a51f0e09 debug: Clean up thread awareness data sections
There's no need to duplicate the linker section for each architecture.
Instead, move the section declaration to common-rom.ld.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-03-17 14:43:01 -05:00
Daniel Leung 0c540126c0 x86: gen_mmu: unify size display in hex
This unifies all the display of region size in hex.
Some of them are there to aid in figuring out the end of
a memory region so it is easier if they are already in hex.

This also fixes the display of address range where the end
is off by one and should be (base + size - 1).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung c650721a0f x86: ia32: use virtual address for interrupt stack at boot
After the page table is loaded, we should be executing in the virtual
address space. Therefore we need to set ESP to the virtual
address of the interrupt stack for the boot process.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung 9109fbb1a2 x86: ia32: load GDT in virtual memory after loading page table
This reverts commit d40e8ede8e.

This fixes triple faults after wiping the identity mapping of
physical memory when entering userspace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Andrew Boie 348d1315d2 x86: 32-bit: restore virtual linking capability
This reverts commit 7d32e9f9a5.

We now allow the kernel to be linked virtually. This patch:

- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
  in boot page tables to facilitate instruction pointer
  transition, with logic to clean this up after completed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung 03b413712a x86: gen_mmu: double map physical/virtual memory at top level
This reuses the page directory pointer table (PAE=y) or page
directory (PAE=n) to point to next level page directory table
(PAE=y) or page tables (PAE=n) to identity map the physical
memory. This gets rid of the extra memory needed to host
the extra mappings which are only used at boot. Following
patches will have code to actually unmap physical memory
during the boot process, so this avoids wasting some
memory.

Since no extra memory needs to be reserved, this also reverts
commit ee3d345c09
("x86: mmu: reserve more space for page table if linking in virt").

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung dd0748a979 x86: gen_mmu: use constants to refer to page level...
...instead of magic numbers. Makes it a tiny bit easier to
read code.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung b95b8fb075 x86: gen_mmu: allow more verbose messages
This allows specifying a second --verbose on the command line to
enable more messages. Two new ones have been added to aid
in debugging code for mapping and setting permission to
a single page.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung 4bab992e80 x86: gen_mmu: consolidate map() and identity_map()
Consolidate map() and identity_map() as they are mostly
the same.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung e211d3a999 kernel: remove CONFIG_KERNEL_LINK_IN_VIRT
There actually is no need for a separate kconfig here, as
the kernel VM address and SRAM address can be used to figure
out if the kernel is linked in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung 273a5e670b x86: remove usage of CONFIG_KERNEL_LINK_IN_VIRT
There is no need to use this kconfig, as the phys-to-virt
offset is enough to figure out if the kernel is linked in
virtual address space in gen_mmu.py.

For code, use Z_VM_KERNEL instead.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung d39012a590 x86: use Z_MEM_*_ADDR instead of Z_X86_*_ADDR
With the introduction of Z_MEM_*_ADDR for physical<->virtual
address translation, there is no need to have x86 specific
versions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
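A hedged sketch of the generic translation idea, assuming a constant offset between the kernel's virtual and physical bases; the macro names are illustrative, not the in-tree Z_MEM_*_ADDR definitions:

#include <stdint.h>

#define VM_OFFSET_SKETCH \
        ((uintptr_t)CONFIG_KERNEL_VM_BASE - (uintptr_t)CONFIG_SRAM_BASE_ADDRESS)
#define MEM_VIRT_ADDR_SKETCH(phys) ((uintptr_t)(phys) + VM_OFFSET_SKETCH)
#define MEM_PHYS_ADDR_SKETCH(virt) ((uintptr_t)(virt) - VM_OFFSET_SKETCH)
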
Nicolas Pitre f062490c7e aarch64: mmu: add TLB flushing on mapping changes
Pretty crude for now, as we always invalidate the entire set.
It remains to be seen if more fine-grained TLB flushing is worth
the added complexity given this ought to be a relatively rare event.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
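A hedged sketch of a whole-TLB invalidation of the kind described (invalidate everything rather than individual entries); not the in-tree routine:

static inline void tlb_invalidate_all_sketch(void)
{
        __asm__ volatile("dsb ishst\n\t"
                         "tlbi vmalle1\n\t"
                         "dsb ish\n\t"
                         "isb"
                         : : : "memory");
}
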
Carlo Caione a010651c65 aarch64: mmu: Add initial support for memory domains
Introduce the basic support code for memory domains. Each domain
is associated with a top page table which is a copy of the global kernel
one. When a partition is added, the corresponding memory range is made
private before its mapping is adjusted.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre c77ffebb24 aarch64: mmu: apply proper locking
We need to protect against concurrent modifications to page tables and
their use counts.

It would have been nice to have one lock per domain, but we heavily
share page tables across domains. Hence the global lock.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre e4cd3d4292 aarch64: mmu: code to split/combine page tables
Two scenarios are possible.

privatize_page_range:

Affected pages are made private if they're not. This means a whole
new page branch starting from the top may be allocated and content
shared with the reference page tables, except for the private range
where content is duplicated.

globalize_page_range:

That's the reverse operation where pages for the given range are shared
with the reference page tables and no-longer-needed pages are freed.

When changing a domain mapping the range needs to be privatized first.

When changing a global mapping the range needs to be globalized last.

This way page table sharing across domains is maximized and memory
usage remains optimal.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre 402636153d aarch64: mmu: factor out table expansion code
Make the allocation, population and linking of a new table into
a function of its own for easier code reuse.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Eugeniy Paltsev 8165f3ad80 ARC: cleanup instruction cache initialization
As of today during the Zephyr start we
 - invalidate I$
 - disable I$
 - enable I$

Given that we don't need to have the I$ disabled during any
initialization period and ARC processors have caches enabled
after reset, the I$ disabling/enabling is excessive, so we can
drop it.

By that we also align the I$ initialization on ARC with other
projects like U-Boot and the Linux kernel.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-03-12 18:29:07 -05:00
Watson Zeng 5c3e7e3cb7 arch: arc: remove ARCH_HAS_STACK_PROTECTION for ARC_MPU_VER 2
As we have removed MPU_STACK_GUARD for ARC_MPU_VER 2, we also
need to remove ARCH_HAS_STACK_PROTECTION for boards with
ARC_MPU_VER 2 and no hardware stack checking, relative commit:
commit(arch: arc: remove MPU_STACK_GUARD for ARC_MPU_VER 2)
in pull request #24021

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-03-11 08:57:01 -05:00
Daniel Leung 6cac92ad52 x86: remove CONFIG_CPU_MINUTEIA
Since the removal of Quark-based boards, there are no users of
Minute-IA. Also, the generic x86 SoC is not exactly Minute-IA
so change it to use a fairly safe CPU_ATOM.

Fixes #14442

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-11 06:37:02 -05:00
Peng Fan b4f5b9e237 aarch64: reset: initialize CNTFRQ_EL0 in the highest EL
CNTFRQ_EL0 can only be written at the highest implemented Exception level.
For example, if EL3 is the highest implemented Exception level,
CNTFRQ_EL0 can only be written at EL3.

Also move z_arm64_el_highest_plat_init to be called when is_el_highest
is true.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-11 12:24:18 +01:00
Carlo Caione dacd176991 aarch64: userspace: Implement syscalls
This patch adds the code managing the syscalls. The privileged stack
is set up before jumping into the real syscall.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Nicolas Pitre f2995bcca2 aarch64: arch_buffer_validate() implementation
This leverages the AT (address translation) instruction to test for
given access permission. The result is then provided in the PAR_EL1
register.

Thanks to @jharris-intel for the suggestion.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-10 14:52:50 -05:00
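A hedged sketch of the probing idea: ask the MMU, via AT, whether an unprivileged (EL0) read of an address would succeed, then check the fault bit of PAR_EL1. The helper name is illustrative:

#include <stdbool.h>
#include <stdint.h>

static inline bool el0_can_read_sketch(const void *addr)
{
        uint64_t par;

        __asm__ volatile("at s1e0r, %0" : : "r" (addr));
        __asm__ volatile("isb" ::: "memory");
        __asm__ volatile("mrs %0, par_el1" : "=r" (par));

        return (par & 1U) == 0U;   /* PAR_EL1.F == 0: translation succeeded */
}
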
Carlo Caione 9ec1c1a793 aarch64: userspace: Introduce arch_user_string_nlen
Introduce the arch_user_string_nlen() assembly routine and the necessary
C code bits.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione a7a3e800bf aarch64: fatal: Restrict oops-es when in user-mode
User mode is only allowed to induce oopses and stack check failures via
software-triggered system fatal exceptions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione 6978160427 aarch64: userspace: Introduce arch_is_user_context
The arch_is_user_context() function is relying on the content of the
tpidrro_el0 register to determine whether we are in user context or not.

This register is set to '1' when in EL1 and set back to '0' when user
threads are running in userspace.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione 6cf0d000e8 aarch64: userspace: Introduce skeleton code for user-threads
Introduce the first pieces needed to schedule user threads by defining
two different code paths for kernel and user threads.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione a7d3d2e0b1 aarch64: fatal: Add arch_syscall_oops hook
Add the arch_syscall_oops hook for the AArch64.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
James Harris 4e1926d508 arch: aarch64: do EL2 init in EL3 if necessary
If EL2 is implemented but we're skipping EL2, we should still
do EL2 init. Otherwise we end up with a bunch of things still
at their (unknown) reset values.

This in particular causes problems when different
cores have different virtual timer offsets.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-10 06:50:36 -05:00
Carlo Caione 8388794c9b aarch64: Rename z_arm64_get_cpu_id macro
z_arm64_* prefix should not be used for macros. Rename it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-09 04:52:40 -05:00
Carlo Caione bdbe33b795 aarch64: Rework {inc,dec}_nest_counter
There are several issues with the current implementation of the
{inc,dec}_nest_counter macros.

The first problem is that it's internally using a call to a misplaced
function called z_arm64_curr_cpu() (for some unknown reason hosted in
irq_manage.c) that could potentially clobber the caller-saved registers
without any notice to the user of the macro.

The second problem is that, being a macro, the clobbered registers should
be specified at the calling site, but this is not possible given the current
implementation.

To fix these issues and make the call quicker, this patch rewrites the
code in assembly leveraging the availability of the _curr_cpu array. It
now clobbers only two registers passed from the calling site.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-09 04:52:40 -05:00
Erwan Gouriou 19314514e6 arch/arm: cortex_m: Disable DWT based null-pointer exception detection
Null-pointer exception detection using DWT is currently incompatible
with the current openocd runner default implementation, which leaves debug
mode on by default.
As a consequence, on all targets that use the openocd runner, null-pointer
exception detection using DWT will generate an assert.
As a result, all tests are failing on such platforms.

Disable this until openocd behavior is fixed (#32984) and enable
the MPU based solution for now.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
2021-03-08 19:19:14 -05:00
Andy Ross ae4f7a1a06 arch/xtensa: Remember to spill windows in arch_cohere_stacks()
When we reach this code in interrupt context, our upper GPRs contain a
cross-stack call that may still include some registers from the
interrupted thread.  Those need to go out to memory before we can do
our cache coherence dance here.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross b28da4a3b7 arch/xtensa: Invalidate bottom of outbound stacks
Both new thread creation and context switch had the same mistake in
cache management: the bottom of the stack (the "unused" region between
the lower memory bound and the live stack pointer) needs to be
invalidated before we switch, because otherwise any dirty lines we
might have left over can get flushed out on top of the same thread on
another CPU that is putting live data there.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross 64cf33952d arch/xtensa: Add non-HAL caching primitives
The Xtensa L1 cache layer has straightforward semantics accessible via
single instructions that operate on cache lines via physical
addresses.  These are very amenable to inlining.

Unfortunately the Xtensa HAL layer requires function calls to do this,
leading to significant code waste at the calling site, an extra frame
on the stack and needless runtime instructions for situations where
the call is over a constant region that could elide the loop.  This is
made even worse because the HAL library is not built with
-ffunction-sections, so pulling in even one of these tiny cache
functions has the effect of importing a 1500-byte object file into the
link!

Add our own tiny cache layer to include/arch/xtensa/cache.h and use
that instead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross d0c538e9a2 arch/xtensa: Add an arch-internal README on register windows
Back when I started work on this stuff, I had a set of notes on
register windows that slowly evolved into something that looks like
formal documentation.  There really isn't any overview-style
documentation of this stuff on the public internet, so it couldn't
hurt to commit it here for posterity.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross a230fafde5 arch/xtensa: soc/intel_adsp: Rework MP code entry
Instead of passing the crt1 _start function as the entry code for
auxiliary CPUs, use a tiny assembly stub instead which can avoid the
runtime testing needed to skip the work in _start.  All the crt1 code
was doing was clearing BSS (which must not happen on a second CPU) and
setting the stack pointer (which is wrong on the second CPU).

This allows us to clean out the SMP code in crt1.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross 613594e68c soc/intel_adsp: Use the correct MP stack pointer
The kernel passes the CPU's interrupt stack expecting that it will
start on it, so do it.  Pass the initial stack pointer from the SOC
layer in the variable "z_mp_stack_top" and set it in the assembly
startup before calling z_mp_entry().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross 820c94e5dd arch/xtensa: Inline atomics
The xtensa atomics layer was written with hand-coded assembly that had
to be called as functions.  That's needlessly slow, given that the low
level primitives are a two-instruction sequence.  Ideally the compiler
should see this as an inline to permit it to better optimize around
the needed barriers.

There was also a bug with the atomic_cas function, which had a loop
internally instead of returning the old value synchronously on a
failed swap.  That's benign right now because our existing spin lock
does nothing but retry it in a tight loop anyway, but it's incorrect
per spec and would have caused a contention hang with more elaborate
algorithms (for example a spinlock with backoff semantics).

Remove the old implementation and replace with a much smaller inline C
one based on just two assembly primitives.

This patch also contains a little bit of refactoring: the atomics
scheme has been split out into a separate header for each arch, and the
ATOMIC_OPERATIONS_CUSTOM kconfig has been renamed to
ATOMIC_OPERATIONS_ARCH to better capture what it means.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
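A hedged sketch of the two-instruction primitive mentioned above (WSR to SCOMPARE1 followed by S32C1I); an illustration of the mechanism, not necessarily the exact in-tree code:

#include <stdint.h>

static inline int32_t xtensa_cas_sketch(volatile int32_t *addr,
                                        int32_t expected, int32_t desired)
{
        int32_t result = desired;

        __asm__ volatile("wsr %1, scompare1\n\t"
                         "s32c1i %0, %2, 0"
                         : "+r" (result)
                         : "r" (expected), "r" (addr)
                         : "memory");

        return result;   /* the value previously held at *addr */
}
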
Andy Ross eb1ef50b6b arch/xtensa: General cleanup, remove dead code
There was a bunch of dead historical cruft floating around in the
arch/xtensa tree, left over from older code versions.  It's time to do
a cleanup pass.  This is entirely refactoring and size optimization,
no behavior changes on any in-tree devices should be present.

Among the more notable changes:

+ xtensa_context.h offered an elaborate API to deal with a stack frame
  and context layout that we no longer use.

+ xtensa_rtos.h was entirely dead code

+ xtensa_timer.h was a parallel abstraction layer implementing in the
  architecture layer what we're already doing in our timer driver.

+ The architecture thread structs (_callee_saved and _thread_arch)
  aren't used by current code, and had dead fields that were removed.
  Unfortunately for standards compliance and C++ compatibility it's
  not possible to leave an empty struct here, so they have a single
  byte field.

+ xtensa_api.h was really just some interrupt management inlines used
  by irq.h, so fold that code into the outer header.

+ Remove the stale assembly offsets.  This architecture doesn't use
  that facility.

All told, more than a thousand lines have been removed.  Not bad.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Peng Fan e27c9c7c52 arch: arm64: select SCHED_IPI_SUPPORTED when SMP enabled
Select SCHED_IPI_SUPPORTED when SMP enabled.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan a2ea20dd6d arch: arm: aarch64: add SMP support
With timer/gic/cache support added, we can add SMP support and
bring up the cores.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan 14b9b752be arch: arm: aarch64: add arch_dcache_range
Add arch_dcache_range to support flush and invalidate operations.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan e10d9364d0 arch: arm64: irq/switch: accessing nested using _cpu_t
With _kernel_offset_to_nested, we are only able to access the nested
counter of the first CPU. Since we are going to support SMP, we need to
access nested on a per-CPU basis.

To get the current cpu, introduce z_arm64_curr_cpu for asm usage,
because arch_curr_cpu cannot be used in asm code.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan 251b1d39ac arch: arm: aarch64: export z_arm64_mmu_init for SMP
Export z_arm64_mmu_init for SMP usage

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan 6182330fc3 arm: core: aarch64: save switch_handle
Save old_thread to switch_handle for wait_for_thread usage

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Ioannis Glaropoulos 191c3088af arm: cortex_m: fix arguments to dwt_init() function
Fix the call to z_arm_dwt_init(), remove the NULL argument.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-05 18:13:22 -06:00
Katsuhiro Suzuki e58e2767f8 arch: riscv: add common stub reboot function
This patch adds a weak sys_arch_reboot() function to avoid a build error
with CONFIG_REBOOT=y. Some SoCs already have their own reboot function,
but others (e.g. qemu boards) faced a build error.

- openisa_rv32m1: No change
- riscv-ite: Do nothing; remove it and use the arch/riscv function

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-04 11:09:51 -06:00
Carlo Caione 9d908c78fa aarch64: Rewrite reset code using C
There is no strict reason to use assembly for the reset routine. Move as
much code as possible to C code using the proper helpers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Carlo Caione bba7abe975 aarch64: Use helpers instead of inline assembly
No need to rely on inline assembly when helpers are available.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Carlo Caione a2226f5200 aarch64: Fix registers naming in cpu.h
The naming of registers and bit-fields in the cpu.h file is incoherent and
messy. Refactor the whole file using the proper suffixes for bits,
shifts and masks.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Daniel Leung 90722ad548 x86: gen_idt: fix some pylint issues
Fixes some issues identified by pylint.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 685e6aa2e4 x86: gen_mmu: fix some pylint issues
Fixes some issues identified by pylint.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 8cfdd91d54 x86: ia32/fatal: be explicit on pointer math with _df_tss.cr3
For some unknown reason, the pagetable address for _df_tss.cr3
did not get translated from virtual to physical. However,
the translation is done if the pointer to pagetable is obtained
through reference to the first array element (instead of simply
through the name of array). Without CR3 pointing to the page
table via physical address, double fault does not work. So
fixing this by being explicit with the page table pointer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung fa6d7cecb5 x86: mmu/mem_domain: don't translate address before null check
When adding a new thread to memory domain, there is a NULL check
to figure out if a thread is being migrated to another memory
domain. However, the NULL check is AFTER physical-to-virtual
address translation which means (NULL + offset) != NULL anymore.
This results in calling reset_region() with an invalid page table
pointer. Fix this by doing the NULL check before address
translation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung ee3d345c09 x86: mmu: reserve more space for page table if linking in virt
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and to access structures necessary for boot (e.g. GDT and
IDT). So we need to enlarge the reserved space for page table
to accommodate this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 9ce77abf23 x86: ia32: jump to virtual address before calling z_x86_prep_c
We have been having the assumption that the physical memory
is identity-mapped to virtual address space. However, with
the ability to set CONFIG_KERNEL_VM_BASE separately from
CONFIG_SRAM_BASE_ADDRESS, the assumption is no longer valid.
This changes the boot code in x86 32-bit, so that once
the page table is loaded, we can proceed with executing in
the virtual address space. So do a long jump to virtual
address just before calling z_x86_prep_c. From this point on,
code execution is in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 991300e1ba x86: gen_mmu: also map SRAM if linking in virtual address space
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and to access structures necessary for boot (e.g. GDT and
IDT). So identity map the kernel in SRAM.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung d40e8ede8e x86: gen_gdt: add address translation if needed
When the kernel is mapped into virtual address space
that is different than the physical address space,
the dynamic GDT generation uses the virtual addresses.
However, the GDT table is required at boot before
page table is loaded where the virtual addresses are
invalid. So make sure GDT generation is using
physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 7a51aab397 x86: gen_mmu: add address translation if needed
There is an assumption made in the page table generation code
that the kernel would occupy the same physical and virtual
addresses. However, we may want to map the kernel into
a virtual address space which differs from kernel's physical
address space. For example, with demand paging enabled on
kernel code and data, we can accommodate kernel that is
larger than physical memory size, and may want to utilize
a bigger virtual address space. So add address translation
in the gen_mmu.py script for this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung a1afe9be5e x86: ia32: do virtual address translation at boot
This adds virtual address translation to a few variables
used in crt0.S. This is needed as they are linked at
virtual addresses but before page table is loaded,
they are not available at virtual addresses and must be
referred via physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung bbe4b39f8d x86: mmu: cast to uintptr_t for page table using Z_X86_PHYS_ADDR
When feeding &z_shared_kernel_page_start directly to
Z_X86_PHYS_ADDR(), the compiler would complain array subscript
out of bound if linking in virtual address space. So cast it
into uintptr_t first before translation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Nicolas Pitre 0c45b548e2 aarch64: rationalize exception entry/exit code
Each vector slot has room for 32 instructions. The exception context
saving needs 15 instructions already. Rather than duplicating those
instructions in each out-of-line exception routines, let's store
them directly in the vector table. That vector space is otherwise
wasted anyway. Move the z_arm64_enter_exc macro into vector_table.S
as this is the only place where it should be used.

To further reduce code size, let's make z_arm64_exit_exc into a
function of its own to avoid code duplication again. It is put in
vector_table.S as this is the most logical location to go with its
z_arm64_enter_exc counterpart.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-03 16:26:40 +03:00
Shubham Kulkarni 26efef4f94 arch: xtensa: Fix backtrace from ISR
a0 is used as a scratch register. Restore the value of a0 (return address)
from the stack frame before spilling registers on the stack.

Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
2021-03-03 13:02:57 +01:00
Ioannis Glaropoulos f1a27a8189 arm: cortex_m: assert if DebugMonitor exc is enabled in debug mode
Assert if the null pointer de-referencing detection (via DWT) is
enabled when the processor is in debug mode, because the debug
monitor exception can not be triggered in debug mode (i.e. the
behavior is unpredictable). Add a note in the Kconfig definition
of the null-pointer detection implementation via DWT, stressing
that the solution requires the core be in normal mode.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 77c76a3b79 arm: cortex_m: build time assert for null-pointer exception page size
We introduce build time asserts for
CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE
to catch that the user-supplied value has, as requested
by the Kconfig symbol specification, a power of 2 value.
For the MPU-based implementation of null-pointer detection
we can use an existing macro for the build time assert,
since the region for catching null-pointer exceptions
is a regular MPU region, with different restrictions,
depending on the MPU architecture. For the DWT-based
implementation, we introduce a custom build-time assert.

We add also a run-time ASSERT for the MPU-based
implementation in ARMv8-M platforms, which require
that the null pointer exception detection page is
already mapped by the MPU.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
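A hedged sketch of the power-of-2 build-time check described above; the helper macro name is illustrative, BUILD_ASSERT is the existing Zephyr assert:

#include <toolchain.h>

#define IS_POWER_OF_TWO_SKETCH(x) (((x) != 0) && (((x) & ((x) - 1)) == 0))

BUILD_ASSERT(IS_POWER_OF_TWO_SKETCH(
                CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE),
        "null-pointer exception page size must be a power of 2");
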
Ioannis Glaropoulos 1db78aae73 arm: cortex_m: ensure DebugMonitor is targeting Secure domain
By design, the DebugMonitor exception is only employed
for null-pointer dereferencing detection, and enabling
that feature is not supported in Non-Secure builds. So
when enabling the DebugMonitor exception, assert that
it is not targeting the Non Secure domain.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 1b22f6b8c8 arm: cortex_m: enable null-pointer exception detection in the tests
Enable the null-pointer dereferencing detection by default
throughout the test-suite. Explicitly disable this for the
gen_isr_table test which needs to perform vector table reads.
Disable null-pointer exception detection on the qemu_cortex_m3
board, as DWT is not emulated by QEMU on this platform.
Additionally, disable null-pointer exception detection on
mps2_an521 (QEMU target), as DWT is not present and the MPU
based solution won't work, since the target does not have
the area 0x0 - 0x400 mapped, but the QEMU still permits
read access.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos d86d2c6f65 arm: cortex_m: implement null pointer exception detection with MPU
Implementation for null pointer exception detection feature
using the MPU on Cortex-M. Null-pointer detection is implemented
by programming an MPU to guard a limited area starting at
address 0x0. On non-ARMv8-M we program an MPU region with a
No-access policy. On ARMv8-M we program a region with any
permissions, assuming the region will overlap with the fixed
FLASH0 region. We add a compile-time message to warn the
user if the MPU-based null-pointer exception solution can
not be used (ARMv8-M only).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 66ef96fded arm: cortex_m: add vector table padding for null pointer detection
Padding inserted after the (first-stage) vector table,
so that the Zephyr image does not attempt to use the
area which we reserve to detect null pointer dereferencing
(0x0 - <size>). If the end of the vector table section is
higher than the upper end of the reserved area, no padding
will be added. Note also that the padding will be added
only once, to the first-stage vector table, even if the current
snippet is included multiple times (this is for a corner case,
when we want to use this feature together with SW Vector Relaying
on MCUs without VTOR but with an MPU present).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 0bac92db96 arm: cortex-m: null pointer detection additions for ARMv8-M
Additions to the null-pointer exception detection mechanism
for ARMv8-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 3054c1351a arm: cortex_m: null-pointer exception detection via DWT
Implement the functionality to detect null pointer dereference
exceptions via the DWT unit in the ARMv7-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos f97ccd940c arm: cortex-m: build debug.c for null-pointer detection feature
When we enable the null pointer exception feature (using DWT)
we include debug.c in the build. debug.c contains the functions
to configure and enable null pointer detection using the Data
Watchpoint and Trace unit.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos c42a8d9d24 arm: cortex_m: fault: hook up debug monitor exception handler
Extend the debug monitor exception handler to
- return recoverable faults when the debug monitor
  is enabled but we do not get an expected DWT event,
- call a debug monitor routine to check for null pointer
  exceptions.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 712a7951db arm: cortex_m: move static inline DWT functions in internal header
Move the DWT utility functions, present in timing.c,
to an internal cortex-m header.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos b3cd5065eb arm: cortex_m: Kconfig symbols for null pointer detection feature
Introduce the required Kconfig symbol framework for the
Cortex-M-specific null pointer dereferencing detection
feature. There are two implementations (based on DWT and
MPU) so we introduce the corresponding choice symbols,
including a choice symbol to signify that the feature
is to be disabled.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Carlo Caione eb72b2d72a aarch64: smccc: Retrieve up to 8 64-bit values
The most common secure monitor firmware in the ARM world is TF-A. The
current release allows up to 8 64-bit values to be returned from a
SMC64 call from AArch64 state.

Extend the number of possible return values from 4 to 8.
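
Illustratively, the result container grows from four to eight
registers; a sketch (field names here are placeholders, not
necessarily the ones used by Zephyr):

    /* Sketch only: a result structure wide enough for the eight 64-bit
     * values (x0-x7) that TF-A may return from an SMC64 call.
     */
    struct smc64_result {
        unsigned long a0, a1, a2, a3;
        unsigned long a4, a5, a6, a7; /* newly captured return registers */
    };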

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione bc7cb75a82 aarch64: smccc: Use offset macros
Instead of relying on hardcoded offsets in the assembly code,
introduce offset macros to make the code clearer.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione 998856bacb aarch64: smccc: Update specs link
The link points to an outdated version. Update it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione 90859c6bf3 aarch64: smccc: Decouple PSCI from SMCCC
The current code assumes that the SMC/HVC helpers can only be used
by the PSCI driver. This is wrong because a mechanism to call into
the secure monitor should be available regardless of whether PSCI is
used or not.

For example, several SoCs rely on SMC calls to read/write e-fuses,
retrieve the chip ID, control power domains, etc.

This patch introduces a new CONFIG_HAS_ARM_SMCCC symbol to enable
SMC/HVC helper support and exports it to the drivers that require it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Nicolas Pitre 443e3f519e arm64: mmu: initialize early
The MMU is fundamental enough that it is best initialized as early
as possible. Many other things initialized soon afterwards assume
the MMU is already operational.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 9461600c86 aarch64: mmu: rationalize debugging output
Make it into a generic call that can be used in various places.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre b40a2fdb8b aarch64: mmu: fix common MMU mapping
__kernel_ram_start is located too far in, so the _app_smem and .bss
areas are not covered. Use _image_ram_start instead.

__kernel_ram_end is also located too far out. We should stop at
_image_ram_end, where the expected unmapped area starts.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre fb3de16f0c aarch64: mmu: use a range (start..end) for common MMU mapping
It is easier to cover multiple segments this way, especially since
not all boundary symbols from the linker script come with a
corresponding size symbol.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre cb49e4b789 aarch64: mmu: invert the MT_OVERWRITE flag
The MT_OVERWRITE case is much more common. Redefine that flag as
MT_NO_OVERWRITE instead for the fewer cases where it is needed.

One such case is platform-provided mappings. Apply them after the
common kernel mappings and use MT_NO_OVERWRITE on them.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 56c77118d3 aarch64: mmu: factor out the phys argument out of set_mapping()
Minor cleanup.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre f53bd24a4d aarch64: mmu: move get_region_desc() closer to usage points
Simple code tidiness.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre b696090bb7 aarch64: mmu: make page table pool global
There is no real reason to keep page tables in separate pools.
Make the pool global, which allows for more efficient memory usage
and simplifies the code.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 459bfed9ea aarch64: mmu: dynamic mapping support
Introduce a remove_map() to ... remove a mapping.

Add a use count to the page table pool so pages can be dynamically
allocated, deallocated and reused.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 861f6ce2c8 aarch64: a few trivial assembly optimizations
Remove some instructions where possible.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-25 10:35:37 -05:00
Andy Ross 6fb6d3cfbe kernel: Add new k_thread_abort()/k_thread_join()
Add a newer, much smaller and simpler implementation of abort and
join.  No need to involve the idle thread.  No need for a special code
path for self-abort.  Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation.  All work in both
calls happens under a single locked path with no unexpected
synchronization points.

This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.

Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Shih-Wei Teng 8912f549ce arch: riscv: Update the description of CONFIG_PMP_STACK_GUARD_MIN_SIZE
Update the units to bytes instead of words in its description to
avoid confusion.

Signed-off-by: Shih-Wei Teng <swteng@andestech.com>
2021-02-24 10:37:03 -05:00
Yuguo Zou a8b6936c7d arch: arc: fix mpu version number
The ARC MPU driver used a wrong version number, 3, which could cause
conflicts in the future. This commit fixes the version number to 4.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2021-02-24 08:57:35 -05:00
Ioannis Glaropoulos 8289b8c877 arch: arm: cortex_m: fix ASSERT expression in MemManage handler
We need to form the ASSERT expression inside the MemManage
fault handler for the case where we build without USERSPACE
and STACK GUARD support in the same way it is formed for
the case with USERSPACE or MPU STACK GUARD support, that
is, we only assert if we came across a stacking error.

Data access violations can still occur even without user
mode or guards, e.g. when trying to write to read-only
memory (such as the code region).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-23 11:29:49 +01:00
Andrei Emeltchenko 377456c5af kernel: Move LOCKED() macro to kernel_internal.h
Remove duplication in the code by moving the LOCKED() macro to the
correct kernel_internal.h header.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2021-02-22 14:56:37 -05:00
Daniel Leung 2816c17a09 x86: allow linking in virtual address space
This adds the pieces to allow the kernel to be linked
in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung d340afd456 x86: use CONFIG_SRAM_OFFSET instead of CONFIG_X86_KERNEL_OFFSET
This changes x86 to use CONFIG_SRAM_OFFSET instead of the
arch-specific CONFIG_X86_KERNEL_OFFSET. This allows the common
MMU macros Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
function properly if we ever need to map the kernel into a
virtual address space that does not have the same starting
physical address.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from the beginning of SRAM where the kernel begins. On x86 and
PC-compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform-related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.
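
As a rough sketch of how the two offsets take part in the boot-time
translation (the macro names below are illustrative; the exact kernel
macros may differ):

    /* Illustration only: boot-time phys/virt translation expressed in
     * terms of the SRAM and VM base/offset Kconfig values.
     */
    #define BOOT_PHYS_TO_VIRT(phys)                                  \
        ((phys) - (CONFIG_SRAM_BASE_ADDRESS + CONFIG_SRAM_OFFSET)    \
                + (CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET))

    #define BOOT_VIRT_TO_PHYS(virt)                                  \
        ((virt) - (CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET)  \
                + (CONFIG_SRAM_BASE_ADDRESS + CONFIG_SRAM_OFFSET))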

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung c0ee8c4a43 x86: use z_bss_zero and z_data_copy
Instead of doing these in assembly, use the common z_bss_zero()
and z_data_copy() C functions. This simplifies the code a bit and
ensures we won't miss any future additions to these two functions
under x86 (x86_64 was actually not clearing the gcov bss area).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
Daniel Leung dd98de880a x86: move calling z_loapic_enable into z_x86_prep_c
This moves calling z_loapic_enable() from crt0.S into
z_x86_prep_c(). This is done so we can move BSS clearing
and data section copying inside z_x86_prep_c() as
these are needed before calling z_loapic_enable().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
Daniel Leung 78837c769a soc: x86: add Lakemont SoC
This adds a very basic SoC configuration for Intel Lakemont SoC.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Daniel Leung 9a189da03b x86: add kconfig CONFIG_X86_MEMMAP
This adds a new kconfig to enable the use of a memory map.
The map can be populated automatically if CONFIG_MULTIBOOT_MEMMAP=y
or defined manually via x86_memmap[].

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Daniel Leung c027494dba x86: add kconfig CONFIG_X86_PC_COMPATIBLE
This is a hidden option to indicate we are building for
PC-compatible devices (which come with BIOS, ACPI, etc. as
standard).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Carlo Caione 3f055058dc aarch64: Remove QEMU 'wfi' issue workaround
The problem is not reproducible when CONFIG_QEMU_ICOUNT=n. We can now
revert the commit aebb9d8a45.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-19 16:26:38 +03:00
Nicolas Pitre 7a91cf0176 Revert "lib/os/heap: introduce option to force big heap mode"
This reverts commit b6b6d39bb6.

With both commit 4690b8d5ec ("libc/minimal: fix malloc() allocated
memory alignment") and commit c822e0abbd ("libc/minimal: fix
realloc() allocated memory alignment") in place, there is no longer
a need to enforce the big heap mode on every allocation.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-19 07:32:22 -05:00
Martin Åberg 88f478108d sparc: write through switched_from in arch_switch()
Write through switched_from in arch_switch() as required by the
switch protocol.

Also restructure the implementation to better match the template in
kernel_arch_interface.h, by removing a wrapper routine and using
CONTAINER_OF() instead.

Fixes #32197

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-02-17 06:35:03 -05:00
Carlo Caione fadbe9d2f2 arch: aarch64: Add XIP support
Add the missing pieces to enable XIP for AArch64. XIP is simulated
on QEMU using the '-bios' parameter.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-17 14:13:10 +03:00
Daniel Leung 32b70bb7b5 x86: multiboot: map memory before accessing if necessary
Before accessing the multiboot data passed by the bootloader,
we need to map the memory first. This adds the code to map
the memory if necessary.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-16 19:08:55 -05:00
Tomasz Bursztyka 5e4e0298e9 arch/x86: Generalize cache manipulation functions
We assume that all x86 CPUs have the clflush instruction, and the
cache line size is now provided through DTS.

Detecting the clflush instruction and the cache line size at runtime
is therefore no longer required, so that code is removed.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2021-02-15 09:43:30 -05:00
Daniel Leung 5c649921de x86: add kconfigs and compiler flags for MMX and SSE*
This adds kconfigs and compiler flags to support MMX and SSE*
instructions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
Daniel Leung ce44048d46 x86: rename CONFIG_SSE* to CONFIG_X86_SSE*
This adds the X86 keyword to these kconfigs to indicate they are
for x86. The old options are still there, marked as deprecated.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
Daniel Leung 23a9a3234b x86: correct compiler flags for SSE
It is possible to enable SSE without using SSE for floating
point, so fix the compiler flags.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
Carlo Caione b27bca4b45 aarch64: mmu: Remove SRAM memory region
Now that arch_mem_map() is working correctly, we can remove the
big SRAM region.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-15 08:07:55 -05:00
Andy Ross 746c65acb7 soc/intel_adsp: Move KERNEL_COHERENCE to cavs15
Only the CAVS 1.5 linker script has full support for the coherence
features, so don't advertise it on the other SoCs yet.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Anas Nashif 5d1c535fc8 license: add missing SPDX headers
Add SPDX header to files with existing license.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Anas Nashif 1cea902fad license: add missing SPDX headers
Add missing SPDX header.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Anas Nashif 67d290540e xtensa: remove unused script
While fixing license headers, this script was identified as an
orphan that is not used anywhere, so remove it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Carlo Caione d6316aae27 aarch64: Fix corrupted IRQ state when tracing enabled
The call to sys_trace_idle() is potentially clobbering x0 resulting in a
wrong value being used by the following code. Save and restore x0 before
and after the call to sys_trace_idle() to avoid any issue.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Suggested-by: James Harris <james.harris@intel.com>
2021-02-10 10:16:03 -05:00
Daniel Leung 4daa2cb6cf x86: mark page frame as reserved according to memory map
With x86, there are usually memory regions that are reserved
for firmware and device MMIOs. We don't want to use these
pages for memory mapping, so mark them as reserved at boot.
The weakly defined x86_memmap contains the list of memory
regions, which can be overridden by SoC or board configurations.
Also, with CONFIG_MULTIBOOT_MEMMAP=y, the memory regions
are populated from multiboot-provided data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-05 11:42:28 -05:00
Ioannis Glaropoulos 8bc242ebb5 arm: cortex-m: add extra stack size for test build with FPU_SHARING
Additional stack is required for tests when building with
FPU_SHARING enabled, because the option may increase the ESF
stacking requirements for threads.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-05 11:41:25 -05:00
Daniel Leung 5a11caba33 xtensa: fix rsr/wsr assembly for XCC
XCC doesn't like the "rsr.<reg name>" style assembly,
so switch to the other style.

Also, XCC doesn't like _CONCAT() with the EPC/EPS
registers, so we need to spell them all out.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-05 07:45:07 -05:00
Daniel Leung 92c93b1b7f xtensa: fix hard-coded interrupt value for PS register
There is a hard-coded value of PS_INTLEVEL(15) to set the PS
register. The correct way is to use XCHAL_EXCM_LEVEL with
PS_INTLEVEL() to set up the register. So fix it.

Fixes #31858

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-04 20:58:56 -05:00
Andy Ross cce5ff1510 arch/x86: Fix stack alignment for user threads
The x86_64 SysV ABI requires 16 byte alignment for the stack pointer
during execution of normal code.  That means that on entry to an
ABI-compatible C function (which is reached via a CALL instruction
that pushes the return address) the RSP register must be MISaligned by
exactly 8 bytes.  The kernel mode thread setup got this right, but we
missed the equivalent condition in userspace entry.

The end result was a misaligned stack, which is surprisingly robust
for most uses.  But recent toolchains have started doing more
elaborate vectorization, and the resulting SSE instructions started
failing in userspace on the misaligned loads.

Note that there's a comment about optimization: we're doing the stack
alignment in the "wrong place" and are needlessly wasting bytes in
some cases.  We should see the raw stack boundaries where we are
setting up RSP values.  Add a FIXME to this effect, but don't touch
anything as this patch is a targeted bugfix.

Also fix a somewhat embarrassing 32-bit-ism that would have truncated
the address of a userspace stack that we tried to put above 4G.
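
A small sketch of the alignment rule described above (illustrative
arithmetic, not the actual kernel code): round the stack top down to
16 bytes, then offset by 8 so the entry state looks as if a CALL had
just pushed a return address.

    #include <stdint.h>

    /* Sketch only: compute an ABI-correct initial RSP for a thread
     * entered directly (no real CALL), given the top of its stack.
     */
    static inline uint64_t initial_rsp(uint64_t stack_top)
    {
        uint64_t rsp = stack_top & ~UINT64_C(0xF); /* 16-byte aligned */
        return rsp - 8; /* mimic the 8-byte return address push */
    }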

Fixes #31018

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-03 18:45:48 -05:00
Ioannis Glaropoulos ef926e714b arm: cortex_m: fix vector table relocation in non-XIP builds
When VTOR is implemented on a Cortex-M SoC, we can use
basically any (properly aligned) address as the vector
table starting address. In this commit we fix the setting
of VTOR in prep_c.c for non-XIP images, so the vector table
no longer needs to be present at the start of RAM
(CONFIG_SRAM_BASE_ADDRESS), allowing extra linker sections
to be placed before the vector table section.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-03 10:44:17 -05:00
Ioannis Glaropoulos 73288490f6 arm: cortex_m: log EXC_RETURN value in fatal.c
If CONFIG_EXTRA_EXCEPTION_INFO is enabled, log
the value of EXC_RETURN in the fault handler.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos cafe04558c arm: cortex_m: make lazy FP stacking enabling dynamic
Under FPU sharing mode, any thread is allowed to generate
a floating point context (use FP registers in FP instructions),
regardless of whether threads are pre-tagged with the K_FP_REGS
option when they are created.

When building with MPU stack guard feature enabled,
a large MPU stack guard is required to catch stack
overflows, if lazy FP stacking is enabled. When lazy
FP stacking is not enabled, a default 32 byte guard is
sufficient.

If lazy stacking is enabled by default, all threads may
potentially generate FP context, so they would need to
program a large MPU guard, carved out of their reserved
stack memory.

To avoid this memory waste, we modify the behavior, and make
lazy stacking a dynamically enabled feature, implemented as
follows:
- threads that are not pre-tagged with K_FP_REGS, and have
not generated an FP context use a default MPU guard and disable
lazy stacking. As long as the threads do not have an active FP
context, they won't stack FP registers, anyway, on ISRs and
exceptions, while they will benefit from reserving a small
MPU guard size
- as soon as a thread starts using FP registers, ISRs might
temporarily experience some increased latency due to lazy
stacking being disabled. This will be the case until the next
context switch, where threads that have an active FP context
will be tagged with K_FP_REGS, enable lazy stacking, and
program a wide MPU guard.

The implementation is a tradeoff between performance (ISR
latency) and memory consumption.

Note that when the MPU STACK GUARD feature is not enabled, lazy
FP stacking is always activated.
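
For reference, the lazy stacking switch itself boils down to the
LSPEN bit in FPCCR; a minimal sketch using CMSIS names (assumed in
scope), not the actual Zephyr code:

    /* Sketch only: enable or disable lazy FP state preservation. */
    static inline void fp_lazy_stacking_set(int enable)
    {
        if (enable) {
            FPU->FPCCR |= FPU_FPCCR_LSPEN_Msk;
        } else {
            FPU->FPCCR &= ~FPU_FPCCR_LSPEN_Msk;
        }
    }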

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos 86c1b57103 arm: cortex_m: select by default FP sharing mode when using the FPU
For applications that make use of the FPU on Cortex-M,
we enforce the FPU register sharing mode, because the
compiler, under certain optimization regimes, may use
FP instructions and create an FP context in any thread,
so the unshared registers mode is not practically
supported.

In addition to that, we force FPU_SHARING to depend on
MULTITHREADING, as FPU sharing mode does not make sense
outside of normal multi-threaded builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos 2642eb28bf arm: cortex_m: force FP context stacking by default
For the standard multi-threading builds, we will
enforce FP context stacking only when FPU_SHARING
is set. For the single-threading use case we enable
context stacking by default.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos 56dd787627 arm: cortex_m: skip clearing CONTROL if this is done at boot
If clearing the CONTROL register is done in reset.S, we can skip
clearing the FPCA bit when enabling floating point support,
saving a few instructions. The CONTROL register is cleared
right after boot if the symbol CONFIG_INIT_ARCH_HW_AT_BOOT
is enabled.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Daniel Leung 92c12d1f82 toolchain: add GEN_ABSOLUTE_SYM_KCONFIG()
This adds a new GEN_ABSOLUTE_SYM_KCONFIG() specifically for
generating absolute symbols in assembly for kconfig values.
This is needed as the existing GEN_ABSOLUTE_SYM() with
constraints in extended assembly parses the "value" as a
signed 32-bit integer. An unsigned 32-bit integer with the
MSB set results in a negative number in the final binary,
and this also prevents integers larger than 32 bits. So this
new macro simply puts the value inline within the assembly
instruction instead of passing it as a parameter.
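
The gist, sketched with illustrative macro names (the stringification
helpers in the actual header may differ): the value is expanded and
pasted directly into the assembler directive rather than passed as an
extended-asm operand, so it is never squeezed through a signed 32-bit
constraint.

    /* Sketch only: emit an absolute assembler symbol carrying a
     * Kconfig value.
     */
    #define ABS_SYM_STR2(x) #x
    #define ABS_SYM_STR(x)  ABS_SYM_STR2(x) /* expand, then stringify */

    #define GEN_ABS_SYM_KCONFIG(name, value)          \
        __asm__(".globl " #name "\n\t"                \
                ".set   " #name ", " ABS_SYM_STR(value))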

Fixes #31562

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-02 09:23:45 -05:00
Daniel Leung dea8fccfb3 x86: clear GS at boot for x86_64
On Intel processors, if GS is not zero and is being set to
zero, GS_BASE is also set to zero. This would interfere
with the actual use of GS_BASE for userspace. To avoid
accidentally clearing GS_BASE, simply set GS to 0 at boot, so any
subsequent clearing of GS will not clear GS_BASE.

The clearing of GS_BASE was discovered while trying to figure out
why the mem_protect test would hang within 10-20 repeated runs.
GDB revealed that both GS and GS_BASE were set to zero when the
tests hung. After setting GS to zero at boot, the mem_protect tests
ran repeatedly 5,000+ times without hanging.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-01 21:38:28 -05:00
Nicolas Pitre 7fcf5519d0 aarch64: mmu: cleanups and fixes
Major changes:

- move related functions together
- optimize add_map() not to walk the page tables *twice* on
  every loop
- properly handle leftover size when a range is already mapped
- don't overwrite existing mappings by default
- return an error when the mapping fails

and make the code clearer overall.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-01-28 20:24:30 -05:00
Daniel Leung 0d099bdd54 linker: remove asterisk from IRQ/ISR section name macro
Both _IRQ_VECTOR_TABLE_SECTION_NAME and _SW_ISR_TABLE_SECTION_NAME
are defined with an asterisk at the end in an attempt to include
all related symbols in the linker script. However, these two
macros are also used in the source code to specify the
destination sections for variables. Asterisks in the name
result in older GCC (4.x) complaining about those asterisks.
So create new macros for use in the linker script, and keep
the original names asterisk-free.

Fixes #29936

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-26 16:24:11 -05:00
Andrew Boie 6b58e2c0a3 x86: use large VM size if ACPI
We've already enabled full RAM mapping if ACPI is enabled, so also
set a large 3GB address space size. These systems are not RAM-
constrained (they are PC platforms) and they have large MMIO
config spaces for PCIe.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-26 16:21:50 -05:00
Stephanos Ioannidis f769a03081 arch: arm: aarch32: Fix interrupt nesting
In the current interrupt nesting implementation, if an ISR is
interrupted while executing inside a branch, the lr_svc register will
be corrupted, and the branch of the interrupted ISR will exit to the
return address of the final branch of the interrupting ISR, which may
or may not correspond to the intended return address.

This commit fixes the aforementioned bug by storing the lr_svc register
in the stack at the ISR entry, and restoring its value before exiting
the ISR.

For more details, refer to the issue #30517.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Stephanos Ioannidis c00169daba arch: arm: aarch32: Fix exception exit failures
This commit fixes the following bugs in the AArch32 z_arm_exc_exit
routine:

1. Invalid return address when calling `z_arm_pendsv` from the
   exception-specific mode

2. Caller-saved register is referenced after a call to `z_arm_pendsv`

For more details, refer to the issue #31511.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Stephanos Ioannidis d86fdb2154 arch: arm: aarch32: Update stale references to `_IntExit`
This commit updates the stale references to the `_IntExit` function in
the in-line documentation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Volodymyr Babchuk cd86ec2655 aarch64: add ability to generate image header
The image header is compatible with the Linux AArch64 boot protocol,
so Zephyr can be booted by U-Boot or the Xen loader.
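
The Linux arm64 boot protocol defines a fixed 64-byte header at the
start of the image; a sketch of its layout as documented by the
protocol (not copied from the Zephyr sources):

    #include <stdint.h>

    /* Linux arm64 "Image" header layout; all fields little-endian. */
    struct arm64_image_header {
        uint32_t code0;       /* executable code (branch to entry)   */
        uint32_t code1;       /* executable code                     */
        uint64_t text_offset; /* image load offset from start of RAM */
        uint64_t image_size;  /* effective image size                */
        uint64_t flags;       /* endianness, page size, placement    */
        uint64_t res2;        /* reserved                            */
        uint64_t res3;        /* reserved                            */
        uint64_t res4;        /* reserved                            */
        uint32_t magic;       /* 0x644d5241, "ARM\x64"               */
        uint32_t res5;        /* reserved                            */
    };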

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
2021-01-24 13:59:55 -05:00
Yonatan Schachter 7191b64c6f gen_isr_tables: Added check of the IRQ num before accessing the vt
In its current state, the script tries to access the vector table
list without first checking that the index is valid. This can
cause the script to crash without a descriptive message.
The index can be invalid if an IRQ number larger than the
maximum allowed by the SoC is used.
This PR adds a check of that index and exits with an error
message if the index is invalid.

Fixes #29809

Signed-off-by: Yonatan Schachter <yonatan.schachter@gmail.com>
2021-01-24 10:12:54 -05:00
Martin Åberg b6b6d39bb6 lib/os/heap: introduce option to force big heap mode
This option allows forcing big heap mode. It is useful for getting
8-byte aligned blocks on 32-bit machines.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-01-24 10:11:11 -05:00
Andrew Boie 77861037d9 x86: map all RAM if ACPI
ACPI tables can lurk anywhere. Map all memory so they can be
read.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 14c5d1f1f7 kernel: add CONFIG_ARCH_MAPS_ALL_RAM
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.

Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie ed22064e27 x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 56a9e7b91e arch: add CONFIG_DEMAND_PAGING
Indicates at the kernel level that demand paging is active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 299a2cf62e mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie b0b7756756 x86: pre-allocate address space
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.

We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.

The default address space size is now 8MB, but this can be
tuned by the application.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 5c47bbc501 x86: only map the kernel image
The policy is changed and we no longer map all page frames.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 893822fbda arch: remove KERNEL_RAM_SIZE
We don't map all RAM at boot any more, just the kernel image.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie f3e9b61a91 x86: reserve the first megabyte
A more comprehensive solution would use E820 enumeration, but we
are unlikely to ever care that much, as we intend to use demand
paging on microcontrollers and not PC-like hardware. This is
really to just prevent QEMU from crashing.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 73a3e05e40 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 69355d13a8 arch: add KERNEL_VM_OFFSET
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.

Fix the calculations on X86 where this is the case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Shubham Kulkarni 8b7da334d5 arch: xtensa: Print backtrace from panic handler
This change uses the stack frame to print a backtrace once an
exception occurs. Printing a backtrace helps to identify the cause
of the exception.

Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
2021-01-23 08:43:10 -05:00
Kumar Gala 895277f909 x86: Fix zefi.py creating valid images
When zefi.py was changed to pass the compiler and objcopy, the flag
telling objcopy to use the EFI target was dropped.  This is because
the current SDK (0.12.1) doesn't support that target type for
objcopy.  However, the target is necessary for the images to be
created correctly and boot.

Switch back to using the host objcopy as a stop-gap fix, until the
SDK can support the EFI target.

Fixes: #31517

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-01-22 12:41:27 -05:00
Daniel Leung 4e8abfcba7 x86: use TSC for timing information
This changes the timing functions to use the TSC to gather timing
information instead of the timer used for scheduling, as the TSC
provides higher resolution.
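
Reading the TSC is a single instruction; a hedged illustration (not
the actual Zephyr timing code) using the GCC/Clang builtin:

    #include <stdint.h>

    /* Sketch only: cycle-granularity timestamps from the x86 TSC. */
    static inline uint64_t tsc_now(void)
    {
        return __builtin_ia32_rdtsc(); /* emits RDTSC */
    }

    /* Usage: t0 = tsc_now(); ... work ...; elapsed = tsc_now() - t0; */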

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-22 11:05:30 -05:00
Anas Nashif 6f61663695 Revert "arch: add KERNEL_VM_OFFSET"
This reverts commit fd2434edbd.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif db0732f11d Revert "kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES"
This reverts commit 9d2ebfff58.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 4422b1d376 Revert "x86: reserve the first megabyte"
This reverts commit 51e3c9efa5.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 34e9c09330 Revert "arch: remove KERNEL_RAM_SIZE"
This reverts commit 73561be500.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 83d15d96e3 Revert "x86: only map the kernel image"
This reverts commit 3660040e22.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif e980848ba7 Revert "x86: pre-allocate address space"
This reverts commit 64f05d443a.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif a2ec139bf7 Revert "mmu: arch_mem_map() may no longer fail"
This reverts commit db56722729.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 0f24e09bcf Revert "arch: add CONFIG_DEMAND_PAGING"
This reverts commit 48cc63b4a3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif adff757c72 Revert "x86: implement demand paging APIs"
This reverts commit 7711c9a82d.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Daniel Leung d3218ca515 debug: coredump: remove z_ prefix for stuff used outside subsys
This removes the z_ prefix from those (functions, enums, etc.) that
are used outside the coredump subsys. This aligns better with the
naming convention.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-21 22:08:59 -05:00
Andrew Boie 7711c9a82d x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 48cc63b4a3 arch: add CONFIG_DEMAND_PAGING
Indicates at the kernel level that demand paging is active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie db56722729 mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 64f05d443a x86: pre-allocate address space
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.

We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.

The default address space size is now 8MB, but this can be
tuned by the application.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 3660040e22 x86: only map the kernel image
The policy is changed and we no longer map all page frames.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 73561be500 arch: remove KERNEL_RAM_SIZE
We don't map all RAM at boot any more, just the kernel image.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 51e3c9efa5 x86: reserve the first megabyte
A more comprehensive solution would use E820 enumeration, but we
are unlikely to ever care that much, as we intend to use demand
paging on microcontrollers and not PC-like hardware. This is
really to just prevent QEMU from crashing.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 9d2ebfff58 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie fd2434edbd arch: add KERNEL_VM_OFFSET
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.

Fix the calculations on X86 where this is the case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Daniel Leung ff720cd9b3 x86: enable soft float support for Zephyr SDK
This adds the correct compiler and linker flags to
support software floating point operations. The flags
need to be added to TOOLCHAIN_*_FLAGS for GCC to find
the correct library (when calling GCC with
--print-libgcc-file-name).

Note that software floating point needs to be turned
on for Newlib. This is because Newlib handles floating
point numbers in its various printf() functions, which
results in floating point instructions being emitted
by the toolchain. These instructions are placed very
early in the functions, so they are executed even when
the format string contains no floating point conversions.
Without using CONFIG_FPU to enable hardware floating point
support, any call to a printf()-like function will result
in an exception complaining that the FPU is not available.
Forcing CONFIG_FPU=y with Newlib is an option, but because
the OS doesn't know which threads would call these printf()
functions, Zephyr would have to assume all threads use the
FPU, incurring a performance penalty as every context switch
would need to save the FPU registers. A compromise here is
to use soft float instead. Newlib built with soft float does
not contain floating point instructions and yet can still
support its printf()-like functions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-20 16:45:31 -05:00
Anas Nashif 8b3d9e7285 sparc: do not set cmsis api Kconfigs
Those should be set by applications, not architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-20 16:45:31 -05:00
Spoorthy Priya Yerabolu 513b9f598f zefi.py: Use cross compiler while building zephyr
Currently, zefi.py uses the host GCC and objcopy by default.
Fix the script to use CMAKE_C_COMPILER and CMAKE_OBJCOPY.

Fixes: #27047

Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
2021-01-20 16:38:12 -05:00
Carlo Caione dc4cc5565b x86: cache: Use new cache APIs
Add a helper to correctly use the new cache APIs.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione 42386e48b3 arc: cache: Use new cache APIs
Add a helper to correctly use the new cache APIs.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione e77c841023 cache: Expand the APIs for cache flushing
The only two supported operations for data caches in the cache framework
are currently arch_dcache_flush() and arch_dcache_invd().

This is quite restrictive because for some architectures we also
want to control the i-cache, and in general we want finer control
over what can be flushed, invalidated or cleaned. To address these
needs, this patch expands the set of operations that can be
performed on data and instruction caches, adding hooks for
operations on the whole cache, a specific level or a specific
address range.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione 20f59c8f1e cache: Rename CACHE_FLUSHING to CACHE_MANAGEMENT
The new APIs are not only about cache flushing. Rename the
Kconfig symbol to CACHE_MANAGEMENT to better reflect this change.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione 923b3be890 kconfig: Unify CACHE_* options
The kconfig options to configure the cache flushing framework
currently live in the arch-specific kconfigs of the ARC and X86
(32-bit) architectures even though they define the same things.

Move the common symbols to one place accessible by all architectures
and create a menu for them.

Leave the default values in the arch-specific locations.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione 57f7e31017 drivers: PSCI: Add driver and subsystem
Firmware implementing the PSCI functions described in ARM document
number ARM DEN 0022A ("Power State Coordination Interface System
Software on ARM processors") can be used by Zephyr to initiate various
CPU-centric power operations.

It is needed for virtualization, it is used to coordinate OSes and
hypervisors, and it provides the functions used for SMP bring-up
such as CPU_ON and CPU_OFF.

A new PSCI driver is introduced to set up a proper subsystem used
to communicate with the PSCI firmware, implementing the basic
operations: get_version, cpu_on, cpu_off and affinity_info.

The current implementation only supports PSCI 0.2 and PSCI 1.0.

The PSCI conduit (SMC or HVC) is set up by reading the corresponding
property in the DTS node.
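
For orientation, the basic operations map onto fixed SMCCC function
IDs defined by the PSCI specification; a sketch of the constants
(values as given by ARM DEN 0022, worth double-checking against the
spec):

    /* Sketch only: PSCI 0.2 function IDs (SMC32 IDs start at
     * 0x84000000, SMC64 IDs at 0xC4000000).
     */
    #define PSCI_FN_PSCI_VERSION    0x84000000UL
    #define PSCI_FN_CPU_OFF         0x84000002UL
    #define PSCI_FN64_CPU_ON        0xC4000003UL
    #define PSCI_FN64_AFFINITY_INFO 0xC4000004UL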

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-18 19:06:53 +01:00
Martin Åberg 9d1dd9a760 arch/riscv: boost default stacks
Increased stacks are required for the RISC-V 64-bit CI to pass. Most
of these were caught by the kernel stack sentinel.

The CMSIS stacks are for programs in samples/portability.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-01-15 13:06:33 -05:00
Martin Åberg d2e9568543 riscv: make COMPRESSED_ISA independent of FLOAT_HARD
The CONFIG_FLOAT_HARD config previously enabled the C (compressed)
ISA extensions (CONFIG_COMPRESSED_ISA). This commit removes that
dependency.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-01-15 13:06:33 -05:00