After the following commit in https://github.com/zephyrproject-rtos/zephyr:
commit d0126a037d23484feebba00d2c0eac27e6393fef
Author: Zide Chen <zide.chen@intel.com>
Date: Wed Feb 5 08:32:00 2020 -0800
boards/x86/acrn: build it in x86_64 mode and switch to X2APIC
the Zephyr image for ACRN is built in x86_64 mode by default, so the
load/entry address for the pre-launched Zephyr image should be changed from
0x100000 to 0x8000 accordingly, per the following definition in the Zephyr linker script
zephyrproject_src/zephyr/include/arch/x86/intel64/linker.ld:
SECTIONS
{
/*
* The "locore" must be in the 64K of RAM, so that 16-bit code (with
* segment registers == 0x0000) and 32/64-bit code agree on addresses.
* ... there is no 16-bit code yet, but there will be when we add SMP.
*/
.locore 0x8000 : ALIGN(16)
{
_locore_start = .;
That commit was merged before the Zephyr v2.2 release, so from v2.2
onward the HV needs this fix to boot Zephyr as a pre-launched VM.
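As a rough illustration only, the corresponding load/entry address setting for the
pre-launched Zephyr VM in a scenario XML could look like the sketch below (the
os_config/kern_load_addr/kern_entry_addr element names are assumptions for this
sketch, not taken from this commit):
<os_config>
    <kern_load_addr>0x8000</kern_load_addr>
    <kern_entry_addr>0x8000</kern_entry_addr>
</os_config>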
Tracked-On: #5259
Signed-off-by: Victor Sun <victor.sun@intel.com>
add shm_region config in default launch XMLs to configure Inter-
VM communication for post-launched VMs.
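As a minimal sketch only (the element nesting and the value format of region name
and size in MB are assumptions, not taken from this commit), such a launch XML
entry could look like:
<shm_regions>
    <shm_region>shm_region_0, 2</shm_region>
</shm_regions>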
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
This patch exposes GPIO chassis interrupts as INTx to the safety VM for
EHL. Users can configure this per-VM attribute in the scenario XML using the
following format:
<pt_intx desc="pt intx mapping.">
(phys_gsi0, virt_gsi0), (phys_gsi1, virt_gsi1), (phys_gsiN, virt_gsiN)
</pt_intx>
The physical and virtual interrupt GSIs in each pair are separated by a
comma and enclosed in parentheses. If an integer begins with 0x or 0X,
it is hexadecimal; otherwise it is assumed to be decimal. Example:
<pt_intx desc="pt intx mapping.">
(1, 0), (0x3, 1), (0x4, 2), (5, 6), (89, 0x12)
</pt_intx>
Tracked-On: #5241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
This patch supports direct assignment of the P2SB bridge to one pre-launched
VM for EHL. Users can configure this per-VM attribute in the scenario XML:
<mmio_resources desc="MMIO resources.">
<p2sb>y</p2sb>
</mmio_resources>
Set p2sb to y to pass through the P2SB bridge to the VM, and to n otherwise.
Tracked-On: #5221
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Add missing IVSHMEM tag in mrb board xml file to fix build issue
Correct misspelled function name
Use better error messages
Tracked-On: #5221
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
add hybrid_rt scenario for the Elkhart Lake CRB board so that users can
launch Yocto Linux as a pre-launched VM.
Tracked-On: #5238
Signed-off-by: "Nishioka, Toshiki" <toshiki.nishioka@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
1. Make clos_mask and mba_delay members of the union type.
2. Move HV_SUPPORTED_MAX_CLOS, MAX_CACHE_CLOS_NUM_ENTRIES and
MAX_MBA_CLOS_NUM_ENTRIES to the misc_cfg.h file.
Tracked-On: #5229
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
1. Add macro MAX_CACHE_CLOS_NUM_ENTRIES for CAT, and MAX_MBA_CLOS_NUM_ENTRIES for MBA
(see the illustrative sketch after this list).
MAX_CACHE_CLOS_NUM_ENTRIES:
Max number of cache mask entries corresponding to each CLOS.
This can vary depending on whether CDP is enabled or disabled, as each CLOS entry will
have corresponding cache mask values for Data and Code when CDP is enabled.
MAX_MBA_CLOS_NUM_ENTRIES:
Max number of MBA delay entries corresponding to each CLOS.
2. Move the VMx_VCPU_CLOS macro to the misc_cfg.h header file.
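Purely to illustrate where these per-CLOS entries come from (all element names and
values below are assumptions for illustration, not part of this commit), a scenario
XML might carry one CLOS_MASK entry per CLOS, with each entry split into Data and
Code masks when CDP is enabled:
<RDT desc="Intel RDT (Resource Director Technology)">
    <RDT_ENABLED>y</RDT_ENABLED>
    <CDP_ENABLED>n</CDP_ENABLED>
    <CLOS_MASK>0xff</CLOS_MASK>
    <CLOS_MASK>0xff</CLOS_MASK>
    <MBA_DELAY>0</MBA_DELAY>
</RDT>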
Tracked-On: #5229
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
HV_SUPPORTED_MAX_CLOS:
This value represents the maximum CLOS that is allowed by the ACRN hypervisor.
It is set to the least common Max CLOS (CPUID.(EAX=0x10,ECX=ResID):EDX[15:0])
among all supported RDT resources on the platform. In other words, it is
min(maximum CLOS of L2, L3 and MBA). This is done in order to have consistent
CLOS allocations across all the RDT resources.
Tracked-On: #5229
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
add IVSHMEM config in hybrid_rt scenario on tgl-rvp board.
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Add cfl-k700-i7 board xml and its industry xml to support ACRN industry
scenario on cfl-k700-i7 board.
Tracked-On: #5212
Signed-off-by: Victor Sun <victor.sun@intel.com>
add IVSHMEM_ENABLED and IVSHMEM_REGION in scenario xmls to support
Inter-VM communication configuration for VMs.
v2: move IVSHMEM config into <FEATURES> section
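A minimal sketch of such a configuration (IVSHMEM_ENABLED and IVSHMEM_REGION are the
names added by this change; the wrapping element and the region value format of name,
size in MB and communicating VM IDs are assumptions for illustration):
<FEATURES>
    <IVSHMEM desc="Inter-VM shared memory">
        <IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
        <IVSHMEM_REGION>shm_region_0, 2, 0:2</IVSHMEM_REGION>
    </IVSHMEM>
</FEATURES>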
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
The default memory is 16G on TGL; the values of PLATFORM_RAM_SIZE and
SOS_RAM_SIZE in the default xml are a little small.
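For reference, 16G expressed as a byte count in hex is 0x400000000, so an illustrative
(not taken from this commit) board setting sized for a 16G platform might read:
<PLATFORM_RAM_SIZE>0x400000000</PLATFORM_RAM_SIZE>
<SOS_RAM_SIZE>0x400000000</SOS_RAM_SIZE>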
Tracked-On: #5184
Reviewed-by: Victor Sun <victor.sun@intel.com>
Signed-off-by: fuzhongl <fuzhong.liu@intel.com>
add an IVSHMEM region and the related configuration parameters in the
hybrid_rt scenario on whl-ipc-i5. The size of the shared memory is
2M, and it is used for communication between VM0 and VM2.
v6: rename shm name; remove unnecessary macros.
v7: rename macro for shm name; add unassigned vbdf for post-launched
VMs.
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The default memory is 16G on TGL; the HV and SOS RAM size values in the
default xml are a little small.
Tracked-On: #5184
Signed-off-by: fuzhongl <fuzhong.liu@intel.com>
Add a comment for SOS_VM to indicate its VM ID for better understanding;
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When CONFIG_MAX_MSIX_TABLE_NUM was set to 64, it triggered a timeout ASSERT
on the WHL-I5 board.
Tracked-On: #5178
Signed-off-by: lirui34 <ruix.li@intel.com>
Add cpu_affinity setup for the SOS VM. CPU affinity must be set in the
scenario XML, except when the scenario has no pre-launched VM; in that
case all pCPUs will be assigned to the SOS VM.
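An illustrative sketch of such a setting in the scenario XML (the pcpu_id element
name and the particular CPU IDs are assumptions):
<cpu_affinity>
    <pcpu_id>0</pcpu_id>
    <pcpu_id>1</pcpu_id>
</cpu_affinity>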
Tracked-On: #5077
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Previously the CPU affinity of the SOS VM was initialized at runtime during
the sanitize_vm_config() stage, following the policy that all physical CPUs
not occupied by pre-launched VMs belong to the SOS_VM. Now the SOS CPU
affinity is initialized at build time, with the assumption that its validity
is guaranteed before runtime.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Set guest flag value for logical partition.
Tracked-On: #5119
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Remove RT guest flags from logical partition
configuration.
Tracked-On: #5119
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Add xmls/samples folders under misc/vm_configs, and make soft links for
them.
Tracked-On: #5077
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Add hybrid_rt source code for whl-ipc-i5/i7.
Tracked-On: #5081
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Add acrn-config tool formatted nuc7i7dnb configuration code in the misc/vm_configs/
folder with the new layout;
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
The make command is the same as with the old configs layout:
under the acrn-hypervisor folder:
make hypervisor BOARD=xxx SCENARIO=xxx [TARGET_DIR=xxx] [RELEASE=x]
under the hypervisor folder:
make BOARD=xxx SCENARIO=xxx [TARGET_DIR=xxx] [RELEASE=x]
If the BOARD/SCENARIO parameters are not specified, the defaults are:
BOARD=nuc7i7dnb SCENARIO=industry
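For example, an explicit invocation equivalent to the defaults above would be:
make hypervisor BOARD=nuc7i7dnb SCENARIO=industry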
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There are 3 kinds of configurations in the ACRN hypervisor source code: hypervisor
overall settings, per-board settings and scenario-specific per-VM settings.
Currently Kconfig acts as the hypervisor overall setting and its source is located at
"hypervisor/arch/x86/configs/$(BOARD).config"; per-board configs are located in the
"hypervisor/arch/x86/configs/$(BOARD)" folder; scenario-specific per-VM configs
are located in the "hypervisor/scenarios/$(SCENARIO)" folder.
This layout has the issue that board configs and VM configs are tightly coupled:
the board-specific Kconfig file and misc_cfg.h are shared by all scenarios, and
the scenario-specific pci_dev.c is shared by all boards. So the user has no way to
build hypervisor binaries for different scenarios on different boards from one
source code repo.
This patch sets up a new VM configurations layout as below:
misc/vm_configs
├── boards --> folder of supported boards
│ ├── <board_1> --> scenario-irrelevant board configs
│ │ ├── board.c --> C file of board configs
│ │ ├── board_info.h --> H file of board info
│ │ ├── pci_devices.h --> pBDF of PCI devices
│ │ └── platform_acpi_info.h --> native ACPI info
│ ├── <board_2>
│ ├── <board_3>
│ └── <board...>
└── scenarios --> folder of supported scenarios
├── <scenario_1> --> scenario specific VM configs
│ ├── <board_1> --> board specific VM configs for <scenario_1>
│ │ ├── <board_1>.config --> Kconfig for specific scenario on specific board
│ │ ├── misc_cfg.h --> H file of board specific VM configs
│ │ ├── pci_dev.c --> board specific VM pci devices list
│ │ └── vbar_base.h --> vBAR base info of VM PT pci devices
│ ├── <board_2>
│ ├── <board_3>
│ ├── <board...>
│ ├── vm_configurations.c --> C file of scenario specific VM configs
│ └── vm_configurations.h --> H file of scenario specific VM configs
├── <scenario_2>
├── <scenario_3>
└── <scenario...>
The new layout decouples board configs and VM configs completely:
The boards folder stores the info of the supported boards; each board folder
stores scenario-irrelevant board configs only, which can be obtained entirely
from the physical platform and work for all scenarios.
The scenarios folder stores the VM configs of the supported working scenarios. In each
scenario folder, besides the generic scenario-specific VM configs, the board-specific
VM configs are put in an embedded board folder.
In the new layout, all config files are removed from the hypervisor folder and
moved to a separate folder. This makes the hypervisor LoC calculation more
precise, with the formula below:
typical LoC = LoC(hypervisor) + LoC(one vm_configs)
where
LoC(one vm_configs) = LoC(misc/vm_configs/boards/<board>)
                    + LoC(misc/vm_configs/scenarios/<scenario>/<board>)
                    + LoC(misc/vm_configs/scenarios/<scenario>/vm_configurations.c)
                    + LoC(misc/vm_configs/scenarios/<scenario>/vm_configurations.h)
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>