This patch supports direct assignment of the P2SB bridge to a pre-launched
VM on EHL. Users can configure this per-VM attribute in the scenario XML:
<mmio_resources desc="MMIO resources.">
<p2sb>y</p2sb>
</mmio_resources>
Set p2sb to y to pass the P2SB bridge through to the VM, or n otherwise.
Tracked-On: #5221
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Add the missing IVSHMEM tag in the mrb board XML file to fix a build issue.
Correct a misspelled function name.
Use better error messages.
Tracked-On: #5221
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Because ivshmem memory is allocated from hypervisor memory, HV_RAM_SIZE
includes IVSHMEM_SHM_SIZE when the ivshmem feature is enabled.
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Add a hybrid_rt scenario for the Elkhart Lake CRB board so that users can
launch Yocto Linux as a pre-launched VM.
Tracked-On: #5238
Signed-off-by: "Nishioka, Toshiki" <toshiki.nishioka@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
1. Make clos_mask and mba_delay members of a union type.
2. Move HV_SUPPORTED_MAX_CLOS, MAX_CACHE_CLOS_NUM_ENTRIES and
MAX_MBA_CLOS_NUM_ENTRIES to the misc_cfg.h file.
Tracked-On: #5229
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
1. Add the macro MAX_CACHE_CLOS_NUM_ENTRIES for CAT, and MAX_MBA_CLOS_NUM_ENTRIES for MBA (sketched below).
MAX_CACHE_CLOS_NUM_ENTRIES:
Max number of cache mask entries corresponding to each CLOS.
This can vary between CDP enabled and disabled, as each CLOS entry has
corresponding cache mask values for Data and Code when CDP is enabled.
MAX_MBA_CLOS_NUM_ENTRIES:
Max number of MBA delay entries corresponding to each CLOS.
2. Move the VMx_VCPU_CLOS macros to the misc_cfg.h header file.
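A minimal sketch of how these macros could relate (macro names are from this
commit; the CDP guard and values are assumptions, since the real definitions
are generated per board/scenario):

    /* With CDP on, each CLOS needs two cache mask entries (Data + Code) */
    #ifdef CONFIG_CDP_ENABLED
    #define MAX_CACHE_CLOS_NUM_ENTRIES  (2U * HV_SUPPORTED_MAX_CLOS)
    #else
    #define MAX_CACHE_CLOS_NUM_ENTRIES  HV_SUPPORTED_MAX_CLOS
    #endif

    /* MBA has exactly one delay entry per CLOS */
    #define MAX_MBA_CLOS_NUM_ENTRIES    HV_SUPPORTED_MAX_CLOS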
Tracked-On: #5229
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
HV_SUPPORTED_MAX_CLOS:
This value represents the maximum CLOS allowed by the ACRN hypervisor.
It is set to the least common max CLOS (CPUID.(EAX=0x10,ECX=ResID):EDX[15:0])
among all RDT resources supported on the platform; in other words, it is
min(max CLOS of L2, L3 and MBA). This is done to keep CLOS allocations
consistent across all RDT resources.
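A hypothetical illustration of the rule (the CLOS counts are invented for
the example):

    /* If CPUID.(EAX=0x10,ECX=ResID):EDX[15:0] reports 16 CLOS for L2,
     * 8 for L3 and 8 for MBA, then min(16, 8, 8) = 8: */
    #define HV_SUPPORTED_MAX_CLOS  8U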
Tracked-On: #5229
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Add IVSHMEM config in the hybrid_rt scenario on the tgl-rvp board.
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Use the SOS kernel command-line parameter maxcpus to limit the number of
pCPUs used by the SOS in hybrid or hybrid_rt scenarios, based on the
calculated vCPU numbers. For example (hypothetical values), on a 4-pCPU
board where a pre-launched VM occupies 2 pCPUs, the generated SOS command
line would contain maxcpus=2.
v2: when no pcpu_id is configured for the SOS, calculate the SOS CPU
affinity from the total pCPUs and the pCPUs occupied by pre-launched VMs.
Tracked-On: #5216
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Add the cfl-k700-i7 board XML and its industry XML to support the ACRN
industry scenario on the cfl-k700-i7 board.
Tracked-On: #5212
Signed-off-by: Victor Sun <victor.sun@intel.com>
CPU sharing between pre-launched VMs and the SOS or post-launched VMs was
forbidden. Remove this limitation.
Tracked-On: #5153
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
This patch supports inter-VM communication via IVSHMEM in the config tool.
Users can configure IVSHMEM_ENABLE to enable or disable inter-VM
communication by IVSHMEM; users can configure the name, size, and
communicating VM IDs of the IVSHMEM devices in the VM settings of the
scenario XMLs, and the config tool then generates the related IVSHMEM
configurations for inter-VM communication.
The config tool performs sanity checks, including when saving the XMLs
(a sketch of the resulting XML follows the list):
- the format of a shared memory region configuration is
  [name],[size],[VM ID]:[VM ID](:[VM ID]...);
- the max size of the name is 32 bytes;
- names must not be duplicated;
- the minimum shared memory region size is 2M;
- the shared memory region size must be a power of 2;
- the shared memory region size must not exceed the size of available RAM.
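A minimal sketch of the resulting scenario-XML fragment (element names
follow the later IVSHMEM_ENABLED/IVSHMEM_REGION commits; the region name
and IDs are example values for a 2M region shared between VM0 and VM2):

    <IVSHMEM desc="IVSHMEM configuration">
        <IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
        <IVSHMEM_REGION>shm_region_0, 2, 0:2</IVSHMEM_REGION>
    </IVSHMEM>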
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Shared memory regions can be added, deleted, or updated from the scenario
settings in the config app, with sanity checks.
v2: move IVSHMEM config to hv->FEATURES->IVSHMEM
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Add IVSHMEM_ENABLED and IVSHMEM_REGION in scenario XMLs to support
inter-VM communication configuration for VMs.
v2: move IVSHMEM config into <FEATURES> section
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Enable the TPM passthrough configuration for the pre-launched VM feature on
TGL by adding 'tgl-rvp' to TPM_PASSTHRU_BOARD.
Tracked-On: #5205
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Get the max number from an integer list instead of comparing string numbers.
Tracked-On: #5199
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
The default memory on TGL is 16G; the values of PLATFORM_RAM_SIZE and
SOS_RAM_SIZE in the default XML are too small.
Tracked-On: #5184
Reviewed-by: Victor Sun <victor.sun@intel.com>
Signed-off-by: fuzhongl <fuzhong.liu@intel.com>
Add an IVSHMEM region and the related configuration parameters in the
hybrid_rt scenario on whl-ipc-i5. The size of the shared memory is 2M,
and it is used for communication between VM0 and VM2.
v6: rename the shm name; remove unnecessary macros.
v7: rename the macro for the shm name; add unassigned vBDFs for
post-launched VMs.
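In the region format described earlier ([name],[size],[VM ID]:[VM ID]),
this 2M VM0/VM2 region would look roughly like the following (the region
name is hypothetical):

    <IVSHMEM_REGION>shm_region_0, 2, 0:2</IVSHMEM_REGION>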
Tracked-On: #4853
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Fix a build issue when CDP_ENABLED=y for EHL-CRB-B.
Tracked-On: #5092
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
The default memory on TGL is 16G; the HV and SOS RAM sizes in the default
XML are too small.
Tracked-On: #5184
Signed-off-by: fuzhongl <fuzhong.liu@intel.com>
Add a comment for SOS_VM to indicate its VM ID for better understanding.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When CONFIG_MAX_MSIX_TABLE_NUM is set to 64, it triggers a timeout ASSERT
on the WHL-I5 board.
Tracked-On: #5178
Signed-off-by: lirui34 <ruix.li@intel.com>
This patch moves the VM configuration check to the pre-build stage: a test
program checks the pre-defined VM configuration data before the hypervisor
binary is built. If the test fails, the make process is aborted, so once
the hypervisor binary builds successfully or starts to run, the VM
configuration is known to have been sanitized.
The patch does not add any new VM configuration checks; it ports the
original sanitize_vm_config() function from cpu.c to static_checks.c with
the changes below (see the sketch after this list):
1. remove the runtime RDT detection for the clos check;
2. replace pr_err() from logmsg.h with printf() from stdio.h;
3. replace the runtime call get_pcpu_nums() in the ALL_CPUS_MASK macro
   with the statically defined MAX_PCPU_NUM;
4. remove the cpu_affinity check, since a pre-launched VM might share
   pCPUs with the SOS VM.
The BOARD/SCENARIO parameter check and the configuration folder check are
also moved to the prebuild Makefile.
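A hypothetical sketch of the ported check in static_checks.c, illustrating
changes 1-3 (everything except sanitize_vm_config() and the macro names
mentioned in this commit is invented for the example):

    #include <stdio.h>

    /* illustrative values; the real ones come from the generated configs */
    #define HV_SUPPORTED_MAX_CLOS  8U
    #define MAX_PCPU_NUM           4U
    /* uses the statically defined MAX_PCPU_NUM, not get_pcpu_nums() */
    #define ALL_CPUS_MASK          ((1UL << MAX_PCPU_NUM) - 1UL)

    static int sanitize_vm_config(void)
    {
        unsigned int vcpu_clos = 9U; /* example pre-defined VM config datum */

        /* clos check against the build-time constant only; the runtime
         * RDT detection has been removed */
        if (vcpu_clos >= HV_SUPPORTED_MAX_CLOS) {
            /* printf() from stdio.h replaces pr_err() from logmsg.h */
            printf("vm clos %u exceeds HV_SUPPORTED_MAX_CLOS\n", vcpu_clos);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* a non-zero exit status aborts the make process */
        return (sanitize_vm_config() != 0) ? 1 : 0;
    }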
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add cpu_affinity setup for the SOS VM. The CPU affinity must be set in the
scenario XML, except when the scenario has no pre-launched VM; in that case
all pCPUs are assigned to the SOS VM.
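A minimal sketch of the scenario-XML element described above (the pcpu_id
values are examples):

    <cpu_affinity>
        <pcpu_id>0</pcpu_id>
        <pcpu_id>1</pcpu_id>
    </cpu_affinity>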
Tracked-On: #5077
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Previously the CPU affinity of the SOS VM was initialized at runtime during
the sanitize_vm_config() stage, following the policy that all physical CPUs
except those occupied by pre-launched VMs belong to the SOS_VM. Now the SOS
CPU affinity is initialized at build time, under the assumption that its
validity is guaranteed before runtime.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Set the guest flag value for the logical partition configuration.
Tracked-On: #5119
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Remove RT guest flags from logical partition
configuration.
Tracked-On: #5119
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
The folders for config XMLs and scenario-setting source code have been moved
to misc/vm_configs/xmls, misc/vm_configs/board and misc/vm_configs/scenario,
so this patch updates the config paths for these folders.
Tracked-On: #5077
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Add xmls/samples folders under misc/vm_configs, and create soft links for
them.
Tracked-On: #5077
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Add hybrid_rt source code for whl-ipc-i5/i7.
Tracked-On: #5081
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
1. Refine the CPU affinity in the hybrid_rt XMLs for whl-ipc-i5/i7.
2. Refine the guest flags in the hybrid_rt XMLs for whl-ipc-i5/i7.
Tracked-On: #5081
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Add support to generate passthru TPM information for whl-ipc-i5/i7.
Tracked-On: #5077
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Some macros are defined in misc_cfg.h when CAT/MBA is enabled. Include the
missing header to fix the build issue.
Tracked-On: #5092
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
The hypervisor configuration source code layout has changed, so acrn-config
must change accordingly to ensure the XML-based configuration builds
successfully.
Tracked-On: #5077
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Add acrn-config tool formatted nuc7i7dnb configuration code in the
misc/vm_configs/ folder with the new layout.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
The make command is the same as with the old configs layout.
Under the acrn-hypervisor folder:
    make hypervisor BOARD=xxx SCENARIO=xxx [TARGET_DIR=xxx] [RELEASE=x]
Under the hypervisor folder:
    make BOARD=xxx SCENARIO=xxx [TARGET_DIR=xxx] [RELEASE=x]
If the BOARD/SCENARIO parameters are not specified, the defaults are:
    BOARD=nuc7i7dnb SCENARIO=industry
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There are 3 kinds of configurations in the ACRN hypervisor source code:
hypervisor overall settings, per-board settings, and scenario-specific
per-VM settings. Currently Kconfig acts as the hypervisor overall setting
and its source is located at "hypervisor/arch/x86/configs/$(BOARD).config";
per-board configs are located in the "hypervisor/arch/x86/configs/$(BOARD)"
folder; scenario-specific per-VM configs are located in the
"hypervisor/scenarios/$(SCENARIO)" folder.
This layout couples board configs and VM configs tightly: the board-specific
Kconfig file and misc_cfg.h are shared by all scenarios, and the
scenario-specific pci_dev.c is shared by all boards. So users have no way
to build hypervisor binaries for different scenarios on different boards
from one source code repo.
This patch sets up a new VM configurations layout as below:
misc/vm_configs
├── boards --> folder of supported boards
│ ├── <board_1> --> scenario-irrelevant board configs
│ │ ├── board.c --> C file of board configs
│ │ ├── board_info.h --> H file of board info
│ │ ├── pci_devices.h --> pBDF of PCI devices
│ │ └── platform_acpi_info.h --> native ACPI info
│ ├── <board_2>
│ ├── <board_3>
│ └── <board...>
└── scenarios --> folder of supported scenarios
├── <scenario_1> --> scenario specific VM configs
│ ├── <board_1> --> board specific VM configs for <scenario_1>
│ │ ├── <board_1>.config --> Kconfig for specific scenario on specific board
│ │ ├── misc_cfg.h --> H file of board specific VM configs
│ │ ├── pci_dev.c --> board specific VM pci devices list
│ │ └── vbar_base.h --> vBAR base info of VM PT pci devices
│ ├── <board_2>
│ ├── <board_3>
│ ├── <board...>
│ ├── vm_configurations.c --> C file of scenario specific VM configs
│ └── vm_configurations.h --> H file of scenario specific VM configs
├── <scenario_2>
├── <scenario_3>
└── <scenario...>
The new layout decouples board configs and VM configs completely:
The boards folder stores info for all supported boards; each board folder
stores scenario-irrelevant board configs only, which can be obtained
entirely from the physical platform and work for all scenarios.
The scenarios folder stores the VM configs for each supported scenario. In
each scenario folder, besides the generic scenario-specific VM configs, the
board-specific VM configs are placed in an embedded board folder.
In the new layout, all config files are moved out of the hypervisor folder
into a separate folder. This makes the hypervisor LoC calculation more
precise, with the formula below:
typical LoC = LoC(hypervisor) + LoC(one vm_configs)
where
LoC(one vm_configs) = LoC(misc/vm_configs/boards/<board>)
                    + LoC(misc/vm_configs/scenarios/<scenario>/<board>)
                    + LoC(misc/vm_configs/scenarios/<scenario>/vm_configurations.c)
                    + LoC(misc/vm_configs/scenarios/<scenario>/vm_configurations.h)
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>