The current code cannot configure a PCI vUART through a unified HV config,
and the HV vUART config conflicts with the PCI vUART config.
This patch uses the PCI vUART config to replace the HV vUART config when
the vUART connection type is PCI, and modifies the launch scenario to make
sure the BDF is correct when users launch post-launched VMs.
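For illustration, a PCI vUART connection in the scenario XML may look like
the sketch below (element names and BDF values are illustrative, not
mandated by this patch):
<vuart_connection>
  <name>vUART connection 1</name>
  <type>pci</type>
  <endpoint>
    <vm_name>ACRN_Service_VM</vm_name>
    <vbdf>00:10.0</vbdf>
  </endpoint>
  <endpoint>
    <vm_name>POST_STD_VM1</vm_name>
    <vbdf>00:10.0</vbdf>
  </endpoint>
</vuart_connection>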
Tracked-On: #6690
Signed-off-by: Chenli Wei <chenli.wei@linux.intel.com>
The current UI "Maximum virtual CLOS" above the "VM Virtual Cache
Allocation Tech", it's not user-friendly, and the clos element was not
used by vCAT feature.
This patch move the "virtual_cat_number" element and remove the unused
node from schema.
Tracked-On: #6690
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
The current code generates the vCAT info by counting the CLOS_MASK
entries, so the scenario XML has to set them multiple times.
This is not an ideal solution, so we add a "virtual_cat_number" element
to set the number of virtual CLOSes and get the maximum pcbm directly.
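As a sketch, the new element may be set per VM like this (placement in
the scenario XML is illustrative):
<virtual_cat_number>2</virtual_cat_number>
The config tool can then derive the maximum pcbm from this number instead
of counting repeated CLOS_MASK entries.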
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
The current RDT setting requires users to calculate the CLOS masks and
related details, which is not user-friendly.
So we redesigned the RDT configuration: users can easily specify the
cache of each vCPU for their VMs.
This patch adds an RDT region element to the schema, and lets the config
tool calculate and generate all the masks and RDT parameters to produce
the rdt_info struct for board.c.
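For example, a cache region assignment in this style may look like the
following sketch (element names and values are illustrative):
<CACHE_ALLOCATION>
  <CACHE_ID>0x8</CACHE_ID>
  <CACHE_LEVEL>3</CACHE_LEVEL>
  <POLICY>
    <VM>POST_RT_VM1</VM>
    <VCPU>0</VCPU>
    <CLOS_MASK>0x0ff0</CLOS_MASK>
  </POLICY>
</CACHE_ALLOCATION>
From such regions the config tool computes the per-CLOS masks and emits
the rdt_info struct into board.c.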
Tracked-On: #6690
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
1. Update the data structure of vm/memory in scenario and schema files.
The scenario will look like this.
<hpa_region>
  <start_hpa>xxx</start_hpa>
  <size_hpa>xxx</size_hpa>
</hpa_region>
2. Update xsl files to generate the related struct.
Tracked-On: #6690
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
Signed-off-by: Kunhui-Li <kunhuix.li@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
This patch drops a few config items which are no longer needed, including:
- vm.os_config.name
- vm.UEFI_OS_LOADER_NAME
- vm.pci_dev_num
Tracked-On: #6690
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
As recommended by UX/DX reviews, the per-VM console virtual UART is now
limited to the following choices:
- Disabled
- a COM port from COM1 to COM4
- PCI based
This patch converts the schema of scenario XMLs to integrate this
recommendation and adds logic to the scenario upgrader to migrate data
from old scenario XMLs.
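For illustration, the per-VM choice may be expressed as a single element
(a sketch; the exact element name follows the scenario schema):
<console_vuart>COM Port 1</console_vuart>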
v1 -> v2:
* Update the static allocators and C source transformers according to the
new console vUART config item.
Tracked-On: #6690
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
We have redesigned the vUART and its UI for users, so the config tool
should change the schema and xforms for the new XML, then change the
static_allocators to allocate an IRQ and I/O port for each new connection.
This patch adds a new vUART connection type and changes the xforms to
adapt to the new type, as sketched below.
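Compared with the PCI type, a legacy-type connection carries an I/O port
per endpoint; a sketch (names and values are illustrative):
<vuart_connection>
  <name>vUART connection 0</name>
  <type>legacy</type>
  <endpoint>
    <vm_name>ACRN_Service_VM</vm_name>
    <io_port>0x2F8</io_port>
  </endpoint>
  <endpoint>
    <vm_name>POST_STD_VM1</vm_name>
    <io_port>0x2F8</io_port>
  </endpoint>
</vuart_connection>
The static allocators then allocate the IRQ and I/O port for each such
connection.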
Tracked-On: #6690
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
Signed-off-by: Chenli Wei <chenli.wei@linux.intel.com>
The concept of guest_flags is hard for users to understand, so turn
guest_flags into several parameters in the config tool user interface,
listed below:
GUEST_FLAG_LAPIC_PASSTHROUGH ---> lapic_passthrough
GUEST_FLAG_IO_COMPLETION_POLLING ---> io_completion_polling
GUEST_FLAG_VCAT_ENABLED ---> virtual_cat_support
GUEST_FLAG_SECURE_WORLD_ENABLED ---> secure_world_support
GUEST_FLAG_HIDE_MTRR ---> hide_mtrr_support
GUEST_FLAG_NVMX_ENABLED ---> nested_virtualization_support
GUEST_FLAG_SECURITY_VM ---> security_vm
GUEST_FLAG_RT ---> vm_type(RTVM)
GUEST_FLAG_TEE ---> vm_type(TEE_VM)
GUEST_FLAG_REE ---> vm_type(REE_VM)
In addition, the HV global parameter NVMX_ENABLED is removed from the
user interface; when the config tool detects more than one VM with
nested_virtualization_support, NVMX_ENABLED is assigned 'y'
automatically.
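For illustration, the flags become per-VM elements in the scenario XML,
e.g. (a sketch; element placement is illustrative):
<lapic_passthrough>y</lapic_passthrough>
<io_completion_polling>n</io_completion_polling>
<nested_virtualization_support>y</nested_virtualization_support>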
v1->v2:
*Rebase on the latest xml schema checking change
*Remove "all rights reserved" from the license header in guest_flags.py
v2->v3:
*Change the name of the new config items to CAPITAL_CASE style
*Combine guest flag policy to an XPATH in guest_flags.py
*Use count() to directly deduce NVMX_ENABLED in config_common.xsl and
update `boolean-by-key-value` to process 'true'
v3->v4:
*Change the name of the new config items to lower_case style
*Change guest_flag_node to allocation_vm_node in guest_flags.py
*Separate value case and key case for boolean-by-key-value
Tracked-On: #6690
Signed-off-by: hangliu1 <hang1.liu@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
This patch includes:
1. add a load_order (PRE_LAUNCHED_VM/SERVICE_VM/POST_LAUNCHED_VM) parameter
2. change the vm_type parameter values to RTVM, STANDARD_VM, TEE, REE;
   TEE and REE are hidden in the UI (see the sketch below)
3. deduce the VM severity in vm_configuration from vm_type and load_order
This patch does not include:
changes to scenario_config and the checking functions called by
scenario_config
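A sketch of the resulting per-VM elements (values are illustrative):
<load_order>POST_LAUNCHED_VM</load_order>
<vm_type>RTVM</vm_type>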
v2->v3:
*Refine template load_order
v1->v2:
*Change variable name from vm_type to load_order
*Change LoadOptionType to LoadOrderType
*Change VMOptionsType to VMType
*Add TEE_VM/REE_VM description
*Refine acrn:is-pre-launched-vm
Tracked-On: #6690
Signed-off-by: hangliu1 <hang1.liu@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
Add a configuration item to support companion VMs.
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Rename SOS_VM type to SERVICE_VM
rename UOS to User VM in XML description
rename uos_thread_pid to user_vm_thread_pid
rename devname_uos to devname_user_vm
rename uosid to user_vmid
rename UOS_ACK to USER_VM_ACK
rename SOS_VM_CONFIG_CPU_AFFINITY to SERVICE_VM_CONFIG_CPU_AFFINITY
rename SOS_COM to SERVICE_VM_COM
rename SOS_UART1_VALID_NUM to SERVICE_VM_UART1_VALID_NUM
rename SOS_BOOTARGS_DIFF to SERVICE_VM_BOOTARGS_DIFF
rename uos to user_vm in launch script and xml
Tracked-On: #6744
Signed-off-by: Liu Long <long.liu@linux.intel.com>
Reviewed-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
Rename SOS_VM_NUM to SERVICE_VM_NUM.
rename SOS_SOCKET_PORT to SERVICE_VM_SOCKET_PORT.
rename PROCESS_RUN_IN_SOS to PROCESS_RUN_IN_SERVICE_VM.
rename PCI_DEV_TYPE_SOSEMUL to PCI_DEV_TYPE_SERVICE_VM_EMUL.
rename SHUTDOWN_REQ_FROM_SOS to SHUTDOWN_REQ_FROM_SERVICE_VM.
rename PROCESS_RUN_IN_SOS to PROCESS_RUN_IN_SERVICE_VM.
rename SHUTDOWN_REQ_FROM_UOS to SHUTDOWN_REQ_FROM_USER_VM.
rename UOS_SOCKET_PORT to USER_VM_SOCKET_PORT.
rename SOS_CONSOLE to SERVICE_VM_OS_CONSOLE.
rename SOS_LCS_SOCK to SERVICE_VM_LCS_SOCK.
rename SOS_VM_BOOTARGS to SERVICE_VM_OS_BOOTARGS.
rename SOS_ROOTFS to SERVICE_VM_ROOTFS.
rename SOS_IDLE to SERVICE_VM_IDLE.
rename SEVERITY_SOS to SEVERITY_SERVICE_VM.
rename SOS_VM_UUID to SERVICE_VM_UUID.
rename SOS_REQ to SERVICE_VM_REQ.
rename RTCT_NATIVE_FILE_PATH_IN_SOS to RTCT_NATIVE_FILE_PATH_IN_SERVICE_VM.
rename CBC_REQ_T_UOS_ACTIVE to CBC_REQ_T_USER_VM_ACTIVE.
rename CBC_REQ_T_UOS_INACTIVE to CBC_REQ_T_USER_VM_INACTIV.
rename uos_active to user_vm_active.
Tracked-On: #6744
Signed-off-by: Liu Long <long.liu@linux.intel.com>
Reviewed-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
For vCAT, a VM may need to store more than MAX_VCPUS_PER_VM CLOSIDs, so
change clos in vm_config.h to a pointer to accommodate this situation.
Rename clos to pclosids.
pclosids is now a pointer to an array of physical CLOSIDs defined in
vm_configurations.c by vmconfig. The number of elements in the array must
be equal to the value given by num_pclosids.
Add max_type_pcbm (type: l2 or l3) to struct acrn_vm_config; it stores a
bitmask that selects/covers all the physical cache ways assigned to the
VM.
Change vmsr.c to accommodate this amended data structure.
Change the config-tools to generate vm_configurations.c, and fill in the
num_pclosids and pclosids pointers based on the information from the
scenario file.
Now vm_configurations.c.xsl generates all the clos-related code, so
remove the same code from misc_cfg.h.xsl.
Examples:
Scenario file:
<RDT>
  <RDT_ENABLED>y</RDT_ENABLED>
  <CDP_ENABLED>n</CDP_ENABLED>
  <VCAT_ENABLED>y</VCAT_ENABLED>
  <CLOS_MASK>0x7ff</CLOS_MASK>
  <CLOS_MASK>0x7ff</CLOS_MASK>
  <CLOS_MASK>0x7ff</CLOS_MASK>
  <CLOS_MASK>0xff800</CLOS_MASK>
  <CLOS_MASK>0xff800</CLOS_MASK>
  <CLOS_MASK>0xff800</CLOS_MASK>
  <CLOS_MASK>0xff800</CLOS_MASK>
  <CLOS_MASK>0xff800</CLOS_MASK>
</RDT>
<vm id="0">
  <guest_flags>
    <guest_flag>GUEST_FLAG_VCAT_ENABLED</guest_flag>
  </guest_flags>
  <clos>
    <vcpu_clos>3</vcpu_clos>
    <vcpu_clos>4</vcpu_clos>
    <vcpu_clos>5</vcpu_clos>
    <vcpu_clos>6</vcpu_clos>
    <vcpu_clos>7</vcpu_clos>
  </clos>
</vm>
<vm id="1">
  <clos>
    <vcpu_clos>1</vcpu_clos>
    <vcpu_clos>2</vcpu_clos>
  </clos>
</vm>
vm_configurations.c (generated by config-tools) with the above vCAT config:
static uint16_t vm0_vcpu_clos[5U] = {3U, 4U, 5U, 6U, 7U};
static uint16_t vm1_vcpu_clos[2U] = {1U, 2U};

struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] = {
        {
                .guest_flags = (GUEST_FLAG_VCAT_ENABLED),
                .pclosids = vm0_vcpu_clos,
                .num_pclosids = 5U,
                .max_l3_pcbm = 0xff800U,
        },
        {
                .pclosids = vm1_vcpu_clos,
                .num_pclosids = 2U,
        },
};
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Expand the capacity of legacy vUARTs per VM. This change applies to
manual scenario XML editing only.
An SOS VM can choose I/O port 0x3F8, 0x2F8, 0x3E8 or 0x2E8 by selecting
SOS_COM1_BASE, SOS_COM2_BASE, SOS_COM3_BASE or SOS_COM4_BASE respectively.
A non-SOS VM can choose I/O port 0x3F8, 0x2F8, 0x3E8 or 0x2E8 by selecting
COM1_BASE, COM2_BASE, COM3_BASE or COM4_BASE respectively.
For any type of VM, selecting "CONFIG_COM_BASE" allows the configuration
tool to pick an available I/O port from the hardcoded list:
['0xA000', '0xA010', '0xA020', '0xA030', '0xA040', '0xA050', '0xA060', '0xA070']
An SOS VM can choose IRQ 4 by selecting SOS_COM1_IRQ or SOS_COM3_IRQ, and
IRQ 3 by selecting SOS_COM2_IRQ or SOS_COM4_IRQ.
A non-SOS VM can choose IRQ 4 by selecting COM1_IRQ or COM3_IRQ, and IRQ 3
by selecting COM2_IRQ or COM4_IRQ.
For an SOS VM, selecting "CONFIG_COM_IRQ" allows the configuration tool to
pick an available IRQ based on AVAILABLE_IRQ_INFO. For a non-SOS VM, it
will allocate an available IRQ from [1, 15].
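A sketch of a legacy vUART entry using these selections (the structure is
illustrative):
<legacy_vuart id="0">
  <type>VUART_LEGACY_PIO</type>
  <base>CONFIG_COM_BASE</base>
  <irq>CONFIG_COM_IRQ</irq>
</legacy_vuart>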
Tracked-On: #6652
Signed-off-by: Yang,Yu-chu <yu-chu.yang@intel.com>
This patch adds a new priority-based scheduler to support vCPU scheduling
based on pre-configured priorities. A vCPU can run only if there is no
higher-priority vCPU running on the same pCPU.
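If selected through the scenario XML, the scheduler choice may look like
this (a sketch; the option name assumes the schema lists the new
scheduler as SCHED_PRIO):
<SCHEDULER>SCHED_PRIO</SCHEDULER>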
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Extract the log area address from the TPM2 ACPI table and add the node
log_area_start_address to board.xml. This element is used by the host_pa
of mmiodevs.
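For illustration, the resulting board.xml node may look like this (the
address is a placeholder value):
<log_area_start_address>0x7FFB0000</log_area_start_address>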
Tracked-On: #6320
Signed-off-by: Yang,Yu-chu <yu-chu.yang@intel.com>
This patch allocates interrupt lines among VMs according to the PCI devices
assigned to them.
v1 -> v2:
* Remove the usage of VMx_PT_INTX_NUM macro in vm_configuration.c; use the
concrete numbers directly.
* The static allocator will also complain if any interrupt line is allocated to
a VM with LAPIC_PASSTHROUGH.
v2 -> v3:
* Fix a minor coding style issue.
Tracked-On: #6287
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
IC_ASSIGN_MMIODEV -> ACRN_IOCTL_ASSIGN_MMIODEV
IC_DEASSIGN_MMIODEV -> ACRN_IOCTL_DEASSIGN_MMIODEV
struct acrn_mmiodev has a slight change. Move struct acrn_mmiodev into
acrn_common.h because it is used by both the DM and the HV.
Tracked-On: #6282
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Instead of "#include <x86/foo.h>", use "#include <asm/foo.h>".
In other words, we are adopting the same practice as the Linux kernel.
Tracked-On: #5920
Signed-off-by: Liang Yi <yi.liang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Add an XSLT file, "vm_configurations.c.xsl". This file is used to
generate vm_configurations.c, which is used by the hypervisor.
Tracked-On: #5980
Signed-off-by: Yang,Yu-chu <yu-chu.yang@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>