Currently the vcpu_affinity[] array fixes the vCPU-to-pCPU mapping.
The new cpu_affinity_bitmap does not specify this mapping explicitly;
instead, it implicitly assumes that vCPU0 maps to the pCPU with the
lowest pCPU ID, vCPU1 maps to the second lowest pCPU ID, and so on.
This makes it possible for a post-launched VM to run its vCPUs on only
a subset of these pCPUs rather than all of them.
acrn-dm may launch post-launched VMs with the current approach: it
indicates the VM UUID, and the hypervisor launches all vCPUs on the
pCPUs that are masked in cpu_affinity_bitmap.
Alternatively, acrn-dm can choose to launch the VM on a subset of the
pCPUs defined in cpu_affinity_bitmap. In that case, acrn-dm must specify
the subset of pCPUs in the CREATE_VM hypercall.
Additionally, with this change, a guest's vcpu_num can easily be
calculated from cpu_affinity_bitmap, so vcpu_num is no longer assigned
in vm_configuration.c.
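A minimal, illustrative sketch of the implicit mapping and of deriving
vcpu_num from the bitmap (the helper name below is assumed, not the
hypervisor's actual code):

  #include <stdint.h>
  #include <stdio.h>

  /* count the bits set in the affinity bitmap (illustrative helper) */
  static uint16_t bitmap_weight(uint64_t bitmap)
  {
          uint16_t n = 0U;

          while (bitmap != 0UL) {
                  bitmap &= (bitmap - 1UL);       /* clear the lowest set bit */
                  n++;
          }
          return n;
  }

  int main(void)
  {
          uint64_t cpu_affinity_bitmap = 0x0CUL;  /* pCPU2 and pCPU3 set */

          /* implicit mapping: vCPU0 -> pCPU2 (lowest set bit), vCPU1 -> pCPU3 */
          printf("vcpu_num = %u\n", bitmap_weight(cpu_affinity_bitmap));
          return 0;
  }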
Tracked-On: #4616
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The pci_dev config settings of the SOS are the same, so move the config
interface from vm_configurations.c into the CONFIG_SOS_VM macro.
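A simplified sketch of the idea, with field and symbol names assumed for
illustration (the real definitions live in the scenario/board headers):

  #include <stdint.h>

  struct acrn_vm_pci_dev_config {                /* simplified */
          uint16_t vbdf;
  };

  static struct acrn_vm_pci_dev_config sos_pci_devs[2];

  /* the SOS pci_dev settings are carried by the macro instead of being
   * repeated in every vm_configurations.c */
  #define CONFIG_SOS_VM                           \
          .pci_dev_num = 2U,                      \
          .pci_devs = sos_pci_devs,

  struct acrn_vm_config {                        /* simplified */
          uint16_t pci_dev_num;
          struct acrn_vm_pci_dev_config *pci_devs;
  };

  static struct acrn_vm_config vm_configs[] = {
          { CONFIG_SOS_VM },
  };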
Tracked-On: #4616
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the VM uuid and severity are initialized separately in the
vm_config struct; a developer needs to handle both items carefully,
otherwise the hypervisor would have trouble with the configurations.
Given that the VM load order, uuid and severity are tightly bound, this
patch merges these three settings into one macro so that developers have
a simple interface to configure in the vm_config struct.
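A simplified sketch of the merged interface (names and values assumed
for illustration):

  #include <stdint.h>

  enum acrn_vm_load_order { PRE_LAUNCHED_VM, SOS_VM, POST_LAUNCHED_VM };

  #define SEVERITY_SAFETY_VM      0x40U          /* assumed value */

  struct acrn_vm_config {                        /* simplified */
          enum acrn_vm_load_order load_order;
          const char *uuid;
          uint8_t severity;
  };

  /* one macro binds load order, uuid and severity so they cannot be
   * configured inconsistently */
  #define CONFIG_SAFETY_VM(vm_uuid)              \
          .load_order = PRE_LAUNCHED_VM,         \
          .uuid = (vm_uuid),                     \
          .severity = SEVERITY_SAFETY_VM,

  static struct acrn_vm_config vm_configs[] = {
          { CONFIG_SAFETY_VM("00000000-0000-0000-0000-000000000000") },  /* placeholder uuid */
  };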
Tracked-On: #4616
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This commit allows the hypervisor to allocate cache to vCPUs by assigning
different CLOS values to the vCPUs of the same VM.
For example, we could allocate different cache to the housekeeping core and
the real-time core of an RTVM in order to isolate the real-time core from
interference by the housekeeping core via the cache hierarchy.
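A simplified sketch of the per-vCPU CLOS assignment (field and macro
names assumed):

  #include <stdint.h>

  #define MAX_VCPUS_PER_VM        8U             /* placeholder value */

  struct acrn_vm_config {                        /* simplified */
          uint16_t clos[MAX_VCPUS_PER_VM];       /* one CLOS per vCPU instead of per VM */
  };

  static struct acrn_vm_config rtvm_config = {
          /* vCPU0 (housekeeping) and vCPU1 (real-time) get different CLOS,
           * i.e. different cache ways through CAT */
          .clos = { 0U, 1U },
  };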
Tracked-On: #4566
Signed-off-by: Yan, Like <like.yan@intel.com>
Reviewed-by: Chen, Zide <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add severity definitions for different scenarios. The static
guest severity is defined according to the guest configurations.
Also add a sanity check to make sure the severity of all guests
is correct.
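A sketch of the kind of severity ordering this introduces (the names and
values shown are illustrative; a higher value means a higher severity, so
the sanity check can compare them directly):

  #define SEVERITY_SAFETY_VM      0x40U
  #define SEVERITY_RTVM           0x30U
  #define SEVERITY_SOS            0x20U
  #define SEVERITY_STANDARD_VM    0x10U

  /* e.g. in a hybrid scenario the pre-launched (safety) VM must not end up
   * with a lower severity than the SOS VM */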
Tracked-On: #4270
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Add a vcpu_affinity[] array for each VM to indicate the assignment policy.
With it, pcpu_bitmap is not needed, so remove it from vm_config;
vcpu_affinity is now mandatory for each VM.
This patch also adds some sanity checks of vcpu_affinity[] (a sketch of
the checks follows the rules below):
1) only one bit can be set in each vCPU's vcpu_affinity.
2) two vCPUs in the same VM cannot be set to the same vcpu_affinity.
3) vcpu_affinity cannot be set to a pCPU used by a pre-launched VM.
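A sketch of these checks under assumed names (not the exact hypervisor
code):

  #include <stdbool.h>
  #include <stdint.h>

  static bool has_single_bit(uint64_t x)
  {
          return (x != 0UL) && ((x & (x - 1UL)) == 0UL);
  }

  /* pre_assigned: bitmap of pCPUs already taken by pre-launched VMs */
  static bool sanitize_vcpu_affinity(const uint64_t affinity[], uint16_t vcpu_num,
                                     uint64_t pre_assigned)
  {
          uint64_t used = pre_assigned;
          uint16_t i;

          for (i = 0U; i < vcpu_num; i++) {
                  if (!has_single_bit(affinity[i])) {     /* rule 1 */
                          return false;
                  }
                  if ((affinity[i] & used) != 0UL) {      /* rules 2 and 3 */
                          return false;
                  }
                  used |= affinity[i];
          }
          return true;
  }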
v4: config SDC with CONFIG_MAX_KATA_VM_NUM
v5: config SDC with CONFIG_MAX_PCPU_NUM
Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
There is a plan to define each VM configuration statically in the HV and
let the DM only do VM creation and destruction, so the DM needs to get
the vcpu_num information at VM creation time.
This patch returns vcpu_num via the API param, and also initializes the
VMs' cpu_num for the existing scenarios.
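A simplified sketch of how the creation param could carry vcpu_num back
to the DM (field names assumed):

  #include <stdint.h>

  struct acrn_create_vm {                        /* simplified */
          uint16_t vmid;                         /* OUT: id of the created VM */
          uint16_t vcpu_num;                     /* OUT: vCPU count from the static config */
          uint8_t  uuid[16];                     /* IN:  selects the static VM config */
  };

With a param like this, the DM reads vcpu_num back after the CREATE_VM
hypercall instead of deciding it on its own.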
Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The vCOM2 of each VM is designed for VM communication: one VM can send a
command or request to another VM through this channel. The feature will
be used for the system S3/S5 implementation.
In the Hybrid scenario, vCOM2 of the pre-launched VM connects to vCOM2 of
the SOS_VM; in the Industry scenario, vCOM2 of the post-launched RTVM
connects to vCOM2 of the SOS_VM.
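A simplified, illustrative sketch of wiring vCOM2 of a pre-launched VM
to vCOM2 of the SOS VM (type, field and value names assumed):

  #include <stdint.h>

  enum vuart_type { VUART_LEGACY_PIO };          /* simplified */

  struct vuart_config {                          /* simplified */
          enum vuart_type type;
          uint16_t port_base;
          uint32_t irq;
          struct {
                  uint8_t vm_id;                 /* the VM this channel connects to */
                  uint8_t vuart_id;              /* which vuart of that VM */
          } t_vuart;
  };

  /* vCOM2 (index 1) of the pre-launched VM, connected to the SOS VM's vCOM2 */
  static struct vuart_config pre_vm_vcom2 = {
          .type = VUART_LEGACY_PIO,
          .port_base = 0x2F8U,                   /* COM2 */
          .irq = 3U,
          .t_vuart = { .vm_id = 1U, .vuart_id = 1U },
  };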
Tracked-On: #3602
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
The settings of the SOS VM COM1, which is used for the console, are board
specific, and this means the SOS VM COM2, which is used for VM
communication, is also board specific. So move the configuration method
from Kconfig to the board configs folder. The macro definitions will be
handled by the acrn-config tool in the future.
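For example, the per-board settings might look like this (macro names
assumed):

  /* in the board config, instead of Kconfig */
  #define SOS_COM1_BASE   0x3F8UL                /* console */
  #define SOS_COM1_IRQ    4U
  #define SOS_COM2_BASE   0x2F8UL                /* inter-VM communication */
  #define SOS_COM2_IRQ    3U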
Tracked-On: #3602
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
When assigning a PCI PTDev from the SOS to a post-launched VM, use a
pointer to point to the real struct pci_vdev. When the post-launched VM
accesses its PTDev config space in the SOS address space, use this real
struct pci_vdev.
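A simplified sketch of the idea (the pointer field name is assumed):

  #include <stdint.h>

  struct pci_vdev {                              /* simplified */
          uint16_t vbdf;
          uint32_t cfgdata[64];                  /* cached config space */
          struct pci_vdev *sos_vdev;             /* assumed name: the real vdev owned by the SOS */
  };

  static uint32_t ptdev_read_cfg(const struct pci_vdev *vdev, uint32_t offset)
  {
          /* serve the access through the real struct pci_vdev in the SOS */
          return vdev->sos_vdev->cfgdata[offset >> 2U];
  }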
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Add the emulated PCI device configuration for the SOS to prepare for
adding support for customizing special PCI operations for each emulated
PCI device.
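A simplified sketch of what per-device PCI operations in the SOS emulated
device config could look like (names assumed):

  #include <stdint.h>

  struct pci_vdev;                               /* forward declaration */

  struct pci_vdev_ops {                          /* simplified */
          void (*init_vdev)(struct pci_vdev *vdev);
          int  (*write_vdev_cfg)(struct pci_vdev *vdev, uint32_t offset,
                                 uint32_t bytes, uint32_t val);
          int  (*read_vdev_cfg)(const struct pci_vdev *vdev, uint32_t offset,
                                uint32_t bytes, uint32_t *val);
  };

  struct acrn_vm_pci_dev_config {                /* simplified */
          uint16_t vbdf;
          const struct pci_vdev_ops *vdev_ops;   /* special ops for this emulated device */
  };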
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
In hybrid mode, the pre-launched VM should have the highest severity in
order to handle platform reset, so the flag should not be set in the SOS VM.
Tracked-On: #3505
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Hybrid scenario will run 3 VMs: one pre-launched VM, one pre-launched SOS VM
and one post-launched Standard VM.
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>