doc: add vCAT documentation

This patch adds user guide and high level design for vCAT

Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
dongshen 2021-11-12 07:24:35 -08:00 committed by David Kinder
parent 643d07b3f1
commit 066856d6f9
5 changed files with 230 additions and 0 deletions


@@ -74,6 +74,7 @@ Advanced Features
tutorials/nvmx_virtualization
tutorials/vuart_configuration
tutorials/rdt_configuration
tutorials/vcat_configuration
tutorials/waag-secure-boot
tutorials/enable_s5
tutorials/cpu_sharing


@@ -25,4 +25,5 @@ Hypervisor High-Level Design
Hypercall / HSM upcall <hv-hypercall>
Compile-time configuration <hv-config>
RDT support <hv-rdt>
vCAT support <hv-vcat>
Split-locked Access handling <hld-splitlock>


@@ -0,0 +1,139 @@
.. _hv_vcat:

Enable vCAT
###########

vCAT refers to the virtualization of Cache Allocation Technology (CAT), one of the
RDT (Resource Director Technology) features.
ACRN vCAT is built on top of ACRN RDT: ACRN RDT provides a number of physical CAT resources
(COS IDs + cache ways), while ACRN vCAT exposes a number of virtual CAT resources to VMs
and transparently maps them to the assigned physical CAT resources in the ACRN hypervisor.
A VM can take advantage of vCAT to prioritize and partition virtual cache ways for its own tasks.

In the current CAT implementation, one COS ID corresponds to one ``IA32_type_MASK_n`` MSR
(type: L2 or L3; n ranges from 0 to ``MAX_CACHE_CLOS_NUM_ENTRIES`` - 1), and one bit in a
capacity bitmask (CBM) corresponds to one cache way.

On current generation systems, the L3 cache is normally shared by all CPU cores on the same
socket, while the L2 cache is generally shared only by the hyperthreads on a core. However,
when assigning ACRN vCAT COS IDs, it is currently assumed that all L2/L3 caches (and therefore
all COS IDs) are system-wide caches shared by all cores in the system. This assumption is made
for convenience and to simplify the vCAT configuration process. If vCAT is enabled for a VM
(abbreviated as a vCAT VM), there must not be any COS ID overlap between the vCAT VM and any
other VM; i.e., the vCAT VM has exclusive use of its assigned COS IDs.

When assigning cache ways, however, a VM can be given exclusive, shared, or mixed access to
the cache ways depending on its performance needs. For example, use dedicated cache ways for
an RTVM, and share cache ways between low-priority VMs.

In ACRN, the CAT resources allocated for vCAT VMs are determined in :ref:`vcat_configuration`.
For further details on RDT, refer to the ACRN RDT high-level design: :ref:`hv_rdt`.

High-Level ACRN vCAT Design
***************************

ACRN CAT virtualization support can be divided into two parts:

- CAT capability exposure to a Guest VM
- CAT resources (COS IDs + cache ways) management

The figure below shows the high-level design of vCAT in ACRN:

.. figure:: images/vcat-hld.png
   :align: center

CAT Capability Exposure to Guest VM
***********************************

ACRN exposes CAT capabilities and resources to a Guest VM via vCPUID and vMSR, as explained
in the following sections.

vCPUID
======

CPUID Leaf 07H
--------------

- CPUID.(EAX=07H, ECX=0).EBX.PQE[bit 15]: Supports RDT capability if 1. This bit will be set for a vCAT VM.

CPUID Leaf 10H
--------------

**CAT Resource Type and Capability Enumeration**

- CPUID.(EAX=10H, ECX=0):EBX[1]: If 1, indicates L3 CAT support for a vCAT VM.
- CPUID.(EAX=10H, ECX=0):EBX[2]: If 1, indicates L2 CAT support for a vCAT VM.
- CPUID.(EAX=10H, ECX=1): CAT capability enumeration sub-leaf for L3. Reports L3 COS_MAX and CBM_LEN to a vCAT VM.
- CPUID.(EAX=10H, ECX=2): CAT capability enumeration sub-leaf for L2. Reports L2 COS_MAX and CBM_LEN to a vCAT VM.

vMSR
====

The following CAT MSRs are virtualized for a vCAT VM:

- ``IA32_PQR_ASSOC``
- ``IA32_type_MASK_0`` ~ ``IA32_type_MASK_n``

By default, after reset, all CPU cores are assigned to COS 0, and all ``IA32_type_MASK_n``
MSRs are programmed to allow fill into all cache ways.
CAT resources (COS IDs + cache ways) management
************************************************

All accesses to the CAT MSRs are intercepted by vMSR, and control is passed to vCAT, which
performs the following actions:

- Intercepts the ``IA32_PQR_ASSOC`` MSR to re-map the virtual COS ID to a physical COS ID.
  Upon writes, it stores the re-mapped physical COS ID into the guest part of the vCPU's
  ``msr_store_area`` data structure; that value is loaded into the physical
  ``IA32_PQR_ASSOC`` on each VM-Enter.
- Intercepts the ``IA32_type_MASK_n`` MSRs to re-map the virtual CBM to a physical CBM.
  Upon writes, it programs the re-mapped physical CBM into the corresponding physical
  ``IA32_type_MASK_n`` MSR.

Several vCAT P2V (physical to virtual) and V2P (virtual to physical)
mappings exist, as illustrated in the following pseudocode:

.. code-block:: none

   struct acrn_vm_config *vm_config = get_vm_config(vm_id)

   max_pcbm = vm_config->max_type_pcbm (type: l2 or l3)
   mask_shift = ffs64(max_pcbm)

   vcosid = vmsr - MSR_IA32_type_MASK_0
   pcosid = vm_config->pclosids[vcosid]

   pmsr = MSR_IA32_type_MASK_0 + pcosid
   pcbm = vcbm << mask_shift
   vcbm = pcbm >> mask_shift

where:

- ``vm_config->pclosids[]``: array of physical COS IDs, each corresponding to one
  ``vcpu_clos`` defined in the scenario file
- ``max_pcbm``: a bitmask that selects all the physical cache ways assigned to the VM;
  corresponds to the nth ``CLOS_MASK`` defined in the scenario file, where n = the first
  physical COS ID assigned = ``vm_config->pclosids[0]``
- ``ffs64(max_pcbm)``: finds the first (least significant) bit set in ``max_pcbm`` and
  returns the index of that bit
- ``MSR_IA32_type_MASK_0``: 0xD10 for L2, 0xC90 for L3
- ``vcosid``: virtual COS ID, always starting from 0
- ``pcosid``: the physical COS ID corresponding to a given ``vcosid``
- ``vmsr``: virtual MSR address, passed to the vCAT handlers by the caller functions
  ``rdmsr_vmexit_handler()``/``wrmsr_vmexit_handler()``
- ``pmsr``: physical MSR address
- ``vcbm``: virtual CBM, passed to the vCAT handlers by the caller functions
  ``rdmsr_vmexit_handler()``/``wrmsr_vmexit_handler()``
- ``pcbm``: physical CBM

Binary file not shown (image, 67 KiB).


@@ -0,0 +1,89 @@
.. _vcat_configuration:

Enable vCAT Configuration
#########################

vCAT is built on top of RDT, so to use vCAT we must first enable RDT.
For details on enabling RDT on ACRN, see :ref:`rdt_configuration`.
For details on the ACRN vCAT high-level design, see :ref:`hv_vcat`.

The vCAT feature is disabled by default in ACRN. You can enable vCAT via the UI;
the steps listed below show how those settings are translated into XML in the
scenario file:

#. Configure system-level features:

   - Edit :option:`hv.FEATURES.RDT.RDT_ENABLED` to ``y`` to enable RDT.
   - Edit :option:`hv.FEATURES.RDT.CDP_ENABLED` to ``n`` to disable CDP
     (vCAT currently requires CDP to be disabled).
   - Edit :option:`hv.FEATURES.RDT.VCAT_ENABLED` to ``y`` to enable vCAT.

   .. code-block:: xml
      :emphasize-lines: 3,4,5

      <FEATURES>
          <RDT>
              <RDT_ENABLED>y</RDT_ENABLED>
              <CDP_ENABLED>n</CDP_ENABLED>
              <VCAT_ENABLED>y</VCAT_ENABLED>
              <CLOS_MASK></CLOS_MASK>
          </RDT>
      </FEATURES>

#. In each Guest VM configuration:

   - Edit :option:`vm.guest_flags.guest_flag` and add ``GUEST_FLAG_VCAT_ENABLED``
     to enable the vCAT feature on the VM.
   - Edit :option:`vm.clos.vcpu_clos` to assign COS IDs to the VM.

   If ``GUEST_FLAG_VCAT_ENABLED`` is not specified for a VM (abbreviated as an RDT VM),
   ``vcpu_clos`` is per CPU: it configures each CPU in the VM with a desired COS ID,
   so the number of ``vcpu_clos`` entries is equal to the number of vCPUs assigned.

   If ``GUEST_FLAG_VCAT_ENABLED`` is specified for a VM (abbreviated as a vCAT VM),
   ``vcpu_clos`` is no longer per CPU; instead, it specifies a list of physical COS IDs
   (minimum 2) assigned to the vCAT VM. The number of ``vcpu_clos`` entries is not
   necessarily equal to the number of vCPUs assigned; it may be greater or less than
   that number. Each ``vcpu_clos`` is mapped to a virtual COS ID: the first ``vcpu_clos``
   is mapped to virtual COS ID 0, the second to virtual COS ID 1, and so on.

   .. code-block:: xml
      :emphasize-lines: 3,10,11,12,13

      <vm id="1">
          <guest_flags>
              <guest_flag>GUEST_FLAG_VCAT_ENABLED</guest_flag>
          </guest_flags>
          <cpu_affinity>
              <pcpu_id>1</pcpu_id>
              <pcpu_id>2</pcpu_id>
          </cpu_affinity>
          <clos>
              <vcpu_clos>2</vcpu_clos>
              <vcpu_clos>4</vcpu_clos>
              <vcpu_clos>5</vcpu_clos>
              <vcpu_clos>7</vcpu_clos>
          </clos>
      </vm>

   .. note::
      ``CLOS_MASK`` defined in the scenario file is a capacity bitmask (CBM) starting
      at bit position low (the lowest assigned physical cache way) and ending at
      position high (the highest assigned physical cache way, inclusive). Because a
      CBM only allows contiguous '1' combinations, ``CLOS_MASK`` is essentially the
      maximum CBM that covers all the physical cache ways assigned to a vCAT VM.

   The config tool checks the configuration data and rejects invalid vCAT settings:

   * For a vCAT VM, ``vcpu_clos`` cannot be set to 0; COS ID 0 is reserved for use by
     the hypervisor.
   * There must not be any COS ID overlap between a vCAT VM and any other VM; i.e., the
     vCAT VM has exclusive use of its assigned COS IDs.
   * For a vCAT VM, each ``vcpu_clos`` value must be less than the L2/L3 COS_MAX.
   * For a vCAT VM, the ``vcpu_clos`` values cannot contain duplicates.

#. Follow the instructions in :ref:`gsg` and build with this XML configuration.