Commit 4a04fcc ("config_tools: skip remapping HW units with no devices under
scope") makes the config tools skip hardware remapping units that have no device
under their scope, which turns out to work only if the hypervisor does not parse
the DMAR at runtime.
This patch reverts that workaround and fixes the original issue by always
initializing `dmar_hw_list.hw_ignore` when parsing the DMAR. This ensures that
the DRHDx_IGNORE macros are always emitted while DRHD_COUNT is unaffected.
Fixes: 4a04fcc ("config_tools: skip remapping HW units with no devices under scope")
Tracked-On: #6709
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Implement the propagate_vcbm() function:
Set the vCBM on all the vCPUs that share the cache with the requesting vCPU
to mimic hardware CAT behavior.
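A minimal sketch of the intent (the iteration macro and MSR accessor follow the
hypervisor's existing style; the cache-sharing filter itself is elided here):

static void propagate_vcbm(struct acrn_vcpu *vcpu, uint32_t vmsr, uint64_t vcbm)
{
	uint16_t i;
	struct acrn_vcpu *other;

	/* mirror the newly written vCBM into every vCPU of the VM that shares
	 * the cache (the sharing check is omitted in this sketch) */
	foreach_vcpu(i, vcpu->vm, other) {
		vcpu_set_guest_msr(other, vmsr, vcbm);
	}
}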
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Implement the write_vcbm() function to handle write requests to the
MSR_IA32_type_MASK_n vCBM MSRs.
Call write_vclosid() to handle write requests to the MSR_IA32_PQR_ASSOC MSR.
Several vCAT P2V (physical to virtual) and V2P (virtual to physical)
mappings exist (a sketch of write_vcbm() follows the definitions below):
struct acrn_vm_config *vm_config = get_vm_config(vm_id)
max_pcbm = vm_config->max_type_pcbm (type: l2 or l3)
mask_shift = ffs64(max_pcbm)
vclosid = vmsr - MSR_IA32_type_MASK_0
pclosid = vm_config->pclosids[vclosid]
pmsr = MSR_IA32_type_MASK_0 + pclosid
pcbm = vcbm << mask_shift
vcbm = pcbm >> mask_shift
Where
MSR_IA32_type_MASK_n: L2 or L3 mask msr address for CLOSIDn, from
0C90H through 0D8FH (inclusive).
max_pcbm: a bitmask that selects all the physical cache ways assigned to the VM
vclosid: virtual CLOSID, always starts from 0
pclosid: corresponding physical CLOSID for a given vclosid
vmsr: virtual msr address, passed to vCAT handlers by the
caller functions rdmsr_vmexit_handler()/wrmsr_vmexit_handler()
pmsr: physical msr address
vcbm: virtual CBM, passed to vCAT handlers by the
caller functions rdmsr_vmexit_handler()/wrmsr_vmexit_handler()
pcbm: physical CBM
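Putting the V2P mapping above together, write_vcbm() looks roughly like the
following sketch (validity checks are omitted and the helper names are
approximations):

static void write_vcbm(struct acrn_vcpu *vcpu, uint32_t vmsr, uint64_t vcbm)
{
	struct acrn_vm_config *vm_config = get_vm_config(vcpu->vm->vm_id);
	uint64_t max_pcbm = vm_config->max_type_pcbm;            /* type: l2 or l3 */
	uint16_t mask_shift = (uint16_t)ffs64(max_pcbm);
	uint16_t vclosid = (uint16_t)(vmsr - MSR_IA32_type_MASK_0);
	uint16_t pclosid = vm_config->pclosids[vclosid];
	uint32_t pmsr = MSR_IA32_type_MASK_0 + pclosid;
	uint64_t pcbm = vcbm << mask_shift;

	/* keep the guest-visible value, then program the physical mask MSR */
	vcpu_set_guest_msr(vcpu, vmsr, vcbm);
	msr_write(pmsr, pcbm);
}

The read path applies the same mapping in reverse (pcbm >> mask_shift).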
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Implement the read_vcbm() and read_vclosid() functions to handle read requests
to the MSR_IA32_PQR_ASSOC and MSR_IA32_type_MASK_n vCAT MSRs.
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Expose the CAT feature to a vCAT VM by reporting the number of
cache ways/CLOSIDs via the 04H/10H CPUID leaves, so that the
VM can take advantage of CAT to prioritize and partition cache
resources for its own tasks.
Add the vcat_pcbm_to_vcbm() function to map pcbm to vcbm.
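A sketch of the mapping, which is simply the inverse shift of the V2P
translation used by the write path (field names follow this series):

uint64_t vcat_pcbm_to_vcbm(const struct acrn_vm *vm, uint64_t pcbm)
{
	uint64_t max_pcbm = get_vm_config(vm->vm_id)->max_type_pcbm; /* type: l2 or l3 */

	/* shift the VM's physical ways down so the virtual mask starts at bit 0 */
	return (pcbm & max_pcbm) >> ffs64(max_pcbm);
}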
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Initialize vCBM MSRs
Initialize vCLOSID MSR
Add some vCAT functions:
Retrieve max_vcbm and max_pcbm
Check if vCAT is configured or not for the VM (see the sketch after this list)
Map vclosid to pclosid
write_vclosid: vCLOSID MSR write handler
write_vcbm: vCBM MSR write handler
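For example, the per-VM "is vCAT configured" check from the list above can be
as simple as this sketch (exact field names per vm_config.h):

bool is_vcat_configured(const struct acrn_vm *vm)
{
	return ((get_vm_config(vm->vm_id)->guest_flags & GUEST_FLAG_VCAT_ENABLED) != 0UL);
}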
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Fix a crash in the 'board_inspector.py' tool when 'cpuid' is not
installed on the system.
The tool crashes because 'cpuid' is used before the dependency check is
performed. That check is done in the 'legacy/board_parser.py' file, but the
native_check() function that is called before it also expects the cpuid tool
to be installed.
The fix is to move the dependency check into the main 'board_inspector.py'
file, before any other operation is done.
Tracked-On: #6719
Signed-off-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
We should generate new board XMLs for cfl, whl and nuc11 because of the
board inspector changes. For details, refer to these PRs: #6699 #6710
Tracked-On: #6709
Signed-off-by: zhongzhenx.liu <zhongzhenx.liu@intel.com>
Specifying a reserved or unimplemented MSR address in ECX for rdmsr will cause a
general protection exception. In this case, we should not change the contents of
registers EDX:EAX.
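In rdmsr_vmexit_handler() terms, the intended behavior is roughly the following
(a sketch; `err` and `v` stand for the emulation status and result):

if (err != 0) {
	/* reserved/unimplemented MSR: inject #GP and leave EDX:EAX untouched */
	vcpu_inject_gp(vcpu, 0U);
} else {
	vcpu_set_gpreg(vcpu, CPU_REG_RAX, v & 0xFFFFFFFFUL);
	vcpu_set_gpreg(vcpu, CPU_REG_RDX, v >> 32U);
}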
Tracked-On: #4550
Signed-off-by: Fei Li <fei1.li@intel.com>
Initialize the emulated_guest_msrs[] array at runtime for the
MSR_IA32_type_MASK_n and MSR_IA32_PQR_ASSOC MSRs, since there is no good
way to do this initialization statically at build time.
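A sketch of the runtime initialization (the slot index and count macros used
here are placeholders, not the real names in msr.c):

static void init_emulated_vcat_msrs(void)
{
	uint32_t i;

	/* the mask MSR range depends on the probed CAT capability, so the slots
	 * are filled in at runtime rather than as compile-time constants */
	for (i = 0U; i < NUM_VCAT_MASK_MSRS; i++) {
		emulated_guest_msrs[VCAT_MSR_START_IDX + i] = MSR_IA32_type_MASK_0 + i;
	}
	emulated_guest_msrs[VCAT_MSR_START_IDX + i] = MSR_IA32_PQR_ASSOC;
}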
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For vCAT, a VM may need to store more than MAX_VCPUS_PER_VM CLOSIDs, so
change clos in vm_config.h to a pointer to accommodate this situation.
Rename clos to pclosids.
pclosids is now a pointer to an array of physical CLOSIDs that is defined
in vm_configurations.c by vmconfig. The number of elements in the array
must be equal to the value given by num_pclosids.
Add max_type_pcbm (type: l2 or l3) to struct acrn_vm_config, which stores a
bitmask that selects/covers all the physical cache ways assigned to the VM.
Change vmsr.c to accommodate this amended data structure.
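The amended members look roughly like this (sketch only; exact types and layout
are in vm_config.h):

/* new/changed members of struct acrn_vm_config (other members omitted) */
uint16_t num_pclosids;  /* number of elements in the pclosids array */
uint16_t *pclosids;     /* physical CLOSIDs, defined in vm_configurations.c */
uint32_t max_l2_pcbm;   /* bitmask covering all L2 cache ways assigned to the VM */
uint32_t max_l3_pcbm;   /* bitmask covering all L3 cache ways assigned to the VM */

The generated example further below shows how vm_configurations.c fills in
these members.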
Change the config tools to generate vm_configurations.c and fill in the
num_pclosids and pclosids pointers based on the information from the scenario
file.
Now vm_configurations.c.xsl generates all the clos-related code, so remove the
same code from misc_cfg.h.xsl.
Examples:
Scenario file:
<RDT>
<RDT_ENABLED>y</RDT_ENABLED>
<CDP_ENABLED>n</CDP_ENABLED>
<VCAT_ENABLED>y</VCAT_ENABLED>
<CLOS_MASK>0x7ff</CLOS_MASK>
<CLOS_MASK>0x7ff</CLOS_MASK>
<CLOS_MASK>0x7ff</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
</RDT>
<vm id="0">
<guest_flags>
<guest_flag>GUEST_FLAG_VCAT_ENABLED</guest_flag>
</guest_flags>
<clos>
<vcpu_clos>3</vcpu_clos>
<vcpu_clos>4</vcpu_clos>
<vcpu_clos>5</vcpu_clos>
<vcpu_clos>6</vcpu_clos>
<vcpu_clos>7</vcpu_clos>
</clos>
</vm>
<vm id="1">
<clos>
<vcpu_clos>1</vcpu_clos>
<vcpu_clos>2</vcpu_clos>
</clos>
</vm>
vm_configurations.c (generated by config-tools) with the above vCAT config:
static uint16_t vm0_vcpu_clos[5U] = {3U, 4U, 5U, 6U, 7U};
static uint16_t vm1_vcpu_clos[2U] = {1U, 2U};
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] = {
{
.guest_flags = (GUEST_FLAG_VCAT_ENABLED),
.pclosids = vm0_vcpu_clos,
.num_pclosids = 5U,
.max_l3_pcbm = 0xff800U,
},
{
.pclosids = vm1_vcpu_clos,
.num_pclosids = 2U,
},
};
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add the VCAT_ENABLED element to RDTType so that the user can enable/disable vCAT globally.
Add the GUEST_FLAG_VCAT_ENABLED guest flag to enable/disable vCAT per VM.
Currently we have the following per-VM clos element in the scenario file for RDT use:
<clos>
<vcpu_clos>0</vcpu_clos>
<vcpu_clos>0</vcpu_clos>
</clos>
When the GUEST_FLAG_VCAT_ENABLED guest flag is not specified, clos is for RDT use:
vcpu_clos is per-CPU and configures each vCPU in the VM with a desired CLOS ID.
When the GUEST_FLAG_VCAT_ENABLED guest flag is specified, vCAT is enabled for the VM
and clos is for vCAT use: vcpu_clos is no longer per-CPU but simply a list of
physical CLOSIDs (minimum 2) that are assigned to the VM for vCAT use. Each vcpu_clos
is mapped to a virtual CLOSID; the first vcpu_clos is mapped to virtual CLOSID
0, the second to virtual CLOSID 1, and so on.
Add xs:assert to prevent any problems with invalid vCAT configuration data:
If any GUEST_FLAG_VCAT_ENABLED guest flag is specified, both RDT_ENABLED and VCAT_ENABLED
must be 'y'.
If VCAT_ENABLED is 'y', RDT_ENABLED must be 'y' and CDP_ENABLED must be 'n'.
For a vCAT VM, vcpu_clos cannot be set to CLOSID 0; CLOSID 0 is reserved for the hypervisor.
For a vCAT VM, the number of clos/vcpu_clos elements must be greater than 1.
For a vCAT VM, each clos/vcpu_clos must be less than the L2/L3 COS_MAX.
For a vCAT VM, its clos/vcpu_clos elements cannot contain duplicate values.
CLOSIDs must not overlap between a vCAT VM and any other VM.
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. Remove HV_RAM_SIZE and CONFIG_HV_RAM_SIZE from the related Python
code, schema and all existing scenario XMLs, because PR 6664 has
changed them on the HV side.
2. Set the HV_RAM_START default value to 2M.
Tracked-On: #6663
Signed-off-by: Kunhui-Li <kunhuix.li@intel.com>
It is seen on some platforms that the ACPI DMAR tables report
remapping hardware units that do not have any device under their scope,
which was not expected by the board inspector and thus caused the
tool to generate inconsistent configuration data.
This patch makes the board inspector skip such remapping hardware units. It
does not impact the functionality of the hypervisor as no device is managed
by those skipped remapping units.
Tracked-On: #6709
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
In commit 7cc9c8fe06,
the pre-launched VM was set to an ACPI HW-Reduced platform, so IRQs for PCI MSI
devices were allocated starting from 0. The Intel igb/igc driver
might then get IRQ 8 when requesting an IRQ, which conflicts with the IRQ
of the RTC device:
[ 14.264954] genirq: Flags mismatch irq 8. 00000000 (enp0s8-TxRx-3) vs.
00000000 (rtc0)
[ 14.265411] ------------[ cut here ]------------
[ 14.265508] kernel BUG at drivers/pci/msi.c:376!
[ 14.265610] invalid opcode: 0000 [#1] PREEMPT SMP
[ 14.265710] CPU: 0 PID: 296 Comm: connmand Not tainted 5.10.52-acrn-sos
-dirty #72
[ 14.265863] RIP: 0010:free_msi_irqs+0x182/0x1b0
This patch specifies some legacy PnP devices, like UART and RTC, in the ACPI
DSDT table of the pre-launched VM, so that IRQs from IOAPIC pins 4/3/8 are
reserved before MSI devices request IRQs.
Tracked-On: #6704
Signed-off-by: Victor Sun <victor.sun@intel.com>
Remove the restriction that SERIAL_CONSOLE needs to be ttys0, ttys1,
ttys2 or ttys3.
1. Loosen the restriction in the xsd.
2. Rewrite the document.
3. Refine intx.py: refine the logic that takes effect when the <irq>
is specified in "SOS_COM#_IRQ" for the SOS VM's legacy vuart 0.
Tracked-On: #6610
Signed-off-by: Yang,Yu-chu <yu-chu.yang@intel.com>
The IOMMU hardware resources are owned by the hypervisor, while
the IOMMU capability is reported to the Service VM in its ACPI
table. In this case, the Service VM may access the IOMMU hardware
resources, which is not expected.
This patch unmaps all Intel IOMMU register pages from the Service VM's EPT.
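A minimal sketch of the unmapping, assuming the parsed DMAR info exposes the
register base of each remapping unit (names are approximate):

struct dmar_info *info = get_dmar_info();
uint32_t i;

for (i = 0U; i < info->drhd_count; i++) {
	/* hide the 4K register page of each remapping unit from the Service VM */
	ept_del_mr(sos_vm, (uint64_t *)sos_vm->arch_vm.nworld_eptp,
	           info->drhd_units[i].reg_base_addr, PAGE_SIZE);
}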
Tracked-On: #6677
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Capitalize the Board Inspector and ACRN Configurator names to highlight them in images and content
- Remove duplicate links between the Overview and GSG where not needed
- Provide additional details that help deepen the developer's knowledge where applicable. Potential items: acrn-dm, more clarity on the relationship between the Service VM and User VMs, expanded definitions of scenarios (beyond one-liners).
Signed-off-by: Amy Reyes <amy.reyes@intel.com>
Add the hybrid_rt.xml file under the generic_board folder so that the
UI can use the pre_rt_vm type when the user clicks the 'Add a VM below'
button to add a new pre_rt_vm.
Tracked-On: #6292
Signed-off-by: Kunhui-Li <kunhuix.li@intel.com>
Call default_populator.py to expand the default values in the
scenario XML file. This fixes the issue that the UI fails to export the
scenario XML when the user clicks the 'Export XML' button, because the
unexpanded XML does not pass the schema check.
Tracked-On: #6292
Signed-off-by: Kunhui-Li <kunhuix.li@intel.com>
According to TCG ACPI specification (version 1.2), the current revision of
TPM2 table, which has the optional log area fields, is 4. This patch
updates the revision of vTPM2 accordingly.
Tracked-On: #6288
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
ACPI device drivers use both _HID and _CID to identify devices they
match. This patch copies _CID objects to vACPI devices so that guest
drivers can recognize the passthrough devices properly.
Tracked-On: #6288
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
- Change UOS and SOS to User VM and Service VM, respectively
- Change guest to VM or similar depending on context
- Clean up some of the grammar
Signed-off-by: Amy Reyes <amy.reyes@intel.com>
This patch enables TPM2 passthrough to a post-launched VM with event log
support.
The user starts by providing the command line option "--acpidev_pt <TPM2_HID>";
the <TPM2_HID> is used to search /proc/iomem for the TPM2 buffer
start address and size. Furthermore, if the TPM2 event log is supported,
the event log information is retrieved from the sysfs TPM2 table and
passed through as well.
v4 -> v5:
move tpm2 related logic from acpi.c to tpm.c
multiple API rename
Tracked-On: #6686
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Acked-by: Wang, Yu1 <yu1.wang@intel.com>
This patch refines ACPI device passthrough by defining a generic
framework. Note that when the user gives an HID via "--acpidev_pt
<HID>", the passthrough logic goes through all registered ops to see if
there is a match.
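Conceptually, the registered-ops matching looks like this sketch (the struct
and table names here are illustrative assumptions, not the exact devicemodel
API):

struct acpi_dev_pt_ops {
	const char *hid;                                    /* _HID handled by this entry */
	int (*create)(struct vmctx *ctx, const char *opts);
};

static int create_pt_acpidev(struct vmctx *ctx, const char *hid, const char *opts)
{
	int i;

	/* walk all registered ops; the first entry whose HID matches creates the vdev */
	for (i = 0; i < num_acpi_dev_pt_ops; i++) {
		if (strcmp(hid, acpi_dev_pt_ops_tbl[i].hid) == 0) {
			return acpi_dev_pt_ops_tbl[i].create(ctx, opts);
		}
	}
	return -1;  /* no registered ops matched the given HID */
}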
v4 -> v5:
parse_pt_acpidev/parse_pt_mmiodev -> create_pt_acpidev/create_pt_mmiodev
(there were already "init_xxx" functions present, so rename to
create_xxx)
"super user" -> "superuser"
multiple API renames
Tracked-On: #6686
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Acked-by: Wang, Yu1 <yu1.wang@intel.com>
The sphinx_rtd_theme 1.0 includes simplified configuration for Google
Analytics, along with general fixes and improvements.
Anyone publishing project documentation to projectacrn.github.io must be
using this updated sphinx_rtd_theme for Google Analytics data to be
properly collected.
* Switch to use this new theme version, and clean up the previous method
of collecting analytics data.
* Update requirements.txt to use this new theme version. Update the
show-versions.py script (used to report on installed doc building tools)
to also report if there's a mismatch on which version is expected (in
requirements.txt) and what's currently installed.
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
HV_RAM_SIZE will be eliminated as a configuration option so we'll need
to remove references to it in the documentation.
See PR #6664 and PR #6681
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
'error' might be used uninitialized in cfginitbar(), so initialize it to zero
at the beginning.
Tracked-On: #6284
Signed-off-by: Fei Li <fei1.li@intel.com>
PR #6283 updated the code and docs to the new kernel HSM driver. Fix
some references to VHM missed in the doxygen comments. Also fix some
misspellings in these files.
Tracked-On: #6282
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
For the IGD device, the OpRegion address is returned by reading config offset
0xFC of BDF 0:02.0, and that address is required by the GPU driver.
The OpRegion address should be a GPA.
When the IGD is assigned to a pre-launched VM, offset 0xFC of igd_vdev is
programmed with the new GPA, and the pre-launched VM reads
the value from offset 0xFC of the 0:02.0 vdev.
But for the Service VM, the IGD is initialized using the same policy as other
PCI devices: we only initialize the vdev config header (0x0-0x3F) by checking
the corresponding pbdf, and the remaining PCI config space is read by
leveraging the corresponding pdev. Because this code does not handle the
Service VM scenario, the Service VM fails to read config offset 0xFC of the
IGD vdev.
The i915 GPU driver in the SOS then has issues because of the incorrect 0xFC
config space value.
This patch initializes offset 0xFC of the IGD config space for the Service VM;
it is simple and covers post-launched VMs too.
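A sketch of the Service VM side of the fix (the config-space accessor names are
approximations of the hypervisor's vPCI helpers):

/* seed the vdev's 0xFC (OpRegion address) from the physical config space so
 * the Service VM i915 driver sees the correct value */
pci_vdev_write_vcfg(vdev, 0xFCU, 4U,
		    pci_pdev_read_cfg(vdev->pdev->bdf, 0xFCU, 4U));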
Tracked-On: #6387
Signed-off-by: Liu,Junming <junming.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The current hypervisor supports at most two legacy vuarts
(COM1 and COM2) per VM: COM1 is usually configured as the VM console,
and COM2 as the communication channel of the S5 feature.
The hypervisor can support MAX_VUART_NUM_PER_VM (8) legacy vuarts, but it only
registers handlers for two of them because of the assumption that a VM has no
more than two legacy vuarts.
In the current hypervisor configuration, I/O port 2F8H is always
allocated for virtual COM2, which is not friendly if the user wants to
assign this port to the physical COM2.
A legacy vuart is a common communication channel between the Service VM and a
User VM; it can work in polling mode and its driver exists in each
guest OS. The channel can be used to send the shutdown command to a User VM
for the S5 feature, so several vuarts need to be configured for the Service VM
and one vuart for each User VM.
The following changes are made to support at most
MAX_VUART_NUM_PER_VM legacy vuarts (see the sketch after this list):
- Refine legacy vuart initialization to register a PIO handler for each
related vuart.
- Update the assumption on the number of legacy vuarts.
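A sketch of the refined initialization (field and helper names are
approximations):

/* walk every configured legacy vuart and register a PIO handler for it,
 * instead of hard-coding COM1/COM2 */
for (i = 0U; i < MAX_VUART_NUM_PER_VM; i++) {
	if ((vm_config->vuart[i].type == VUART_LEGACY_PIO) &&
	    (vm_config->vuart[i].addr.port_base != INVALID_COM_BASE)) {
		setup_vuart(vm, i);
	}
}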
Config tool updates for legacy vuarts will be made in another patch.
v1-->v2:
Update commit message to make this patch's purpose clearer;
If the vuart index is valid, register a handler for it.
Tracked-On: #6652
Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Update logical partition to simply partitioned.
Remaining references to logical partition are because the documentation
is validated for releases prior to the scenario renaming. When those
are updated to be tested with a newer release, we'll update the
documentation accordingly.
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
It is difficult to configure CONFIG_HV_RAM_SIZE properly up front. This patch
not only removes CONFIG_HV_RAM_SIZE, but also uses the ld linker script to
dynamically determine the HV RAM size.
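The idea, roughly (the symbol names below are assumptions; the real ones are
defined by the linker script):

/* the linker script exports the image bounds, so the HV RAM size can be
 * computed from symbols instead of a Kconfig constant */
extern uint8_t ld_ram_start[], ld_ram_end[];

static inline uint64_t get_hv_ram_size(void)
{
	return (uint64_t)(ld_ram_end - ld_ram_start);
}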
Tracked-On: #6663
Signed-off-by: Fei Li <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>