Doc: Minor grammatical edits on various files.

Signed-off-by: Deb Taylor <deb.taylor@intel.com>
Deb Taylor 2019-11-07 18:24:31 -05:00 committed by deb-intel
parent ad9b96579f
commit 1902cfd174
14 changed files with 122 additions and 124 deletions


@ -3,26 +3,26 @@
AHCI Virtualization in Device Model
###################################
AHCI (Advanced Host Controller Interface) is a hardware mechanism
that allows software to communicate with Serial ATA devices. The AHCI HBA
(host bus adapter) is a PCI class device that acts as a data movement
engine between system memory and Serial ATA devices. The AHCI HBA in
ACRN supports both ATA and ATAPI devices. The architecture is shown in
the diagram below.
.. figure:: images/ahci-image1.png
:align: center
:width: 750px
:name: achi-device
The HBA is registered to the PCI system with device ID 0x2821 and vendor ID
0x8086. Its memory registers are mapped in BAR 5. It supports only 6
ports (refer to the ICH8 AHCI). The AHCI driver in the Guest OS can access the HBA in the DM
through the PCI BAR, and the HBA can inject MSI interrupts through the PCI
framework.
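
The per-port registers sit above the generic HBA registers in the same BAR
(the AHCI specification places them at offset 0x100, with 0x80 bytes per
port), so the DM can decode an access by its offset alone. Below is a minimal
sketch of such a dispatch; the function and structure names are hypothetical,
not the actual ACRN device model API.

.. code-block:: c

   /* Hypothetical sketch: route a write on the HBA BAR either to the
    * generic HBA registers or to the per-port register handler. */
   #define AHCI_PORT_BASE  0x100U   /* per-port registers start here */
   #define AHCI_PORT_SIZE  0x80U    /* each port occupies 0x80 bytes */

   static void ahci_bar_write(struct ahci_dev *ahci, uint64_t offset,
                              uint32_t value)
   {
       if (offset < AHCI_PORT_BASE) {
           /* Generic HBA registers (CAP, GHC, IS, PI, ...) */
           ahci_hba_reg_write(ahci, offset, value);
       } else {
           /* Per-port registers: compute the port index and dispatch */
           uint32_t port = (offset - AHCI_PORT_BASE) / AHCI_PORT_SIZE;
           uint64_t port_off = (offset - AHCI_PORT_BASE) % AHCI_PORT_SIZE;

           ahci_port_reg_write(&ahci->ports[port], port_off, value);
       }
   }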
When an application in the Guest OS reads data from /dev/sda, the request is
sent through the AHCI driver and then the PCI driver. The Guest VM traps to the
hypervisor, and the hypervisor dispatches the request to the DM. According to the
offset in the BAR, the request is dispatched to the port control handler.
Then the request is parsed into a block I/O request which can be processed


@ -3,12 +3,12 @@
AT keyboard controller emulation
################################
This document describes the AT keyboard controller emulation implementation in the ACRN device model. The Atkbdc device emulates a PS2 keyboard and mouse.
Overview
********
The PS2 port is a 6-pin mini-Din connector used for connecting keyboards and mice to a PC-compatible computer system. Its name comes from the IBM Personal System/2 series of personal computers, with which it was introduced in 1987. PS2 keyboard/mouse emulation is based on ACPI Emulation. We can add an ACPI description of the PS2 keyboard/mouse into the virtual DSDT table to emulate the keyboard/mouse in the User VM.
.. figure:: images/atkbdc-virt-hld.png
:align: center
@ -19,7 +19,7 @@ The PS2 port is a 6-pin mini-Din connector used for connecting keyboards and mic
PS2 keyboard emulation
**********************
ACRN supports an AT keyboard controller for the PS2 keyboard that can be accessed through I/O ports (0x60 and 0x64). Port 0x60 is used to access the AT keyboard controller data register; port 0x64 is used to access the AT keyboard controller address register.
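
As a rough illustration of how those two ports could be wired up in the device
model (the handler-registration function and types below are assumptions, not
the actual ACRN DM API):

.. code-block:: c

   /* Hypothetical sketch: attach handlers to the two atkbdc I/O ports. */
   #define KBD_DATA_PORT 0x60U   /* data register */
   #define KBD_STS_PORT  0x64U   /* status/command (address) register */

   static int atkbdc_register_ports(struct vmctx *ctx, struct atkbdc_dev *kbd)
   {
       /* Accesses to port 0x60 go to the data register handler. */
       if (register_inout_handler(ctx, KBD_DATA_PORT,
                                  atkbdc_data_handler, kbd) != 0)
           return -1;

       /* Accesses to port 0x64 go to the status/command handler. */
       return register_inout_handler(ctx, KBD_STS_PORT,
                                     atkbdc_cmd_handler, kbd);
   }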
The PS2 keyboard ACPI description is as follows::


@ -13,13 +13,12 @@ CPU P-state/C-state are controlled by the guest OS. The ACPI
P/C-state driver relies on some P/C-state-related ACPI data in the guest
ACPI table.
The Service VM can run the ACPI driver with no problems because it can access the
native ACPI table. For the User VM, though, we need to prepare the corresponding ACPI data
for the Device Model to build a virtual ACPI table.
The Px/Cx data includes four
ACPI objects: _PCT, _PPC, and _PSS for P-state management, and _CST for
C-state management. All these ACPI data must be consistent with the
native data because the control method is a kind of pass-through.
The data for these ACPI objects is parsed by an offline tool and hard-coded in a
@ -52,13 +51,13 @@ Hypervisor module named CPU state table:
} __attribute__((aligned(8)));
With these Px/Cx data, the Hypervisor is able to intercept the guest's
P/C-state requests with the desired restrictions.
Virtual ACPI table build flow
=============================
:numref:`vACPItable` shows how to build the virtual ACPI table with the
Px/Cx data for User VM P/C-state management:
.. figure:: images/hld-pm-image28.png
@ -67,26 +66,26 @@ Px/Cx data for User VM P/C-state management:
System block for building vACPI table with Px/Cx data
Some ioctl APIs are defined for the Device Model to query Px/Cx data from
the Service VM VHM. The Hypervisor needs to provide hypercall APIs to transfer
the Px/Cx data from the CPU state table to the Service VM VHM.
The build flow is:
1) Use an offline tool (e.g., **iasl**) to parse the Px/Cx data and hard-code it into
   a CPU state table in the Hypervisor. The Hypervisor loads the data after
   the system boots up.
2) Before the User VM is launched, the Device Model queries the Px/Cx data from the
   Service VM VHM via the ioctl interface, as sketched below.
3) The VHM transmits the query request to the Hypervisor by hypercall.
4) The Hypervisor returns the Px/Cx data.
5) The Device Model builds the virtual ACPI table with these Px/Cx data.
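
As a rough sketch of step 2, the query might look like the following. The
ioctl command number, device handle, and structure layout here are
illustrative assumptions, not the actual ACRN interfaces.

.. code-block:: c

   #include <stdint.h>
   #include <string.h>
   #include <sys/ioctl.h>

   #define MAX_PSTATE          20U
   #define IC_GET_CPU_PX_DATA  0xA1U    /* assumed ioctl command number */

   struct cpu_px_data {                 /* mirrors one ACPI _PSS entry */
       uint64_t core_frequency;         /* MHz */
       uint64_t power;                  /* mW */
       uint64_t transition_latency;
       uint64_t bus_master_latency;
       uint64_t control;                /* value written to the perf control register */
       uint64_t status;                 /* expected perf status value */
   };

   struct acpi_px_query {
       uint32_t cpu_id;                 /* which physical CPU's table to fetch */
       uint32_t px_count;               /* filled in by the VHM/hypervisor */
       struct cpu_px_data px[MAX_PSTATE];
   };

   static int query_px_data(int vhm_fd, uint32_t cpu_id, struct acpi_px_query *q)
   {
       memset(q, 0, sizeof(*q));
       q->cpu_id = cpu_id;

       /* The VHM forwards the request to the hypervisor via a hypercall
        * and copies the matching CPU state table entries back. */
       return ioctl(vhm_fd, IC_GET_CPU_PX_DATA, q);
   }

The Device Model would then translate the returned entries into _PSS/_PCT
package objects when it generates the virtual DSDT.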
Intercept Policy
================
The Hypervisor should be able to restrict the guest's
P/C-state requests with a user-customized policy.
The Hypervisor should intercept the guest P-state request and validate whether
it is a valid P-state. Any invalid P-state (e.g., one that doesn't exist in the CPU state
@ -176,8 +175,8 @@ System low power state exit process
===================================
The low power state exit process is in reverse order. The ACRN
hypervisor is awakened first and goes through its own low power
state exit path. Then, the ACRN hypervisor resumes the Service VM to let
the Service VM go through its own low power state exit path. After that,
the DM is resumed and lets the User VM go through the User VM low power state exit
path. The system is resumed to the running state after at least one User VM


@ -6,7 +6,7 @@ Hostbridge emulation
Overview
********
Hostbridge emulation is based on PCI emulation; however, hostbridge emulation only sets the PCI configuration space. The device model sets the PCI configuration space for the hostbridge in the Service VM and then exposes it to the User VM to detect the PCI hostbridge.
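
To make the idea concrete, a hostbridge-style device needs little more than a
handful of configuration-space writes at init time. The sketch below uses
bhyve-style helper and register names as assumptions; the actual ACRN DM code
may differ.

.. code-block:: c

   /* Hypothetical sketch: a hostbridge only populates PCI config space;
    * there are no BARs and no interrupt logic behind it. */
   static int vhostbridge_init(struct pci_vdev *dev)
   {
       pci_set_cfgdata16(dev, PCIR_VENDOR, 0x8086U);   /* illustrative IDs */
       pci_set_cfgdata16(dev, PCIR_DEVICE, 0x1237U);
       pci_set_cfgdata8(dev, PCIR_HDRTYPE, PCIM_HDRTYPE_NORMAL);
       pci_set_cfgdata8(dev, PCIR_CLASS, PCIC_BRIDGE);
       pci_set_cfgdata8(dev, PCIR_SUBCLASS, PCIS_BRIDGE_HOST);

       return 0;   /* nothing else to emulate */
   }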
PCI Host Bridge and hierarchy
*****************************
@ -29,7 +29,7 @@ There is PCI host bridge emulation in DM. The bus hierarchy is determined by ``a
the bus hierarchy would be:
.. code-block:: console

   # lspci
   00:00.0 Host bridge: Network Appliance Corporation Device 1275
   00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]


@ -7,14 +7,14 @@ Overview
********
The ACRN hypervisor implements a simple but fully functional framework
to manage interrupts and exceptions, as shown in
:numref:`interrupt-modules-overview`. In its native layer, it configures
the physical PIC, IOAPIC, and LAPIC to support different interrupt
sources from the local timer/IPI to the external INTx/MSI. In its virtual guest
layer, it emulates the virtual PIC, virtual IOAPIC, and virtual LAPIC/pass-thru
LAPIC. It provides full APIs, allowing virtual interrupt injection from
emulated or pass-thru devices. The contents of this section do not include
the pass-thru LAPIC case. For the pass-thru LAPIC, refer to
:ref:`lapic_passthru`.
.. figure:: images/interrupt-image3.png
@ -26,10 +26,10 @@ the pass-thru LAPIC case, for the pass-thru LAPIC, please refer to
In the software modules view shown in :numref:`interrupt-sw-modules`,
the ACRN hypervisor sets up the physical interrupt in its basic
interrupt modules (e.g., IOAPIC/LAPIC/IDT). It dispatches the interrupt
in the hypervisor interrupt flow control layer to the corresponding
handlers; these could be a pre-defined IPI notification, timer, or runtime
registered pass-thru devices. The ACRN hypervisor then uses its VM
interfaces, based on the vPIC, vIOAPIC, and vMSI modules, to inject the
necessary virtual interrupt into the specific VM, or to deliver the
interrupt directly to the specific RT VM with a pass-thru LAPIC.
@ -63,8 +63,7 @@ to support this. The ACRN hypervisor also initializes all the interrupt
related modules like IDT, PIC, IOAPIC, and LAPIC.
The HV does not own any host devices (except the UART). All devices are by
default assigned to the Service VM. Any interrupts received by Guest VM (Service VM or
User VM) device drivers are virtual interrupts injected by the HV (via vLAPIC).
The HV manages a Host-to-Guest mapping. When a native IRQ/interrupt occurs,
the HV decides whether this IRQ/interrupt should be forwarded to a VM and
which VM to forward it to (if any). Refer to
@ -76,10 +75,10 @@ happens, with some exceptions such as #INT3 and #MC. This is to
simplify the design, as the HV does not support any exception handling
itself. The HV supports only static memory mapping, so there should be no
#PF or #GP. If the HV receives an exception indicating an error, an assert
function is then executed with an error message printout, and the
system then halts.
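
A minimal sketch of that terminal path, with illustrative names only (the
real ACRN handlers and logging macros differ):

.. code-block:: c

   #include <stdint.h>

   struct exception_ctx {
       uint32_t vector;
       uint64_t error_code;
       uint64_t rip;
   };

   /* Hypothetical sketch: an unexpected exception prints its context and
    * halts this physical CPU, since the HV has no recovery path. */
   static void handle_exception(const struct exception_ctx *ctx)
   {
       pr_fatal("Unhandled exception %u, error code 0x%llx, RIP 0x%llx",
                ctx->vector,
                (unsigned long long)ctx->error_code,
                (unsigned long long)ctx->rip);

       for (;;) {
           __asm__ volatile ("cli; hlt");   /* stop this pCPU */
       }
   }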
Native interrupts can be generated from one of the following
sources:
- GSI interrupts
@ -112,7 +111,7 @@ IDT Initialization
==================
The ACRN hypervisor builds its native IDT (interrupt descriptor table)
during interrupt initialization and sets up the following handlers:
- On an exception, the hypervisor dumps its context and halts the current
physical processor (because physical exceptions are not expected).


@ -8,23 +8,23 @@ System PM module
The PM module in the hypervisor does three things:
- Monitors all guests' power state transitions and emulates a low power
  state for the guests that are launched by the HV directly.

- Once all guests enter a low power state, the Hypervisor handles its
  own low-power state transition.

- Once the system resumes from low-power mode, the hypervisor handles its
  own resume and emulates the Service VM resume.

It is assumed that the Service VM does not trigger any power state transition
until the VM manager of ACRN notifies it that all User VMs are inactive
and the Service VM offlines all its virtual APs. It is also assumed that the HV
does not trigger its own power state transition until all guests are in a
low power state.
:numref:`pm-low-power-transition` shows the process of the Hypervisor entering the S3
state. The Service VM triggers the power state transition by
writing the ACPI control register on its virtual BSP (which is pinned to the
physical BSP). The hypervisor then does the following in sequence before
it writes to the physical ACPI control register to trigger the physical


@ -3,12 +3,12 @@
RDT Allocation Feature Supported by Hypervisor
##############################################
The hypervisor uses RDT (Resource Director Technology) allocation features to optimize VM performance. There are two sub-features: CAT (Cache Allocation Technology) and MBA (Memory Bandwidth Allocation). CAT is for cache resources and MBA is for memory bandwidth resources. Code and Data Prioritization (CDP) is an extension of CAT. Only CAT is enabled due to feature availability on ACRN-supported platforms. In ACRN, CAT is configured via the "VM-Configuration"; the resources allocated for VMs are determined in the VM configuration.
CAT Support in ACRN
*******************
Introduction to CAT Capabilities
================================
On a platform that supports CAT, each CPU can mask the last-level cache (LLC) with a cache mask; the masked cache ways cannot be evicted by this CPU. (Refer to chapter 17, volume 3 of the SDM.) CAT capabilities are enumerated via CPUID and configured via MSR registers; these are:
@ -24,12 +24,12 @@ On a platform which supports CAT, each CPU can mask last-level-cache (LLC) with
Objective of CAT
================
The CAT feature in the hypervisor can isolate the cache for a VM from other VMs. It can also isolate the cache usage between VMX root mode and VMX non-root mode. Generally, certain cache resources are allocated for the RT VMs in order to reduce the performance interference caused by shared cache access from the neighboring VMs.
CAT Workflow
=============
The hypervisor enumerates the CAT capabilities and sets up the cache mask arrays; it also sets up the CLOS for the VMs and for the hypervisor itself per the "vm configuration".
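
At the register level, this boils down to a CPUID query plus two MSR writes
per pCPU. The sketch below is illustrative only; the CPUID leaf and MSR
numbers come from the SDM, while the helper functions are assumptions.

.. code-block:: c

   #include <stdint.h>

   #define CPUID_RDT_ALLOCATION   0x10U
   #define MSR_IA32_L3_MASK_BASE  0xC90U   /* IA32_L3_QOS_MASK_0 */
   #define MSR_IA32_PQR_ASSOC     0xC8FU

   /* Hypothetical sketch: program one CLOS with a cache mask and make the
    * current pCPU use it. */
   static void cat_setup_pcpu(uint32_t clos, uint32_t cache_mask)
   {
       uint32_t eax, ebx, ecx, edx;

       /* Sub-leaf 1 of leaf 0x10 reports the L3 mask length (EAX)
        * and the highest supported CLOS id (EDX). */
       cpuid_subleaf(CPUID_RDT_ALLOCATION, 1U, &eax, &ebx, &ecx, &edx);

       /* Capacity bitmask for this CLOS ... */
       msr_write(MSR_IA32_L3_MASK_BASE + clos, cache_mask);

       /* ... and select that CLOS for this pCPU (bits 63:32 of PQR_ASSOC). */
       msr_write(MSR_IA32_PQR_ASSOC, ((uint64_t)clos << 32) | 0U);
   }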
* The CAT capabilities are enumerated on the bootstrap processor (BSP) at the
  PCPU pre-initialize stage. The global data structure cat_cap_info holds the


@ -17,9 +17,9 @@ VM startup.
Multiboot Header
****************
The ACRN hypervisor is built with a multiboot header, which presents
``MULTIBOOT_HEADER_MAGIC`` and ``MULTIBOOT_HEADER_FLAGS`` at the beginning
of the image, and it sets bit 6 in ``MULTIBOOT_HEADER_FLAGS``, which requests that the
bootloader pass memory map information (such as e820 entries) through the
Multiboot Information (MBI) structure.
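
For reference, a Multiboot (v1) header is just three 32-bit fields placed near
the start of the image; the magic value and checksum rule below come from the
Multiboot specification, while the helper itself is an illustrative sketch.

.. code-block:: c

   #include <stdint.h>

   #define MULTIBOOT_HEADER_MAGIC  0x1BADB002U

   struct multiboot_header {
       uint32_t magic;      /* must be MULTIBOOT_HEADER_MAGIC */
       uint32_t flags;      /* feature-request bits, e.g. the memory-map request above */
       uint32_t checksum;   /* chosen so magic + flags + checksum == 0 (mod 2^32) */
   };

   /* Build a header for a given flags value; in a real image the resulting
    * structure must live within the first 8192 bytes so the bootloader can
    * find it. */
   static struct multiboot_header make_mb_header(uint32_t flags)
   {
       struct multiboot_header h = {
           .magic    = MULTIBOOT_HEADER_MAGIC,
           .flags    = flags,
           .checksum = (uint32_t)(0U - (MULTIBOOT_HEADER_MAGIC + flags)),
       };

       return h;
   }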
@ -39,7 +39,7 @@ description for the flow:
- **BSP Startup:** The starting point for the bootstrap processor.

- **Relocation**: Relocate the hypervisor image if the hypervisor image
  is not placed at the assumed base address.
- **UART Init:** Initialize a pre-configured UART device used
@ -67,11 +67,11 @@ description for the flow:
Symbols in the hypervisor are placed with an assumed base address, but
the bootloader may not place the hypervisor at that specified base. In
this case, the hypervisor will relocate itself to where the bootloader
loads it.
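
A rough sketch of what that self-relocation involves (the symbol names and
relocation-table layout here are assumptions; ACRN's actual mechanism may
differ):

.. code-block:: c

   #include <stdint.h>

   /* Assumed linker-provided table: each entry is the link-time address of
    * a 64-bit slot that holds an absolute address. */
   extern uint64_t _ld_reloc_start[];
   extern uint64_t _ld_reloc_end[];

   static void relocate_hypervisor(uint64_t actual_load_addr, uint64_t linked_base)
   {
       int64_t delta = (int64_t)(actual_load_addr - linked_base);
       uint64_t *entry;

       if (delta == 0)
           return;   /* loaded at the assumed base; nothing to fix up */

       for (entry = _ld_reloc_start; entry < _ld_reloc_end; entry++) {
           /* The slot itself has also moved by delta ... */
           uint64_t *slot = (uint64_t *)(uintptr_t)(*entry + (uint64_t)delta);

           /* ... and the absolute address stored in it must too. */
           *slot += (uint64_t)delta;
       }
   }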
Here is a summary of the CPU and memory initial states that are set up after
the native startup.
CPU
ACRN hypervisor brings all physical processors to 64-bit IA32e
@ -111,7 +111,7 @@ Memory
Refer to :ref:`physical-interrupt-initialization` for a detailed description of interrupt-related
initial states, including IDT and physical PICs.
After the BSP detects that all APs are up, it will continue to enter guest mode; similarly, after one AP
completes its initialization, it will start entering guest mode as well.
When the BSP and APs enter guest mode, they will try to launch pre-defined VMs whose vBSP is associated with
this physical core; these pre-defined VMs are statically configured in ``vm config`` and they could be
@ -149,23 +149,23 @@ The main steps include:
for vCPU scheduling. The vCPU number and affinity are defined in the corresponding
``vm config`` for this VM.
- **Build vACPI:** For the Service VM, the hypervisor will customize a virtual ACPI
  table based on the native ACPI table (this is in the TODO).
  For a pre-launched VM, the hypervisor will build a simple ACPI table with the necessary
  information, such as the MADT.
  For a post-launched User VM, the DM will build its ACPI table dynamically.
- **SW Load:** Prepares each VM's SW configuration according to the guest OS
  requirements, which may include the kernel entry address, ramdisk address,
  bootargs, or zero page for launching a bzImage, etc.
  This is done by the hypervisor for the pre-launched and Service VMs, and by the DM
  for post-launched User VMs.
Meanwhile, there are two kinds of boot modes: de-privilege and direct boot
mode. The de-privilege boot mode is combined with the ACRN UEFI-stub and only
applies to the Service VM; it ensures that the native UEFI environment can be restored
and kept running in the Service VM. The direct boot mode is applied to both the
pre-launched VMs and the Service VM. In this mode, the VM will start from the standard
real or protected mode, which is not related to the native environment.
- **Start VM:** The vBSP of the vCPUs in this VM is kicked off to be scheduled.


@ -6,10 +6,10 @@ Timer
Because ACRN is a flexible, lightweight reference hypervisor, we provide
limited timer management services:
- Only the lapic tsc-deadline timer is supported as the clock source.

- A timer can only be added on the logical CPU for a process or thread. Timer
  scheduling and timer migration are not supported.
How it works
************
@ -18,7 +18,7 @@ When the system boots, we check that the hardware supports lapic
tsc-deadline timer by checking CPUID.01H:ECX.TSC_Deadline[bit 24]. If
support is missing, we output an error message and panic the hypervisor.
If supported, we register a timer interrupt callback that raises a
timer softirq on each logical CPU, and we set the lapic timer mode to
tsc-deadline timer mode by writing the local APIC LVT register.
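
A minimal sketch of that capability check (the CPUID bit position comes from
the SDM; the surrounding code is illustrative):

.. code-block:: c

   #include <cpuid.h>                      /* GCC/Clang __get_cpuid() */

   #define FEAT_TSC_DEADLINE  (1U << 24)   /* CPUID.01H:ECX[24] */

   static int tsc_deadline_supported(void)
   {
       unsigned int eax, ebx, ecx, edx;

       if (__get_cpuid(1U, &eax, &ebx, &ecx, &edx) == 0)
           return 0;                       /* CPUID leaf 1 not available */

       return (ecx & FEAT_TSC_DEADLINE) != 0U;
   }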
Data Structures and APIs


@ -3,7 +3,7 @@
Virtual Interrupt
#################
This section introduces the ACRN guest virtual interrupt
management, which includes:
- VCPU request for virtual interrupt kick off,
@ -11,10 +11,10 @@ management, which includes:
- physical-to-virtual interrupt mapping for a pass-thru device, and
- the process of VMX interrupt/exception injection.
A standard VM never owns any physical interrupts; all interrupts received by the
Guest OS come from a virtual interrupt injected by the vLAPIC, vIOAPIC, or
vPIC. Such virtual interrupts are triggered either from a pass-through
device or from I/O mediators in the Service VM via hypercalls. The
:ref:`interrupt-remapping` section discusses how the hypervisor manages
the mapping between physical and virtual interrupts for pass-through
devices. However, a hard RT VM with LAPIC pass-through does own the physical
@ -22,11 +22,11 @@ maskable external interrupts. On its physical CPUs, interrupts are disabled
in VMX root mode, while in VMX non-root mode, physical interrupts will be
delivered to the RT VM directly.
Emulation for devices is inside the Service VM user space device model, i.e.,
acrn-dm. However, for performance considerations, the vLAPIC, vIOAPIC, and vPIC
are emulated inside the HV directly.
From the guest OS point of view, vPIC is Virtual Wire Mode via vIOAPIC. The
symmetric I/O Mode is shown in :numref:`pending-virt-interrupt` later in
this section.
@ -61,7 +61,7 @@ The eventid supported for virtual interrupt injection includes:
The *vcpu_make_request* is necessary for a virtual interrupt
injection. If the target vCPU is running in VMX non-root mode, it
will send an IPI to kick it out, which leads to an external-interrupt
VM-Exit. In some cases, there is no need to send an IPI when making a request,
because the CPU making the request is itself the target vCPU. For
example, the #GP exception request always happens on the current CPU when it
finds that an invalid emulation has happened. An external interrupt for a pass-thru
@ -72,11 +72,10 @@ target VCPU.
Virtual LAPIC
*************
The LAPIC is virtualized for all Guest types: Service and User VMs. Given support
by the physical processor, APICv Virtual Interrupt Delivery (VID) is enabled
and will support the Posted-Interrupt feature. Otherwise, it will fall back to
the legacy virtual interrupt injection mode.
vLAPIC provides the same features as the native LAPIC:
@ -118,7 +117,7 @@ EOI processing
==============
EOI virtualization is enabled if APICv virtual interrupt delivery is
supported. Except for level-triggered interrupts, the VM will not exit in
the case of an EOI.
In the case of no APICv virtual interrupt delivery support, the vLAPIC requires
@ -133,7 +132,7 @@ indicate that is a level triggered interrupt.
LAPIC passthrough based on vLAPIC
=================================
LAPIC passthrough is supported based on the vLAPIC; the guest OS first boots with
the vLAPIC in xAPIC mode and then switches to x2APIC mode to enable the LAPIC
pass-through.
@ -201,7 +200,7 @@ When doing emulation, an exception may need to be triggered in
the hypervisor, for example:

- if the guest accesses an invalid vMSR register,
- the hypervisor needs to inject a #GP, or
- the hypervisor needs to inject a #PF when an instruction accesses a non-existent page
  from rip_gva during instruction emulation.


@ -6,4 +6,4 @@ RTC Virtualization
This document describes the RTC virtualization implementation in
the ACRN device model.

The vRTC is a read-only RTC for the pre-launched VM, Service OS, and post-launched RT VM. It supports RW for the CMOS address port 0x70 and RO for the CMOS data port 0x71. Reads to the CMOS RAM offsets are fetched by reading the CMOS h/w directly, and writes to CMOS offsets are discarded.
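
A minimal sketch of the behavior described above (handler and helper names
are illustrative, not the actual device model API):

.. code-block:: c

   #include <stdint.h>

   #define RTC_ADDR_PORT 0x70U
   #define RTC_DATA_PORT 0x71U

   static uint8_t vrtc_addr;               /* last CMOS offset selected by the guest */

   static void vrtc_addr_write(uint8_t val)
   {
       vrtc_addr = val;                    /* RW: remember the selected offset */
   }

   static uint8_t vrtc_data_read(void)
   {
       /* Fetch the value directly from the physical CMOS hardware. */
       return read_host_cmos(vrtc_addr);
   }

   static void vrtc_data_write(uint8_t val)
   {
       (void)val;                          /* RO data port: guest writes are discarded */
   }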


@ -3,28 +3,28 @@
System timer virtualization
###########################
ACRN supports RTC (Real-Time Clock), HPET (High Precision Event Timer),
and PIT (Programmable Interval Timer) devices for the VM system timer.
Different timer devices support different resolutions. The HPET device can
support higher resolutions than the RTC and PIT.
System timer virtualization architecture
|image0|
- In the User VM, the vRTC, vHPET, and vPIT are used by the clock event module and the clock
  source module in the kernel space.

- In the Service VM, all vRTC, vHPET, and vPIT devices are created by the device
  model in the initialization phase, which uses the timer\_create and
  timerfd\_create interfaces to set up native timers for the trigger timeout
  mechanism.
System Timer initialization
===========================
The device model initializes the vRTC, vHPET, and vPIT devices automatically when
the ACRN device model starts the boot initialization. The initialization
flow goes from vrtc\_init to vpit\_init and ends with vhept\_init; see the
code snippets below::
@ -51,7 +51,8 @@ below code snippets.::
PIT emulation
=============
ACRN emulates the Intel 8253 Programmable Interval Timer, a chip that has three
independent 16-bit down counters that can be read on the fly. There are
three mode registers and three countdown registers. The countdown
registers are addressed directly, via the first three I/O ports. The
@ -88,9 +89,9 @@ RTC emulation
ACRN supports an RTC (Real-Time Clock) that can only be accessed through
I/O ports (0x70 and 0x71). Port 0x70 is used to access the CMOS address register, and
port 0x71 is used to access the CMOS data register; the user needs to set the CMOS
address register and then read/write the CMOS data register to access the CMOS.
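
The address-then-data access pattern looks like this from the software side
(``outb``/``inb`` are assumed port I/O helpers taking a port and, for writes,
a value):

.. code-block:: c

   #include <stdint.h>

   #define CMOS_ADDR_PORT 0x70U
   #define CMOS_DATA_PORT 0x71U

   static uint8_t cmos_read(uint8_t offset)
   {
       outb(CMOS_ADDR_PORT, offset);   /* step 1: select the CMOS offset */
       return inb(CMOS_DATA_PORT);     /* step 2: read the data register */
   }

   static void cmos_write(uint8_t offset, uint8_t value)
   {
       outb(CMOS_ADDR_PORT, offset);   /* select the offset ... */
       outb(CMOS_DATA_PORT, value);    /* ... then write the data register */
   }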
The RTC ACPI description is as follows::
@ -116,10 +117,10 @@ The RTC ACPI description as below::
HPET emulation
==============
ACRN supports HPET (High Precision Event Timer), which is a higher resolution
timer than the RTC and PIT. Its frequency is 16.7 MHz, and MMIO is used to
access the HPET device; the base address is 0xfed00000 and the size is 1024
bytes. Accesses to the HPET should be 4 or 8 bytes wide.::
   #define HPET_FREQ (16777216) /* 16.7 (2^24) Mhz */
   #define VHPET_BASE (0xfed00000)


@ -37,8 +37,8 @@ A vUART can be used as a console port, and it can be activated by
a ``vm_console <vm_id>`` command in the hypervisor console. From
:numref:`console-uart-arch`, there is only one physical UART, but four
console vUARTs (green color blocks). A hypervisor console is implemented
above the physical UART, and it works in polling mode. There is a timer
in the hv console. The timer handler dispatches the input from the physical UART
to the vUART or the hypervisor shell process, gets data from the vUART's
Tx FIFO, and sends it to the physical UART. The data in the vUART's FIFOs will be
overwritten if it is not taken out in time.
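
A rough sketch of that polling handler (every function name here is an
illustrative assumption, not the actual hypervisor API):

.. code-block:: c

   /* Hypothetical sketch: move characters between the physical UART and
    * the currently selected consumer (hypervisor shell or a vUART). */
   static void console_timer_handler(void)
   {
       int ch;
       struct vuart *vu = active_vuart();

       /* Dispatch input from the physical UART. */
       while ((ch = phys_uart_getc()) >= 0) {
           if (shell_is_active())
               shell_handle_char((char)ch);
           else
               vuart_rx_push(vu, (char)ch);
       }

       /* Drain the active vUART's Tx FIFO to the physical UART; data
        * left in the FIFO too long gets overwritten by new writes. */
       while (!vuart_tx_empty(vu))
           phys_uart_putc(vuart_tx_pop(vu));
   }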
@ -53,11 +53,11 @@ Communication vUART
*******************
The communication vUART is used to transfer data between two VMs at low
speed. For the kernel driver, it is a general UART that can be detected and
probed by the 8250 serial driver, but in the hypervisor it is handled specially.
From :numref:`communication-uart-arch`, the vUARTs in the two VMs are
connected according to the configuration in the hypervisor. When a user
writes a byte to the communication UART in VM0:
Operations in VM0
@ -118,7 +118,7 @@ Usage
}
The kernel bootargs ``console=ttySx`` should be the same as
vuart[0]; otherwise, the kernel console log cannot be captured by the
hypervisor. Then, after bringing up the system, you can switch the console
to the target VM by:
@ -164,6 +164,6 @@ Usage
useful for Windows and VxWorks as they probe the driver according to the ACPI
table.
If the user enables both the device model UART and the hypervisor vUART at the
same port address, accesses to the port address will be responded to
by the hypervisor vUART directly and will not be passed to the device model.


@ -16,8 +16,8 @@ The following Intel Kaby Lake NUCs are verified:
ACRN Service VM Setup
*********************
You may refer to the steps in :ref:`kbl-nuc-sdc` for the
Intel NUC to set up ACRN on the KBL NUC. After following the steps in that guide,
you should be able to launch the Service VM successfully.
Setup for Using Windows as Guest VM