doc: more spelling and grammar fixes

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder 2020-10-02 15:24:32 -07:00 committed by David Kinder
parent 576a3b5947
commit 5289a3eb97
18 changed files with 607 additions and 603 deletions


@ -34,7 +34,7 @@ developers porting GVT-g to work on other hypervisors.
This document describes:
- the overall components of GVT-g
- interaction interface of each component
- core interaction scenarios
APIs of each component interface can be found in the :ref:`GVT-g_api`
@ -118,7 +118,7 @@ In this scenario, AcrnGT receives a destroy request from ACRN-DM. It
calls GVT's :ref:`intel_gvt_ops_interface` to inform GVT of the vGPU destroy
request, and cleans up all vGPU resources.
vGPU PCI configure space write scenario
=======================================
ACRN traps the vGPU's PCI config space write, notifies AcrnGT's
@ -127,13 +127,13 @@ handle all I/O trap notifications. This routine calls the GVT's
:ref:`intel_gvt_ops_interface` ``emulate_cfg_write`` to emulate the vGPU PCI
config space write:
#. If it is BAR0 (GTTMMIO) write, turn on/off GTTMMIO trap, according to
the write value.
#. If it is BAR1 (Aperture) write, maps/unmaps vGPU's aperture to its
corresponding part in the host's aperture.
#. Otherwise, write to the virtual PCI configuration space of the vGPU.
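The three cases above amount to a simple dispatch on the config space offset. A minimal sketch, assuming illustrative BAR offsets and helper names (not ACRN's actual ``emulate_cfg_write``):

.. code-block:: c

   #include <stdint.h>

   #define BAR0_OFFSET 0x10U  /* GTTMMIO BAR; offset assumed for illustration */
   #define BAR1_OFFSET 0x18U  /* Aperture BAR; offset assumed for illustration */

   struct vgpu_cfg {
       uint32_t regs[64];     /* 256-byte virtual PCI config space */
   };

   /* hypothetical stand-ins for the real trap/map operations */
   static void toggle_gttmmio_trap(uint32_t val) { (void)val; }
   static void remap_aperture(uint32_t val) { (void)val; }

   static void vgpu_cfg_write(struct vgpu_cfg *cfg, uint32_t off, uint32_t val)
   {
       if (off == BAR0_OFFSET) {
           toggle_gttmmio_trap(val);       /* turn the GTTMMIO trap on/off */
       } else if (off == BAR1_OFFSET) {
           remap_aperture(val);            /* map/unmap the vGPU aperture */
       } else if (off < sizeof(cfg->regs)) {
           cfg->regs[off / 4U] = val;      /* ordinary virtual config write */
       }
   }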
PCI configure space read scenario
=================================
Call sequence is almost the same as the write scenario above,
@ -143,25 +143,25 @@ but instead it calls the GVT's :ref:`intel_gvt_ops_interface`
GGTT read/write scenario
========================
GGTT's trap is set up in the PCI configure space write
scenario above.
MMIO read/write scenario
========================
MMIO's trap is set up in the PCI configure space write
scenario above.
PPGTT write-protection page set/unset scenario
==============================================
PPGTT write-protection page is set by calling ``acrn_ioreq_add_iorange``
with range type as ``REQ_WP``, and trap its write to device model while
allowing read without trap.
PPGTT write-protection page is unset by calling ``acrn_ioreq_del_range``.
PPGTT write-protection page write
=================================
In the VHM module, ioreq for PPGTT WP and MMIO trap is the same. It will


@ -42,13 +42,13 @@ model (DM), and is registered as a PCI virtio device to the guest OS
User VM starts. Second, it copies the received data from the RXQ to TXQ
and sends them to the backend. After receiving the message that the
transmission is completed, it starts again another round of reception
and transmission, and keeps running until a specified number of cycles
is reached.
- **virtio-echo Driver in DM**: This driver is used for initialization
configuration. It simulates a virtual PCI device for the frontend
driver use, and sets necessary information such as the device
configuration and virtqueue information to the VBS-K. After
initialization, all data exchange is taken over by the VBS-K
vbs-echo driver.
- **vbs-echo Backend Driver**: This driver sets all frontend RX buffers to
be a specific value and sends the data to the frontend driver. After
@ -85,7 +85,7 @@ parts: kick overhead and notify overhead.
forwarded to the VHM module by the hypervisor. The VHM notifies its
client for this IOREQ, in this case, the client is the vbs-echo
backend driver. Kick overhead is defined as the interval from the
beginning of User VM trap to a specific VBS-K driver, e.g. when
virtio-echo gets notified.
- **Notify Overhead**: After the data in the virtqueue is processed by the
backend driver, vbs-echo calls the VHM module to inject an interrupt
@ -113,7 +113,7 @@ Overhead of steps marked as blue depend on specific frontend and backend
virtual device drivers. For virtio-echo, the whole end-to-end process
(from step 1 to step 9) costs about four dozen microseconds. That's
because virtio-echo does little things in its frontend and backend
driver that is just for testing and there is very little process
overhead.
.. figure:: images/vbsk-image1.png
@ -126,7 +126,7 @@ overhead.
:numref:`vbsk-virtio-echo-path` details the path of kick and notify
operation shown in :numref:`vbsk-virtio-echo-e2e`. The VBS-K framework
overhead is caused by operations through these paths. As we can see, all
these operations are processed in kernel mode and avoids extra
overhead of passing IOREQ to userspace processing.
.. figure:: images/vbsk-image3.png
@ -143,5 +143,5 @@ Unlike VBS-U processing in user mode, VBS-K moves things into the kernel
mode and can be used to accelerate processing. A virtual device
virtio-echo based on VBS-K framework is used to evaluate the VBS-K
framework overhead. In our test, the VBS-K framework overhead (one kick
operation and one notify operation) is on the microsecond level, which
can meet the needs of most applications.


@ -37,10 +37,10 @@ Compliant example::
asm_showcase_1:
movl $0x1, %eax
asm_showcase_2:
movl $0x2, %eax
asm_showcase_3:
movl $0x3, %eax
@ -50,10 +50,10 @@ Compliant example::
text:
movl $0x1, %eax
mov:
movl $0x2, %eax
eax:
movl $0x3, %eax
@ -68,9 +68,9 @@ Compliant example::
asm_showcase_1:
movl $0x1, %eax
jmp asm_showcase_2
/* do something */
asm_showcase_2:
movl $0x2, %eax
@ -80,7 +80,7 @@ Compliant example::
asm_showcase_1:
movl $0x1, %eax
/*
* 'asm_showcase_2' is not used anywhere, including
* all C source/header files and Assembly files.
@ -139,19 +139,20 @@ Compliant example::
ASM-GN-06: .end directive statement shall be the last statement in an Assembly file
===================================================================================
This rule only applies to the Assembly file that uses ``.end``
directive. ``.end`` directive shall be the last statement in this case.
All the statements past ``.end`` directive will not be processed by the
assembler.
Compliant example::
#include <types.h>
#include <spinlock.h>
.macro asm_showcase_mov
movl $0x1, %eax
.endm
.end
.. rst-class:: non-compliant-code
@ -159,11 +160,11 @@ Compliant example::
Non-compliant example::
#include <types.h>
.end
#include <spinlock.h>
.macro asm_showcase_mov
movl $0x1, %eax
.endm
@ -177,9 +178,9 @@ Compliant example::
asm_showcase_1:
movl $0x1, %eax
jmp asm_showcase_2
/* do something */
asm_showcase_2:
movl $0x2, %eax
@ -203,7 +204,7 @@ Compliant example::
jne asm_test
movl $0x2, %eax
movl $0x3, %eax
asm_test:
movl $0x6, %eax
@ -217,7 +218,7 @@ Compliant example::
/* the following two lines have no chance to be executed */
movl $0x2, %eax
movl $0x3, %eax
asm_test:
movl $0x6, %eax
@ -242,11 +243,11 @@ Compliant example::
* perform a far jump to start executing in 64-bit mode
*/
ljmp $0x0008, $execution_64_2
.code64
execution_64_1:
/* do something in 64-bit mode */
execution_64_2:
/* do something in 64-bit mode */
@ -257,7 +258,7 @@ Compliant example::
.data
asm_showcase_data:
.word 0x0008
.code32
execution_32:
/* do something in 32-bit mode */
@ -270,17 +271,17 @@ ASM-GN-10: Assembler directives shall be used with restrictions
For usage of assembler directives, refer to the GNU assembler ``as`` user manual. Only
the following assembler directives may be used:
1) ``.align``
2) ``.end``
3) ``.extern``
4) repeat related directives, including ``.rept`` and ``.endr``
5) global related directives, including ``.global`` and ``.globl``
6) macro related directives, including ``.altmacro``, ``.macro``, and ``.endm``
7) code bit related directives, including ``.code16``, ``.code32``, and ``.code64``
8) section related directives, including ``.section``, ``.data``, and ``.text``
9) number emission related directives, including ``.byte``, ``.word``,
``.short``, ``.long``, and ``.quad``
10) ``.org`` shall be used with restrictions. It shall only be used to
advance the location counter due to code bit changes, such as change from 32-bit
mode to 64-bit mode.
@ -297,7 +298,7 @@ Compliant example::
asm_func_showcase:
movl $0x2, %eax
ret
asm_showcase:
movl $0x1, %eax
call asm_func_showcase
@ -308,7 +309,7 @@ Compliant example::
asm_func_showcase:
movl $0x2, %eax
asm_showcase:
movl $0x1, %eax
call asm_func_showcase
@ -330,7 +331,7 @@ Compliant example::
tmp:
movl $0x2, %eax
ret
asm_showcase:
movl $0x1, %eax
call asm_func_showcase
@ -344,7 +345,7 @@ Compliant example::
tmp:
movl $0x2, %eax
ret
asm_showcase:
movl $0x1, %eax
call asm_func_showcase
@ -390,7 +391,7 @@ Compliant example::
asm_func_showcase:
movl $0x2, %eax
ret
asm_showcase:
movl $0x1, %eax
call asm_func_showcase
@ -401,7 +402,7 @@ Compliant example::
asm_showcase:
movl $0x1, %eax
asm_func_showcase:
movl $0x2, %eax
ret
@ -417,7 +418,7 @@ Compliant example::
asm_func_showcase:
movl $0x2, %eax
ret
asm_showcase:
movl $0x1, %eax
call asm_func_showcase
@ -430,11 +431,11 @@ Compliant example::
movl $0x2, %eax
jmp asm_test
ret
asm_showcase:
movl $0x1, %ebx
call asm_func_showcase
asm_test:
cli
@ -447,7 +448,7 @@ Compliant example::
asm_func_showcase:
movl $0x2, %eax
ret
asm_showcase:
movl $0x1, %eax
call asm_func_showcase
@ -460,7 +461,7 @@ Compliant example::
movl $0x2, %eax
call asm_func_showcase
ret
asm_showcase:
movl $0x1, %eax
call asm_func_showcase
@ -568,12 +569,12 @@ Compliant example::
.extern cpu_primary_save32
.extern cpu_primary_save64
.section multiboot_header, "a"
.align 4
.long 0x0008
.long 0x0018
.section entry, "ax"
.align 8
.code32
@ -584,12 +585,12 @@ Compliant example::
.extern cpu_primary_save32
.extern cpu_primary_save64
.section multiboot_header, "a"
.align 4
.long 0x0008
.long 0x0018
.section entry, "ax"
.align 8
.code32
@ -602,7 +603,7 @@ Compliant example::
asm_showcase_1:
movl $0x1, %eax
asm_showcase_2:
movl $0x2, %eax
@ -612,7 +613,7 @@ Compliant example::
asm_showcase_1:
movl $0x1, %eax
asm_showcase_2:
movl $0x2, %eax
@ -643,7 +644,7 @@ Compliant example::
asm_showcase_1:
movl $0x1, %eax
lock and %rcx, (%rdx)
asm_showcase_2:
movl $0x3, %eax
@ -654,7 +655,7 @@ Compliant example::
asm_showcase_1:
movl $0x1, %eax
lock and %rcx, (%rdx)
asm_showcase_2:
movl $0x2, %eax
@ -718,13 +719,13 @@ Compliant example::
/* Legal entity shall be placed at the start of the file. */
-------------File Contents Start After This Line------------
/*
* Copyright (C) 2019 Intel Corporation.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
/* Coding or implementation related comments start after the legal entity. */
.code64
@ -734,7 +735,7 @@ Compliant example::
/* Neither copyright nor license information is included in the file. */
-------------------File Contents Start After This Line------------------
/* Coding or implementation related comments start directly. */
.code64
@ -786,15 +787,15 @@ ASM-NC-03: Label name shall be unique
=====================================
Label name shall be unique with the following exception. Usage of local labels
is allowed. A local label is defined with the format ``N:``, where N represents any
non-negative integer. Use ``Nb`` to refer to the most recent previous definition
of that label, and ``Nf`` to refer to the next definition of a local label.
Compliant example::
asm_showcase_1:
movl $0x1, %eax
asm_showcase_2:
movl $0x2, %eax
@ -804,13 +805,13 @@ Compliant example::
asm_showcase:
movl $0x1, %eax
asm_showcase:
movl $0x2, %eax
ASM-NC-04: Names defined by developers shall be fewer than 31 characters
========================================================================
Compliant example::



@ -117,8 +117,8 @@ on https://github.com and have Git tools available on your development system.
Repository layout
*****************
To clone the ACRN hypervisor repository (including the ``hypervisor``,
``devicemodel``, and ``doc`` folders) use::
$ git clone https://github.com/projectacrn/acrn-hypervisor


@ -5,7 +5,7 @@ Documentation Guidelines
Project ACRN content is written using the `reStructuredText`_ markup
language (``.rst`` file extension) with Sphinx extensions, and processed
using Sphinx to create a formatted stand-alone website. Developers can
view this content either in its raw form as ``.rst`` markup files, or (with
Sphinx installed) they can build the documentation using the Makefile
(on Linux systems) to
@ -32,7 +32,7 @@ Document sections are identified through their heading titles,
indicated with an underline below the title text. (While reST allows
use of both an overline and matching underline to indicate a heading,
we only use an underline indicator for headings.) For consistency in
our documentation, we define the order of characters used to indicate
the nested table of contents levels:
* Use ``#`` for the Document title underline character
@ -52,11 +52,11 @@ underlines to use:
Document Title heading
######################
Section 1 heading
*****************
Section 2 heading
*****************
Section 2.1 heading
===================
@ -67,8 +67,8 @@ underlines to use:
Section 2.2 heading
===================
Section 3 heading
*****************
@ -79,17 +79,17 @@ Some common reST inline markup samples:
* one asterisk: ``*text*`` for emphasis (*italics*),
* two asterisks: ``**text**`` for strong emphasis (**boldface**), and
* two back quotes: ````text```` for ``inline code`` samples.
ReST rules for inline markup try to be forgiving to account for common
cases of using these marks. For example, using an asterisk to indicate
multiplication, such as ``2 * (x + y)`` will not be interpreted as an
unterminated italics section. For inline markup, the characters between
the beginning and ending characters must not start or end with a space,
so ``*this is italics*`` ( *this is italics*) works, while ``* this isn't*``
(* this isn't*) doesn't.
If asterisks or back quotes appear in running text and could be confused with
inline markup delimiters, you can eliminate the confusion by adding a
backslash (``\``) before it.
@ -137,17 +137,17 @@ list item:
needed, but it wouldn't hurt for readability.
Definition lists (with a term and its definition) are a convenient way
to document a word or phrase with an explanation. For example, this reST
content:
.. code-block:: rest
The Makefile has targets that include:
``html``
Build the HTML output for the project
``clean``
Remove all generated output, restoring the folders to a
clean state.
@ -198,7 +198,8 @@ would be rendered as:
* the page
A maximum of three columns will be displayed if you use ``rst-columns``
(or ``rst-columns3``), and two columns for ``rst-columns2``. The number
of columns displayed can be reduced
based on the available width of the display window, reducing to one
column on narrow (phone) screens if necessary. We've deprecated use of
the ``hlist`` directive because it misbehaves on smaller screens.
@ -361,8 +362,8 @@ it will show up as :ref:`doc_guidelines`. This type of internal cross reference
multiple files, and the link text is obtained from the document source so if the title changes,
the link text will update as well.
There may be times when you'd like to change the link text that's shown
in the generated document. In this case, you can specify alternate
text using ``:ref:`alternate text <doc_guidelines>``` (renders as
:ref:`alternate text <doc_guidelines>`).
@ -614,7 +615,7 @@ sphinx-tabs from the link above.
Instruction Steps
*****************
A numbered instruction steps style makes it
easy to create tutorial guides with clearly identified steps. Add
the ``.. rst-class:: numbered-step`` directive immediately before a
second-level heading (by project convention, a heading underlined with


@ -39,7 +39,7 @@ Simple directed graph
*********************
For simple drawings with shapes and lines, you can put the graphviz
commands in the content block for the directive. For example, for a
simple directed graph (digraph) with two nodes connected by an arrow,
you can write:


@ -28,7 +28,7 @@ framework. There are 3 major subsystems in Service VM:
- HV initializes an I/O request and notifies VHM driver in Service VM
through upcall.
- VHM driver dispatches I/O requests to I/O clients and notifies the
clients (in this case the client is the DM, which is notified
through char device)
- DM I/O dispatcher calls corresponding I/O handlers
- I/O dispatcher notifies VHM driver that the I/O request is completed
@ -160,8 +160,8 @@ DM Initialization
mapping, and maps the memory segments into user space.
- **PIO/MMIO Handler Init**: PIO/MMIO handlers provide callbacks for
trapped PIO/MMIO requests that are triggered from I/O request
server in HV for DM-owned device emulation. This is the endpoint
of I/O path in DM. After this initialization, device emulation
driver in DM could register its MMIO handler by *register_mem()*
API and PIO handler by *register_inout()* API or INOUT_PORT()
@ -283,7 +283,7 @@ VHM overview
============
Device Model manages User VM by accessing interfaces exported from VHM
module. VHM module is a Service VM kernel driver. The ``/dev/acrn_vhm`` node is
created when VHM module is initialized. Device Model follows the standard
Linux char device API (ioctl) to access the functionality of VHM.
@ -305,7 +305,7 @@ hypercall to the hypervisor. There are two exceptions:
VHM ioctl interfaces
====================
.. note:: Reference API documents for General interface, VM Management,
IRQ and Interrupts, Device Model management, Guest Memory management,
PCI assignment, and Power management
@ -338,10 +338,10 @@ I/O Clients
An I/O client is either a Service VM userland application or a Service VM kernel space
module responsible for handling I/O access whose address
falls in a certain range. Each VM has an array of registered I/O
clients that are initialized with a fixed I/O address range, plus a PCI
BDF on VM creation. There is a special client in each VM, called the
fallback client, that handles all I/O requests that do not fit into
the range of any other client. In the current design, the device model
acts as the fallback client for any VM.
Each I/O client can be configured to handle the I/O requests in the
@ -358,8 +358,8 @@ specifically created for this purpose.
- On registration, the client requests a fresh ID, registers a
handler, adds the I/O range (or PCI BDF) to be emulated by this
client, and finally attaches it to VHM that kicks off
a new kernel thread.
- The kernel thread waits for any I/O request to be handled. When a
pending I/O request is assigned to the client by VHM, the kernel
@ -414,9 +414,9 @@ are as follows:
all clients that have I/O requests to be processed. The flow is
illustrated in more detail in :numref:`io-dispatcher-flow`.
4. The woken client (the DM in :numref:`io-sequence-sos` above) handles the
assigned I/O requests, updates their state to COMPLETE, and notifies
the VHM of the completion via ioctl. :numref:`dm-io-flow` shows this
flow.
5. The VHM device notifies the hypervisor of the completion via
@ -499,10 +499,10 @@ from different devices including PIO, MMIO, and PCI CFG
SPACE access. For example, a CMOS RTC device may access 0x70/0x71 PIO to
get CMOS time, a GPU PCI device may access its MMIO or PIO bar space to
complete its framebuffer rendering, or the bootloader may access a PCI
device's CFG SPACE for BAR reprogramming.
The DM needs to inject interrupts/MSIs to its frontend devices whenever
necessary. For example, an RTC device needs to get its ALARM interrupt, or a
PCI device with MSI capability needs to get its MSI.
DM also provides a PIRQ routing mechanism for platform devices.
@ -543,7 +543,7 @@ A PIO emulation handler is defined as:
The DM pre-registers the PIO emulation handlers through MACRO
INOUT_PORT, or registers the PIO emulation handlers through
register_inout() function after init_inout():
.. code-block:: c
@ -565,7 +565,7 @@ register_inout() function after init_inout():
MMIO Handler Register
---------------------
An MMIO range structure is defined below. As with PIO, it's the
parameter needed to register MMIO handler for special MMIO range:
.. code-block:: c
@ -580,7 +580,7 @@ parameter needed to register MMIO handler for special MMIO range:
uint64_t size;
};
An MMIO emulation handler is defined as:
.. code-block:: c
@ -694,17 +694,17 @@ DM calls pci_lintr_route() to emulate this PIRQ routing:
The PIRQ routing for IOAPIC and PIC is dealt with differently.
* For IOAPIC, the IRQ pin is allocated in a round-robin fashion within the
pins permitted for PCI devices. The IRQ information will be built
into ACPI DSDT table then passed to guest VM.
* For PIC, the pin2irq information is maintained in a pirqs[] array (the array size is 8
representing 8 shared PIRQs). When a PCI device tries to allocate a
pirq pin, it will do a balancing calculation to figure out a best pin
vs. IRQ pair. The irq# will be programmed into PCI INTLINE config space
and the pin# will be built into ACPI DSDT table then passed to guest VM.
.. note:: "IRQ" here is also called "GSI" in ACPI terminology.
Regarding INT A/B/C/D for PCI devices, DM just allocates them evenly
prior to PIRQ routing and then programs them into PCI INTPIN config space.
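For illustration, the round-robin IOAPIC pin allocation described above could be sketched as follows; the permitted pin set is an assumption, not ACRN's actual table:

.. code-block:: c

   #include <stdint.h>

   /* IOAPIC pins permitted for PCI devices; platform-specific, assumed here */
   static const uint8_t pci_pins[] = {16U, 17U, 18U, 19U, 20U, 21U, 22U, 23U};

   /* hand out pins for PCI INTx in round-robin order */
   static uint8_t alloc_ioapic_pin(void)
   {
       static uint32_t next;
       uint8_t pin = pci_pins[next % (sizeof(pci_pins) / sizeof(pci_pins[0]))];

       next++;
       return pin;
   }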
@ -742,9 +742,9 @@ During PCI initialization, ACRN DM will scan each PCI bus, slot and
function and identify the PCI devices configured by acrn-dm command
line. The corresponding PCI device's initialization function will
be called to initialize its config space, allocate its BAR resource, its
IRQ, and do its IRQ routing.
.. note:: reference API documentation for pci_vdev, pci_vdef_ops
The pci_vdev_ops of the pci_vdev structure could be installed by
customized handlers for cfgwrite/cfgread and barwrite/barread.
@ -797,7 +797,7 @@ Introduction
Advanced Configuration and Power Interface (ACPI) provides an open
standard that operating systems can use to discover and configure
computer hardware components to perform power management, for example, by
monitoring status and putting unused components to sleep.
Functions implemented by ACPI include:
@ -1176,7 +1176,7 @@ Bluetooth UART enumeration.
PM in Device Model
******************
PM module in Device Model emulates the User VM low power state transition.
Each time User VM writes an ACPI control register to initialize low power
state transition, the writing operation is trapped to DM as an I/O
@ -1196,4 +1196,4 @@ Passthrough in Device Model
You may refer to :ref:`hv-device-passthrough` for passthrough realization
in device model and :ref:`mmio-device-passthrough` for MMIO passthrough realization
in device model and ACRN Hypervisor.


@ -3,11 +3,13 @@
ACRN high-level design overview
###############################
ACRN is an open source reference hypervisor (HV) that runs on top of
Intel platforms (APL, KBL, etc.) for heterogeneous scenarios such as the
Software-defined Cockpit (SDC), or the In-vehicle Experience (IVE) for
automotive, or HMI & real-time OS for industry. ACRN provides embedded
hypervisor vendors with a reference I/O mediation solution with a
permissive license and provides auto makers and industry users a
reference software stack for corresponding use.
ACRN Supported Use Cases
************************
@ -20,16 +22,16 @@ system, the In-vehicle Infotainment (IVI) system, and one or more rear
seat entertainment (RSE) systems. Each system runs as a VM for better
isolation.
The Instrument Control (IC) system manages graphic displays of:
- driving speed, engine RPM, temperature, fuel level, odometer, trip mile, etc.
- alerts of low fuel or tire pressure
- rear-view camera (RVC) and surround-camera view for driving assistance
In-vehicle Infotainment
=======================
A typical In-vehicle Infotainment (IVI) system supports:
- Navigation systems
- Radios, audio and video playback
@ -49,11 +51,11 @@ VMs for a customized IC/IVI/RSE.
Industry Usage
==============
A typical industry usage would include one Windows HMI + one RT VM:
- Windows HMI as a guest OS with display to provide Human Machine Interface
- RT VM that runs a specific RTOS on it to handle
real-time workloads such as PLC control
ACRN supports Windows as a guest OS; ACRN has also added/is adding a
series of features to enhance its real-time performance to meet hard-RT KPI
@ -76,7 +78,7 @@ Mandatory IA CPU features are support for:
- MTRR
- TSC deadline timer
- NX, SMAP, SMEP
- Intel-VT including VMX, EPT, VT-d, APICv, VPID, INVEPT and INVVPID
Recommended Memory: 4GB, 8GB preferred.
@ -102,7 +104,7 @@ in the future. Running the IC system in a separate VM can isolate it from
other VMs and their applications, thereby reducing the attack surface
and minimizing potential interference. However, running the IC system in
a separate VM introduces additional latency for the IC applications.
Some country regulations require an IVE system to show a rear-view
camera (RVC) within 2 seconds, which is difficult to achieve if a
separate instrument cluster VM is started after the User VM is booted.
@ -111,7 +113,7 @@ the IC VM and Service VM. As shown, the Service VM owns most of platform devices
provides I/O mediation to VMs. Some of the PCIe devices function as a
passthrough mode to User VMs according to VM configuration. In addition,
the Service VM could run the IC applications and HV helper applications such
as the Device Model, VM manager, etc., where the VM manager is responsible
for VM start/stop/pause, virtual CPU pause/resume, etc.
.. figure:: images/over-image34.png
@ -130,7 +132,7 @@ and real-time (RT) VM.
compared to ACRN 1.0 is that:
- a pre-launched VM is supported in ACRN 2.0, with isolated resources, including
CPU, memory, and HW devices, etc.
- ACRN 2.0 adds a few necessary device emulations in hypervisor like vPCI and vUART to avoid
interference between different VMs
@ -190,7 +192,7 @@ I/O read from the User VM.
I/O (PIO/MMIO) Emulation Path
:numref:`overview-io-emu-path` shows an example I/O emulation flow path.
When a guest executes an I/O instruction (port I/O or MMIO), a VM exit
happens. The HV takes control and executes the request based on the VM exit
reason ``VMX_EXIT_REASON_IO_INSTRUCTION`` for port I/O access, for
example. The HV will then fetch the additional guest instructions, if any,
@ -349,7 +351,7 @@ Kernel Mediators
================
Kernel mediators are kernel modules providing a para-virtualization method
for the User VMs, for example, an i915 GVT driver.
Log/Trace Tools
===============
@ -478,7 +480,7 @@ the following mechanisms:
scheduling latency and vCPU priority, exposing more opportunities
for one VM to interfere with another.
To prevent such interference, ACRN hypervisor could adopt static
core partitioning by dedicating each physical CPU to one vCPU. The
physical CPU loops in idle when the vCPU is paused by I/O
emulation. This makes the vCPU scheduling deterministic and physical
@ -497,7 +499,7 @@ the following mechanisms:
3. The hypervisor does not unintendedly access the memory of the Service or User VM.
- The destination of external interrupts is set to be the physical core
where the VM that handles them is running.
External interrupts are always handled by the hypervisor in ACRN.
@ -564,7 +566,7 @@ System power state
==================
ACRN supports ACPI standard defined power states: S3 and S5 at the system
level. For each guest, ACRN assumes the guest implements OSPM and controls its
own power state accordingly. ACRN doesn't involve guest OSPM. Instead,
it traps the power state transition request from guest and emulates it.
@ -582,8 +584,8 @@ transition of the User VM (Linux VM or Android VM in
notifies the OSPM of the Service VM (Service OS in :numref:`overview-pm-block`) once
active the User VM is in the required power state.
Then the OSPM of the Service VM starts the power state transition of the Service VM
trapped to "Sx Agency" in ACRN, and it starts the power state
transition.
Some details about the ACPI table for the User and Service VMs:
@ -594,4 +596,4 @@ Some details about the ACPI table for the User and Service VMs:
- The ACPI table in the Service VM is passthrough. There is no ACPI parser
in ACRN HV. The power management related ACPI table is
generated offline and hard-coded in ACRN HV.


@ -74,7 +74,7 @@ The build flow is:
1) Use an offline tool (e.g. **iasl**) to parse the Px/Cx data and hard-code to
a CPU state table in the Hypervisor. The Hypervisor loads the data after
the system boots.
2) Before User VM launching, the Device model queries the Px/Cx data from the Service
VM VHM via ioctl interface.
3) VHM transmits the query request to the Hypervisor by hypercall.
@ -94,10 +94,10 @@ table) should be rejected.
It is better not to intercept C-state request because the trap would
impact both power and performance.
.. note:: For P-state control, you should pay attention to SoC core
voltage domain design when doing P-state measurement. The highest
P-state would win if different P-state requests on the cores shared
same voltage domain. In this case, APERF/MPERF must be used to see
what P-state was granted on that core.
S3/S5
@ -111,14 +111,14 @@ assumptions:
4) Highest severity guest's power state is promoted to system power state.
5) Guest has lifecycle manager running to handle power state transaction
requirement and initialize guest power state transaction.
6) S3 is only available on configurations that have no DM launched RTVM.
7) S3 is only supported at platform level - not VM level.
ACRN has a common implementation for notification between lifecycle managers
in different guests, which is vUART based cross-VM notification. But users
could customize it according to their hardware/software requirements.
:numref:`systempmdiag` shows the basic system level S3/S5 diagram.
.. figure:: images/hld-pm-image62.png
:align: center
@ -127,7 +127,7 @@ could customize it according to their hardware/software requirements.
ACRN System S3/S5 diagram
System low power state entry process
====================================
Each time the lifecycle manager of a User VM starts a power state transition,
@ -156,19 +156,19 @@ with typical ISD configuration(S3 follows very similar process)
:align: center
:name: pmworkflow
ACRN system S5 entry workflow
For system power state entry:
1. Service VM receives the S5 request.
2. Lifecycle manager in Service VM notifies User VM1 and RTVM through
vUART for S5 request.
3. Guest lifecycle manager initializes S5 action and guest enters S5.
4. RTOS cleans up RT tasks, sends a response to the S5 request back to Service
VM, and RTVM enters S5.
5. After getting the response from RTVM and all User VMs are shut down, Service VM
enters S5.
6. OSPM in ACRN hypervisor checks all guests are in S5 state and shuts down
the whole system.
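As a rough sketch of the guest side of steps 2-4, a lifecycle manager might service the S5 request over its vUART node as below; the device path, message strings, and shutdown command are assumptions, not ACRN's actual protocol:

.. code-block:: c

   #include <fcntl.h>
   #include <stdlib.h>
   #include <string.h>
   #include <unistd.h>

   int main(void)
   {
       char req[16] = {0};
       int fd = open("/dev/ttyS1", O_RDWR);  /* assumed vUART device node */

       if (fd < 0)
           return 1;
       if (read(fd, req, sizeof(req) - 1U) > 0 && strncmp(req, "S5", 2U) == 0) {
           /* stop workloads here, then acknowledge and power off */
           (void)write(fd, "ACK", 3U);
           close(fd);
           return system("shutdown -h now");
       }
       close(fd);
       return 0;
   }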
System low power state exit process


@ -94,7 +94,7 @@ call these two OS systems "secure world" and
"non-secure world", and they are isolated from each other by the
hypervisor. Secure world has a higher "privilege level" than non-secure
world; for example, the secure world can access the non-secure world's
physical memory but not vice versa. This document discusses how this
security works and why it is required.
Careful consideration should be made when evaluating using the Service
@ -150,7 +150,7 @@ before launching.
2) Verified Boot Sequence with UEFI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As shown in :numref:`security-bootflow-uefi`, in this boot sequence, UEFI
authenticates and starts the ACRN hypervisor first, and the hypervisor returns
to the UEFI environment to authenticate and load the Service VM kernel bootloader.
@ -184,7 +184,7 @@ The 2018 minimal requirements for cryptographic strength currently are:
#. RSA2048 for cryptographic digital signature signing and verification.
We strongly recommend that SHA512 and RSA3072+ be used for a product shipped
in 2018, especially for a product that has a long production life such as
an automotive vehicle.
The CSE FW image is signed with an Intel RSA private key. All other
@ -216,9 +216,9 @@ UEFI Secure Boot is already supported by OVMF.
UEFI Secure Boot Overview
UEFI Secure Boot is controlled by a set of UEFI Authenticated Variables that specify
the UEFI Secure Boot Policy; the platform manufacturer or the platform owner enrolls the
policy objects, which include the n-tuple of keys {PK, KEK, db,dbx} as step 1.
During each successive boot, the UEFI secure boot implementation will assess the
policy in order to verify the signed images that are discovered in a host-bus adapter
or on a disk. If the images pass policy, then they are invoked.
@ -230,7 +230,7 @@ UEFI Secure Boot implementations use these keys:
#. Key Exchange Key (KEK) is used to sign Signature and Forbidden Signature Database updates.
#. Signature Database (db) contains keys and/or hashes of allowed EFI binaries.
And keys and certificates are in multiple formats:
#. `.key` PEM format private keys for EFI binary and EFI signature list signing.
#. `.crt` PEM format certificates for sbsign.
@ -272,7 +272,7 @@ In practice, the Service VM designer and implementer should obey at least the
following rules:
#. Verify that the Service VM is a closed system and doesn't allow the user to
install any unauthorized third-party software or components.
#. Verify that external peripherals are constrained.
#. Enable kernel-based hardening techniques, for example dm-verity (to
ensure integrity of the DM and vBIOS/vOSloaders), and kernel module
@ -283,7 +283,7 @@ Detailed configurations and policies are out of scope for this document.
For good references on OS system security hardening and enhancement,
see `AGL security
<https://docs.automotivelinux.org/docs/en/master/architecture/reference/security/part-2/0_Abstract.html>`_
and `Android security <https://source.android.com/security/>`_.
Hypervisor Security Enhancement
===============================
@ -355,7 +355,7 @@ CR3 MMU paging tables, such as splitting hypervisor code and data
(stack/heap) sections, and then applying W |oplus| X policy, which means if memory
is Writable, then the hypervisor must make it non-eXecutable. The
hypervisor must configure its code as read-only and executable, and
configure its data as read/write. Optionally, if there are read-only
data sections, it would be best if the hypervisor configures them as
read-only.
@ -421,7 +421,7 @@ three typical solutions exist:
behavior can be thwarted with a page fault (#PF) by the processor in the
hypervisor. Whenever the hypervisor has a valid reason to have a write
access to user-accessible read-only memory (guest memory), it can
disable CR0.WP (clear CR0.WP) before writing, and then set CR0.WP
back to 1.
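The pattern is sketched below. This is an illustration of the mechanism only, not ACRN's code; the toggle must run at CPL0 with interrupts disabled so the open window cannot be hijacked:

.. code-block:: c

   #include <stdint.h>

   #define CR0_WP (1UL << 16U)

   static inline uint64_t read_cr0(void)
   {
       uint64_t v;

       __asm__ volatile ("mov %%cr0, %0" : "=r"(v));
       return v;
   }

   static inline void write_cr0(uint64_t v)
   {
       __asm__ volatile ("mov %0, %%cr0" : : "r"(v));
   }

   /* legitimate hypervisor write to user-accessible read-only memory */
   static void write_protected_byte(volatile uint8_t *p, uint8_t val)
   {
       uint64_t cr0 = read_cr0();

       write_cr0(cr0 & ~CR0_WP);  /* open the write window */
       *p = val;
       write_cr0(cr0 | CR0_WP);   /* close it again */
   }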
This solution is better than the 1st solution above because it doesn't
@ -451,7 +451,7 @@ mitigate many vulnerability exploits.
Guest Memory Execution Prevention
+++++++++++++++++++++++++++++++++
SMEP is designed to prevent user memory malware (typically
attacker-supplied) from being executed in the kernel (Ring 0) privilege
level. As long as the CR4.SMEP = 1, software operating in supervisor
mode cannot fetch instructions from linear addresses that are accessible
@ -466,7 +466,7 @@ In order to activate SMEP protection, the ACRN hypervisor must:
#. Configure all the guest memory as user-accessible memory (U/S = 1),
regardless of the settings for the NX bit and R/W bit in corresponding host
CR3 paging tables.
#. Set CR4.SMEP bit. In the entire life cycle of the hypervisor, this bit
value always remains one.
As an alternative, NX feature is used for this purpose by setting the
@ -481,7 +481,7 @@ Guest Memory Access Prevention
++++++++++++++++++++++++++++++
Supervisor Mode Access Prevention (SMAP) is yet another powerful
processor feature that makes it harder for malware to
"trick" the kernel into using instructions or data from a user-space
application program.
@ -505,7 +505,7 @@ To activate SMAP protection in the ACRN hypervisor:
#. Configure all the guest memory as user-writable memory (U/S bit = 1,
and R/W bit = 1) in corresponding host CR3 paging table entries, as
shown in :numref:`security-smap` below.
#. Set CR4.SMAP bit. In the entire life cycle of the hypervisor, this bit
value always remains one.
#. When needed, use STAC instruction to suppress SMAP protection, and
use CLAC instruction to restore SMAP protection.
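A sketch of how a copy routine might bracket guest-memory access with STAC/CLAC per step 3; illustrative only, not ACRN's implementation:

.. code-block:: c

   #include <stddef.h>

   static inline void stac(void)
   {
       __asm__ volatile ("stac" : : : "memory");
   }

   static inline void clac(void)
   {
       __asm__ volatile ("clac" : : : "memory");
   }

   /* copy from guest (user-accessible) memory while SMAP otherwise stays armed */
   static void copy_from_guest(void *dst, const void *guest_src, size_t n)
   {
       unsigned char *d = dst;
       const unsigned char *s = guest_src;

       stac();   /* temporarily permit supervisor access to user pages */
       while (n-- > 0U)
           *d++ = *s++;
       clac();   /* re-arm SMAP */
   }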
@ -548,8 +548,8 @@ an arbitrary amount of data to or from VM memory area.
Whenever the hypervisor needs to perform legitimate read/write access to
guest memory pages, one of functions above must be used. Otherwise, the
#PF will be triggered by the processor to prevent malware or
unintended access from or to the guest memory pages.
These functions must also internally check the address availabilities,
for example, ensuring the input address accessed by the hypervisor must have
@ -574,7 +574,7 @@ Memory content from one guest VM might be leaked to another guest VM. So
in ACRN and Device Model design, when one guest VM is destroyed or
crashes, its memory content should be scrubbed either by the hypervisor
or the Service VM device model process, in case its memory content is
re-allocated to another guest VM that could otherwise leave the
previous guest VM secrets in memory.
.. _secure-hypervisor-interface:
@ -639,12 +639,12 @@ guest VM. The hypervisor then emulates the MMIO instructions with design
behaviors.
As done for I/O emulation, this interface could also be manipulated by
malware in guest VM to compromise system security.
Other VMEXIT Handlers
~~~~~~~~~~~~~~~~~~~~~
There are some other VMEXIT handlers in the hypervisor that might take
untrusted parameters and registers from guest VM, for example, MSR write
VMEXIT, APIC VMEXIT.
@ -682,7 +682,7 @@ User VM Power On and Shutdown
The memory of the User VM is allocated dynamically by the DM
process in the Service VM before the User VM is launched. When the User VM
is shut down (or crashed), its memory will be freed to Service VM memory space.
Later on, if there is a new User VM launch event occurring, DM may potentially allocate
the same memory content (or some overlaps) for this new User VM.
@ -696,12 +696,12 @@ access the previous User VM's secrets by scanning the memory regions
allocated for the new User VM.
In ACRN, the memory content is scrubbed in Device Model after the guest
VM is shut down.
User VM Reboot
~~~~~~~~~~~~~~
The behaviors of **cold** boot of virtual User VM reboot are the same as that of
previous virtual power-on and shutdown events. There is a special case:
virtual **warm** reboot.
@ -730,7 +730,7 @@ enabling the configuration.
User VM Suspend/Resume
~~~~~~~~~~~~~~~~~~~~~~
There are no special design considerations for normal User VM without secure
world supported, as long as the EPT/VT-d memory protection/isolation is
active during the entire suspended time.
@ -740,7 +740,7 @@ Service VM, the memory content of secure world of User VM must not be visible to
Service VM. This is designed for security with defense in depth.
During the entire process of User VM sleep/suspend, the memory protection
for secure-world is preserved too. The physical memory region of
secure world is removed from EPT paging tables of any guest VM,
even including the Service VM.
@ -791,9 +791,9 @@ The parameters of HDKF derivation in the hypervisor are:
#. OutSeedLen = 64 in bytes
#. Guest Dev and User SEED (dvSEED/uvSEED)
``dvSEED = HKDF(theHash, nil, dSEEd, VMInfo\|"devseed", OutSeedLen)``
``uvSEED = HKDF(theHash, nil, uSEEd, VMInfo\|"userseed", OutSeedLen)``
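For illustration, an equivalent derivation could be written with OpenSSL's HKDF interface; the nil salt matches the parameters above, while the hash choice and function name here are assumptions:

.. code-block:: c

   #include <openssl/evp.h>
   #include <openssl/kdf.h>

   /* derive a 64-byte virtual seed from a parent seed (dSEED or uSEED);
    * pass VMInfo|"devseed" or VMInfo|"userseed" as the info argument */
   static int derive_vseed(const unsigned char *seed, int seed_len,
                           const unsigned char *info, int info_len,
                           unsigned char out[64])
   {
       size_t out_len = 64;
       int ok = 0;
       EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);

       if (ctx != NULL &&
           EVP_PKEY_derive_init(ctx) > 0 &&
           EVP_PKEY_CTX_set_hkdf_md(ctx, EVP_sha256()) > 0 &&
           EVP_PKEY_CTX_set1_hkdf_key(ctx, seed, seed_len) > 0 &&
           EVP_PKEY_CTX_add1_hkdf_info(ctx, info, info_len) > 0 &&
           EVP_PKEY_derive(ctx, out, &out_len) > 0)
           ok = 1;

       EVP_PKEY_CTX_free(ctx);
       return ok;
   }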
.. _secure_trusty:
@ -805,7 +805,7 @@ guest VM such as the Android User VM. (See :ref:`trusty_tee` for more
information.)
On the APL platform, the secure world is used to run a
virtualization-based Trusty TEE in an isolated world that serves
Android as a guest (AaaG), to get Google's Android relevant certificates
by fulfilling Android CDD requirements. Also as a plan, Trusty will be
supported to provide security services for LaaG User VM as well.
@ -868,7 +868,7 @@ configuration.
To save page tables and share the mappings for non-secure world address
space, the hypervisor relocates the Secure World's GPA to a very high
position: 511G-512G. Hence, the PML4 for Trusty World is separated from
non-secure World. PDPT/PD/PT for low memory (<511G) are shared in both
Trusty World's EPT and non-secure World's EPT. PDPT/PD/PT for high
memory (>=511G) are valid for Trusty World's EPT only.
@ -892,7 +892,7 @@ Hypercall - Trusty Initialization
When a User VM is created by the DM in the Service VM, if this User VM
supports a secure isolated world, then this hypercall will be invoked
by OSLoader (it could be Android OS loader in :numref:`security-bootflow-sbl` and
:numref:`security-bootflow-uefi` above) to create or initialize the
secure world (Trusty/TEE).
.. figure:: images/security-image9.png
@ -905,18 +905,18 @@ secure world (Trusty/TEE).
In :numref:`security-start-flow` above, the OSLoader is responsible for
loading TEE/Trusty image to a dedicated and reserved memory region, and
locating its entry point of TEE/Trusty executable, then executes a
hypercall that exits to the hypervisor handler.
In the hypervisor, from a security perspective, it removes GPA->HPA
mapping of secure world from EPT paging tables of both User VM non-secure
world and even Service VM. This is intended to disallow non-secure world and
Service VM to access the memory region of secure world for security reasons as
previously mentioned.
After all is set up by the hypervisor, including vCPU context
initialization, the hypervisor eventually does vmresume (step 4 in
:numref:`security-start-flow` above) to the entry point of secure world
TEE/Trusty, then Trusty OS gets started in VMX non-root mode to
initialize itself, and loads its TAs (Trusted Applications) so that the
security services can be ready right before non-secure OS gets started.


@ -22,10 +22,10 @@ are two use scenarios of Sbuf:
Both ACRNTrace and ACRNLog use sbuf as a lockless ring buffer. The Sbuf
is allocated by Service VM and assigned to HV via a hypercall. To hold pointers
to sbuf passed down via hypercall, an array ``sbuf[ACRN_SBUF_ID_MAX]``
is defined in per_cpu region of HV, with predefined sbuf ID to identify
the usage, such as ACRNTrace, ACRNLog, etc.
For each physical CPU, there is a dedicated Sbuf. Only a single producer
is allowed to put data into that Sbuf in HV, and a single consumer is
allowed to get data from Sbuf in Service VM. Therefore, no lock is required to
synchronize access by the producer and consumer.
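The single-producer/single-consumer discipline is what makes the lockless design sound: each side updates only its own index. A generic sketch of the idea (field names and sizes assumed, not ACRN's ``shared_buf`` layout):

.. code-block:: c

   #include <stdatomic.h>
   #include <stdint.h>

   #define SBUF_SLOTS 256U  /* power of two; size assumed for illustration */

   struct spsc_sbuf {
       _Atomic uint32_t head;  /* next slot to read; written only by consumer */
       _Atomic uint32_t tail;  /* next slot to write; written only by producer */
       uint64_t data[SBUF_SLOTS];
   };

   static int sbuf_put(struct spsc_sbuf *s, uint64_t v)  /* HV producer */
   {
       uint32_t t = atomic_load_explicit(&s->tail, memory_order_relaxed);
       uint32_t h = atomic_load_explicit(&s->head, memory_order_acquire);

       if (((t + 1U) & (SBUF_SLOTS - 1U)) == h)
           return -1;  /* full */
       s->data[t] = v;
       atomic_store_explicit(&s->tail, (t + 1U) & (SBUF_SLOTS - 1U),
                             memory_order_release);
       return 0;
   }

   static int sbuf_get(struct spsc_sbuf *s, uint64_t *v)  /* Service VM consumer */
   {
       uint32_t h = atomic_load_explicit(&s->head, memory_order_relaxed);
       uint32_t t = atomic_load_explicit(&s->tail, memory_order_acquire);

       if (h == t)
           return -1;  /* empty */
       *v = s->data[h];
       atomic_store_explicit(&s->head, (h + 1U) & (SBUF_SLOTS - 1U),
                             memory_order_release);
       return 0;
   }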
@ -33,7 +33,7 @@ synchronize access by the producer and consumer.
sbuf APIs
=========
The sbuf APIs are defined in ``hypervisor/include/debug/sbuf.h``.
ACRN Trace
@ -77,7 +77,7 @@ Service VM Trace Module
The Service VM trace module is responsible for:
- allocating sbuf in Service VM memory range for each physical CPU, and assigning
the GPA of Sbuf to ``per_cpu sbuf[ACRN_TRACE]``
- creating a misc device for each physical CPU
- providing mmap operation to map the entire Sbuf to userspace for highly
flexible and efficient access.
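A Service VM consumer could then map a per-CPU Sbuf roughly as follows; the device node name and mapping size here are assumptions:

.. code-block:: c

   #include <fcntl.h>
   #include <stdint.h>
   #include <stdio.h>
   #include <sys/mman.h>
   #include <unistd.h>

   int main(void)
   {
       size_t len = 4096;  /* assumed sbuf mapping size */
       uint64_t *buf;
       int fd = open("/dev/acrn_trace_0", O_RDONLY);  /* assumed node name */

       if (fd < 0)
           return 1;
       buf = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
       if (buf == MAP_FAILED) {
           close(fd);
           return 1;
       }
       printf("first element: %#llx\n", (unsigned long long)buf[0]);
       munmap(buf, len);
       close(fd);
       return 0;
   }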
@ -104,7 +104,7 @@ Once ACRNTrace is launched, for each physical CPU a consumer thread is
created to periodically read RAW trace data from sbuf and write to a
file.
.. note:: TODO figure is missing
Figure 2.2 Sequence of trace init and trace data collection
These are the Python scripts provided:
@ -113,7 +113,7 @@ These are the Python scripts provided:
text offline according to given format;
- **acrnalyze.py** analyzes trace data (as output by acrntrace)
based on given analyzer filters, such as vm_exit or IRQ, and generates a
report.
See :ref:`acrntrace` for details and usage.
@ -122,7 +122,7 @@ ACRN Log
********
acrnlog is a tool used to capture ACRN hypervisor log to files on
Service VM filesystem. It can run as a Service VM service at boot, capturing two
kinds of logs:
- Current runtime logs;
@ -179,7 +179,7 @@ at runtime via hypervisor shell command "loglevel".
The element size of sbuf for logs is fixed at 80 bytes, and the max size
of a single log message is 320 bytes. Log messages with a length between
80 and 320 bytes will be separated into multiple sbuf elements. Log
messages with length larger than 320 will be truncated.
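The splitting rule amounts to a few lines of C (sizes from the text; the function name is illustrative):

.. code-block:: c

   #include <stddef.h>
   #include <string.h>

   #define SBUF_ELEM_SIZE 80U   /* fixed element size for logs */
   #define LOG_MSG_MAX    320U  /* longer messages are truncated */

   /* split one message into up to four 80-byte sbuf elements */
   static unsigned int log_to_elements(const char *msg, size_t len,
                                       char elems[][SBUF_ELEM_SIZE])
   {
       unsigned int n = 0U;

       if (len > LOG_MSG_MAX)
           len = LOG_MSG_MAX;  /* truncate */
       while (len > 0U) {
           size_t chunk = (len < SBUF_ELEM_SIZE) ? len : SBUF_ELEM_SIZE;

           memset(elems[n], 0, SBUF_ELEM_SIZE);
           memcpy(elems[n], msg, chunk);
           msg += chunk;
           len -= chunk;
           n++;
       }
       return n;
   }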
For security, Service VM allocates sbuf in its memory range and assigns it to
the hypervisor.
@ -200,7 +200,7 @@ On Service VM boot, Service VM acrnlog module is responsible to:
these last logs
- construct sbuf in the usable buf range for each physical CPU,
assign the GPA of Sbuf to ``per_cpu sbuf[ACRN_LOG]`` and create a misc
device for each physical CPU
- the misc devices implement read() file operation to allow


@ -55,7 +55,7 @@ hypervisor. Virtio was developed by Rusty Russell when he worked at IBM
research to support his lguest hypervisor in 2007, and it quickly became
the de facto standard for KVM's para-virtualized I/O devices.
Virtio is very popular for virtual I/O devices because it provides a
straightforward, efficient, standard, and extensible mechanism, and
eliminates the need for boutique, per-environment, or per-OS mechanisms.
For example, rather than having a variety of device emulation
@ -126,9 +126,9 @@ Standard: virtqueue
The virtqueues are created in guest physical memory by the FE drivers.
BE drivers only need to parse the virtqueue structures to obtain
the requests and process them. The virtqueue organization is
specific to the Guest OS. In the Linux implementation of virtio, the
virtqueue is implemented as a ring buffer structure called ``vring``.
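For orientation, the split-virtqueue layout defined by the virtio specification (which Linux's vring follows) has three parts:

.. code-block:: c

   #include <stdint.h>

   /* descriptor table: guest buffers, optionally chained via `next` */
   struct vring_desc {
       uint64_t addr;   /* guest-physical address of the buffer */
       uint32_t len;    /* buffer length in bytes */
       uint16_t flags;  /* NEXT / WRITE / INDIRECT */
       uint16_t next;   /* index of the chained descriptor */
   };

   /* available ring: FE tells BE which descriptor chains are ready */
   struct vring_avail {
       uint16_t flags;
       uint16_t idx;
       uint16_t ring[];
   };

   struct vring_used_elem {
       uint32_t id;   /* head of the consumed descriptor chain */
       uint32_t len;  /* bytes the BE wrote into the buffers */
   };

   /* used ring: BE tells FE which chains it has consumed */
   struct vring_used {
       uint16_t flags;
       uint16_t idx;
       struct vring_used_elem ring[];
   };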
In ACRN, the virtqueue APIs can be leveraged directly so that users
don't need to worry about the details of the virtqueue. (Refer to guest
@ -178,8 +178,8 @@ Virtio Device Discovery
Virtio Frameworks
*****************
This section describes the overall architecture of virtio, and
introduces the ACRN-specific implementations of the virtio framework.
Architecture
============
@ -223,7 +223,7 @@ can be classified into two types, virtio backend service in user-land
where the virtio backend service (VBS) is located. Although different in BE
drivers, both VBS-U and VBS-K share the same FE drivers. The reason
behind the two virtio implementations is to meet the requirement of
supporting a large number of diverse I/O devices in ACRN project.
When developing a virtio BE device driver, the device owner should choose
carefully between the VBS-U and VBS-K. Generally VBS-U targets
@ -288,7 +288,7 @@ for feature negotiations between FE and BE drivers. This means the
"control plane" of the virtio device still remains in VBS-U. When
feature negotiation is done, which is determined by FE driver setting up
an indicative flag, VBS-K module will be initialized by VBS-U.
Afterward, all request handling will be offloaded to the VBS-K in
kernel.
Finally, the FE driver is not aware of how the BE driver is implemented,
@ -336,8 +336,8 @@ can be described as:
device:
a) Ioeventfd is bound with a PIO/MMIO range. If it is a PIO, it is
registered with ``(fd, port, len, value)``. If it is an MMIO, it is
registered with ``(fd, addr, len)``.
b) Irqfd is registered with MSI vector.
3. vhost proxy sets the two fds to vhost kernel through ioctl of vhost
@ -362,20 +362,20 @@ general workflow of ioeventfd.
The workflow can be summarized as:
1. vhost device init. Vhost proxy create two eventfd for ioeventfd and
1. vhost device init. Vhost proxy creates two eventfds for ioeventfd and
irqfd.
2. pass ioeventfd to vhost kernel driver.
3. pass ioeventfd to vhm driver.
4. User VM FE driver triggers an ioreq, which is forwarded to the Service VM by the hypervisor.
5. ioreq is dispatched by vhm driver to related vhm client.
6. ioeventfd vhm client traverse the io_range list and find
6. ioeventfd vhm client traverses the io_range list and finds the
corresponding eventfd.
7. trigger the signal to the related eventfd.
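
The eventfd mechanics behind steps 1 and 7 can be sketched with the standard
Linux eventfd API (a minimal sketch; the surrounding ioctls are omitted):

.. code-block:: c

   #include <stdint.h>
   #include <sys/eventfd.h>
   #include <unistd.h>

   /* Step 1: the vhost proxy creates the two eventfds that are later handed
    * to the vhost kernel driver and the vhm driver through ioctls (not shown). */
   int create_vhost_fds(int *ioeventfd, int *irqfd)
   {
       *ioeventfd = eventfd(0, EFD_NONBLOCK); /* kicked on trapped guest I/O */
       *irqfd = eventfd(0, EFD_NONBLOCK);     /* signaled to inject a guest IRQ */
       return ((*ioeventfd < 0) || (*irqfd < 0)) ? -1 : 0;
   }

   /* Step 7: signaling an eventfd is an 8-byte counter write; this is what
    * the vhm ioeventfd client does once it finds the matching entry. */
   void signal_eventfd(int fd)
   {
       uint64_t one = 1U;
       (void)write(fd, &one, sizeof(one));
   }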
Irqfd implementation
~~~~~~~~~~~~~~~~~~~~
The irqfd module is implemented in VHM, and can enhance an registered
The irqfd module is implemented in VHM, and can enhance a registered
eventfd to inject an interrupt to a guest OS when the eventfd gets
signaled. :numref:`irqfd-workflow` shows the general flow for irqfd.
@ -387,7 +387,7 @@ signaled. :numref:`irqfd-workflow` shows the general flow for irqfd.
The workflow can be summarized as:
1. vhost device init. Vhost proxy create two eventfd for ioeventfd and
1. vhost device init. Vhost proxy creates two eventfds for ioeventfd and
irqfd.
2. pass irqfd to vhost kernel driver.
3. pass irqfd to vhm driver.
@ -395,8 +395,8 @@ The workflow can be summarized as:
transfer is completed.
5. irqfd-related logic traverses the irqfd list to retrieve related irq
information.
6. irqfd related logic inject an interrupt through vhm interrupt API.
7. interrupt is delivered to User VM FE driver through hypervisor.
6. irqfd-related logic injects an interrupt through the vhm interrupt API.
7. The interrupt is delivered to the User VM FE driver through the hypervisor.
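
A hypothetical sketch of the bookkeeping behind steps 5 and 6 (kernel-style;
the type and field names are illustrative, not the actual VHM definitions):

.. code-block:: c

   /* Illustrative only: each registered irqfd is kept on a per-VM list
    * together with the MSI information needed at injection time. */
   struct irqfd_entry {
       struct list_head node; /* entry in the per-VM irqfd list */
       int32_t  fd;           /* the eventfd being watched */
       uint64_t msi_addr;     /* MSI address programmed by the guest */
       uint32_t msi_data;     /* MSI data payload */
   };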
.. _virtio-APIs:
@ -464,7 +464,7 @@ relationships are shown in :numref:`VBS-K-data`.
A single virtqueue's information to be
synchronized from VBS-U to the VBS-K kernel module.
``struct vbs_k_vqs_info``
Virtqueue(s) information, of a virtio device,
Virtqueue information of a virtio device,
to be synchronized from VBS-U to the VBS-K kernel module.
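
A hedged sketch of what the per-virtqueue synchronization might carry; the
actual definitions live in the VBS-K headers and may differ:

.. code-block:: c

   /* Illustrative only: per-virtqueue state pushed from VBS-U to VBS-K. */
   struct vbs_k_vq_info {
       uint16_t qsize;    /* number of descriptors in the virtqueue */
       uint64_t pfn;      /* guest page frame number of the vring */
       uint16_t msix_idx; /* MSI-X vector used to notify the FE driver */
   };

   /* Illustrative only: all virtqueues of one virtio device. */
   struct vbs_k_vqs_info {
       uint32_t nvq;                  /* number of virtqueues */
       struct vbs_k_vq_info vqs[16];  /* assumed per-device maximum */
   };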
.. figure:: images/virtio-hld-image8.png
@ -480,7 +480,7 @@ to open and register device status after feature negotiation with the FE
driver.
The device status includes negotiated features, number of virtqueues,
interrupt information, and more. All these status will be synchronized
interrupt information, and more. All these statuses will be synchronized
from VBS-U to VBS-K. In VBS-U, the ``struct vbs_k_dev_info`` and ``struct
vbs_k_vqs_info`` will collect all the information and notify VBS-K through
ioctls. In VBS-K, the ``struct vbs_k_dev`` and ``struct vbs_k_vq``, which are

View File

@ -3,10 +3,10 @@
High-Level Design Guides
########################
The ACRN Hypervisor acts as a host with full control of the processor(s)
and the hardware (physical memory, interrupt management and I/O). It
The ACRN Hypervisor acts as a host with full control of the processors
and the hardware (physical memory, interrupt management, and I/O). It
provides the User OS with an abstraction of a virtual platform, allowing
the guest to behave as if were executing directly on a logical
the guest to behave as if it were executing directly on a physical
processor.
These chapters describe the ACRN architecture, high-level design,

View File

@ -14,10 +14,10 @@ Refer to `Intel Analysis of L1TF`_ and `Linux L1TF document`_ for details.
.. _Linux L1TF document:
https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html
L1 Terminal Fault is a speculative side channel which allows unprivileged
speculative access to data which is available in the Level 1 Data Cache
when the page table entry controlling the virtual address, which is used
for the access, has the Present bit cleared or reserved bits set.
L1 Terminal Fault is a speculative side channel that allows unprivileged
speculative access to data that is available in the Level 1 Data Cache
when the page table entry controlling the virtual address, used
for the access, has the present bit cleared or reserved bits set.
When the processor accesses a linear address, it first looks for a
translation to a physical address in the translation lookaside buffer (TLB).
@ -77,7 +77,7 @@ PTEs (with present bit cleared, or reserved bit set) pointing to valid
host PFNs, a malicious guest may use those EPT PTEs to construct an attack.
A special aspect of L1TF in the context of virtualization is symmetric
multi threading (SMT), e.g. Intel |reg| Hyper-threading Technology.
multithreading (SMT), e.g. Intel |reg| Hyper-threading Technology.
Logical processors on the affected physical cores share the L1 Data Cache
(L1D). This fact enables more variants of L1TF-based attacks, e.g.
a malicious guest running on one logical processor can attack the data which
@ -93,7 +93,7 @@ e.g. whether CPU partitioning is used, whether Hyper-threading is on, etc.
If CPU partitioning is enabled (default policy in ACRN), there is
a 1:1 mapping between vCPUs and pCPUs, i.e., no sharing of pCPUs. There
may be an attack possibility when Hyper-threading is on, where
logical processors of same physical core may be allocated to two
logical processors of the same physical core may be allocated to two
different guests. Then one guest may be able to attack the other guest
on the sibling thread due to the shared L1D.
@ -167,7 +167,7 @@ EPT Sanitization
EPT is sanitized to avoid pointing to valid host memory in PTEs
that have the present bit cleared or reserved bits set.
For non-present PTEs, ACRN currently set pfn bits to ZERO, which
For non-present PTEs, ACRN currently sets the PFN bits to ZERO, which
means page ZERO might be at risk if it contains security information.
ACRN reserves page ZERO (0~4K) from the page allocator so that page ZERO
won't be used for any valid purpose. This sanitization logic
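
A minimal sketch of this rule, assuming EPT entries where bits 2:0 encode the
read/write/execute permissions and bits 51:12 hold the PFN:

.. code-block:: c

   #include <stdint.h>

   #define EPT_RWX_MASK 0x7UL                 /* bits 2:0: R/W/X permissions */
   #define EPT_PFN_MASK 0x000FFFFFFFFFF000UL  /* bits 51:12: page frame number */

   /* If an entry is non-present (no permission bit set), force its PFN to
    * zero so a speculative L1TF access can only reach the reserved page 0. */
   static inline uint64_t sanitize_ept_pte(uint64_t pte)
   {
       return ((pte & EPT_RWX_MASK) == 0UL) ? (pte & ~EPT_PFN_MASK) : pte;
   }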
@ -189,7 +189,7 @@ other security usage, e.g. disk encryption, secure storage.
If the critical secret data in ACRN is identified, then such
data can be put into uncached memory. As the content will
never go to L1D, it is immune to L1TF attack
never go to L1D, it is immune to L1TF attack.
For example, after getting the physical seed from CSME, before any guest
starts, ACRN can pre-derive all the virtual seeds for all the
@ -242,7 +242,7 @@ There is no mitigation required on Apollo Lake based platforms.
The majority use case for ACRN is a pre-configured environment,
where the whole software stack (from the ACRN hypervisor to the guest
kernel to the Service VM root) is tightly controlled by the solution provider
and not allowed for run-time change after sale (guest kernel is
and run-time changes after sale are not allowed (the guest kernel is
trusted). In that case, the solution provider will make sure that the guest
kernel is up to date, including necessary page table sanitization,
so there is no attack interface exposed within the guest. Then a

View File

@ -51,7 +51,7 @@ be resolved by design.
`acrn-dev mailing list <https://lists.projectacrn.org/g/acrn-dev>`_ for
discussing whether specific callbacks are appropriate.
* **Making the cyclic dependency an exception** A specific cyclic dependency can
be regarded as an exception if it is well justified and a work around is
be regarded as an exception if it is well justified and a workaround is
available to break the cyclic dependency for integration testing.
Measuring Complexity
@ -81,8 +81,8 @@ The components are listed as follows.
initialization. Examples include standard memory and string manipulation
functions like strncpy, atomic operations, and bitmap operations. This
component is independent of and widely used in the other components.
* **Hardware Management and Utilities** This component abstract hardware
resources and provide services like timers and physical interrupt handler
* **Hardware Management and Utilities** This component abstracts hardware
resources and provides services such as timers and physical interrupt handler
registration to the upper layers.
* **Virtual CPU** This component implements CPU, memory and interrupt
virtualization. The vCPU loop module in this component handles VM exit events
@ -95,14 +95,14 @@ The components are listed as follows.
* **Passthrough Management** This component manages devices that are passed through
to specific VMs.
* **Extended Device Emulation** This component implements an I/O request
mechanism that allow the hypervisor to forward I/O accesses from a User
mechanism that allows the hypervisor to forward I/O accesses from a User
VM to the Service VM for emulation.
* **VM Management** This component manages the creation, deletion, and other
lifecycle operations of VMs.
* **Hypervisor Initialization** This component invokes the initialization
subroutines in the other components to bring up the hypervisor and start up
Service VM in sharing mode or all the VMs in partitioning mode.
subroutines in the other components to bring up the hypervisor and
start the Service VM in sharing mode or all the VMs in partitioning mode.
ACRN hypervisor adopts a layered design where higher layers can invoke the
interfaces of lower layers but not vice versa. The only exception is the

View File

@ -29,7 +29,7 @@ below:
Pre-conditions shall be defined right before the definition/declaration of
the corresponding function in the C source file or header file.
All pre-conditions shall be guaranteed by the caller of the function.
Error checking of the pre-conditions are not needed in release version of the
Error checking of the pre-conditions is not needed in the release version of the
function. Developers could use ASSERT to catch design errors in a debug
version for some cases. Verification of the hypervisor shall check whether
each caller guarantees all pre-conditions of the callee (or not).
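
As a hedged illustration of this convention (the function and its
pre-conditions are hypothetical):

.. code-block:: c

   /* @pre vm != NULL
    * @pre vcpu_id < vm->hw.created_vcpus
    */
   void start_vcpu(struct acrn_vm *vm, uint16_t vcpu_id)
   {
       /* A debug build may catch a violated pre-condition with ASSERT;
        * a release build trusts the caller and performs no check. */
       ASSERT(vcpu_id < vm->hw.created_vcpus, "invalid vcpu_id");

       /* ... function body relies on the pre-conditions above ... */
   }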
@ -44,7 +44,7 @@ below:
the corresponding function in the C source file or header file.
All post-conditions shall be guaranteed by the function. All callers of the
function should trust these post-conditions are met.
Error checking of the post-conditions are not needed in release version of
Error checking of the post-conditions is not needed in the release version of
each caller. Developers could use ASSERT to catch design errors in a debug
version for some cases. Verification of the hypervisor shall check whether the
function guarantees all post-conditions (or not).
@ -73,7 +73,7 @@ below:
- Configuration data defined by an external safety application, such as physical
PCI device information specific to each board design.
- Input data which is only specified by external safety application.
- Input data that is only specified by an external safety application.
.. note:: If input data can be specified by both a non-safety VM and a safety VM,
the application constraint isn't applicable to these data. Related error checking
@ -89,7 +89,7 @@ Functional Safety Consideration
-------------------------------
The hypervisor will do range check in hypercalls and HW capability checks
according to Table A.2 of FuSa Standards [IEC_61508-3_2010]_ .
according to Table A.2 of FuSA Standards [IEC_61508-3_2010]_.
Error Handling Methods
----------------------
@ -162,7 +162,7 @@ shown in :numref:`rules_arch_level` below.
+====================+=========================+==============+===========================+=========================+
| External resource | Invalid register/memory | Yes | Follow SDM strictly, or | Unsupported MSR |
| provided by VM | state on VM exit | | state any deviation to the| or invalid CPU ID |
| | | | document explicitly | |
| | | | document explicitly. | |
| +-------------------------+--------------+---------------------------+-------------------------+
| | Invalid hypercall | Yes | The hypervisor shall | Invalid hypercall |
| | parameter | | return related error code | parameter provided by |
@ -176,12 +176,12 @@ shown in :numref:`rules_arch_level` below.
+--------------------+-------------------------+--------------+---------------------------+-------------------------+
| External resource | Invalid E820 table or | Yes | The hypervisor shall | Invalid E820 table or |
| provided by | invalid boot information| | panic during platform | invalid boot information|
| bootloader | | | initialization | |
| bootloader | | | initialization. | |
| (GRUB or SBL) | | | | |
+--------------------+-------------------------+--------------+---------------------------+-------------------------+
| Physical resource | 1GB page is not | Yes | The hypervisor shall | 1GB page is not |
| used by the | available on the | | panic during platform | available on the |
| hypervisor | platform or invalid | | initialization | platform or invalid |
| hypervisor | platform or invalid | | initialization. | platform or invalid |
| | physical CPU ID | | | physical CPU ID |
+--------------------+-------------------------+--------------+---------------------------+-------------------------+
@ -212,7 +212,7 @@ VM. In this case, we shall add the error checking codes before calling
``vcpu_from_vid`` to make sure that the passed parameters are valid and the
pre-conditions are guaranteed.
Here is the sample codes for error checking before calling ``vcpu_from_vid``:
Here is the sample code for error checking before calling ``vcpu_from_vid``:
.. code-block:: c
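
   /* Illustrative reconstruction, not the verbatim source: validate the
    * guest-provided vcpu_id so the pre-conditions of vcpu_from_vid hold
    * before it is called. */
   if ((vm == NULL) || (vcpu_id >= vm->hw.created_vcpus)) {
       return -EINVAL;
   }
   vcpu = vcpu_from_vid(vm, vcpu_id);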
@ -240,7 +240,7 @@ Functional Safety Consideration
Data verification, and explicit specification of pre-conditions and post-conditions
are applied for internal functions of the hypervisor according to Table A.4 of
FuSa Standards [IEC_61508-3_2010]_ .
FuSA Standards [IEC_61508-3_2010]_ .
Error Handling Methods
----------------------
@ -275,12 +275,13 @@ The rules of error detection and error handling on a module level are shown in
+====================+===========+============================+===========================+=========================+
| Internal data of | N/A | Partial. | The hypervisor shall use | virtual PCI device |
| the hypervisor | | The related pre-conditions | the internal resource/data| information, defined |
| | | are required. | directly. | with array 'pci_vdevs[]'|
| | | are required. | directly. | with array |
| | | | | ``pci_vdevs[]`` |
| | | The design will guarantee | | through static |
| | | the correctness and the | | allocation. |
| | | test cases will verify the | | |
| | | related pre-conditions. | | |
| | | If the design can not | | |
| | | If the design cannot | | |
| | | guarantee the correctness, | | |
| | | the related error handling | | |
| | | codes need to be added. | | |
@ -290,7 +291,7 @@ The rules of error detection and error handling on a module level are shown in
| | | array size and non-null | | |
| | | pointer. | | |
+--------------------+-----------+----------------------------+---------------------------+-------------------------+
| Configuration data | Corrupted | No. | The bootloader initializes| 'vm_config->pci_devs' |
| Configuration data | Corrupted | No. | The bootloader initializes| ``vm_config->pci_devs`` |
| of the VM | VM config | The related pre-conditions | hypervisor (including | is configured |
| | | are required. | code, data, and bss) and | statically. |
| | | Note: VM configuration data| verifies the integrity of | |
@ -315,7 +316,7 @@ Examples
Here are some examples to illustrate when error handling codes are required on
a module level.
**Example_1: Analyze the function 'partition_mode_vpci_init'**
**Example_1: Analyze the function ``partition_mode_vpci_init``**
.. code-block:: c
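
   /* Illustrative reconstruction, not the verbatim source; it is kept to
    * the names discussed in the questions below. Per the pre-condition of
    * this function, vm is not NULL. */
   static int32_t partition_mode_vpci_init(const struct acrn_vm *vm)
   {
       struct acrn_vpci *vpci = (struct acrn_vpci *)&vm->vpci;
       struct acrn_vm_config *vm_config = get_vm_config(vm->vm_id);
       struct pci_vdev *vdev;
       struct acrn_vm_pci_dev_config *pci_dev_config;
       uint32_t i;

       for (i = 0U; i < vpci->pci_vdev_cnt; i++) {
           vdev = &vpci->pci_vdevs[i];
           pci_dev_config = &vm_config->pci_devs[i]; /* physical device info */

           if (vdev->ops->init(vdev) != 0) {
               /* see Question_5 for how a non-zero return is handled */
           }
       }
       return 0;
   }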
@ -374,53 +375,53 @@ pre-conditions and ``get_vm_config`` itself shall guarantee the post-condition.
return &vm_configs[vm_id];
}
**Question_1: Is error checking required for 'vm_config'?**
**Question_1: Is error checking required for ``vm_config``?**
No. Because 'vm_config' is getting data from ``get_vm_config`` and the
No. Because ``vm_config`` gets its data from ``get_vm_config``, and the
post-condition of ``get_vm_config`` guarantees that the return value is not NULL.
**Question_2: Is error checking required for 'vdev'?**
**Question_2: Is error checking required for ``vdev``?**
No. Here are the reasons:
a) The pre-condition of ``partition_mode_vpci_init`` guarantees that 'vm' is not
NULL. It indicates that 'vpci' is not NULL. Since 'vdev' is getting data from
the array 'pci_vdevs[]' via indexing, 'vdev' is not NULL as long as the index
a) The pre-condition of ``partition_mode_vpci_init`` guarantees that ``vm`` is not
NULL. It indicates that ``vpci`` is not NULL. Since ``vdev`` is getting data from
the array ``pci_vdevs[]`` via indexing, ``vdev`` is not NULL as long as the index
is valid.
b) The post-condition of ``get_vm_config`` guarantees that 'vpci->pci_vdev_cnt'
is less than or equal to 'CONFIG_MAX_PCI_DEV_NUM', which is the array size of
'pci_vdevs[]'. It indicates that the index used to get 'vdev' is always
b) The post-condition of ``get_vm_config`` guarantees that ``vpci->pci_vdev_cnt``
is less than or equal to ``CONFIG_MAX_PCI_DEV_NUM``, which is the array size of
``pci_vdevs[]``. It indicates that the index used to get ``vdev`` is always
valid.
Given the two reasons above, 'vdev' is always not NULL. So, the error checking
codes are not required for 'vdev'.
Given the two reasons above, ``vdev`` is never NULL. So, the error checking
codes are not required for ``vdev``.
**Question_3: Is error checking required for 'pci_dev_config'?**
**Question_3: Is error checking required for ``pci_dev_config``?**
No. 'pci_dev_config' is getting data from the array 'pci_vdevs[]', which is the
No. ``pci_dev_config`` gets its data from the array ``pci_vdevs[]``, which is the
physical PCI device information coming from the Board Support Package and firmware.
For physical PCI device information, the related application constraints
shall be defined in the design document or safety manual. For debug purposes,
developers could use ASSERT here to catch Board Support Package or firmware
failures, which does not guarantee these application constraints.
failures, which do not guarantee these application constraints.
**Question_4: Is error checking required for 'vdev->ops->init'?**
**Question_4: Is error checking required for ``vdev->ops->init``?**
No. Here are the reasons:
a) Question_2 proves that 'vdev' is always not NULL.
a) Question_2 proves that ``vdev`` is always not NULL.
b) 'vdev->ops' is fully initialized before 'vdev->ops->init' is called.
b) ``vdev->ops`` is fully initialized before ``vdev->ops->init`` is called.
Given the two reasons above, 'vdev->ops->init' is always not NULL. So, the error
checking codes are not required for 'vdev->ops->init'.
Given the two reasons above, ``vdev->ops->init`` is never NULL. So, the error
checking codes are not required for ``vdev->ops->init``.
**Question_5: How to handle the case when 'vdev->ops->init(vdev)' returns non-zero?**
**Question_5: How to handle the case when ``vdev->ops->init(vdev)`` returns non-zero?**
This case indicates that the initialization of a specific virtual device has failed.
Investigation has to be done to figure out the root cause. The default fatal error
@ -428,7 +429,7 @@ handler shall be invoked here if it is caused by a hardware failure or invalid
boot information.
**Example_2: Analyze the function 'partition_mode_vpci_deinit'**
**Example_2: Analyze the function ``partition_mode_vpci_deinit``**
.. code-block:: c
@ -453,9 +454,9 @@ boot information.
}
**Question_6: Is error checking required for 'vdev->ops' and 'vdev->ops->init'?**
**Question_6: Is error checking required for ``vdev->ops`` and ``vdev->ops->init``?**
Yes. Because 'vdev->ops' and 'vdev->ops->init' can not be guaranteed to be
Yes. Because ``vdev->ops`` and ``vdev->ops->init`` cannot be guaranteed to be
not NULL. If ``partition_mode_vpci_deinit`` is called twice for the same VM, they may be NULL.
@ -528,7 +529,7 @@ The module level configuration design rules are shown below:
1. The platform configurations shall be detectable by the hypervisor in DETECT mode;
2. Configurable module APIs shall be abstracted as operations which are
2. Configurable module APIs shall be abstracted as operations that are
implemented through a set of function pointers in the operations data
structure;
@ -544,7 +545,7 @@ The module level configuration design rules are shown below:
6. In order to guarantee that the function pointer in the operations data
structure is dereferenced after it has been instantiated, the pre-condition
shall be added for the function which dereferences the function pointer,
shall be added for the function that dereferences the function pointer,
instead of checking the pointer for NULL.
.. note:: The third rule shall be double checked during code review.
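
As a minimal sketch of rules 1, 2, and 6 with a hypothetical timer module
(all names are illustrative):

.. code-block:: c

   /* Rule 2: the module's configurable APIs are abstracted as an operations
    * structure of function pointers, instantiated per detected platform. */
   struct timer_ops {
       const char *name;
       bool (*detect)(void);               /* rule 1: probe in DETECT mode */
       void (*init)(void);
       void (*set_deadline)(uint64_t tsc);
   };

   /* Rule 6: a caller dereferencing ops->set_deadline documents a
    * pre-condition that ops has been instantiated, instead of checking
    * the pointer for NULL. */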

View File

@ -73,7 +73,7 @@ As shown in the above figure, here are some details about the Trusty boot flow p
#. Call ``hcall_world_switch`` to switch back to Normal World when boot is complete
#. ACRN (``hcall_world_switch``)
a. Save World context for the World which caused this ``vmexit`` (Secure World)
a. Save World context for the World that caused this ``vmexit`` (Secure World)
#. Restore World context for next World (Normal World (UOS_Loader))
#. Resume to next World (UOS_Loader)
#. UOS_Loader