doc: Remove outdated tutorials

- Remove SGX tutorial, partitioned mode GSG, and Trusty reference

Signed-off-by: Reyes, Amy <amy.reyes@intel.com>
Reyes, Amy 2022-04-13 10:40:55 -07:00 committed by David Kinder
parent 108424180d
commit 9ef08d6021
5 changed files with 1 additions and 995 deletions


@@ -12,7 +12,6 @@ Advanced Scenario Tutorials
:maxdepth: 1
tutorials/using_hybrid_mode_on_nuc
tutorials/using_partition_mode_on_nuc
.. _develop_acrn_user_vm:
@@ -58,7 +57,6 @@ Advanced Features
.. toctree::
:maxdepth: 1
tutorials/sgx_virtualization
tutorials/nvmx_virtualization
tutorials/vuart_configuration
tutorials/rdt_configuration
@@ -69,7 +67,6 @@ Advanced Features
tutorials/sriov_virtualization
tutorials/gpu-passthru
tutorials/run_kata_containers
tutorials/trustyACRN
tutorials/rtvm_workload_design_guideline
tutorials/setup_openstack_libvirt
tutorials/acrn_on_qemu


@@ -1104,6 +1104,4 @@ VM, such as:
#. One-to-one mapping between running vTPM instances and logical vTPM in
each VM.
SGX Virtualization (vSGX)
-------------------------
Refer to :ref:`sgx_virt`


@@ -1,278 +0,0 @@
.. _sgx_virt:
Enable SGX Virtualization
#########################
SGX refers to `Intel Software Guard Extensions <https://software.intel.com/
en-us/sgx>`_ (Intel SGX). This is a set of instructions that can be used by
applications to set aside protected areas for select code and data in order to
prevent direct attacks on executing code or data stored in memory. SGX allows
an application to instantiate a protected container, referred to as an
enclave, which is protected against external software access, including
privileged malware.
High-Level ACRN SGX Virtualization Design
*****************************************
ACRN SGX virtualization support can be divided into three parts:
* SGX capability exposed to Guest
* EPC (Enclave Page Cache) management
* Enclave System function handling
The image below shows the high-level design of SGX virtualization in ACRN.
.. figure:: images/sgx-1.png
:width: 500px
:align: center
SGX Virtualization in ACRN
Enable SGX Support for Guest
****************************
Presumptions
============
No Enclave in a Hypervisor
--------------------------
ACRN does not support running an enclave in a hypervisor since the whole
hypervisor is running in VMX root mode, ring 0, and an enclave must
run in ring 3. ACRN SGX virtualization instead provides this capability to
non-Service VMs.
Enable SGX on Host
------------------
For SGX virtualization support in ACRN, you must manually enable the SGX
feature and configure the Processor Reserved Memory (PRM) in the platform
BIOS. (ACRN does not support the "Software Control" option to enable SGX at
run time.) If SGX is not enabled or the hardware platform does not support
SGX, ACRN SGX virtualization will not be enabled.
EPC Page Swapping in Guest
--------------------------
ACRN only partitions the physical EPC resources for VMs. The Guest OS kernel
handles EPC page swapping inside Guest.
Instructions
============
SGX support for a Guest OS is not enabled by default. Follow these steps to
enable SGX support in the BIOS and in ACRN:
#. Check the system BIOS on your target platform to see if Intel SGX is
supported (CPUID.07H.EBX[2] should be 1).
#. Enable the SGX feature in the BIOS setup screens. Follow these instructions:
a) Go to the Security page:
.. figure:: images/sgx-2.jpg
:width: 500px
:align: center
#) Enable SGX and configure the SGX Reserved Memory size as below:
* Intel Software Guard Extension (SGX) -> Enabled
* SGX Reserved Memory Size -> 128MB
.. figure:: images/sgx-3.jpg
:width: 500px
:align: center
.. note::
Not all SGX Reserved Memory can be used as EPC. On KBL-NUC-i7,
the SGX EPC size is 0x5d80000 (93.5MB) when the SGX Reserved Memory
Size is set to 128MB.
#. Add the EPC config in the VM configuration.
Apply the patch to enable SGX support in User VM in the SDC scenario:
.. code-block:: bash
cd <projectacrn base folder>
curl https://github.com/binbinwu1/acrn-hypervisor/commit/0153b2b9b9920b61780163f19c6f5318562215ef.patch | git apply
#. Enable SGX in Guest:
* **For a Linux Guest**, follow these `Linux SGX build instructions
<https://github.com/intel/linux-sgx>`_
to build and install the SGX driver and the SGX SDK and PSW packages.
* **For a Windows Guest**, follow these `Windows SGX build instructions
<https://software.intel.com/en-us/articles/getting-started-with-sgx-sdk-for-windows>`_
for enabling applications with Intel SGX using Microsoft Visual Studio
2015 on a 64-bit Microsoft Windows OS.
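Once a Linux guest is up, a quick sanity check can confirm that the virtualized
SGX capability and the driver are visible. This is only an illustrative sketch;
the ``sgx`` flag in ``/proc/cpuinfo`` appears on newer kernels only, and the
device node name depends on whether the out-of-tree or in-kernel SGX driver is
used:

.. code-block:: bash

   # On newer kernels, the SGX feature flag shows up in the CPU flags
   grep -m1 -o 'sgx' /proc/cpuinfo

   # The SGX driver creates a device node once it loads successfully
   # (/dev/isgx for the out-of-tree driver, /dev/sgx_enclave for the in-kernel driver)
   ls -l /dev/isgx /dev/sgx_enclave 2>/dev/null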
SGX Capability Exposure
***********************
ACRN exposes SGX capability and EPC resource to a guest VM via CPUIDs and
Processor model-specific registers (MSRs), as explained in the following
sections.
CPUID Virtualization
====================
CPUID Leaf 07H
--------------
* CPUID_07H.EBX[2] SGX: Supports Intel Software Guard Extensions if 1. If SGX
is supported in Guest, this bit will be set.
* CPUID_07H.ECX[30] SGX_LC: Supports SGX Launch Configuration if 1.
ACRN does not support the SGX Launch Configuration. This bit will not be
set. Thus, the Launch Enclave must be signed by the Intel SGX Launch Enclave
Key.
CPUID Leaf 12H
--------------
**Intel SGX Capability Enumeration**
* CPUID_12H.0.EAX[0] SGX1: If 1, indicates that Intel SGX supports the
collection of SGX1 leaf functions. If ``is_sgx_supported`` is true and the EPC
section count is initialized for the VM, this bit will be set.
* CPUID_12H.0.EAX[1] SGX2: If 1, indicates that Intel SGX supports the
collection of SGX2 leaf functions. If the hardware supports it and SGX is
enabled for the VM, this bit will be set.
* Other fields of CPUID_12H.0.EAX align with the physical CPUID.
**Intel SGX Attributes Enumeration**
* CPUID_12H.1.EAX & CPUID_12H.1.EBX align with the physical CPUID.
* CPUID_12H.1.ECX & CPUID_12H.1.EDX reflect the allow-1 settings of the
extended features (same structure as XCR0).
The hypervisor may change the allow-1 settings of XFRM in ATTRIBUTES for the VM.
If a feature is disabled for the VM, its bit is also cleared, e.g., MPX.
**Intel SGX EPC Enumeration**
* CPUID_12H.2: The hypervisor presents only one EPC section to the Guest. This
virtual CPUID value is constructed according to the EPC resource allocated to
the Guest.
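To see how these virtualized CPUID values appear from inside a guest, the
``cpuid`` utility can dump leaf 07H and leaf 12H directly. This is an
illustrative sketch only; it assumes the ``cpuid`` package is installed in the
guest, and the exact decoding printed depends on the utility version:

.. code-block:: bash

   # Leaf 07H: EBX[2] = SGX, ECX[30] = SGX_LC (expected to be 0 under ACRN)
   cpuid -1 -l 0x7 -s 0 -r

   # Leaf 12H subleaf 0: SGX1/SGX2 capability bits
   cpuid -1 -l 0x12 -s 0 -r

   # Leaf 12H subleaf 1: ATTRIBUTES allow-1 settings (XFRM follows XCR0's structure)
   cpuid -1 -l 0x12 -s 1 -r

   # Leaf 12H subleaf 2: base and size of the single EPC section presented to the guest
   cpuid -1 -l 0x12 -s 2 -r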
MSR Virtualization
==================
IA32_FEATURE_CONTROL
--------------------
The hypervisor opts in to SGX for a VM if SGX is enabled for that VM.
* MSR_IA32_FEATURE_CONTROL_LOCK is set
* MSR_IA32_FEATURE_CONTROL_SGX_GE is set
* MSR_IA32_FEATURE_CONTROL_SGX_LC is not set
IA32_SGXLEPUBKEYHASH[0-3]
-------------------------
This is read-only since SGX LC is not supported.
SGXOWNEREPOCH[0-1]
------------------
* This is a 128-bit external entropy value for key derivation of an enclave.
* These MSRs are at the package level; they cannot be controlled by the VM.
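These MSR settings can be observed from a Linux guest with ``rdmsr`` from
msr-tools. The MSR addresses below (0x3A for IA32_FEATURE_CONTROL, 0x8C-0x8F
for IA32_SGXLEPUBKEYHASH[0-3]) follow the Intel SDM; this is a hedged sketch,
and a read may fail if a given MSR is not exposed to the guest:

.. code-block:: bash

   sudo modprobe msr

   # IA32_FEATURE_CONTROL (0x3A): expect bit 0 (lock) and bit 18 (SGX enable) set,
   # and bit 17 (SGX launch control enable) clear under ACRN
   sudo rdmsr 0x3a

   # IA32_SGXLEPUBKEYHASH[0-3] (0x8C-0x8F): read-only since SGX LC is not supported
   for msr in 0x8c 0x8d 0x8e 0x8f; do sudo rdmsr $msr; done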
EPC Virtualization
==================
* EPC resource is statically partitioned according to the configuration of the
EPC size of VMs.
* During platform initialization, the physical EPC section information is
collected via CPUID. SGX initialization function allocates EPC resource to
VMs according to the EPC config in VM configurations.
* If enough EPC resource is allocated for the VM, assign the GPA of the EPC
section.
* EPC resource is allocated to the non-Service VM; the EPC base GPA is specified
by the EPC config in the VM configuration.
* The corresponding range of memory space should be marked as reserved in E820.
* During initialization, the mapping relationship of EPC HPA and GPA is saved
for building the EPT table later when the VM is created.
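From inside the guest, the statically assigned EPC section can be cross-checked
against the reserved range in the guest E820 map. A hedged sketch (the exact
dmesg wording varies with the kernel version):

.. code-block:: bash

   # The EPC range should appear as a reserved region in the guest E820 map
   sudo dmesg | grep -i 'e820' | grep -i 'reserved'

   # Kernels with built-in SGX support also log the EPC section at boot
   sudo dmesg | grep -i 'sgx'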
Enclave System Function Handling
********************************
A new "Enable ENCLS exiting" control bit (bit 15) is defined in the secondary
processor-based VM execution control.
* Setting "Enable ENCLS exiting" to 1 enables the ENCLS-exiting bitmap control,
a new 64-bit bitmap control field added to the VMX VMCS (encoding 0202EH) that
controls VM exits on ENCLS leaf functions.
* ACRN does not emulate ENCLS leaf functions and will not enable ENCLS exiting.
ENCLS[ECREATE]
==============
* The enclave execution environment is heavily influenced by the value of
ATTRIBUTES in the enclave's SECS.
* When ECREATE is executed, the processor will check and verify that the
enclave requirements are supported on the platform. If not, ECREATE will
generate a #GP.
* The hypervisor can present the same extended features to the Guest as the
hardware does. However, if the hypervisor hides some hardware-supported
extended features from the VM/guest and does not trap ENCLS[ECREATE],
ECREATE may succeed even though the ATTRIBUTES the enclave requested are not
supported in the VM.
* Fortunately, ENCLU[EENTER] will fault if SECS.ATTRIBUTES.XFRM is not a
subset of XCR0 when CR4.OSXSAVE = 1.
* In ACRN, XCR0 is controlled by the hypervisor. Even when the hypervisor hides
some extended feature from the VM/guest and does not trap/emulate
ENCLS[ECREATE], ENCLU[EENTER] will still fault if the enclave requests a
feature that the VM does not support.
* In summary, security is not compromised if the hypervisor does not trap
ENCLS[ECREATE] to check the attributes of the enclave.
Other VMExit Control
********************
RDRAND Exiting
==============
* ACRN allows Guest to use RDRAND/RDSEED instruction but does not set "RDRAND
exiting" to 1.
PAUSE Exiting
=============
* ACRN does not set "PAUSE exiting" to 1.
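As a quick illustration (an assumption about the guest environment rather than
anything ACRN-specific), you can confirm from a Linux guest that both
instructions are advertised:

.. code-block:: bash

   # rdrand and rdseed should both appear in the guest CPU flags
   grep -m1 -o 'rdrand' /proc/cpuinfo
   grep -m1 -o 'rdseed' /proc/cpuinfo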
Future Development
******************
The following are some currently unplanned areas of interest for future
ACRN development around SGX virtualization.
Launch Configuration Support
============================
When the following two conditions are both satisfied:
* The hardware platform supports the SGX Launch Configuration.
* The platform BIOS enables the feature in Unlocked mode, so that ring-0
software can configure the Model Specific Register (MSR)
IA32_SGXLEPUBKEYHASH[0-3] values.
the following statements apply:
* If CPU sharing is supported, ACRN can emulate MSR IA32_SGXLEPUBKEYHASH[0-3]
for VM. ACRN updates MSR IA32_SGXLEPUBKEYHASH[0-3] when the VM context
switch happens.
* If CPU sharing is not supported, ACRN can support SGX LC by passing MSR
IA32_SGXLEPUBKEYHASH[0-3] through to the Guest.
ACPI Virtualization
===================
* The Intel SGX EPC ACPI device is provided in the ACPI Differentiated System
Descriptor Table (DSDT), which contains the details of the Intel SGX
existence on the platform as well as memory size and location.
* Although the EPC can be discovered by the CPUID, several versions of Windows
do rely on the ACPI tables to enumerate the address and size of the EPC.
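One way to confirm that a guest sees the Intel SGX EPC ACPI device is to decode
the guest DSDT and search for its hardware ID. The sketch below assumes the
``acpica-tools`` package (``acpidump``/``iasl``) is installed and that the
device uses the conventional ``INT0E0C`` _HID; both are assumptions about the
guest environment rather than ACRN specifics:

.. code-block:: bash

   mkdir -p /tmp/acpi && cd /tmp/acpi
   sudo acpidump -b      # dump each ACPI table to a binary file (dsdt.dat, ...)
   iasl -d dsdt.dat      # disassemble the DSDT to dsdt.dsl
   grep -n 'INT0E0C' dsdt.dsl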


@@ -1,418 +0,0 @@
.. _trusty-security-services:
Trusty and Security Services Reference
######################################
This document provides an overview of the Trusty architecture for
Linux-based systems, the security services Trusty provides, and how
Trusty works on top of the ACRN Hypervisor.
Trusty Architecture
*******************
Trusty is a set of software components supporting a Trusted Execution
Environment (TEE) on embedded devices. It is a full software stack
environment including OS, services, and APIs.
As shown in :numref:`trusty-arch` below, it consists of:
- An operating system (the Trusty OS) that runs on a processor
providing a TEE;
- Drivers for the kernel (Linux) to facilitate communication with
applications running under the Trusty OS;
- A set of libraries for Android systems software to facilitate
communication with trusted applications executed within the Trusty OS
using the kernel drivers.
.. figure:: images/trustyacrn-image1.png
:align: center
:width: 600px
:name: trusty-arch
Trusty Architecture
Google provides an Android Open Source Project (AOSP) implementation of
Trusty based on ARM TrustZone technology. Intel enables Trusty
implementation on x86 based platforms with hardware virtualization
technology (e.g. VT-x and VT-d). In :numref:`trusty-arch` above, the
Secure Monitor is a VMM hypervisor. It could be any x86 hypervisor, and
it is the customer's responsibility to pick the right hypervisor for
their product. Intel has developed a product-quality open source
lightweight hypervisor reference implementation for customers to use;
see https://github.com/intel/ikgt-core/tree/trusty.
The purpose of this secure monitor (hypervisor) is to isolate the normal
and secure worlds, and to schedule Trusty OS in and out on demand. In
the Trusty implementation, all the security services provided by Trusty
OS in the secure world are event-driven. As long as there is no service
request from normal world, Trusty OS won't be scheduled in by the
hypervisor. The normal world and secure world share the same processor
resources, so this minimizes the context switching performance penalty.
In Trusty OS, the kernel is a derivative of the `Little Kernel project
<https://github.com/littlekernel/lk/wiki/Introduction>`_,
an embedded kernel supporting multi-thread, interrupt management, MMU,
scheduling, and more. Google engineers added user-mode application
support and a syscall layer to support privilege level isolation, so
that each Trusted App can run in an isolated virtual address space to
enhance application security. Intel added many more security
enhancements such as SMEP (Supervisor Mode Execution Prevention), SMAP
(Supervisor Mode Access Prevention), NX (Non-eXecution), ASLR (Address
Space Layout Randomization), and stack overflow protector.
There are a couple of built-in Trusted Apps running in user mode of
Trusty OS. However, an OEM can add more Trusted Apps in Trusty OS to
serve any other customized security services. For security reasons and
for serving early-boot time security requests (e.g. disk decryption),
Trusty OS and Apps are typically started before Normal world OS.
In normal world OS, Trusty Driver is responsible for IPC communication
with Trusty OS (over hypervisor) to exchange service request commands
and messages. The IPC manager can support concurrent sessions for
communications between Trusted App and Untrusted Client App. Typically,
Trusty provides APIs for developing two classes of applications:
- Trusted applications or services that run on the TEE/Trusty OS in
secure world;
- Untrusted applications running in normal world that use services
provided by Trusted applications.
Software running in normal world can use Trusty client library APIs to
connect to trusted applications and exchange arbitrary messages with
them, just like a network service over IP. It is up to the application
to determine the data format and semantics of these messages using an
app-level protocol. Reliable delivery of messages is guaranteed by the
underlying Trusty infrastructure (Trusty Drivers), and the communication
is completely asynchronous.
Although this Trusty infrastructure is built by Google for Android OS,
it can be applied to any normal world OS (typically a Linux-based OS).
The Trusty OS infrastructure in secure world is normal world
OS-agnostic. The differences truly depend on the security services that
normal world OS would like to have.
Trusty Services
***************
There are many uses for a Trusted Execution Environment such as mobile
payments, secure banking, full-disk encryption or file-based encryption,
multi-factor authentication, device reset protection, replay-protected
persistent storage (secure storage), wireless display ("cast") of
protected content, secure PIN and fingerprint processing, and even
malware detection.
In embedded products such as an automotive IVI system, the most important
security services requested by customers are keystore and secure
storage. In this article, we will focus on these two services.
Keystore
========
Keystore (or Keymaster app in Trusty OS) provides the following
services:
- Key generation
- Import and export of asymmetric keys (no key wrapping)
- Import of raw symmetric keys (no key wrapping)
- Asymmetric encryption and decryption with appropriate padding modes
- Asymmetric signing and verification with digesting and appropriate
padding modes
- Symmetric encryption and decryption in appropriate modes, including
an AEAD mode
- Generation and verification of symmetric message authentication codes
Protocol elements, such as purpose, mode and padding, as well as access
control constraints, are specified when keys are generated or imported
and are permanently bound to the key, ensuring the key cannot be used in
any other way.
In addition to the list above, there is one more service that Keymaster
implementations provide, but is not exposed as an API: Random
number generation. This is used internally for generation of keys,
Initialization Vectors (IVs), random padding, and other elements of
secure protocols that require randomness.
Using Android as an example, Keystore functions are explained in greater
detail in this `Android keymaster functions document
<https://source.android.com/security/keystore/implementer-ref>`_.
.. figure:: images/trustyacrn-image3.png
:align: center
:width: 600px
:name: keymaster-app
Keystore Service and Keymaster HAL
As shown in :numref:`keymaster-app` above, the Keymaster HAL is a
dynamically-loadable library used by the Keystore service to provide
hardware-backed cryptographic services. To keep things secure, HAL
implementations don't perform any security sensitive
operations/algorithms in user space, or even in kernel space. Sensitive
operations are delegated to a secure world TEE (Trusty OS) reached
through a kernel interface. The purpose of the Keymaster HAL is only to
marshal and unmarshal requests to the secure world.
Secure Storage (SS)
===================
Trusty implements a secure storage service (in the Secure Storage TA) based
on the RPMB (Replay Protected Memory Block) partition in eMMC or UFS flash
storage. The details of how RPMB works are out of scope for this article;
see the `eMMC/UFS JEDEC specification
<https://www.jedec.org/standards-documents/focus/flash/universal-flash-storage-ufs>`_
for more information.
This secure storage can provide data confidentiality, integrity, and
anti-replay protection. Confidentiality is guaranteed by data encryption
with a root key derived from the platform chipset's unique key/secret.
The RPMB partition is a fixed-size partition (128 KB to 16 MB) on an eMMC (or
UFS) drive. Users cannot change its size after buying the drive from a
vendor.
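For reference, on a Linux system with ``mmc-utils`` installed, the RPMB
partition and its size can be inspected as shown below. The device name and the
exact field labels in the ``extcsd`` output are assumptions that vary by
platform and tool version:

.. code-block:: bash

   # The RPMB partition appears as a separate block device
   ls -l /dev/mmcblk0rpmb

   # RPMB size is reported in the extended CSD (RPMB_SIZE_MULT, in 128 KB units)
   sudo mmc extcsd read /dev/mmcblk0 | grep -i rpmb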
This secure storage could be used for anti-rollback in verified boot,
for recording authentication (e.g., password/PIN) retry failures to prevent
brute-force attacks, for storing the Android attestation keybox,
or for storing a customer's credentials/secrets (e.g., an OEM image encryption
key). See `Android Key and ID Attestation
<https://source.android.com/security/keystore/attestation>`_
for details.
In Trusty, the secure storage architecture is shown in the figure below.
In the secure world, there is an SS (Secure Storage) TA, which has an
RPMB authentication key (AuthKey, an HMAC key) and uses this AuthKey to
talk with the RPMB controller in the eMMC device. Since the eMMC device
is controlled by a normal world driver, Trusty needs to send an RPMB data
frame (encrypted with a hardware-backed unique encryption key and signed with
the AuthKey) over the Trusty IPC channel to the Trusty SS proxy daemon, which
then forwards the RPMB data frame to the physical RPMB partition in eMMC.
.. figure:: images/trustyacrn-image2.png
:align: center
:width: 600px
:name: trusty-ss-ta
Trusty Secure Storage Trusted App
As shown in :numref:`trusty-ss-ta` above, Trusty SS TA provides two different services
simultaneously:
- **TD (Tamper-Detection)**:
The Trusty secure file system metadata is stored in RPMB, while the
user data (encrypted with a hardware-backed encryption key) is
stored in the Linux-backed file system in the user data partition of eMMC (as
shown in the figure above). This type of service supports a large amount of
data storage.
Because the data could be deleted or modified, the Trusty OS SS TA
provides a mechanism to detect such tampering
(deletion, modification, etc.).
- **TP (Tamper-Proof)**:
This is a tamper-resistant secure storage service with a much higher
level of data protection. In this service, the file system metadata
and user data (encrypted) are both stored in RPMB, and both can
survive a factory reset or user data partition wipe.
As previously mentioned though, the amount of data storage depends on
the eMMC RPMB partition size.
We've discussed how this secure storage architecture looks, and what
secure storage services Trusty SS TA can provide. Now let's briefly take
a look at how it can be used.
As :numref:`trusty-ss-ta-storage` below shows, an OEM can develop a
client App in the normal world and a Trusted App (TA) in Trusty OS. The OEM
TA can then talk with either the TD or TP service (or both) of the SS TA
through Trusty internal IPC to request TA-specific secure file
open/create/delete/read/write operations.
.. figure:: images/trustyacrn-image5.png
:align: center
:width: 600px
:name: trusty-ss-ta-storage
Trusty Secure Storage Trusted App Storage
Here is a simple example showing data signing:
#. An OEM Client App sends the message that needs signing to the OEM
Trusted App in TEE/secure world.
#. The OEM Trusted App retrieves the signing key (that was previously
saved into the SS TA) from the SS TA, uses it to sign the message,
and then discards the signing key.
#. The OEM Trusted App sends the signed message (with signature) back to
OEM Client App.
In this entire process, the secret signing key is never released outside
of secure world.
Trusty in ACRN
**************
ACRN is a flexible, lightweight reference hypervisor, built with
real-time and safety-criticality in mind, optimized to streamline
embedded development through an open source platform. In this
section, we'll focus on two major topics:
* the basic idea of secure world and non-secure world isolation (the
so-called one-VM, two-worlds design), and
* secure storage virtualization in ACRN.
See :ref:`trusty_tee` for additional details of Trusty implementation in
ACRN.
One-VM, Two-Worlds
==================
As previously mentioned, the Trusty Secure Monitor can be any
hypervisor. In the ACRN project, the ACRN hypervisor acts as the
secure monitor to schedule the Trusty secure world in and out.
.. figure:: images/trustyacrn-image4.png
:align: center
:width: 600px
:name: trusty-isolated
Trusty Secure World Isolated User VM
As shown in :numref:`trusty-isolated` above, the hypervisor creates an
isolated secure world User VM to support a Trusty OS running in a User VM on
ACRN.
:numref:`trusty-lhs-rhs` below shows further implementation details. The RHS
(right-hand system) is such a secure world in which the Trusty OS runs.
The LHS (left-hand system) is the non-secure world system in which a
Linux-based system (e.g. Android) runs.
.. figure:: images/trustyacrn-image7.png
:align: center
:width: 600px
:name: trusty-lhs-rhs
Trusty Secure World Isolation Details
The secure world is configured by the hypervisor so it has read/write
access to a non-secure world's memory space. But non-secure worlds do
not have access to a secure world's memory. This is guaranteed by
switching different EPT tables when a world switch (WS) Hypercall is
invoked. The WS Hypercall has parameters to specify the services cmd ID
requested from the non-secure world.
In the ACRN hypervisor design of the "one VM, two worlds"
architecture, there is a single VM structure per User VM in the
hypervisor, but two vCPU structures that save the LHS and RHS virtual
logical processor states respectively.
Whenever there is a WS (world switch) Hypercall from LHS, the hypervisor
copies the LHS CPU contexts from Guest VMCS to the LHS-vCPU structure
for saving contexts, and then copies the RHS CPU contexts from RHS-vCPU
structure to Guest VMCS. It then does a VMRESUME to RHS, and vice versa!
In addition, the EPTP pointer will be updated accordingly in the VMCS
(not shown in the picture above).
Secure Storage Virtualization
=============================
As previously mentioned, secure storage is one of the security services
provided by secure world (TEE/Trusty). In the current ACRN
implementation, secure storage is built in the RPMB partition in eMMC
(or UFS storage).
The eMMC in the APL SoC platform only has a single RPMB
partition for tamper-resistant and anti-replay secure storage. The
secure storage (RPMB) is virtualized to support multiple User VMs.
Although newer generations of flash storage (e.g. UFS 3.0, and NVMe)
support multiple RPMB partitions, this article only discusses the
virtualization solution for single-RPMB flash storage device in APL SoC
platform.
:numref:`trusty-rpmb` shows a high-level overview of the secure storage
virtualization architecture.
.. figure:: images/trustyacrn-image6.png
:align: center
:width: 600px
:name: trusty-rpmb
Virtualized Secure Storage Architecture
In :numref:`trusty-rpmb`, the rKey (RPMB AuthKey) is the physical RPMB
authentication key used for data authenticated read/write access between
Service VM kernel and physical RPMB controller in eMMC device. The VrKey is the
virtual RPMB authentication key used for authentication between Service VM DM
module and its corresponding User VM secure software. Each User VM (if secure
storage is supported) has its own VrKey, generated randomly when the DM
process starts and securely distributed to the User VM secure world on each
reboot. The rKey is fixed on a specific platform unless the eMMC is
replaced with another one.
In the current ACRN project implementation on an APL platform, the rKey
is provisioned by the BIOS (SBL) near the end of the platform's
manufacturing process. (The details of physical RPMB key (rKey)
provisioning are out of scope for this document.)
On each reboot, the BIOS/SBL retrieves the rKey from the CSE FW (or
generates it from a special unique secret retrieved from the CSE FW),
SBL hands it off to the ACRN hypervisor, and the hypervisor in turn
sends the key to the Service VM kernel.
As an example, the secure storage virtualization workflow for data write
access is as follows:
#. The User VM secure world (e.g., Trusty) packs the encrypted data, signs it
with the vRPMB authentication key (VrKey), and sends the data along
with its signature through the RPMB FE driver in the User VM non-secure world.
#. After the DM process in the Service VM receives the data and signature, the
vRPMB module in the DM verifies them with the shared secret (vRPMB
authentication key, VrKey).
#. If verification succeeds, the vRPMB module remaps the data address
(remember that multiple User VMs share a single physical RPMB
partition) and forwards the data to the Service VM kernel. The kernel then
packs the data and signs it with the physical RPMB
authentication key (rKey). Eventually, the data and its signature
are sent to the physical eMMC device.
#. If the verification is successful in the eMMC RPMB controller, the
data will be written into the storage device.
The workflow for authenticated data reads is very similar to the flow
above, in reverse order.
Note that there are some security considerations in this architecture:
- The rKey protection is very critical in this system. If the key is
leaked, an attacker can change/overwrite the data on RPMB, bypassing
the "tamper-resistant & anti-replay" capability.
- Typically, the vRPMB module in the DM process of the Service VM filters
data access; i.e., it doesn't allow one User VM to perform read/write
access to the data of another User VM.
If the vRPMB module in the DM process is compromised, a User VM could
change/overwrite the secure data of other User VMs.
Keeping the Service VM as secure as possible is a very important goal of the
system security design. In practice, the Service VM designer and implementer
should follow these rules (among others):
- Make sure the Service VM is a closed system and doesn't allow users to
install any unauthorized third-party software or components.
- External peripherals are constrained.
- Enable kernel-based hardening techniques, e.g., dm-verity (to ensure
the integrity of the DM and vBIOS/vOS loaders), kernel module signing,
etc.
- Enable system level hardening such as MAC (Mandatory Access Control).
Detailed configurations and policies are out of scope in this article.
Good references for OS system security hardening and enhancement
include: `AGL security
<https://docs.automotivelinux.org/en/master/#2_Architecture_Guides/2_Security_Blueprint/9_Secure_development/>`_
and `Android security
<https://source.android.com/security/>`_
References:
===========
* `Trusty TEE | Android Open Source Project
<https://source.android.com/security/trusty/>`_
* `Secure Storage (Tamper-resistant and Anti-replay)
<https://android.googlesource.com/trusty/app/storage/>`_
* `Eddie Dong, ACRN: A Big Little Hypervisor for IoT Development
<https://elinux.org/images/3/3c/ACRN-brief2.pdf>`_


@@ -1,293 +0,0 @@
.. _using_partition_mode_on_nuc:
Getting Started Guide for ACRN Partitioned Mode
###############################################
The ACRN hypervisor supports a partitioned scenario in which the OS running in
a pre-launched VM can bypass the ACRN
hypervisor and directly access isolated PCI devices. The following
guidelines provide step-by-step instructions on how to set up the ACRN
hypervisor partitioned scenario running two
pre-launched VMs on an Intel NUC.
.. contents::
:local:
:depth: 1
Validated Versions
******************
- Ubuntu version: **18.04**
- ACRN hypervisor tag: **v2.7**
Prerequisites
*************
* `Intel NUC Kit NUC11TNBi5 <https://ark.intel.com/content/www/us/en/ark/products/205596/intel-nuc-11-pro-board-nuc11tnbi5.html>`_.
* NVMe disk
* SATA disk
* Storage device with a USB interface (such as a USB flash drive
or a SATA disk connected through a USB 3.0 SATA converter).
* Disable **Intel Hyper Threading Technology** in the BIOS to avoid
interference from logical cores for the partitioned scenario.
* In the partitioned scenario, two VMs (running Ubuntu OS)
are started by the ACRN hypervisor. Each VM has its own root
filesystem. Set up each VM by following the `Ubuntu desktop installation
<https://tutorials.ubuntu.com/tutorial/tutorial-install-ubuntu-desktop>`_ instructions
first on a SATA disk and then again on a storage device with a USB interface.
The two pre-launched VMs will mount the root file systems via the SATA controller and
the USB controller respectively.
.. rst-class:: numbered-step
Update Kernel Image and Modules of Pre-Launched VM
**************************************************
#. On the local Ubuntu target machine, find the kernel file,
copy it to your ``/boot`` directory, and name the file ``bzImage``.
The ``uname -r`` command returns the kernel release, for example,
``5.4.0-89-generic``:
.. code-block:: none
sudo cp /boot/vmlinuz-$(uname -r) /boot/bzImage
sudo cp /boot/initrd.img-$(uname -r) /boot/initrd_Image
#. The current ACRN partitioned scenario implementation requires a
multi-boot capable bootloader to boot both the ACRN hypervisor and the
bootable kernel image built from the previous step. Install the Ubuntu OS
on the onboard NVMe SSD by following the `Ubuntu desktop installation
instructions <https://tutorials.ubuntu.com/tutorial/tutorial-install-ubuntu-desktop>`_. The
Ubuntu installer creates 3 disk partitions on the onboard NVMe SSD. By
default, the GRUB bootloader is installed on the EFI System Partition
(ESP) that's used to bootstrap the ACRN hypervisor.
#. After installing the Ubuntu OS, power off the Intel NUC. Attach the
SATA disk and storage device with the USB interface to the Intel NUC. Power on
the Intel NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug in
the removable disk with the kernel image into the Intel NUC and then copy the
loadable kernel modules built in Step 1 to the ``/lib/modules/`` folder
on both the mounted SATA disk and storage device with USB interface. For
example, assuming the SATA disk and storage device with USB interface are
assigned to ``/dev/sda`` and ``/dev/sdb`` respectively, the following
commands set up the partition mode loadable kernel modules onto the root
file systems to be loaded by the pre-launched VMs:
To mount the Ubuntu OS root filesystem on the SATA disk:
.. code-block:: none
sudo mount /dev/sda3 /mnt
sudo cp -r /lib/modules/* /mnt/lib/modules
sudo umount /mnt
To mount the Ubuntu OS root filesystem on the USB flash disk:
.. code-block:: none
sudo mount /dev/sdb3 /mnt
sudo cp -r /lib/modules/* /mnt/lib/modules
sudo umount /mnt
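To double-check that the modules landed on each root filesystem, you can
remount a disk and confirm that a directory matching your kernel release
exists. This is an optional sanity check; adjust the device node to your setup:

.. code-block:: none

   sudo mount /dev/sda3 /mnt
   ls /mnt/lib/modules        # should list a directory matching $(uname -r)
   sudo umount /mnt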
Update ACRN Hypervisor Image
****************************
#. Before building the ACRN hypervisor, find the I/O address of the serial
port and the PCI BDF addresses of the SATA controller and the USB
controllers on the Intel NUC. Enter the following command to get the
I/O addresses of the serial port. The Intel NUC supports one serial port, **ttyS0**.
Connect the serial port to the development workstation in order to access
the ACRN serial console to switch between pre-launched VMs:
.. code-block:: none
dmesg | grep ttyS0
Output example:
.. code-block:: console
[ 0.000000] console [ttyS0] enabled
[ 1.562546] 00:01: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is
a 16550A
The following command prints detailed information about all PCI buses and
devices in the system:
.. code-block:: none
sudo lspci -vv
Output example:
.. code-block:: console
00:14.0 USB controller: Intel Corporation Device 9ded (rev 30) (prog-if 30 [XHCI])
Subsystem: Intel Corporation Device 7270
00:17.0 SATA controller: Intel Corporation Device 9dd3 (rev 30) (prog-if 01 [AHCI 1.0])
Subsystem: Intel Corporation Device 7270
02:00.0 Non-Volatile memory controller: Intel Corporation Device f1a8 (rev 03) (prog-if 02 [NVM Express])
Subsystem: Intel Corporation Device 390d
03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
Subsystem: Intel Corporation I210 Gigabit Network Connection
04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
Subsystem: Intel Corporation I210 Gigabit Network Connection
#. Clone the ACRN source code and configure the build options.
Refer to :ref:`gsg` to set up the ACRN build
environment on your development workstation.
Clone the ACRN source code and check out to the tag **v2.7**:
.. code-block:: none
git clone https://github.com/projectacrn/acrn-hypervisor.git
cd acrn-hypervisor
git checkout v2.7
#. Check the ``pci_devs`` sections in ``misc/config_tools/data/nuc11tnbi5/partitioned.xml``
for each pre-launched VM to ensure you are using the right PCI device BDF information (as
reported by ``lspci -vv``). If you need to make changes to this file, create a copy of it and
use it subsequently when building ACRN (``SCENARIO=/path/to/newfile.xml``).
#. Build the ACRN hypervisor and ACPI binaries for pre-launched VMs with default xmls:
.. code-block:: none
make hypervisor BOARD=nuc11tnbi5 SCENARIO=partitioned
.. note::
The ``acrn.bin`` will be generated to ``./build/hypervisor/acrn.bin``.
The ``ACPI_VM0.bin`` and ``ACPI_VM1.bin`` will be generated to ``./build/hypervisor/acpi/``.
#. Check the Ubuntu bootloader name.
In the current design, the partitioned scenario depends on the GRUB boot
loader; otherwise, the hypervisor will fail to boot. Verify that the
default bootloader is GRUB:
.. code-block:: none
sudo update-grub -V
The above command output should contain the ``GRUB`` keyword.
#. Copy the artifacts ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` to the ``/boot`` directory on the NVMe SSD:
#. Copy ``acrn.bin``, ``ACPI_VM1.bin`` and ``ACPI_VM0.bin`` to a removable disk.
#. Plug the removable disk into the Intel NUC's USB port.
#. Copy the ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` from the removable disk to ``/boot``
directory.
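If you created a copy of the scenario XML to adjust the ``pci_devs`` entries
(see the earlier step), a typical flow is to inspect the BDF values and point
the build at the copy. The file name below is only an example:

.. code-block:: none

   cp misc/config_tools/data/nuc11tnbi5/partitioned.xml ~/my_partitioned.xml
   grep -A 4 '<pci_devs>' ~/my_partitioned.xml   # compare BDFs against lspci -vv
   make hypervisor BOARD=nuc11tnbi5 SCENARIO=~/my_partitioned.xml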
.. rst-class:: numbered-step
Update Ubuntu GRUB to Boot Hypervisor and Load Kernel Image
***********************************************************
#. Append the following configuration to the ``/etc/grub.d/40_custom`` file:
.. code-block:: none
menuentry 'ACRN hypervisor Partitioned Scenario' --id ACRN_Partitioned --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
insmod part_gpt
insmod ext2
search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1
echo 'Loading hypervisor partitioned scenario ...'
multiboot2 /boot/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
module2 /boot/bzImage XXXXXX
module2 /boot/initrd_Image XXXXXX
module2 /boot/ACPI_VM0.bin ACPI_VM0
module2 /boot/ACPI_VM1.bin ACPI_VM1
}
.. note::
Update the UUID (``--set``) and PARTUUID (``root=`` parameter)
(or use the device node directly) of the root partition (e.g., ``/dev/nvme0n1p2``). Hint: use ``sudo blkid``.
The kernel command-line arguments used to boot the pre-launched VMs are specified by ``bootargs``
in the ``misc/config_tools/data/nuc11tnbi5/partitioned.xml`` file.
The ``module2 /boot/bzImage`` param ``XXXXXX`` is the bzImage tag and must exactly match the ``kern_mod``
in the ``misc/config_tools/data/nuc11tnbi5/partitioned.xml`` file.
The ``module2 /boot/initrd_Image`` param ``XXXXXX`` is the initrd_Image tag and must exactly match the ``ramdisk_mod``
in the ``misc/config_tools/data/nuc11tnbi5/partitioned.xml`` file.
The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0. The parameter ``ACPI_VM0`` is
VM0's ACPI tag and should not be modified.
The module ``/boot/ACPI_VM1.bin`` is the binary of ACPI tables for pre-launched VM1. The parameter ``ACPI_VM1`` is
VM1's ACPI tag and should not be modified.
#. Correct example GRUB configuration (with the ``module2`` tags set):
.. code-block:: console
menuentry 'ACRN hypervisor Partitioned Scenario' --id ACRN_Partitioned --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
insmod part_gpt
insmod ext2
search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1
echo 'Loading hypervisor partitioned scenario ...'
multiboot2 /boot/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
module2 /boot/bzImage Linux_bzImage
module2 /boot/initrd_Image Ubuntu
module2 /boot/ACPI_VM0.bin ACPI_VM0
module2 /boot/ACPI_VM1.bin ACPI_VM1
}
#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu
visible when booting:
.. code-block:: none
GRUB_DEFAULT=ACRN_Partitioned
#GRUB_HIDDEN_TIMEOUT=0
#GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
#. Update GRUB:
.. code-block:: none
sudo update-grub
#. Reboot the Intel NUC. Select the **ACRN hypervisor Partitioned
Scenario** entry to boot the partitioned scenario of the ACRN hypervisor on
the Intel NUC's display. The GRUB loader will boot the hypervisor, and the
hypervisor will automatically start the two pre-launched VMs.
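If the **ACRN hypervisor Partitioned Scenario** entry does not show up in the
GRUB menu, a quick way to check whether ``update-grub`` generated it is to
search ``grub.cfg`` (the path may differ on some distributions):

.. code-block:: none

   grep -A 14 "menuentry 'ACRN hypervisor Partitioned Scenario'" /boot/grub/grub.cfg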
.. rst-class:: numbered-step
Partitioned Scenario Startup Check
**********************************
#. Connect to the serial port as described in this :ref:`Connecting to the
serial port <connect_serial_port>` tutorial.
#. Use these steps to verify that the hypervisor is properly running:
#. Log in to the ACRN hypervisor shell from the serial console.
#. Use the ``vm_list`` command to check the pre-launched VMs.
#. Use these steps to verify that the two pre-launched VMs are running
properly:
#. Use the ``vm_console 0`` command to switch to VM0's console.
#. VM0's OS should boot to a login prompt where you can log in.
#. Press :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` command to switch to VM1's console.
#. VM1's OS should boot to a login prompt where you can log in.
Refer to the :ref:`ACRN hypervisor shell user guide <acrnshell>`
for more information about available commands.