Doc: content updates to ACRN Config Tool and Build from Source

Signed-off-by: Deb Taylor <deb.taylor@intel.com>
Deb Taylor 2019-09-27 15:08:52 -04:00 committed by deb-intel
parent 4f9c2f3a7a
commit 96fc3fec10
9 changed files with 318 additions and 317 deletions


@ -6,69 +6,68 @@ Build ACRN from Source
Introduction
************
Following a general embedded-system programming model, the ACRN
hypervisor is designed to be customized at build time per hardware
platform and per usage scenario, rather than one binary for all
scenarios.
The hypervisor binary is generated based on Kconfig configuration
settings. Instructions about these settings can be found in
:ref:`getting-started-hypervisor-configuration`.
.. note::
A generic configuration named ``hypervisor/arch/x86/configs/generic.config``
is provided to help developers try out ACRN more easily.
This configuration works for most x86-based platforms; it is supported
with limited features. It can be enabled by specifying ``BOARD=generic``
in the ``make`` command line.
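As a quick illustration only (the ``SCENARIO`` value here is just an example;
scenarios and the full build steps are described below), a build that uses
this generic configuration could look like:
.. code-block:: none
$ make all BOARD=generic SCENARIO=sdc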
One binary for all platforms and all usage scenarios is currently not
supported, primarily because dynamic configuration parsing is restricted in
the ACRN hypervisor for the following reasons:
- **Meeting functional safety requirements.** Implementing dynamic parsing
introduces dynamic objects, which violates functional safety requirements.
- **Reduce complexity.** ACRN is a lightweight reference hypervisor, built for
embedded IoT. As new platforms for embedded systems are rapidly introduced,
support for one binary could require more and more complexity in the
hypervisor, which is something we strive to avoid.
- **Keep small footprint.** Implementing dynamic parsing introduces
hundreds or thousands of lines of code. Avoiding dynamic parsing
helps keep the hypervisor's Lines of Code (LOC) in a desirable range (around 30K).
- **Improve boot-up time.** Dynamic parsing at runtime increases the boot-up
time. Using a build-time configuration and not dynamic parsing
helps improve the boot-up time of the hypervisor.
Build the ACRN hypervisor, device model, and tools from source by following
these steps.
.. _install-build-tools-dependencies:
Step 1: Install build tools and dependencies
********************************************
ACRN development is supported on popular Linux distributions, each with
their own way to install development tools:
.. note::
ACRN uses ``menuconfig``, a python3 text-based user interface (TUI), for
configuring hypervisor options; it relies on python's ``kconfiglib`` library.
Install the necessary tools for the following systems:
* Clear Linux OS development system:
.. code-block:: none
$ sudo swupd bundle-add os-clr-on-clr os-core-dev python3-basic
$ pip3 install --user kconfiglib
* Ubuntu/Debian development system:
.. code-block:: none
@ -92,9 +91,8 @@ each with their own way to install development tools:
$ sudo pip3 install kconfiglib
.. note::
Use ``gcc`` version 7.3.* or higher to avoid running into
issue `#1396 <https://github.com/projectacrn/acrn-hypervisor/issues/1396>`_.
Follow these instructions to install the ``gcc-7`` package on Ubuntu 16.04:
.. code-block:: none
@ -104,11 +102,11 @@ each with their own way to install development tools:
$ sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 60 \
--slave /usr/bin/g++ g++ /usr/bin/g++-7
.. note::
ACRN development requires ``binutils`` version 2.27 (or higher).
Verify your version of ``binutils`` with the command ``apt show binutils``.
While Ubuntu 18.04 has a new version of ``binutils``, the default
version on Ubuntu 16.04 must be updated (see issue `#1133
<https://github.com/projectacrn/acrn-hypervisor/issues/1133>`_).
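For example, you can check the installed version first to decide whether you
need to build a newer ``binutils`` from source, as shown in the next block:
.. code-block:: none
$ apt show binutils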
.. code-block:: none
@ -119,11 +117,10 @@ each with their own way to install development tools:
$ make
$ sudo make install
.. note::
Ubuntu 14.04 requires ``libsystemd-journal-dev`` instead of ``libsystemd-dev``
as indicated above.
* Fedora/Redhat development system:
.. code-block:: none
@ -146,7 +143,7 @@ each with their own way to install development tools:
$ sudo pip3 install kconfiglib
* CentOS development system:
.. code-block:: none
@ -168,47 +165,43 @@ each with their own way to install development tools:
$ sudo pip3 install kconfiglib
.. note::
You may need to install `EPEL <https://fedoraproject.org/wiki/EPEL>`_
for installing python3 via yum for CentOS 7. For CentOS 6, you need to
install pip manually. Refer to https://pip.pypa.io/en/stable/installing
for details.
Step 2: Get the ACRN hypervisor source code
*******************************************
The `acrn-hypervisor <https://github.com/projectacrn/acrn-hypervisor/>`_
repository contains four main components:
1. The ACRN hypervisor code, located in the ``hypervisor`` directory.
#. The EFI stub code, located in the ``misc/efi-stub`` directory.
#. The ACRN device model code, located in the ``devicemodel`` directory.
#. The ACRN tools source code, located in the ``misc/tools`` directory.
Enter the following to get the acrn-hypervisor source code:
.. code-block:: none
$ git clone https://github.com/projectacrn/acrn-hypervisor
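The remaining steps assume you start from the top level of the cloned tree,
so enter the new directory after cloning:
.. code-block:: none
$ cd acrn-hypervisor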
Step 3: Build with the ACRN scenario
************************************
.. note:: Documentation about the new ACRN use-case scenarios is a
work-in-progress on the master branch as we work towards the v1.2
release.
Currently, the ACRN hypervisor defines these typical usage scenarios:
SDC:
The SDC (Software Defined Cockpit) scenario defines a simple
automotive use case that includes one pre-launched Service VM and one
post-launched User VM.
SDC2:
SDC2 (Software Defined Cockpit 2) is an extended scenario for an
automotive SDC system. SDC2 defines one pre-launched Service VM and up
to three post-launched VMs.
LOGICAL_PARTITION:
@ -221,60 +214,61 @@ INDUSTRY:
control.
HYBRID:
This scenario defines a hybrid use case with three VMs: one
pre-launched VM, one pre-launched Service VM, and one post-launched
Standard VM.
* Build ``INDUSTRY`` scenario on ``nuc7i7dnb``:
.. code-block:: none
$ make all BOARD=nuc7i7dnb SCENARIO=industry
* Build ``SDC`` scenario on ``nuc6cayh``:
.. code-block:: none
$ make all BOARD=nuc6cayh SCENARIO=sdc
See the :ref:`hardware` document for information about the platform needs
for each scenario.
.. _getting-started-hypervisor-configuration:
Step 4: Build the hypervisor configuration
******************************************
Modify the hypervisor configuration
===================================
The ACRN hypervisor leverages Kconfig to manage configurations; it is
powered by ``Kconfiglib``. A default configuration is generated based on the
board you have selected via the ``BOARD=`` command line parameter. You can
make further changes to that default configuration to adjust to your specific
requirements.
To generate hypervisor configurations, you must build the hypervisor
individually. The following steps generate a default but complete
configuration, based on the platform selected, assuming that you are at the
top level of the acrn-hypervisor directory. The configuration file, named
``.config``, can be found under the target folder of your build.
.. code-block:: none
$ make defconfig BOARD=nuc6cayh
The BOARD specified is used to select a ``defconfig`` under
``arch/x86/configs/``. The other command-line options (e.g., ``RELEASE``)
have no effect when generating a defconfig.
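If you are unsure where the generated ``.config`` file ended up for your
build, one quick (illustrative) way to locate it from the top level of the
acrn-hypervisor directory is:
.. code-block:: none
$ find . -name .config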
To modify the hypervisor configurations, you can either edit ``.config``
manually, or invoke a TUI-based menuconfig, powered by kconfiglib, by
executing ``make menuconfig``. As an example, the following commands
(assuming that you are at the top level of the acrn-hypervisor directory)
generate a default configuration file for UEFI, allow you to modify some
configurations, and build the hypervisor using the updated ``.config``:
.. code-block:: none
@ -282,29 +276,29 @@ the hypervisor using the updated ``.config``.
$ cd ../ # Enter top-level folder of acrn-hypervisor source
$ make menuconfig -C hypervisor BOARD=kbl-nuc-i7 <select industry scenario>
Note that ``menuconfig`` is python3 only.
Refer to the help on menuconfig for a detailed guide on the interface:
.. code-block:: none
$ pydoc3 menuconfig
Step 5: Build the hypervisor, device model, and tools
*****************************************************
Now you can build all these components at once as follows:
.. code-block:: none
$ make FIRMWARE=uefi # Build the UEFI hypervisor with the new .config
The build results are found in the ``build`` directory. You can specify
a different output folder by setting the ``O`` ``make`` parameter,
for example: ``make O=build-nuc BOARD=nuc6cayh``.
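As a sketch, the output-folder option can be combined with the other build
parameters shown above (the folder name here is only an example):
.. code-block:: none
$ make O=build-nuc BOARD=nuc6cayh FIRMWARE=uefi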
If you only need the hypervisor, use this command:
.. code-block:: none
@ -312,23 +306,19 @@ If you only need the hypervisor, then use this command:
$ make -C hypervisor
$ make -C misc/efi-stub HV_OBJDIR=$PWD/hypervisor/build EFI_OBJDIR=$PWD/hypervisor/build
The ``acrn.efi`` file will be generated at ``./hypervisor/build/acrn.efi``.
As mentioned in :ref:`ACRN Configuration Tool <vm_config_workflow>`, the
board configuration and VM configuration can be imported from XML files.
If you want to build the hypervisor with XML configuration files, specify
the file location as follows:
.. code-block:: none
$ make BOARD_FILE=/home/acrn-hypervisor/misc/acrn-config/xmls/board-xmls/apl-up2.xml \
SCENARIO_FILE=/home/acrn-hypervisor/misc/acrn-config/xmls/config-xmls/apl-up2/sdc.xml FIRMWARE=uefi
Note that the file paths must be absolute. The ``BOARD`` and ``SCENARIO``
parameters are not needed because that information is retrieved from the XML
files. Adjust the example above to your own environment paths.
Follow the same instructions to boot and test the images you created from your build.


@ -2,23 +2,26 @@
ACRN Configuration Tool
#######################
The ACRN configuration tool is designed for System Integrators / Tier 1s to
customize ACRN to meet their own needs. It consists of two tools: the
``Kconfig`` tool and the ``acrn-config`` tool. The latter allows users to provision
VMs via a web interface and configure the hypervisor from XML files at build time.
Introduction
************
ACRN includes three types of configurations: Hypervisor, Board, and VM. Each
is discussed in the following sections.
Hypervisor configuration
========================
The hypervisor configuration selects a working scenario and target
board by configuring the hypervisor image features and capabilities such as
setting up the log and the serial port.
The hypervisor configuration uses the ``Kconfig`` ``make
menuconfig`` mechanism. The configuration file is located at::
acrn-hypervisor/hypervisor/arch/x86/configs/Kconfig
@ -27,29 +30,30 @@ A board-specific ``defconfig`` file, located at::
acrn-hypervisor/hypervisor/arch/x86/configs/$(BOARD)/$(BOARD).config
is loaded first; it is the default ``Kconfig`` for the specified board.
Board configuration
===================
The board configuration stores board-specific settings referenced by the
ACRN hypervisor. This includes **scenario-relevant** information such as
board settings, root device selection, and the kernel cmdline. It also includes
**scenario-irrelevant** hardware-specific information such as ACPI/PCI
and BDF information. The board configuration is organized as
``*.c/*.h`` files located at::
acrn-hypervisor/hypervisor/arch/x86/$(BOARD)/
VM configuration
=================
VM configuration includes **scenario-based** VM configuration
information that is used to describe the characteristics and attributes for
VMs in each user scenario. It also includes **launch script-based** VM
configuration information, where parameters are passed to the device model
to launch post-launched User VMs.
Scenario-based VM configurations are organized as ``*.c/*.h`` files located at::
acrn-hypervisor/hypervisor/scenarios/$(SCENARIO)/
@ -57,121 +61,147 @@ User VM launch script samples are located at::
acrn-hypervisor/devicemodel/samples/
ACRN configuration XMLs
***********************
The ACRN configuration includes three kinds of XMLs for acrn-config usage:
``board``, ``scenario``, and ``launch`` XML. All scenario-irrelevant,
hardware-specific information for the board configuration is stored in the
``board`` XML. This XML is generated by
``misc/acrn-config/target/board_parser.py``, which runs on the target board.
The scenario-relevant board configuration and the scenario-based VM
configuration are stored in the ``scenario`` XML. The launch script-based VM
configuration is stored in the ``launch`` XML. These two XMLs can be
customized with the web interface tool at
``misc/acrn-config/config-app/app.py``. End users can load their own
configurations by importing customized XMLs, or save their configurations by
exporting XMLs.
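As a sketch of how the ``board`` XML is produced on the target board (the
board name ``apl-up2`` is only an example; the full workflow is described
later in this document):
.. code-block:: none
$ cd misc/acrn-config/target
$ sudo python3 board_parser.py apl-up2
$ ls out/   # the generated apl-up2.xml is written here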
Board XML format
================
The board XML has an ``acrn-config`` root element and a ``board`` attribute:
.. code-block:: xml
<acrn-config board="BOARD">
As an input for the ``acrn-config`` tool, end users do not need to care
about the format of the board XML and should not modify it.
Scenario XML format
===================
The scenario XML has an ``acrn-config`` root element as well as ``board``
and ``scenario`` attributes:
.. code-block:: xml
<acrn-config board="BOARD" scenario="SCENARIO">
Additional scenario XML elements:
``vm``: Specify the VM with VMID by its "id" attribute.
``load_order``: Specify the VM by its load order: PRE_LAUNCHED_VM, SOS_VM, or POST_LAUNCHED_VM.
``name`` under parent of ``vm``: Specify the VM name shown in the hypervisor console command vm_list.
``uuid``: UUID of the VM. It is for internal use and is not configurable.
``guest_flags``: Select all applicable flags for the VM.
``size`` under parent of ``epc_section``: SGX EPC section size in Bytes; must be page aligned.
``base`` under parent of ``epc_section``: SGX EPC section base; must be page aligned.
``clos``: Class of Service for Cache Allocation Technology. Refer to SDM 17.19.2 for details and use with caution.
``start_hpa``: The start physical address in the host for the VM.
``size`` under parent of ``memory``: The memory size in Bytes for the VM.
``name`` under parent of ``os_config``: Specify the OS name of the VM; currently, it is not referenced by the hypervisor code.
``kern_type``: Specify the kernel image type so that the hypervisor can load it correctly. Currently supports KERNEL_BZIMAGE and KERNEL_ZEPHYR.
``kern_mod``: The tag for the kernel image that acts as a multiboot module; it must exactly match the module tag in the GRUB multiboot cmdline.
``bootargs`` under parent of ``os_config``: For internal use and not configurable. Specify the kernel boot arguments in bootargs under the parent of board_private.
``vuart``: Specify the vuart (A.K.A COM) with the vUART ID by its "id" attribute. Refer to :ref:`vuart_config` for detailed vUART settings.
``type`` under parent of ``vuart``: vUART (A.K.A COM) type; currently only the legacy PIO mode is supported.
``base`` under parent of ``vuart``: vUART (A.K.A COM) enabling switch. Enable by exposing its COM_BASE (SOS_COM_BASE for the Service VM); disable by returning INVALID_COM_BASE.
``irq`` under parent of ``vuart``: vCOM irq.
``target_vm_id``: COM2 is used for VM communications. When it is enabled, specify which target VM the current VM connects to.
``target_uart_id``: Target vUART ID that vCOM2 connects to.
``pci_dev_num``: Number of PCI devices of the VM; it is hard-coded for each scenario so it is not configurable for now.
``pci_devs``: PCI devices list of the VM; it is hard-coded for each scenario so it is not configurable for now.
``board_private``: Stores scenario-relevant board configuration.
``rootfs``: rootfs for the Linux kernel.
``console``: ttyS console for the Linux kernel.
``bootargs`` under parent of ``board_private``: Specify kernel boot arguments.
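These scenario elements are consumed by the offline scenario configuration
generator described later in the workflow section; as a sketch, the
scenario-based configuration code is generated with:
.. code-block:: none
cd misc/scenario_config
python3 scenario_cfg_gen.py --board $(BOARD).xml --scenario $(SCENARIO).xml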
Launch XML format
=================
The launch XML has an ``acrn-config`` root element as well as
``board``, ``scenario``, and ``uos_launcher`` attributes:
.. code-block:: xml
<acrn-config board="BOARD" scenario="SCENARIO" uos_launcher="UOS_NUMBER">
The ``uos_launcher`` attribute specifies the number of User VMs that the current scenario has:
``uos``: Specify the User VM with its relative ID to the Service VM by the "id" attribute.
``uos_type``: Specify the User VM type, such as CLEARLINUX, ANDROID, or VXWORKS.
``rtos_type``: Specify the User VM Realtime capability: Soft RT, Hard RT, or none of them.
``cpu_num``: Specify the maximum number of CPUs for the VM.
``mem_size``: Specify the User VM memory size in Mbyte.
``gvt_args``: GVT argument for the VM.
``vbootloader``: Virtual bootloader type; currently only OVMF is supported.
``rootfs_dev``: The device where the User VM rootfs is located.
``rootfs_img``: The User VM rootfs image file, including its path.
``console_type``: Specify whether the User VM console is virtio or vUART; refer to :ref:`vuart_config` for details.
``poweroff_channel``: Specify whether the User VM power-off channel is through the IOC, Powerbutton, or vUART.
``passthrough_devices``: Select the passthrough devices from the lspci list; currently supported selections are usb_xdci, audio, audio_codec, ipu, ipu_i2c, cse, wifi, Bluetooth, sd_card, ethernet, sata, and nvme.
.. note::
The ``configurable`` and ``readonly`` attributes are used to mark whether an item is configurable by users. When ``configurable="0"`` and ``readonly="true"``, the item is not configurable from the web interface. When ``configurable="0"``, the item does not appear on the interface.
Configuration tool workflow
***************************
Hypervisor configuration workflow
==================================
The hypervisor configuration is based on the ``Kconfig`` ``make menuconfig``
mechanism. Begin by creating a board-specific ``defconfig`` file to
set up the default ``Kconfig`` values for the specified board.
Next, configure the hypervisor build options using the ``make
menuconfig`` graphical interface. The resulting ``.config`` file is
used by the ACRN build process to create a configured scenario- and
board-specific hypervisor image.
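A minimal sketch of this flow, reusing the commands from
:ref:`getting-started-hypervisor-configuration` (the board name is only an
example), run from the top level of the acrn-hypervisor directory:
.. code-block:: none
$ make defconfig BOARD=nuc6cayh
$ make menuconfig -C hypervisor BOARD=nuc6cayh
$ make FIRMWARE=uefi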
@ -185,13 +215,15 @@ board-specific hypervisor image.
menuconfig interface sample
Refer to :ref:`getting-started-hypervisor-configuration` for
detailed configuration steps.
.. _vm_config_workflow:
Board and VM configuration workflow
===================================
Python offline tools are provided to configure Board and VM configurations.
The tool source folder is located at::
@ -199,13 +231,13 @@ The tool source folder is located at::
Here is the offline configuration tool workflow:
#. Get the board info.
a. Set up a native Linux environment on the target board.
#. Copy the ``target`` folder into the target file system and then run the
``sudo python3 board_parser.py $(BOARD)`` command.
#. A ``$(BOARD).xml`` file that includes all needed hardware-specific information
is generated in the ``./out/`` folder. (Here, ``$(BOARD)`` is the
specified board name.)
| **Native Linux requirement:**
@ -215,197 +247,176 @@ Here is the offline configuration tool workflow:
#. Customize the configuration to your needs.
a. Copy ``$(BOARD).xml`` to the host development machine.
#. Run the ``misc/acrn-config/config-app/app.py`` tool on the host machine and import the ``$(BOARD).xml``. Select your working scenario under **Scenario Setting** and input the desired scenario settings. The tool does a sanity check on the input based on the ``$(BOARD).xml``. The customized settings can be exported to your own ``$(SCENARIO).xml``.
#. In the configuration tool UI, input the launch script parameters for the post-launched User VM under **Launch Setting**. The tool sanity checks the input based on both the ``$(BOARD).xml`` and ``$(SCENARIO).xml`` and then exports the settings to your ``$(LAUNCH).xml``.
#. The user-defined XMLs can be imported by acrn-config for modification.
.. note:: Refer to :ref:`acrn_config_tool_ui` for more details on
the configuration tool UI.
3. Auto-generate the code.
Python tools are used to generate configurations in patch format.
The patches are applied to your local ``acrn-hypervisor`` git tree
automatically.
a. Generate a patch for the board-related configuration::
cd misc/board_config
python3 board_cfg_gen.py --board $(BOARD).xml --scenario $(SCENARIO).xml
Note that this can also be done by clicking **Generate Board SRC** in the acrn-config UI.
#. Generate a patch for the scenario-based VM configuration::
cd misc/scenario_config
python3 scenario_cfg_gen.py --board $(BOARD).xml --scenario $(SCENARIO).xml
Note that this can also be done by clicking **Generate Scenario SRC** in the acrn-config UI.
#. Generate the launch script for the specified post-launched User VM::
cd misc/launch_config
python3 launch_cfg_gen.py --board $(BOARD).xml --scenario $(SCENARIO).xml --launch $(LAUNCH).xml
Note that this can also be done by clicking **Generate Launch Script** in the acrn-config UI.
#. Re-build the ACRN hypervisor. Refer to
:ref:`getting-started-building` to re-build the ACRN hypervisor on the host
machine; a command sketch follows this list.
#. Deploy VMs and run the ACRN hypervisor on the target board.
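As a sketch of the re-build step, assuming the XML files exported above and
absolute paths adjusted to your own environment (see
:ref:`getting-started-building` for the full build instructions):
.. code-block:: none
$ make BOARD_FILE=/absolute/path/to/$(BOARD).xml SCENARIO_FILE=/absolute/path/to/$(SCENARIO).xml FIRMWARE=uefi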
.. figure:: images/offline_tools_workflow.png
:align: center
Offline tool workflow
.. _acrn_config_tool_ui:
Use the ACRN configuration app
******************************
The ACRN configuration app is a web user interface application that performs the following:
- reads board info
- configures and validates scenario settings
- automatically generates patches for board-related configurations and
scenario-based VM configurations
- configures and validates launch settings
- generates launch scripts for the specified post-launched User VMs
Prerequisites
=============
.. _get acrn repo guide:
https://projectacrn.github.io/latest/getting-started/building-from-source.html#get-the-acrn-hypervisor-source-code
- Follow the :ref:`instruction <getting-started-building>` to install the
ACRN hypervisor dependencies and tools on your development host.
- Follow the `get acrn repo guide`_ to download the ACRN hypervisor repo to your host.
- Install the ACRN configuration app dependencies:
.. code-block:: none
$ cd ~/acrn-hypervisor/misc/acrn-config/config_app
$ sudo pip3 install -r requirements
Instructions
============
#. Launch the ACRN configuration app:
.. code-block:: none
$ python3 app.py
#. Open a browser and go to the site `<http://127.0.0.1:5001/>`_; the app
may open the page automatically, or you may need to visit it manually.
Make sure the browser can reach the open Internet because the app needs to
download some JavaScript files.
.. note:: The ACRN configuration app is supported on Chrome, Firefox, and MS Edge; do not use IE.
The website is shown below:
.. figure:: images/config_app_main_menu.png
:align: center
:scale: 70%
:name: ACRN config tool main menu
#. Set the board info:
a. Click **Import Board info**.
.. figure:: images/click_import_board_info_button.png
:align: center
:scale: 70%
#. Upload the board info you have generated from the ACRN config tool.
#. After the board info is uploaded, you will see the board name in the
Board info list. Select the board name to be configured.
.. figure:: images/select_board_info.png
:align: center
:scale: 70%
#. Choose a scenario from the **Scenario Setting** menu, which lists all the scenarios,
including the default scenarios and the user-defined scenarios, for the board you selected
in the previous step. The scenario configuration xmls are located at
``acrn-hypervisor/misc/xmls/config-xmls/[board]/``.
.. figure:: images/choose_scenario.png
:align: center
:scale: 70%
Note that you can also use a customized scenario xml by clicking **Import**.
The configuration app automatically directs to the new scenario xml once the import is complete.
#. The configurable items are displayed after a scenario is selected. Here
is an example of the "SDC" scenario:
.. figure:: images/configure_scenario.png
:align: center
:scale: 70%
- You can edit these items directly in the text boxes, or you can choose single or even multiple
items from the drop down list.
- Read-only items are marked as grey.
- Hover the mouse pointer over the item to display the description.
#. Click **Export** to save the scenario xml; you can rename it in the pop-up modal.
.. note:: All customized scenario xmls will be in user-defined groups, which are located in
``acrn-hypervisor/misc/xmls/config-xmls/[board]/user_defined/``.
Before saving the scenario xml, the configuration app validates the
configurable items. If errors exist, the configuration app lists all
incorrect configurable items and shows the errors as below:
.. figure:: images/err_acrn_configuration.png
:align: center
:scale: 70%
After the scenario is saved, the page automatically directs to the saved scenario xmls.
You can delete the configured scenario by clicking **Export** -> **Remove**.
#. Click **Generate Board SRC** to save the current scenario setting and then generate
a patch for the board-related configuration source code in
``acrn-hypervisor/hypervisor/arch/x86/configs/[board]/``.
#. Click **Generate Scenario SRC** to save the current scenario setting and then generate
a patch for the scenario-based VM configuration source code in
``acrn-hypervisor/hypervisor/scenarios/[scenario]/``.
The **Launch Setting** is quite similar to the **Scenario Setting**:
a. Upload board info or select one board as the current board.
#. Import your local launch setting xml by clicking **Import** or selecting one launch setting xml from the menu.
#. Select one scenario for the current launch setting from the **Select Scenario** drop down box.
#. Configure the items for the current launch setting.
#. Save the current launch setting to the user-defined xml files by clicking **Export**. The configuration app validates the current configuration and lists all incorrect configurable items and shows errors.
#. Click **Generate Launch Script** to save the current launch setting and then generate the launch script.
.. figure:: images/generate_launch_script.png
:align: center
:scale: 70%
