doc: Update IVSHMEM tutorial

- Update overview, dependencies and constraints
- Update to match Configurator UI instead of manually editing XML files
- Remove architectural details and instead point to high-level design documentation

Signed-off-by: Reyes, Amy <amy.reyes@intel.com>
Reyes, Amy 2022-06-24 15:27:50 -07:00 committed by David Kinder
parent c72cc1916f
commit 0380f8e907
4 changed files with 76 additions and 214 deletions


@@ -1,229 +1,90 @@
.. _enable_ivshmem:
Enable Inter-VM Communication Based on Ivshmem
##############################################
Enable Inter-VM Shared Memory Communication (IVSHMEM)
#####################################################
You can use inter-VM communication based on the ``ivshmem`` dm-land
solution or hv-land solution, depending on your usage scenario.
(See :ref:`ivshmem-hld` for a high-level description of these solutions.)
While both solutions can be used at the same time, VMs using different
solutions cannot communicate with each other.
About Inter-VM Shared Memory Communication (IVSHMEM)
****************************************************
Enable Ivshmem Support
**********************
Inter-VM shared memory communication allows VMs to communicate with each other
via a shared memory mechanism.
As an example, users in the industrial segment can use a shared memory region to
exchange commands and responses between a Windows VM that is taking inputs from
operators and a real-time VM that is running real-time tasks.
The ACRN Device Model or hypervisor emulates a virtual PCI device (called an
IVSHMEM device) to expose this shared memory's base address and size.
* Device Model: The IVSHMEM device is emulated in the ACRN Device Model, and the
  shared memory regions are reserved in the Service VM's memory space. This
  solution only supports communication between post-launched User VMs.
* Hypervisor: The IVSHMEM device is emulated in the hypervisor, and the shared
  memory regions are reserved in the hypervisor's memory space. This solution
  works for both pre-launched and post-launched User VMs.
While both solutions can be used in the same ACRN configuration, VMs using
different solutions cannot communicate with each other.
Dependencies and Constraints
****************************
Consider the following dependencies and constraints:
* Inter-VM shared memory communication is a hardware-neutral feature.
* Guest OSes are required to have either of the following:

  - An IVSHMEM driver, such as `virtio-WIN
    <https://github.com/virtio-win/kvm-guest-drivers-windows>`__ for Windows and
    `ivshmem APIs
    <https://docs.zephyrproject.org/apidoc/latest/group__ivshmem.html>`__ in
    Zephyr
  - A mechanism granting user-space applications access to a PCI device, such as
    the `Userspace I/O (UIO) driver
    <https://www.kernel.org/doc/html/latest/driver-api/uio-howto.html>`__ in
    Linux
Configuration Overview
**********************
The ``ivshmem`` solution is disabled by default in ACRN. You can enable
it using the :ref:`ACRN Configurator <acrn_configurator_tool>` with these
steps:
The :ref:`acrn_configurator_tool` lets you configure inter-VM shared memory
communication among VMs. The following documentation is a general overview of
the configuration process.
- Enable ``ivshmem`` via ACRN Configurator GUI.
To configure inter-VM shared memory communication among VMs, go to the
**Hypervisor Global Settings > Basic Parameters > InterVM shared memory**. Click
**+** to add the first shared memory region.
- Set ``hv.FEATURES.IVSHMEM.IVSHMEM_ENABLED`` to ``y``

.. image:: images/configurator-ivshmem01.png
   :align: center
   :class: drop-shadow

- Edit ``hv.FEATURES.IVSHMEM.IVSHMEM_REGION`` to specify the shared
  memory name, size, and communicating VMs. The ``IVSHMEM_REGION`` format is
  ``shm_name,shm_size,VM IDs``:
For the shared memory region:
- ``shm_name`` - Specify a shared memory name. The name needs to start
  with the ``hv:/`` prefix for hv-land, or ``dm:/`` for dm-land.
  For example, ``hv:/shm_region_0`` for hv-land and ``dm:/shm_region_0``
  for dm-land.
#. Enter a name for the shared memory region.
#. Select the source of the emulation, either Hypervisor or Device Model.
#. Select the size of the shared memory region.
#. Select at least two VMs that can use the shared memory region.
#. Enter a virtual Board:Device.Function (BDF) address for each VM or leave it
   blank. If the field is blank, the tool provides an address when the
   configuration is saved.
#. Add more VMs to the shared memory region by clicking **+** on the right
   side of an existing VM. Or click **-** to delete a VM.
- ``shm_size`` - Specify a shared memory size. The unit is megabytes. The
  size ranges from 2 MB to 512 MB and must be a power of 2.
  For example, to set up a shared memory region of 2 MB, use ``2``
  instead of ``shm_size``.
To add another shared memory region, click **+** on the right side of an
existing region. Or click **-** to delete a region.
- ``VM IDs`` - Specify the IDs of the VMs that use the same shared memory
  region, separated by ``:``. For example, for communication between VM0
  and VM2, write ``0:2``.

.. image:: images/configurator-ivshmem02.png
   :align: center
   :class: drop-shadow

- Build ACRN with the XML configuration; refer to :ref:`gsg`.
Learn More
**********
Ivshmem DM-Land Usage
*********************
ACRN supports multiple inter-VM communication methods. For a comparison, see
:ref:`inter-vm_communication`.
Follow `Enable Ivshmem Support`_ and
add the following line as an ``acrn-dm`` boot parameter::

   -s slot,ivshmem,shm_name,shm_size

where
- ``-s slot`` - Specify the virtual PCI slot number
- ``ivshmem`` - Virtual PCI device emulating the shared memory
- ``shm_name`` - Specify a shared memory name. This ``shm_name`` must be listed
  in ``hv.FEATURES.IVSHMEM.IVSHMEM_REGION`` in the `Enable Ivshmem Support`_
  section and must start with the ``dm:/`` prefix.
- ``shm_size`` - Shared memory size of the selected ``shm_name``.
There are two ways to insert the above boot parameter for ``acrn-dm``:
- Manually edit the launch script file. In this case, ensure that both
  ``shm_name`` and ``shm_size`` match those defined via the ACRN Configurator
  tool.
- Use the following command to create a launch script when IVSHMEM is enabled
  and ``hv.FEATURES.IVSHMEM.IVSHMEM_REGION`` is properly configured via
  the ACRN Configurator.

  .. code-block:: none
     :emphasize-lines: 5

     python3 misc/config_tools/launch_config/launch_cfg_gen.py \
         --board <path_to_your_board_xml> \
         --scenario <path_to_your_scenario_xml> \
         --launch <path_to_your_launch_script_xml> \
         --user_vmid <desired_single_vmid_or_0_for_all_vmids>

.. note:: This device can be used with a real-time VM (RTVM) as well.
.. _ivshmem-hv:
Ivshmem HV-Land Usage
*********************
Follow `Enable Ivshmem Support`_ to set up HV-Land Ivshmem support.
Ivshmem Notification Mechanism
******************************
The notification (doorbell) feature of the ivshmem device allows VMs with
ivshmem devices enabled to notify (interrupt) each other following this flow:
Notification Sender (VM):
   The VM triggers a notification to the target VM by writing the target Peer ID
   (equal to the VM ID of the target VM) and the vector index to the doorbell
   register of the ivshmem device. The layout of the doorbell register is
   described in :ref:`ivshmem-hld`.
Hypervisor:
   When the doorbell register is programmed, the hypervisor searches for the
   target VM by the target Peer ID and injects an MSI interrupt into the target VM.
Notification Receiver (VM):
   The VM receives the MSI interrupt and forwards it to the related application.
ACRN supports up to 8 (MSI-X) interrupt vectors for the ivshmem device.
Guest VMs shall implement their own mechanisms to forward MSI interrupts
to applications.
.. note:: Notification is supported only for HV-land ivshmem devices. (Future
   support may include notification for DM-land ivshmem devices.)
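
The following is a minimal sketch of a notification sender running in a Linux
guest; it is an illustration only and not part of ACRN. It assumes the ivshmem
device of the sending VM sits at the hypothetical BDF ``00:05.0``, that its
register BAR is BAR0, and that the doorbell register uses the conventional
ivshmem layout (offset ``0xC``, target peer ID in bits 31:16, vector index in
bits 15:0). Check :ref:`ivshmem-hld` for the authoritative register layout and
adjust all values to your configuration.

.. code-block:: c

   /* Hypothetical notification sender: writes the ivshmem doorbell register.
    * Run as root inside the sending VM:  ./doorbell <peer_vm_id> <vector>
    */
   #include <fcntl.h>
   #include <stdint.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <sys/mman.h>
   #include <unistd.h>

   #define DOORBELL_OFFSET 0xC  /* assumed doorbell register offset in BAR0 */

   int main(int argc, char *argv[])
   {
       /* Assumed BDF of the ivshmem device in this VM; adjust to your setup. */
       const char *bar0_path = "/sys/bus/pci/devices/0000:00:05.0/resource0";

       if (argc != 3) {
           fprintf(stderr, "usage: %s <peer_vm_id> <vector>\n", argv[0]);
           return 1;
       }
       uint16_t peer_id = (uint16_t)strtoul(argv[1], NULL, 0);
       uint16_t vector  = (uint16_t)strtoul(argv[2], NULL, 0);

       int fd = open(bar0_path, O_RDWR);
       if (fd < 0) {
           perror("open BAR0");
           return 1;
       }

       /* Map one page of the register BAR through the sysfs resource file. */
       void *map = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
       if (map == MAP_FAILED) {
           perror("mmap BAR0");
           close(fd);
           return 1;
       }
       volatile uint32_t *regs = map;

       /* Peer ID in the upper 16 bits, vector index in the lower 16 bits. */
       regs[DOORBELL_OFFSET / 4] = ((uint32_t)peer_id << 16) | vector;

       munmap(map, getpagesize());
       close(fd);
       return 0;
   }

The receiving VM sees the notification as an MSI interrupt from its ivshmem
device and forwards it to the interested application, as described above.
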
Inter-VM Communication Examples
*******************************
DM-Land Example
===============
This example uses dm-land inter-VM communication between two
Linux-based post-launched VMs (VM1 and VM2).
.. note:: An ``ivshmem`` Windows driver exists and can be found
   `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_.
1. Add a new virtual PCI device for both VMs: the device type is
   ``ivshmem``, shared memory name is ``dm:/test``, and shared memory
   size is 2 MB. Both VMs must have the same shared memory name and size:

   - VM1 Launch Script Sample

     .. code-block:: none
        :emphasize-lines: 6

        acrn-dm -m $mem_size -s 0:0,hostbridge \
          -s 5,virtio-console,@stdio:stdio_port \
          -s 8,virtio-hyper_dmabuf \
          -s 3,virtio-blk,/home/acrn/UserVM1.img \
          -s 4,virtio-net,tap=tap0 \
          -s 6,ivshmem,dm:/test,2 \
          -s 7,virtio-rnd \
          --ovmf /usr/share/acrn/bios/OVMF.fd \
          $vm_name

   - VM2 Launch Script Sample

     .. code-block:: none
        :emphasize-lines: 4

        acrn-dm -m $mem_size -s 0:0,hostbridge \
          -s 3,virtio-blk,/home/acrn/UserVM2.img \
          -s 4,virtio-net,tap=tap0 \
          -s 5,ivshmem,dm:/test,2 \
          --ovmf /usr/share/acrn/bios/OVMF.fd \
          $vm_name

2. Boot two VMs and use ``lspci | grep "shared memory"`` to verify that the
   virtual device is ready for each VM.

   - For VM1, it shows ``00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
   - For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
3. As recorded in the `PCI ID Repository <https://pci-ids.ucw.cz/read/PC/1af4>`_,
   the ``ivshmem`` device vendor ID is ``1af4`` (Red Hat) and device ID is ``1110``
   (Inter-VM shared memory). Use these commands to probe the device::

      sudo modprobe uio
      sudo modprobe uio_pci_generic
      sudo sh -c 'echo "1af4 1110" > /sys/bus/pci/drivers/uio_pci_generic/new_id'

   .. note:: These commands are applicable to Linux-based guests with
      ``CONFIG_UIO`` and ``CONFIG_UIO_PCI_GENERIC`` enabled.
4. Finally, a user application can get the shared memory base address from
   the ``ivshmem`` device BAR resource
   (``/sys/class/uio/uioX/device/resource2``) and the shared memory size from
   the ``ivshmem`` device config resource
   (``/sys/class/uio/uioX/device/config``).

   The ``X`` in ``uioX`` above is a number that can be retrieved using the
   ``ls`` command:

   - For VM1, use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
   - For VM2, use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``

   A minimal sketch of such a user application is shown after this list.
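
As mentioned in step 4, the following is a minimal sketch of a user application
that maps the shared memory through the UIO sysfs interface; it is an
illustration only and not part of ACRN. It assumes the device is bound to the
hypothetical ``uio0`` (substitute the ``uioX`` value found above) and relies on
the size of the sysfs ``resource2`` file matching the shared memory size.

.. code-block:: c

   /* Hypothetical shared-memory reader/writer using the UIO sysfs interface.
    * Build with gcc and run as root inside the guest VM.
    */
   #include <fcntl.h>
   #include <stdio.h>
   #include <sys/mman.h>
   #include <sys/stat.h>
   #include <unistd.h>

   int main(void)
   {
       /* Assumed UIO device; replace uio0 with the uioX found via ls above. */
       const char *bar2_path = "/sys/class/uio/uio0/device/resource2";

       int fd = open(bar2_path, O_RDWR);
       if (fd < 0) {
           perror("open resource2");
           return 1;
       }

       /* The size of the sysfs resource file equals the shared memory size. */
       struct stat st;
       if (fstat(fd, &st) < 0) {
           perror("fstat");
           close(fd);
           return 1;
       }

       char *shm = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
       if (shm == MAP_FAILED) {
           perror("mmap resource2");
           close(fd);
           return 1;
       }

       /* Write a message that the peer VM can read, then print the start
        * of the region. */
       snprintf(shm, 64, "hello from this VM");
       printf("shared memory size: %lld bytes, first bytes: %.32s\n",
              (long long)st.st_size, shm);

       munmap(shm, st.st_size);
       close(fd);
       return 0;
   }

Both VMs map the same region, so applications must agree on a data layout and
synchronization protocol for what they exchange (see
:ref:`inter-vm_communication`).
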
HV-Land Example
===============
This example uses hv-land inter-VM communication between two
Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM).
1. Make a copy of the predefined hybrid_rt scenario on whl-ipc-i5 (available at
   ``acrn-hypervisor/misc/config_tools/data/whl-ipc-i5/hybrid_rt.xml``) and
   configure shared memory for the communication between VM0 and VM2. The shared
   memory name is ``hv:/shm_region_0``, and the shared memory size is 2 MB. The
   resulting scenario XML should look like this:

   .. code-block:: none
      :emphasize-lines: 2,3

      <IVSHMEM>
         <IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
         <IVSHMEM_REGION>hv:/shm_region_0, 2, 0:2</IVSHMEM_REGION>
      </IVSHMEM>

2. Build ACRN based on the XML configuration for the hybrid_rt scenario on the
   whl-ipc-i5 board::

      make BOARD=whl-ipc-i5 SCENARIO=<path/to/edited/scenario.xml>

3. Add a new virtual PCI device for VM2 (post-launched VM): the device type is
   ``ivshmem``, shared memory name is ``hv:/shm_region_0``, and shared memory
   size is 2 MB.

   - VM2 Launch Script Sample

     .. code-block:: none
        :emphasize-lines: 4

        acrn-dm -m $mem_size -s 0:0,hostbridge \
          -s 3,virtio-blk,/home/acrn/UserVM2.img \
          -s 4,virtio-net,tap=tap0 \
          -s 5,ivshmem,hv:/shm_region_0,2 \
          --ovmf /usr/share/acrn/bios/OVMF.fd \
          $vm_name

4. Continue following the dm-land example, steps 2-4. Note that the ``ivshmem``
   device BDF may be different depending on the configuration.
For details on ACRN IVSHMEM high-level design, see :ref:`ivshmem-hld`.

Binary file not shown (new image, 11 KiB).

Binary file not shown (new image, 49 KiB).


@@ -86,6 +86,7 @@ background introductions).
- Applications need to implement protocols such as a handshake, data transfer, and data
  integrity.
.. _inter-vm_communication_ivshmem_app:
How to implement an Ivshmem application on ACRN
***********************************************