doc: remove VBSK related content

VBS-K is not supported anymore; clean the documents up.

Track-On: #6738
Signed-off-by: hangliu1 <hang1.liu@linux.intel.com>
hangliu1 2021-11-12 05:01:09 -05:00 committed by David Kinder
parent 83466595c2
commit 195d3df4c6
4 changed files with 6 additions and 262 deletions


@@ -25,8 +25,6 @@ also find details about specific architecture topics.
developer-guides/sw_design_guidelines
developer-guides/trusty
developer-guides/l1tf
developer-guides/VBSK-analysis
Contribute Guides
*****************


@@ -1,144 +0,0 @@
.. _vbsk-overhead:
VBS-K Framework Virtualization Overhead Analysis
################################################
Introduction
************
The ACRN Hypervisor follows the Virtual I/O Device (virtio) specification to
realize I/O virtualization for many performance-critical devices supported in
the ACRN project. The hypervisor provides the virtio backend service (VBS)
APIs, which make it very straightforward to implement a virtio device in the
hypervisor. We can evaluate the virtio backend service in kernel-land (VBS-K)
framework overhead through a test virtual device called virtio-echo. The
total overhead of a frontend-backend application based on VBS-K consists of
VBS-K framework overhead and application-specific overhead. The
application-specific overhead depends on the specific frontend-backend
design and can range from microseconds to seconds. On our hardware, the
overall VBS-K framework overhead is on the microsecond level, which is
sufficient to meet the needs of most applications.
Architecture of VIRTIO-ECHO
***************************
virtio-echo is a virtual device based on virtio, and designed for testing
ACRN virtio backend services in the kernel (VBS-K) framework. It includes a
virtio-echo frontend driver, a virtio-echo driver in ACRN device model (DM)
for initialization, and a virtio-echo driver based on VBS-K for data reception
and transmission. For more background on virtualization, refer to:
* :ref:`introduction`
* :ref:`virtio-hld`
virtio-echo is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS
(User VM). The virtio-echo software has three parts:
- **virtio-echo Frontend Driver**: This driver runs in the User VM. When
the User VM starts, it prepares the RXQ and notifies the backend that it is
ready to receive incoming data. It then copies the received data from the
RXQ to the TXQ and sends it back to the backend. After receiving the message
that the transmission is complete, it starts another round of reception and
transmission, and keeps running until a specified number of cycles is
reached.
- **virtio-echo Driver in DM**: This driver handles the initialization
configuration. It emulates a virtual PCI device for the frontend driver
to use, and passes the necessary information, such as the device
configuration and virtqueue information, to the VBS-K. After
initialization, all data exchange is taken over by the VBS-K vbs-echo
driver.
- **vbs-echo Backend Driver**: This driver sets all frontend RX buffers to
a specific value and sends the data to the frontend driver. After
receiving the data in the RXQ, the frontend driver copies the data to the
TXQ and sends it back to the backend. The backend driver then notifies the
frontend driver that the data in the TXQ has been successfully received.
In virtio-echo, the backend driver doesn't process or use the received
data. A simplified model of this echo loop is sketched after this list.
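The following stand-alone C sketch models only the data flow of that loop:
the backend fills the RX buffers with a fixed value, the frontend copies the
RXQ contents to the TXQ, and the backend receives the echoed data. It is an
illustration rather than the actual driver code; the buffer size and cycle
count are arbitrary assumptions.

.. code-block:: c

   /* Stand-alone model of the virtio-echo data flow; not the driver code. */
   #include <stdio.h>
   #include <string.h>

   #define BUF_SIZE 64   /* assumed buffer size */
   #define CYCLES   4    /* assumed cycle count */

   int main(void)
   {
           unsigned char rxq[BUF_SIZE], txq[BUF_SIZE];

           for (int cycle = 0; cycle < CYCLES; cycle++) {
                   /* backend: set all frontend RX buffers to a specific value */
                   memset(rxq, 0xA5, sizeof(rxq));

                   /* frontend: copy the received data from the RXQ to the TXQ */
                   memcpy(txq, rxq, sizeof(rxq));

                   /* backend: receives the TXQ data; virtio-echo does not use it */
                   printf("cycle %d: %zu bytes echoed back\n", cycle, sizeof(txq));
           }
           return 0;
   }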
:numref:`vbsk-virtio-echo-arch` shows the whole architecture of virtio-echo.
.. figure:: images/vbsk-image2.png
:width: 900px
:align: center
:name: vbsk-virtio-echo-arch
virtio-echo Architecture
Virtualization Overhead Analysis
********************************
Let's analyze the overhead of the VBS-K framework. The VBS-K framework
handles notifications in the Service VM kernel instead of in the Service VM
user space DM, which avoids the overhead of switching between kernel space
and user space. Virtqueues are allocated by the User VM, and the virtqueue
information is passed to the VBS-K backend by the virtio-echo driver in the
DM; thus the virtqueues can be shared between the User VM and the Service
VM, and there is no copy overhead in this sense. The overhead of the VBS-K
framework mainly consists of two parts: kick overhead and notify overhead.
- **Kick Overhead**: The User VM is trapped when it executes a sensitive
instruction, and the hypervisor receives the notification first. The
notification is assembled into an IOREQ, saved in a shared IO page, and
then forwarded to the HSM module by the hypervisor. The HSM notifies its
client for this IOREQ; in this case, the client is the vbs-echo backend
driver. Kick overhead is defined as the interval from the beginning of the
User VM trap to when the specific VBS-K driver, here virtio-echo, gets
notified.
- **Notify Overhead**: After the data in the virtqueue has been processed
by the backend driver, vbs-echo calls the HSM module to inject an interrupt
into the frontend. The HSM then uses the hypercall provided by the
hypervisor, which causes a User VM VMEXIT. The hypervisor finally injects
an interrupt into the vLAPIC of the User VM and resumes it, so the User VM
receives the interrupt notification. Notify overhead is defined as the
interval from the beginning of the interrupt injection to when the User VM
starts interrupt processing. (Both intervals are illustrated by the sketch
after this list.)
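Both overheads are simply the time difference between the two events named
in each definition. The toy program below shows only that arithmetic,
converting TSC deltas to microseconds; the timestamp values and the TSC
frequency are made-up assumptions, not ACRN trace data.

.. code-block:: c

   /* Illustration of the overhead arithmetic only; the trace points,
    * timestamps, and TSC frequency are assumptions. */
   #include <stdint.h>
   #include <stdio.h>

   #define TSC_KHZ 2400000ULL   /* assumed 2.4 GHz TSC */

   static double tsc_to_us(uint64_t cycles)
   {
           return (double)cycles * 1000.0 / (double)TSC_KHZ;
   }

   int main(void)
   {
           /* example timestamps captured at the events defined above */
           uint64_t t_trap = 1000000, t_backend_notified = 1008400;  /* kick   */
           uint64_t t_inject = 2000000, t_guest_isr_entry = 2006000; /* notify */

           printf("kick overhead:   %.2f us\n",
                  tsc_to_us(t_backend_notified - t_trap));
           printf("notify overhead: %.2f us\n",
                  tsc_to_us(t_guest_isr_entry - t_inject));
           return 0;
   }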
The overhead of a specific application based on VBS-K includes two parts:
VBS-K framework overhead and application-specific overhead.
- **VBS-K Framework Overhead**: As defined above, VBS-K framework overhead
refers to kick overhead and notify overhead.
- **Application-Specific Overhead**: A specific virtual device has its own
frontend driver and backend driver. The application-specific overhead
depends on its own design.
:numref:`vbsk-virtio-echo-e2e` shows the overhead of one end-to-end
operation in virtio-echo. The overhead of the steps marked in red is caused
by the virtualization scheme based on the VBS-K framework. The costs of one
"kick" operation and one "notify" operation are both on the microsecond
level. The overhead of the steps marked in blue depends on the specific
frontend and backend virtual device drivers. For virtio-echo, the whole
end-to-end process (from step 1 to step 9) costs about four dozen
microseconds. That's because virtio-echo performs only small, test-oriented
operations in its frontend and backend drivers, so there is very little
processing overhead.
.. figure:: images/vbsk-image1.png
:width: 600px
:align: center
:name: vbsk-virtio-echo-e2e
End to End Overhead of virtio-echo
:numref:`vbsk-virtio-echo-path` details the paths of the kick and notify
operations shown in :numref:`vbsk-virtio-echo-e2e`. The VBS-K framework
overhead is caused by operations along these paths. As we can see, all
these operations are processed in kernel mode, which avoids the extra
overhead of passing the IOREQ to user space for processing.
.. figure:: images/vbsk-image3.png
:width: 900px
:align: center
:name: vbsk-virtio-echo-path
Path of VBS-K Framework Overhead
Conclusion
**********
Unlike VBS-U, which processes requests in user mode, VBS-K moves processing
into kernel mode and can be used to accelerate it. A virtual device,
virtio-echo, based on the VBS-K framework is used to evaluate the VBS-K
framework overhead. In our test, the VBS-K framework overhead (one kick
operation and one notify operation) is on the microsecond level, which can
meet the needs of most applications.


@@ -214,26 +214,6 @@ virtqueues, feature mechanisms, configuration space, and buses.
Virtio Frontend/Backend Layered Architecture
Virtio Framework Considerations
===============================
How to implement the virtio framework is specific to a
hypervisor implementation. In ACRN, the virtio framework implementations
can be classified into two types, virtio backend service in userland
(VBS-U) and virtio backend service in kernel-land (VBS-K), according to
where the virtio backend service (VBS) is located. Although different in BE
drivers, both VBS-U and VBS-K share the same FE drivers. The reason
behind the two virtio implementations is to meet the requirement of
supporting a large number of diverse I/O devices in the ACRN project.
When developing a virtio BE device driver, the device owner should choose
carefully between VBS-U and VBS-K. Generally, VBS-U targets
non-performance-critical devices but enables easy development and
debugging, while VBS-K targets performance-critical devices.
The next two sections introduce ACRN's two implementations of the virtio
framework.
Userland Virtio Framework
==========================
@@ -266,49 +246,15 @@ virtqueue through the user-level vring service API helpers.
Kernel-Land Virtio Framework
============================
ACRN supports two kernel-land virtio frameworks:
ACRN supports one kernel-land virtio framework:
* VBS-K, designed from scratch for ACRN
* Vhost, compatible with Linux Vhost
VBS-K Framework
---------------
The architecture of ACRN VBS-K is shown in
:numref:`kernel-virtio-framework` below.
Generally, VBS-K accelerates performance-critical devices emulated by
VBS-U modules by handling the "data plane" of those devices directly in
the kernel. When VBS-K is enabled for certain devices, the kernel-land
vring service API helpers, instead of the userland helpers, are used to
access the virtqueues shared by the FE driver. Compared to VBS-U, this
eliminates the overhead of copying data back and forth between userland
and kernel-land within the Service VM, but adds implementation complexity
to the BE drivers.
Except for the differences mentioned above, VBS-K still relies on VBS-U
for feature negotiation between the FE and BE drivers. This means the
"control plane" of the virtio device remains in VBS-U. When feature
negotiation is done, which the FE driver indicates by setting a flag, the
VBS-K module is initialized by VBS-U. Afterward, all request handling is
offloaded to VBS-K in the kernel.
Finally, the FE driver is not aware of whether the BE driver is implemented
in VBS-U or VBS-K, which saves engineering effort in FE driver development.
.. figure:: images/virtio-hld-image54.png
:align: center
:name: kernel-virtio-framework
ACRN Kernel-Land Virtio Framework
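As a concrete illustration of this handoff, the minimal sketch below shows
what the VBS-U side could look like: once feature negotiation is complete,
the userland module opens the VBS-K char device and pushes the negotiated
device and virtqueue information down through ioctls (see the VBS-K key
data structures later in this document). The device node name, ioctl codes,
and structure layouts are invented for illustration and are not the real
ACRN interface.

.. code-block:: c

   /* Hypothetical VBS-U -> VBS-K handoff; names and ioctls are invented. */
   #include <fcntl.h>
   #include <stdint.h>
   #include <stdio.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   struct k_dev_info { uint64_t negotiated_features; int nvq; }; /* assumed */
   struct k_vq_info  { uint64_t desc_gpa; uint16_t size; };      /* assumed */

   #define VBS_K_SET_DEV _IOW('v', 1, struct k_dev_info)  /* assumed codes */
   #define VBS_K_SET_VQ  _IOW('v', 2, struct k_vq_info)

   int vbs_k_handoff(const struct k_dev_info *dev, const struct k_vq_info *vqs)
   {
           int fd = open("/dev/vbs_echo", O_RDWR);   /* assumed device node */
           if (fd < 0) {
                   perror("open VBS-K char device");
                   return -1;
           }
           /* feature negotiation already happened in VBS-U; only the
            * results are synchronized down to the kernel module */
           if (ioctl(fd, VBS_K_SET_DEV, dev) < 0)
                   goto err;
           for (int i = 0; i < dev->nvq; i++)
                   if (ioctl(fd, VBS_K_SET_VQ, &vqs[i]) < 0)
                           goto err;
           /* from here on, the kernel module handles the data plane */
           return fd;
   err:
           perror("ioctl");
           close(fd);
           return -1;
   }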
Vhost Framework
---------------
Vhost is similar to VBS-K. Vhost is a common solution upstreamed in the
Linux kernel, with several kernel mediators based on it.
Vhost is a common solution upstreamed in the Linux kernel,
with several kernel mediators based on it.
Architecture
~~~~~~~~~~~~
@@ -448,51 +394,6 @@ DM, and DM finds other key data structures through it. The ``struct
virtio_ops`` abstracts a series of virtio callbacks to be provided by the
device owner.
VBS-K Key Data Structures
=========================
The key data structures for VBS-K are listed as follows, and their
relationships are shown in :numref:`VBS-K-data`.
``struct vbs_k_rng``
In-kernel VBS-K component handling data plane of a
VBS-U virtio device, for example, virtio random_num_generator.
``struct vbs_k_dev``
In-kernel VBS-K component common to all VBS-K.
``struct vbs_k_vq``
In-kernel VBS-K component for working with kernel
vring service API helpers.
``struct vbs_k_dev_inf``
Virtio device information to be synchronized
from VBS-U to VBS-K kernel module.
``struct vbs_k_vq_info``
A single virtqueue information to be
synchronized from VBS-U to VBS-K kernel module.
``struct vbs_k_vqs_info``
Virtqueue information, of a virtio device,
to be synchronized from VBS-U to VBS-K kernel module.
.. figure:: images/virtio-hld-image8.png
:width: 900px
:align: center
:name: VBS-K-data
VBS-K Key Data Structures
In VBS-K, the ``struct vbs_k_xxx`` represents the in-kernel component
handling a virtio device's data plane. It presents a char device for VBS-U
to open and register the device status after feature negotiation with the
FE driver.
The device status includes the negotiated features, number of virtqueues,
interrupt information, and more. All these statuses are synchronized
from VBS-U to VBS-K. In VBS-U, the ``struct vbs_k_dev_info`` and ``struct
vbs_k_vqs_info`` collect all the information and notify VBS-K through
ioctls. In VBS-K, the ``struct vbs_k_dev`` and ``struct vbs_k_vq``, which
are common to all VBS-K modules, are the counterparts that preserve the
related information. This information is necessary for the kernel-land
vring service API helpers.
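To make these relationships easier to see, here is a much-simplified C
sketch of the structures. Only the structure names come from the list
above; every field shown is an illustrative assumption, not the actual
kernel or DM definition.

.. code-block:: c

   /* Simplified, illustrative layout; field names are assumptions. */
   #include <stdint.h>

   #define VBS_K_MAX_VQS 8                    /* assumed maximum */

   /* synchronized from VBS-U to the VBS-K kernel module */
   struct vbs_k_dev_inf {
           uint64_t negotiated_features;
           int      nvq;
   };

   struct vbs_k_vq_info {                     /* one virtqueue */
           uint64_t desc_gpa;                 /* guest-physical ring address */
           uint16_t qsize;
   };

   struct vbs_k_vqs_info {                    /* all virtqueues of a device */
           int                  nvq;
           struct vbs_k_vq_info vqs[VBS_K_MAX_VQS];
   };

   /* in-kernel counterparts, common to all VBS-K modules */
   struct vbs_k_vq {
           struct vbs_k_vq_info info;         /* plus kernel vring state */
   };

   struct vbs_k_dev {
           struct vbs_k_dev_inf info;
           struct vbs_k_vq      vq[VBS_K_MAX_VQS];
   };

   /* device-specific component handling the data plane, e.g. the
    * virtio random number generator */
   struct vbs_k_rng {
           struct vbs_k_dev kdev;             /* common device state */
           /* device-specific state goes here */
   };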
VHOST Key Data Structures
=========================
@@ -547,8 +448,7 @@ VBS APIs
========
The VBS APIs are exported by VBS related modules, including VBS, DM, and
Service VM kernel modules. They can be classified into VBS-U and VBS-K APIs
listed as follows.
Service VM kernel modules.
VBS-U APIs
----------
@@ -583,12 +483,6 @@ the virtio framework within DM will invoke them appropriately.
.. doxygenfunction:: virtio_config_changed
:project: Project ACRN
VBS-K APIs
----------
The VBS-K APIs are exported by VBS-K related modules. Users can use
the following APIs to implement their VBS-K modules.
APIs Provided by DM
~~~~~~~~~~~~~~~~~~~
@@ -674,10 +568,7 @@ VQ APIs
The virtqueue APIs, or VQ APIs, are used by a BE device driver to
access the virtqueues shared by the FE driver. The VQ APIs abstract the
details of virtqueues so that users don't need to worry about the data
structures within the virtqueues. In addition, the VQ APIs are designed
to be identical between VBS-U and VBS-K, so that users don't need to
learn different APIs when implementing BE drivers based on VBS-U and
VBS-K.
structures within the virtqueues.
.. doxygenfunction:: vq_interrupt
:project: Project ACRN
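For reference, a typical BE driver's queue-notify handler drains a
virtqueue with a handful of these VQ APIs. The sketch below paraphrases
that usage pattern; the exact function signatures and header paths should
be verified against the ACRN DM sources.

.. code-block:: c

   /* Sketch of a BE queue-notify handler; signatures are paraphrased. */
   #include <stdint.h>
   #include <sys/uio.h>   /* struct iovec */
   #include "virtio.h"    /* ACRN DM virtio framework (path may differ) */

   static void
   demo_notify_handler(void *vdev, struct virtio_vq_info *vq)
   {
           struct iovec iov[1];
           uint16_t idx, flags;

           (void)vdev;    /* device-specific state would be used here */

           while (vq_has_descs(vq)) {
                   /* pull the next descriptor chain shared by the FE driver */
                   if (vq_getchain(vq, &idx, iov, 1, &flags) <= 0)
                           break;

                   /* ... consume or fill iov[0].iov_base / iov[0].iov_len ... */

                   /* return the chain to the used ring, reporting the length */
                   vq_relchain(vq, idx, iov[0].iov_len);
           }
           /* interrupt the FE driver if any chains were returned */
           vq_endchains(vq, 1);
   }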


@@ -601,8 +601,7 @@ arguments used for configuration. Here is a table describing these emulated dev
a FE GPIO, you can set a new name here.
* - ``virtio-rnd``
- Virtio random generater type device, with string ``kernel=on`` to
select the VBSK virtio backend. The VBSU virtio backend is used by default.
- Virtio random generator type device; the VBSU virtio backend is used by default.
* - ``virtio-rpmb``
- Virtio Replay Protected Memory Block (RPMB) type device, with