doc: remove doc dependency on kerneldoc and acrn-kernel repo

We no longer need to generate API documentation for the upstreamed
GVT-g kernel additions, so we can remove the doc generation dependency on
the acrn-kernel repo (and all use of the kerneldoc extension). We also
remove the GVT-g API documentation and porting guide, which are obsolete
with ACRN v2.6 and referenced this API documentation.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder authored 2021-07-13 10:27:23 -07:00, committed by David Kinder
parent c4cb95f3b4
commit 24b555c75d
12 changed files with 5 additions and 2691 deletions


@@ -56,10 +56,6 @@ content:
$(Q)rsync -rt ../misc/config_tools/schema/*.xsd $(SOURCEDIR)/misc/config_tools/schema
$(Q)xsltproc -xinclude ./scripts/configdoc.xsl $(SOURCEDIR)/misc/config_tools/schema/config.xsd > $(SOURCEDIR)/reference/configdoc.txt
# Used to pull the acrn kernel source (for API docs)
pullsource:
$(Q)scripts/pullsource.sh
html: content doxy
@echo making HTML content


@@ -1,226 +0,0 @@
.. _GVT-g_api:
ACRN GVT-g APIs
###############
GVT-g is Intel's open source GPU virtualization solution and is up-streamed to
the Linux kernel. Its implementation over KVM is named KVMGT, over Xen it is
named XenGT, and over ACRN it is named AcrnGT. GVT-g can export multiple
virtual GPU (vGPU) instances for virtual machine (VM) systems. A VM can be
assigned one vGPU instance. The guest OS graphics driver needs only minor
modification to drive the vGPU adapter in a VM. Every vGPU instance adopts
the full HW GPU's acceleration capability for 3D rendering and display.
In the following document, AcrnGT refers to the glue layer between the ACRN
hypervisor and the GVT-g core device model. It works as the agent of
hypervisor-related services. It is the only layer that needs to be rewritten
when porting GVT-g to another hypervisor. For simplicity, in the rest of this
document, GVT is used to refer to the core device model component of GVT-g,
specifically corresponding to ``gvt.ko`` when built as a module.
Core Driver Infrastructure
**************************
This section covers the core driver infrastructure APIs used by both the display
and the `Graphics Execution Manager(GEM)`_ parts of the `i915 driver`_.
.. _Graphics Execution Manager(GEM): https://lwn.net/Articles/283798/
.. _i915 driver: https://01.org/linuxgraphics/gfx-docs/drm/gpu/i915.html
Intel GVT-g Guest Support (vGPU)
================================
.. kernel-doc:: drivers/gpu/drm/i915/i915_vgpu.c
:doc: Intel GVT-g guest support
.. kernel-doc:: drivers/gpu/drm/i915/i915_vgpu.c
:internal:
Intel GVT-g Host Support (vGPU Device Model)
============================================
.. kernel-doc:: drivers/gpu/drm/i915/intel_gvt.c
:doc: Intel GVT-g host support
.. kernel-doc:: drivers/gpu/drm/i915/intel_gvt.c
:internal:
VHM APIs Called From AcrnGT
****************************
The Virtio and Hypervisor Service Module (VHM) is a kernel module in the
Service OS acting as a middle layer to support the device model. (See the
:ref:`ACRN-io-mediator` introduction for details.)
VHM requires an interrupt (vIRQ) number and exposes some APIs to external
kernel modules such as GVT-g and the Virtio back-end (BE) service running in
kernel space. VHM exposes a ``char`` device node in user space and interacts
only with the DM. The DM routes I/O requests and responses to and from other
modules via this ``char`` device. The DM may use VHM for hypervisor services
(including remote memory mapping). VHM may service such a request directly,
as for the remote memory map, or invoke a hypercall. VHM also sends I/O
responses to user-space modules, notified by vIRQ injections.
.. kernel-doc:: include/linux/vhm/vhm_vm_mngt.h
:functions: put_vm
vhm_get_vm_info
vhm_inject_msi
vhm_vm_gpa2hpa
.. kernel-doc:: include/linux/vhm/acrn_vhm_ioreq.h
:internal:
.. kernel-doc:: include/linux/vhm/acrn_vhm_mm.h
:functions: acrn_hpa2gpa
map_guest_phys
unmap_guest_phys
add_memory_region
del_memory_region
write_protect_page
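As a rough illustration of how a kernel-space client such as GVT-g might consume
these exports, the sketch below maps a guest page and then injects an MSI. The
prototypes shown are assumptions for illustration only; see the VHM headers
listed above for the real signatures.

.. code-block:: c

   /* Sketch only: the map_guest_phys()/unmap_guest_phys()/vhm_inject_msi()
    * prototypes below are assumed; consult the VHM headers for the real ones.
    */
   #include <linux/vhm/acrn_vhm_mm.h>
   #include <linux/vhm/vhm_vm_mngt.h>

   static int example_touch_guest_page(unsigned long vmid, u64 gpa,
                                       unsigned long msi_addr,
                                       unsigned long msi_data)
   {
           void *va;

           /* Map one guest physical page into the Service OS kernel. */
           va = map_guest_phys(vmid, gpa, PAGE_SIZE);
           if (!va)
                   return -EFAULT;

           /* ... read or write the shared page through va here ... */

           unmap_guest_phys(vmid, gpa);

           /* Notify the guest with an MSI routed through VHM. */
           return vhm_inject_msi(vmid, msi_addr, msi_data);
   }
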
.. _MPT_interface:
AcrnGT Mediated Passthrough (MPT) Interface
*******************************************
AcrnGT receives requests from the GVT module through the MPT interface. Refer
to the :ref:`Graphic_mediation` page.
A collection of function callbacks in the MPT module is attached to the GVT
host at driver loading time. The AcrnGT MPT function callbacks are described
below:
.. code-block:: c

   struct intel_gvt_mpt acrn_gvt_mpt = {
           .host_init = acrngt_host_init,
           .host_exit = acrngt_host_exit,
           .attach_vgpu = acrngt_attach_vgpu,
           .detach_vgpu = acrngt_detach_vgpu,
           .inject_msi = acrngt_inject_msi,
           .from_virt_to_mfn = acrngt_virt_to_mfn,
           .enable_page_track = acrngt_page_track_add,
           .disable_page_track = acrngt_page_track_remove,
           .read_gpa = acrngt_read_gpa,
           .write_gpa = acrngt_write_gpa,
           .gfn_to_mfn = acrngt_gfn_to_pfn,
           .map_gfn_to_mfn = acrngt_map_gfn_to_mfn,
           .dma_map_guest_page = acrngt_dma_map_guest_page,
           .dma_unmap_guest_page = acrngt_dma_unmap_guest_page,
           .set_trap_area = acrngt_set_trap_area,
           .set_pvmmio = acrngt_set_pvmmio,
           .dom0_ready = acrngt_dom0_ready,
   };
   EXPORT_SYMBOL_GPL(acrn_gvt_mpt);
GVT-g core logic calls these APIs through wrapper functions with the prefix
``intel_gvt_hypervisor_`` to request specific services from the hypervisor
through VHM.
This section describes the wrapper functions:
.. kernel-doc:: drivers/gpu/drm/i915/gvt/mpt.h
:functions: intel_gvt_hypervisor_host_init
intel_gvt_hypervisor_host_exit
intel_gvt_hypervisor_attach_vgpu
intel_gvt_hypervisor_detach_vgpu
intel_gvt_hypervisor_inject_msi
intel_gvt_hypervisor_virt_to_mfn
intel_gvt_hypervisor_enable_page_track
intel_gvt_hypervisor_disable_page_track
intel_gvt_hypervisor_read_gpa
intel_gvt_hypervisor_write_gpa
intel_gvt_hypervisor_gfn_to_mfn
intel_gvt_hypervisor_map_gfn_to_mfn
intel_gvt_hypervisor_dma_map_guest_page
intel_gvt_hypervisor_dma_unmap_guest_page
intel_gvt_hypervisor_set_trap_area
intel_gvt_hypervisor_set_pvmmio
intel_gvt_hypervisor_dom0_ready
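For reference, each wrapper is a thin inline shim around the attached MPT
callback table. A minimal sketch, modeled on the upstream i915 ``mpt.h`` (the
acrn-kernel copy may differ in detail), looks like this:

.. code-block:: c

   /* Sketch modeled on drivers/gpu/drm/i915/gvt/mpt.h; details may differ. */
   static inline int intel_gvt_hypervisor_read_gpa(struct intel_vgpu *vgpu,
                   unsigned long gpa, void *buf, unsigned long len)
   {
           /* Forward to whichever MPT backend (AcrnGT here) was attached
            * to intel_gvt_host at driver-load time.
            */
           return intel_gvt_host.mpt->read_gpa(vgpu->handle, gpa, buf, len);
   }
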
.. _intel_gvt_ops_interface:
GVT-g intel_gvt_ops Interface
*****************************
This section contains the APIs for the GVT-g ``intel_gvt_ops`` interface. Sources
are found in the `ACRN kernel GitHub repo`_.
.. _ACRN kernel GitHub repo: https://github.com/projectacrn/acrn-kernel/
.. code-block:: c

   static const struct intel_gvt_ops intel_gvt_ops = {
           .emulate_cfg_read = intel_vgpu_emulate_cfg_read,
           .emulate_cfg_write = intel_vgpu_emulate_cfg_write,
           .emulate_mmio_read = intel_vgpu_emulate_mmio_read,
           .emulate_mmio_write = intel_vgpu_emulate_mmio_write,
           .vgpu_create = intel_gvt_create_vgpu,
           .vgpu_destroy = intel_gvt_destroy_vgpu,
           .vgpu_reset = intel_gvt_reset_vgpu,
           .vgpu_activate = intel_gvt_activate_vgpu,
           .vgpu_deactivate = intel_gvt_deactivate_vgpu,
   };
.. kernel-doc:: drivers/gpu/drm/i915/gvt/cfg_space.c
:functions: intel_vgpu_emulate_cfg_read
intel_vgpu_emulate_cfg_write
.. kernel-doc:: drivers/gpu/drm/i915/gvt/mmio.c
:functions: intel_vgpu_emulate_mmio_read
intel_vgpu_emulate_mmio_write
.. kernel-doc:: drivers/gpu/drm/i915/gvt/vgpu.c
:functions: intel_gvt_create_vgpu
intel_gvt_destroy_vgpu
intel_gvt_reset_vgpu
intel_gvt_activate_vgpu
intel_gvt_deactivate_vgpu
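As a purely illustrative example of the calling convention, a glue-layer
handler that services a trapped MMIO write ends up making a call of this
shape (the function and variable names here are invented, not taken from the
actual AcrnGT sources):

.. code-block:: c

   /* Hypothetical helper; illustrates the calling convention only. */
   static int example_handle_mmio_write(const struct intel_gvt_ops *ops,
                                        struct intel_vgpu *vgpu, u64 gpa,
                                        void *data, unsigned int len)
   {
           /* The glue layer never reaches into GVT internals directly;
            * it always goes through the intel_gvt_ops table.
            */
           return ops->emulate_mmio_write(vgpu, gpa, data, len);
   }
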
.. _sysfs_interface:
AcrnGT sysfs Interface
**********************
This section contains the APIs for the AcrnGT sysfs interface. Sources are found
in the `ACRN kernel GitHub repo`_.
sysfs Nodes
===========
In the following examples, all accesses to these interfaces are via the bash
commands ``echo`` or ``cat``. This is a quick and easy way to query or control
things, but when these operations fail, there is no way to retrieve the
corresponding error code.
When accessing sysfs entries programmatically, use library functions such as
``read()`` or ``write()`` instead.
On **success**, the returned value of ``read()`` or ``write()`` indicates how
many bytes have been transferred. On **error**, the returned value is ``-1``
and the global ``errno`` is set appropriately; this is the only way to
determine what kind of error occurred.
- The ``/sys/kernel/gvt/`` class sub-directory belongs to AcrnGT and provides a
  centralized sysfs interface for configuring vGPU properties.
- The ``/sys/kernel/gvt/control/`` sub-directory contains all the necessary
  switches for different purposes.
- The ``/sys/kernel/gvt/control/create_gvt_instance`` node is used by ACRN-DM to
  create or destroy a vGPU instance.
- After a VM is created, a new sub-directory ``/sys/kernel/gvt/vmN`` (where "N"
  is the VM id) is created.
- The ``/sys/kernel/gvt/vmN/vgpu_id`` node is used to get the vGPU id of the VM
  whose id is N.
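For example, a minimal user-space sketch that reads the ``vgpu_id`` node with
``read()`` and reports ``errno`` on failure (the VM id used in the path is just
an example) could look like this:

.. code-block:: c

   #include <errno.h>
   #include <fcntl.h>
   #include <stdio.h>
   #include <string.h>
   #include <unistd.h>

   int main(void)
   {
           char buf[32] = {0};
           /* VM id 1 is only an example; substitute the actual VM id. */
           int fd = open("/sys/kernel/gvt/vm1/vgpu_id", O_RDONLY);

           if (fd < 0) {
                   fprintf(stderr, "open failed: %s\n", strerror(errno));
                   return 1;
           }
           if (read(fd, buf, sizeof(buf) - 1) < 0) {
                   fprintf(stderr, "read failed: %s\n", strerror(errno));
                   close(fd);
                   return 1;
           }
           close(fd);
           /* sysfs output typically already ends with a newline. */
           printf("vgpu_id: %s", buf);
           return 0;
   }
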


@@ -16,4 +16,3 @@ about that API.
hypercall_api.rst
devicemodel_api.rst
GVT-g_api.rst


@@ -39,7 +39,7 @@ if "RELEASE" in os.environ:
sys.path.insert(0, os.path.join(os.path.abspath('.'), 'extensions'))
extensions = [
'breathe', 'sphinx.ext.graphviz', 'sphinx.ext.extlinks',
'kerneldoc', 'eager_only', 'html_redirects', 'link_roles',
'eager_only', 'html_redirects', 'link_roles',
'sphinx_tabs.tabs'
]
@@ -49,13 +49,6 @@ extlinks = {'acrn-commit': ('https://github.com/projectacrn/acrn-hypervisor/comm
'acrn-issue': ('https://github.com/projectacrn/acrn-hypervisor/issues/%s', '')
}
# kernel-doc extension configuration for running Sphinx directly (e.g. by Read
# the Docs). In a normal build, these are supplied from the Makefile via command
# line arguments.
kerneldoc_bin = 'scripts/kernel-doc'
kerneldoc_srctree = '../../acrn-kernel'
graphviz_output_format='png'
graphviz_dot_args=[


@@ -23,7 +23,6 @@ also find details about specific architecture topics.
developer-guides/modularity
developer-guides/hld/index
developer-guides/sw_design_guidelines
developer-guides/GVT-g-porting
developer-guides/trusty
developer-guides/l1tf
developer-guides/VBSK-analysis


@@ -1,171 +0,0 @@
.. _GVT-g-porting:
GVT-g Enabling and Porting Guide
################################
Introduction
************
GVT-g is Intel's open-source GPU virtualization solution, up-streamed to
the Linux kernel. Its implementation over KVM is named KVMGT, over Xen
is named XenGT, and over ACRN is named AcrnGT. GVT-g can export multiple
virtual-GPU (vGPU) instances for a virtual machine (VM) system. A VM can be
assigned one instance of a vGPU. The guest OS graphic driver needs only
minor modifications to drive the vGPU adapter in a VM. Every vGPU instance
adopts the full HW GPU's acceleration capability for media, 3D rendering,
and display.
AcrnGT refers to the glue layer between the ACRN hypervisor and GVT-g
core device model. It works as the agent of hypervisor-related services.
It is the only layer that must be rewritten when porting GVT-g to other
specific hypervisors.
For simplicity, in the rest of this document, the term GVT is used to refer
to the core device model component of GVT-g, specifically corresponding to
``gvt.ko`` when built as a module.
Purpose of This Document
************************
This document explains the relationship between components of GVT-g in the
ACRN hypervisor, shows how to enable GVT-g on ACRN, and guides developers
porting GVT-g to work on other hypervisors.
This document describes:
- the overall components of GVT-g
- the interaction interface of each component
- the core interaction scenarios
APIs of each component interface can be found in the :ref:`GVT-g_api`
documentation.
Overall Components
******************
For the GVT-g solution for the ACRN hypervisor, there are two key modules:
AcrnGT and GVT.
AcrnGT module
Compiled from ``drivers/gpu/drm/i915/gvt/acrn_gvt.c``, the AcrnGT
module acts as a glue layer between the ACRN hypervisor and the
interface to the ACRN-DM in user space.
AcrnGT is the agent of hypervisor-related services, including I/O trap
requests, IRQ injection, address translation, and VM controls. It also
listens to the ACRN hypervisor in ``acrngt_emulation_thread`` and informs the
GVT module of I/O traps.
It calls into the GVT module's :ref:`intel_gvt_ops_interface` to invoke
Device Model's routines, and receives requests from the GVT module through
the :ref:`MPT_interface`.
User-space programs, such as ACRN-DM, communicate with AcrnGT through
the :ref:`sysfs_interface` by writing to sysfs node
``/sys/kernel/gvt/control/create_gvt_instance``.
This is the only module that must be rewritten when porting to another
embedded device hypervisor.
GVT module
This Device Model service is the central part of all the GVT-g components.
It receives workloads from each vGPU, shadows the workloads, and
dispatches the workloads to the Service VM's i915 module to deliver
workloads to real hardware. It also emulates the virtual display to each VM.
VHM module
This is a kernel module that requires an interrupt (vIRQ) number and
exposes APIs to external kernel modules such as GVT-g and the
virtIO BE service running in kernel space. It exposes a char device node
in user space, and interacts only with the DM. The DM routes I/O requests
and responses between other modules and the VHM module via the char device.
The DM may use the VHM for hypervisor services (including remote memory
mapping), and the VHM may service such a request directly, as for the remote
memory map, or invoke a hypercall. The VHM also sends I/O responses to
user-space modules, notified by vIRQ injections.
.. figure:: images/GVT-g-porting-image1.png
:width: 700px
:align: center
:name: GVT-g_components
GVT-g components and interfaces
Core Scenario Interaction Sequences
***********************************
vGPU Creation Scenario
======================
In this scenario, AcrnGT receives a create request from ACRN-DM. It calls
GVT's :ref:`intel_gvt_ops_interface` to inform GVT of vGPU creation. This
interface sets up all vGPU resources such as MMIO, GMA, PVINFO, GTT,
DISPLAY, and Execlists, and calls back to the AcrnGT module through the
:ref:`MPT_interface` ``attach_vgpu``. Then, the AcrnGT module sets up an
I/O request server and asks to trap the PCI configuration space of the vGPU
(virtual device 0:2:0) via VHM's APIs. Finally, the AcrnGT module launches
an AcrnGT emulation thread to listen for I/O trap notifications from VHM and
the ACRN hypervisor.
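From the ACRN-DM side, issuing the create request above amounts to a plain
sysfs write. A minimal user-space sketch follows; the payload string is a
placeholder only, since the real vGPU creation parameters are assembled by
ACRN-DM and are not documented here:

.. code-block:: c

   #include <errno.h>
   #include <fcntl.h>
   #include <stdio.h>
   #include <string.h>
   #include <unistd.h>

   /* Placeholder payload; the real format is defined by ACRN-DM. */
   static const char request[] = "<vgpu creation parameters>";

   int main(void)
   {
           int fd = open("/sys/kernel/gvt/control/create_gvt_instance",
                         O_WRONLY);

           if (fd < 0) {
                   fprintf(stderr, "open failed: %s\n", strerror(errno));
                   return 1;
           }
           if (write(fd, request, strlen(request)) < 0) {
                   fprintf(stderr, "write failed: %s\n", strerror(errno));
                   close(fd);
                   return 1;
           }
           close(fd);
           return 0;
   }
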
vGPU Destroy Scenario
=====================
In this scenario, AcrnGT receives a destroy request from ACRN-DM. It calls
GVT's :ref:`intel_gvt_ops_interface` to inform GVT of the vGPU destroy
request, and cleans up all vGPU resources.
vGPU PCI Configuration Space Write Scenario
===========================================
ACRN traps the vGPU's PCI configuration space write and notifies AcrnGT's
``acrngt_emulation_thread``, which calls ``acrngt_hvm_pio_emulation`` to
handle all I/O trap notifications. This routine calls GVT's
:ref:`intel_gvt_ops_interface` ``emulate_cfg_write`` to emulate the vGPU PCI
configuration space write (see the sketch after this list):
#. If it is a BAR0 (GTTMMIO) write, turn the GTTMMIO trap on or off according
   to the written value.
#. If it is a BAR1 (Aperture) write, map or unmap the vGPU's aperture to its
   corresponding part of the host's aperture.
#. Otherwise, write to the virtual PCI configuration space of the vGPU.
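The branch structure above can be summarized with a purely hypothetical
dispatcher; every helper name below is invented for illustration, and the real
logic lives in ``intel_vgpu_emulate_cfg_write()`` in the GVT sources:

.. code-block:: c

   /* Hypothetical sketch of the dispatch only; not the real implementation. */
   static int example_cfg_write(struct intel_vgpu *vgpu, unsigned int offset,
                                void *data, unsigned int bytes)
   {
           if (example_is_bar0(offset))        /* BAR0: GTTMMIO */
                   return example_toggle_gttmmio_trap(vgpu, data, bytes);

           if (example_is_bar1(offset))        /* BAR1: Aperture */
                   return example_remap_aperture(vgpu, data, bytes);

           /* Everything else lands in the virtual configuration space. */
           return example_write_virtual_cfg_space(vgpu, offset, data, bytes);
   }
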
PCI Configuration Space Read Scenario
=====================================
The call sequence is almost the same as in the write scenario above, except
that it calls GVT's :ref:`intel_gvt_ops_interface` ``emulate_cfg_read`` to
emulate the vGPU PCI configuration space read.
GGTT Read/Write Scenario
========================
GGTT's trap is set up in the PCI configuration space write scenario above.
MMIO Read/Write Scenario
========================
MMIO's trap is set up in the PCI configuration space write scenario above.
PPGTT Write-Protection Page Set/Unset Scenario
==============================================
The PPGTT write-protection page is set by calling ``acrn_ioreq_add_iorange``
with the range type ``REQ_WP``, trapping writes to the page into the device
model while allowing reads without a trap.
The PPGTT write-protection page is unset by calling ``acrn_ioreq_del_range``.
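A sketch of the set and unset calls follows; the prototypes, the ``REQ_WP``
range type, and the page-size math are assumptions for illustration, so check
the VHM ioreq header for the exact interface:

.. code-block:: c

   /* Sketch only: the acrn_ioreq_* prototypes shown here are assumed. */
   #include <linux/vhm/acrn_vhm_ioreq.h>

   static void example_set_ppgtt_wp(int client_id, u64 gfn)
   {
           u64 start = gfn << PAGE_SHIFT;
           u64 end = start + PAGE_SIZE - 1;

           /* Trap writes to the page while leaving reads untrapped. */
           acrn_ioreq_add_iorange(client_id, REQ_WP, start, end);
   }

   static void example_unset_ppgtt_wp(int client_id, u64 gfn)
   {
           u64 start = gfn << PAGE_SHIFT;
           u64 end = start + PAGE_SIZE - 1;

           acrn_ioreq_del_range(client_id, REQ_WP, start, end);
   }
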
PPGTT Write-Protection Page Write
=================================
In the VHM module, the ioreq for a PPGTT WP trap and an MMIO trap is the same;
it is also trapped into the routine ``intel_vgpu_emulate_mmio_write()``.
API Details
***********
APIs of each component interface can be found in the :ref:`GVT-g_api`
documentation.


@@ -677,7 +677,7 @@ Configuration Option Documentation
Most of the ACRN documentation is maintained in ``.rst`` files found in the
``doc/`` folder. API documentation is maintained as Doxygen comments in the C
header files (or as kerneldoc comments in the ``acrn-kernel`` repo headers),
header files,
along with some prose documentation in ``.rst`` files. The ACRN configuration
option documentation is created based on details maintained in schema definition
files (``.xsd``) in the ``misc/config_tools/schema`` folder. These schema


@@ -596,18 +596,6 @@ APIs Provided by DM
.. doxygenfunction:: vbs_kernel_stop
:project: Project ACRN
APIs Provided by VBS-K Modules in Service OS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: include/linux/vbs/vbs.h
:functions: virtio_dev_init
virtio_dev_ioctl
virtio_vqs_ioctl
virtio_dev_register
virtio_dev_deregister
virtio_vqs_index_get
virtio_dev_reset
VHOST APIs
==========


@@ -73,7 +73,7 @@ New and updated reference documents are available, including:
.. rst-class:: rst-columns2
* :ref:`asa`
* :ref:`GVT-g-porting`
* GVT-g-porting (obsolete with v2.6)
* :ref:`vbsk-overhead`
* :ref:`asm_coding_guidelines`
* :ref:`c_coding_guidelines`

File diff suppressed because it is too large


@@ -1,20 +0,0 @@
#!/bin/bash
# Copyright (C) 2019 Intel Corporation.
# SPDX-License-Identifier: BSD-3-Clause
#q="--quiet"
q=""
# get the latest acrn-kernel sources for the kernel-doc API processing
if [ ! -d "../../acrn-kernel" ]; then
echo Repo for acrn-kernel is missing.
exit -1
fi
# Assumes origin is the upstream repo
cd ../../acrn-kernel
git checkout $q master
git fetch $q origin
git reset $q --hard origin/master


@@ -35,8 +35,7 @@ The project's documentation contains the following items:
* Doxygen-generated material used to create all API-specific documents
found at http://projectacrn.github.io/latest/api/. The documentation build
process uses doxygen to scan source files in the hypervisor and
device-model folders, and from sources in the acrn-kernel repo (as
explained later).
device-model folders (as explained later).
.. image:: images/doc-gen-flow.png
:align: center
@@ -69,12 +68,9 @@ recommended folder setup for documentation contributions and generation:
doc/
hypervisor/
misc/
acrn-kernel/
The parent ``projectacrn`` folder is there because, if you have repo publishing
rights, we'll also be creating a publishing area later in these steps. For API
documentation generation, we'll also need the ``acrn-kernel`` repo contents in a
sibling folder to the acrn-hypervisor repo contents.
rights, we'll also be creating a publishing area later in these steps.
It's best if the ``acrn-hypervisor`` folder is an ssh clone of your personal
fork of the upstream project repos (though ``https`` clones work too and won't
@@ -108,19 +104,6 @@ require you to
After that, you'll have ``origin`` pointing to your cloned personal repo and
``upstream`` pointing to the project repo.
#. For API documentation generation we'll also need the ``acrn-kernel`` repo available
locally into the ``acrn-hypervisor`` folder:
.. code-block:: bash
cd ..
git clone git@github.com:projectacrn/acrn-kernel.git
.. note:: We assume for documentation generation that ``origin`` is pointed to
the upstream ``acrn-kernel`` repo. If you're a developer and have the acrn-kernel
repo already set up as a sibling folder to the acrn-hypervisor,
you can skip this clone step.
#. If you haven't done so already, be sure to configure git with your name
and email address for the ``signed-off-by`` line in your commit messages:
@@ -220,11 +203,6 @@ The ``acrn-hypervisor/doc`` directory has all the ``.rst`` source files, extra
tools, and ``Makefile`` for generating a local copy of the ACRN technical
documentation. (Some additional ``.rst`` files and other material is extracted
or generated from the ``/misc`` folder as part of the ``Makefile``.)
For generating all the API documentation, there is a
dependency on having the ``acrn-kernel`` repo's contents available too
(as described previously). You'll get a sphinx warning if that repo is
not set up as described, but you can ignore that warning if you're
not planning to publish or show the API documentation.
.. code-block:: bash