Previously it was (falsely) assumed that the major_ver of the 32-bit SMBIOS
entry point structure (called SMBIOS 2.1 in the spec, or SMBIOS2 in code)
would have a value of 2, and that the major_ver of the 64-bit SMBIOS (called
SMBIOS 3.0 in the spec, and SMBIOS3 in code) would have a value of 3. This
turned out to be wrong: major_ver refers to the implemented doc revision, and
a 32-bit SMBIOS2 entry point can have a major_ver of 3 (the current most
recent implementation).
This patch removes the use of major_ver to distinguish between SMBIOS2/3,
and uses the spec-defined anchor string instead.
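As a minimal sketch of the idea (helper names are hypothetical, not the
patch's actual code), the two entry point structures can be told apart by
their anchor strings: the SMBIOS 2.1 EPS starts with the 4-byte anchor
"_SM_", the SMBIOS 3.0 EPS with the 5-byte anchor "_SM3_".

    /* Minimal sketch; helper names are hypothetical. */
    static inline bool is_smbios3_eps(const uint8_t *p)
    {
            return (memcmp(p, "_SM3_", 5U) == 0);   /* 64-bit EPS */
    }

    static inline bool is_smbios2_eps(const uint8_t *p)
    {
            return (memcmp(p, "_SM_", 4U) == 0);    /* 32-bit EPS */
    }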
Tracked-On: #6528
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
The locked btr instruction is expensive. This patch changes the
logic to ensure that the bitmap is non-zero before executing
bitmap_test_and_clear_lock().
The VMX transition time improves significantly: with SOS running
on TGL, the CPUID roundtrip drops from ~2400 cycles to ~2000 cycles.
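A minimal sketch of the changed pattern (the pending-request field name is
taken from struct acrn_vcpu_arch; the surrounding shape is approximate):

    /* Sketch: a cheap non-atomic read guards the expensive locked btr. */
    uint64_t *reqs = &vcpu->arch.pending_req;

    if (*reqs != 0UL) {
            if (bitmap_test_and_clear_lock(ACRN_REQUEST_EVENT, reqs)) {
                    /* handle the event request */
            }
            /* ... other request bits ... */
    }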
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently ACRN assumes that firmware sets up IA32_PAT correctly. This patch
explicitly initializes the host IA32_PAT MSR according to ISDM Table 11-12,
"Memory Type Setting of PAT Entries Following a Power-up or Reset".
ACRN creates host page tables based on PAT0 (WB) and PAT3 (UC).
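A sketch of the initialization; the value encodes the power-up defaults from
that table (PAT0=WB, PAT1=WT, PAT2=UC-, PAT3=UC, PAT4-7 repeating), and the
macro name is illustrative:

    #define PAT_POWER_ON_VALUE      0x0007040600070406UL

    msr_write(MSR_IA32_PAT, PAT_POWER_ON_VALUE);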
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
is_lapic_pt_enabled() is called at least twice in one loop of the vCPU
thread, and it's called frequently in vmexit_handler() if LAPIC is not
passed through. Thus the efficiency of this function has a direct
impact on system performance.
Since the LAPIC mode is not changed at run time, we don't have to
calculate it on the fly in is_lapic_pt_enabled().
Also removed the unused lapic_mask from struct acrn_vcpu_arch.
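A minimal sketch of the cached form (the field name is approximate):

    /* Sketch: the flag is computed once when the LAPIC mode is set,
     * then simply returned. */
    static inline bool is_lapic_pt_enabled(const struct acrn_vcpu *vcpu)
    {
            return vcpu->arch.lapic_pt_enabled;
    }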
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For platforms that do not support the XSAVES/XRSTORS instructions, like QEMU,
executing these instructions causes #UD.
This patch adds a check before the execution of the XSAVES/XRSTORS
instructions. It also refines the logic inside rstore_xsave_area for the
following reason:
If the XSAVES/XRSTORS instructions are supported, restore the XSAVE area if
either of the following conditions is met:
1. "vcpu->launched" is false (state initialization for guest)
2. "vcpu->arch.xsave_enabled" is true (state restoring for guest)
* Before vCPU is launched, condition 1 is satisfied.
* After vCPU is launched, condition 2 is satisfied because
is_valid_xsave_combination() guarantees that "vcpu->arch.xsave_enabled"
is consistent with pcpu_has_cap(X86_FEATURE_XSAVES).
Therefore, the check against "vcpu->launched" and "vcpu->arch.xsave_enabled"
can be eliminated here.
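A sketch of the resulting shape (simplified):

    /* Sketch: since one of the two conditions above always holds,
     * the capability check alone suffices. */
    static void rstore_xsave_area(const struct acrn_vcpu *vcpu,
                                  const struct ext_context *ectx)
    {
            if (pcpu_has_cap(X86_FEATURE_XSAVES)) {
                    /* restore XCR0, IA32_XSS and ectx->xs_area */
            }
    }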
Tracked-On: #6481
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Use an unused MSR on the host to save the ACRN pcpu ID, avoiding the need to
save and restore the TSC_AUX MSR on VMX transitions.
Tracked-On: #6289
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
This feature is guarded under config CONFIG_SECURITY_VM_FIXUP, which
is disabled by default.
This patch passes through native SMBIOS information to the pre-launched VM.
SMBIOS consists of a small entry point structure and a table: the entry
point structure is put in the 0xf0000-0xfffff region of the guest address
space, and the table is put in the ACPI_NVS region of the guest address
space.
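A minimal sketch of the placement (the GPA constants and variable names are
hypothetical; copy_to_gpa is the hypervisor's guest-copy helper):

    /* Sketch: entry point into the guest F-segment, table into NVS. */
    copy_to_gpa(vm, eps, smbios_eps_gpa, eps_size);    /* 0xf0000-0xfffff */
    copy_to_gpa(vm, table, acpi_nvs_gpa, table_size);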
v2 -> v3:
uuid_is_equal moved to util.h as inline API
result -> pVendortable, in function efi_search_guid
recalc_checksum -> generate_checksum
efi_search_smbios -> efi_search_smbios_eps
scan_smbios_eps -> mem_search_smbios_eps
EFI GUID definition kept
Tracked-On: #6320
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
This patch renames the GUEST_FLAG_TPM2_FIXUP to
GUEST_FLAG_SECURITY_VM.
v2 -> v3:
The "FIXUP" suffix is removed.
Tracked-On: #6320
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
In an ACRN RT VM, if the LAPIC is passed through to the guest, an IPI can't
trigger a VM exit, and the vNMI is just for notification; it can't handle
the smp_call function. Modify the vcpu_dumpreg function to prompt the user
to switch to vLAPIC mode for vCPU register dumps.
Tracked-On: #6473
Signed-off-by: Liu Long <long.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- remove vcpu->arch.nrexits, which is useless.
- record the full 32 bits of exit_reason to TRACE_2L(), making the code simpler.
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The GSI of hcall_set_irqline should be checked against the target_vm's
total GSI count instead of the SOS's total GSI count.
Tracked-On: #6357
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This helps to improve performance:
- No need to execute VMREAD in vcpu_get_efer(), which is frequently
called.
- VMX_EXIT_CTLS_SAVE_EFER can be removed from the VM-Exit Controls.
- If the value of the IA32_EFER MSR is identical between the host and guest
(highly likely), adjust the VMX controls not to load IA32_EFER on
VM exit and VM entry.
It's convenient to keep using the existing vcpu_s/get_efer() APIs
rather than the common vcpu_s/get_guest_msr().
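A sketch of the control adjustment (macro names approximate; the shape is
illustrative, not the patch's exact logic):

    /* Sketch: skip loading IA32_EFER on transitions when the host and
     * guest values match. */
    if (vcpu_get_efer(vcpu) == msr_read(MSR_IA32_EFER)) {
            exit_ctls  &= ~VMX_EXIT_CTLS_LOAD_EFER;
            entry_ctls &= ~VMX_ENTRY_CTLS_LOAD_EFER;
    }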
Tracked-On: #6289
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
MAXIMUM_PA_WIDTH will be calculated from board information.
Tracked-On: #6357
Signed-off-by: Liang Yi <yi.liang@intel.com>
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Mask off support of 57-bit linear addresses and five-level paging.
ICX-D has LA57 but ACRN doesn't support 5-level paging yet.
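LA57 is enumerated in CPUID.(EAX=07H,ECX=0):ECX[bit 16]; a sketch of the
mask (the macro name is hypothetical):

    /* Sketch: hide 57-bit linear addresses / 5-level paging. */
    cpuid_subleaf(0x7U, 0U, &eax, &ebx, &ecx, &edx);
    ecx &= ~CPUID_ECX_LA57;         /* bit 16, hypothetical macro */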
Tracked-On: #6357
Signed-off-by: Liang Yi <yi.liang@intel.com>
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
It is used to specify the maximum number of EFI memmap entries.
On some platforms, like Tiger Lake, the number of EFI memmap entries
becomes 268 when the BIOS settings are changed.
The current value of MAX_EFI_MMAP_ENTRIES (256) defined in hypervisor
is not big enough to cover such cases.
As the number of EFI memmap entries depends on the platforms and the
BIOS settings, this patch introduces a new entry MAX_EFI_MMAP_ENTRIES
in configurations so that it can be adjusted for different cases.
Tracked-On: #6442
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
If the SOS uses kernel 5.4, the hypervisor panics with #GP.
Here is an example on KBL showing how the panic occurs when kernel 5.4 is used:
Notes:
* Physical MSR_IA32_XSS[bit 8] is 1 when physical CPU boots up.
* vcpu_get_guest_msr(vcpu, MSR_IA32_XSS)[bit 8] is initialized to 0.
The following thread switches would happen at run time:
1. idle thread -> vcpu thread
context_switch_in happens and rstore_xsave_area is called.
At this moment, vcpu->arch.xsave_enabled is false as vcpu is not launched yet
and init_vmcs is not called yet (where xsave_enabled is set to true).
Thus, physical MSR_IA32_XSS is not updated with the value of guest MSR_IA32_XSS.
States at this point:
* Physical MSR_IA32_XSS[bit 8] is 1.
* vcpu_get_guest_msr(vcpu, MSR_IA32_XSS)[bit 8] is 0.
2. vcpu thread -> idle thread
context_switch_out happens and save_xsave_area is called.
At this moment, vcpu->arch.xsave_enabled is true. Processor state is saved
to memory with XSAVES instruction. As physical MSR_IA32_XSS[bit 8] is 1,
ectx->xs_area.xsave_hdr.hdr.xcomp_bv[bit 8] is set to 1 after the execution
of XSAVES instruction.
States at this point:
* Physical MSR_IA32_XSS[bit 8] is 1.
* vcpu_get_guest_msr(vcpu, MSR_IA32_XSS)[bit 8] is 0.
* ectx->xs_area.xsave_hdr.hdr.xcomp_bv[bit 8] is 1.
3. idle thread -> vcpu thread
context_switch_in happens and rstore_xsave_area is called.
At this moment, vcpu->arch.xsave_enabled is true. Physical MSR_IA32_XSS is
updated with the value of guest MSR_IA32_XSS, which is 0.
States at this point:
* Physical MSR_IA32_XSS[bit 8] is 0.
* vcpu_get_guest_msr(vcpu, MSR_IA32_XSS)[bit 8] is 0.
* ectx->xs_area.xsave_hdr.hdr.xcomp_bv[bit 8] is 1.
Processor state is restored from memory with XRSTORS instruction afterwards.
According to SDM Vol1 13.12 OPERATION OF XRSTORS, a #GP occurs if XCOMP_BV
sets a bit in the range 62:0 that is not set in XCR0 | IA32_XSS.
So, #GP occurs once XRSTORS instruction is executed.
Such an issue does not happen with kernel 5.10, because kernel 5.10 writes to
MSR_IA32_XSS during initialization, while kernel 5.4 does not.
Once the guest writes to MSR_IA32_XSS, it is trapped to the hypervisor; then
the physical MSR_IA32_XSS and the value of MSR_IA32_XSS in vcpu->arch.guest_msrs
are updated with the value specified by the guest. So, at point 2 above, the
correct processor state is saved, and #GP would not happen at point 3.
This patch initializes the XSAVE-related processor state for the guest.
If the vcpu is not launched yet, the processor state is initialized according
to the initial value of vcpu_get_guest_msr(vcpu, MSR_IA32_XSS), ectx->xcr0,
and ectx->xs_area. With this approach, the physical processor state is
consistent with the one presented to the guest.
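A sketch of the initialization path (the shape is approximate):

    /* Sketch: bring the physical XSAVE state in line with the guest's
     * initial state before the first launch. */
    if (!vcpu->launched) {
            write_xcr(0, ectx->xcr0);
            msr_write(MSR_IA32_XSS,
                      vcpu_get_guest_msr(vcpu, MSR_IA32_XSS));
            /* restore from the zero-initialized ectx->xs_area so that
             * XCOMP_BV stays consistent with XCR0 | IA32_XSS */
    }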
Tracked-On: #6434
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Reviewed-by: Li Fei1 <fei1.li@intel.com>
Currently init_vmx_msrs() emulates the same value for the IA32_VMX_xxx_CTLS
and IA32_VMX_TRUE_xxx_CTLS MSRs.
But the values of the physical MSRs could differ between the pair, and we
need to adjust the emulated values accordingly.
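A sketch (the MSR macros follow the SDM names; the shape is illustrative):

    /* Sketch: read each physical MSR of the pair separately instead of
     * reusing one value for both. */
    ctls      = msr_read(MSR_IA32_VMX_PINBASED_CTLS);
    true_ctls = msr_read(MSR_IA32_VMX_TRUE_PINBASED_CTLS);
    /* adjust and emulate each one independently */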
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
check_vmx_permission() is already called in vmresume_vmexit_handler() and
vmlaunch_vmexit_handler().
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Currently the sched event handling may encounter a data race problem, and
as a result some vcpus might be stalled forever.
One example is wbinvd handling, where more than one vcpu is doing
wbinvd concurrently. The following is a possible execution of 3 vcpus
(each line is prefixed with the vcpu performing the action):
-------
vcpu1: req [Note: 0]
vcpu1: req bit0 set [Note: 1]
vcpu1: IPI -> 0
vcpu1: req bit2 set
vcpu1: IPI -> 2
vcpu2: VMExit
vcpu2: req bit2 cleared
vcpu2: wait
vcpu2: vcpu2 descheduled
vcpu0: VMExit
vcpu0: req bit0 cleared
vcpu0: wait
vcpu0: vcpu0 descheduled
vcpu1: signal 0
vcpu1: event0->set=true
vcpu1: wake 0
vcpu1: signal 2
vcpu1: event2->set=true [Note: 3]
vcpu1: wake 2
vcpu2: vcpu2 scheduled
vcpu2: event2->set=false
vcpu2: resume
vcpu2: req
vcpu2: req bit0 set
vcpu2: IPI -> 0
vcpu2: req bit1 set
vcpu2: IPI -> 1
vcpu1: (doesn't matter)
vcpu0: vcpu0 scheduled [Note: 4]
vcpu2: signal 0
vcpu2: event0->set=true
vcpu2: (no wake) [Note: 2]
vcpu0: event0->set=false (the rest of vcpu2's actions don't matter)
vcpu0: resume
vcpu0: any VMExit
vcpu0: req bit0 cleared
vcpu0: wait
vcpu0: (pcpu runs idle; vcpu0 blocked forever)
Notes:
0: req: vcpu_make_request(vcpu, ACRN_REQUEST_WAIT_WBINVD).
1: req bit: a bit in pending_req_bits; bit0 stands for the bit for vcpu0.
2: In function signal_event: at this time event->waiting_thread is
not NULL, so wake_thread will not execute.
3: eventX: struct sched_event of vcpuX.
4: In function wait_event: the lock does not strictly cover the execution
between schedule() and event->set=false, so other threads may kick in.
-----
As shown in the above example, vcpu0 ended up with its request bit set
but event0->set==false, so the last VMExit blocked it forever.
This patch changes event->set from a boolean variable to an integer. The
semantics are very similar to a semaphore: wait_event adds 1 to this value
and blocks while this value is > 0, whereas signal_event decreases this
value by 1.
It may happen that this value is decreased to a negative number, but that
is OK. As long as wait_event and signal_event are paired and program order
is observed (that is, wait_event always happens-before signal_event on a
single vcpu), this value will eventually be 0.
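A minimal sketch of the counting semantics (field and helper names are
approximate):

    /* Sketch: a counter replaces the boolean so that a signal arriving
     * before the wait is not lost. */
    void wait_event(struct sched_event *event)
    {
            /* under event->lock */
            event->nqueued += 1;
            while (event->nqueued > 0) {
                    /* record waiting_thread, drop the lock, schedule();
                     * re-acquire the lock and re-check after wakeup */
            }
    }

    void signal_event(struct sched_event *event)
    {
            /* under event->lock */
            event->nqueued -= 1;
            if (event->waiting_thread != NULL) {
                    wake_thread(event->waiting_thread);
            }
    }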
Tracked-On: #6405
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
This is a simple implementation of the 32-bit and 64-bit ELF loader.
The loading function first reads the image header and finds the program
entries that are marked as PT_LOAD, then loads the segments from the ELF
file to guest RAM. After that, it finds the bss section in the ELF section
entries and clears the RAM area it points to.
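A sketch of the PT_LOAD walk (64-bit case; struct and variable names are
approximate):

    /* Sketch: copy each loadable segment into guest memory. */
    for (i = 0U; i < ehdr->e_phnum; i++) {
            phdr = (struct elf64_prog_entry *)(elf_img + ehdr->e_phoff) + i;
            if (phdr->p_type == PT_LOAD) {
                    copy_to_gpa(vm, elf_img + phdr->p_offset,
                                phdr->p_paddr, (uint32_t)phdr->p_filesz);
            }
    }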
Limitations:
1. The e_type of the ELF image must be ET_EXEC (executable). Relocatable or
dynamic code is not supported.
2. The loader only copies program segments that have a p_type of
PT_LOAD (loadable segment). Other segments are ignored.
3. The loader doesn't support sections that are relocatable
(sh_type is SHT_REL or SHT_RELA).
4. The 64-bit ELF's entry address must be below 4G.
5. The ELF is assumed to put its segments in valid guest memory.
Tracked-On: #6323
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch adds a function elf_loader() to load an ELF image.
It checks the ELF header, gets its 32/64-bit type, then calls
the corresponding loading routines, which are empty for now and
will be implemented later.
Tracked-On: #6323
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Source: https://github.com/freebsd/freebsd-src/blob/main/sys/sys/elf_common.h
Trimmed to meet the minimal requirements for the Zephyr ELF file to be loaded.
Also added the ELF file header data struct and the program/section entry data
structs.
Tracked-On: #6323
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In order to make better sense, vm_elf_loader, vm_bzimage_loader and
vm_rawimage_loader are renamed to elf_loader, bzimage_loader and
rawimage_loader.
Tracked-On: #6323
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Remove the ACPI loading function from elf_loader, rawimage_loader and
bzimage_loader, and call it together in vm_sw_loader.
Now vm_sw_loader's job is not just loading sw, so we rename it to
prepare_os_image.
Tracked-On: #6323
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For the guest OS loaders, the prepare_loading_xxx names are not accurate for
what those functions actually do. They are now changed to load_xxx:
load_rawimage, load_bzimage.
Also, the 'bsp' expression in the comments for init_vcpu_protect_mode_regs
is confusing; it is changed to clearer wording.
Tracked-On: #6323
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vboot_info.h also declares VM loader functions, so rename the file to
vboot.h.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The patch splits vm_load.c into three parts: the loader function of the
bzImage kernel is moved to bzimage_loader.c, the loader function of the raw
image kernel is moved to rawimage_loader.c, and the stub stays in vm_load.c
to call the corresponding kernel loader function. Each loader function can
be isolated by the CONFIG_GUEST_KERNEL_XXX macro generated by the config
tool.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Change the if condition to a switch in vm_sw_loader() so that the SW loader
can be compiled conditionally.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Rename KERNEL_ZEPHYR to KERNEL_RAWIMAGE and add the new type "KERNEL_ELF".
Add CONFIG_GUEST_KERNEL_RAWIMAGE, CONFIG_GUEST_KERNEL_ELF and/or
CONFIG_GUEST_KERNEL_BZIMAGE to config.h if configured.
Tracked-On: #6323
Signed-off-by: Yang,Yu-chu <yu-chu.yang@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Previously we only supported loading the raw format of the Zephyr image as
the pre-launched Zephyr VM. This caused a guest F-segment override issue,
because the Zephyr raw image covers the memory space from 0x1000 to above
0x100000. To fix this issue, we support ELF format image loading so that the
multiple segments can be parsed and loaded from the ELF image directly.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When passing through a GPU to a pre-launched Linux guest, the GPU OpRegion
also needs to be passed to the guest.
Here are the detailed steps:
1. reserve a memory region in the ve820 table for the GPU OpRegion
2. build an EPT mapping for the GPU OpRegion to pass the OpRegion through
to the guest
3. emulate the PCI config register for the OpRegion
For the third step, here is a detailed description:
The address of the OpRegion is located at PCI config space offset 0xFC.
A normal Linux guest won't write this register, so we can regard it as
read-only. When the guest reads this register, return the emulated value;
when the guest writes this register, ignore the operation.
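A sketch of the register emulation (variable names are hypothetical):

    /* Sketch: offset 0xFC holds the OpRegion address. */
    if (offset == 0xFCU) {
            if (is_read) {
                    *val = vdev->opregion_gpa;  /* emulated value */
            }
            /* writes are silently ignored */
    }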
Tracked-On: #6387
Signed-off-by: Liu,Junming <junming.liu@intel.com>
ACRN does not support the variable range vMTRR. The default
memory type of the vMTRR is UC. With this vMTRR emulation, a guest VM
such as Linux refuses to map the MMIO address space as WB. In
order to get better performance, the SHM BAR of ivshmem is mapped
with the PAT ignored and the memory type of the SHM BAR fixed to WB.
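A sketch of the EPT attributes (macro names approximate):

    /* Sketch: the EPT leaf forces WB and sets the "ignore PAT" bit so
     * the guest PAT cannot demote the memory type. */
    ept_add_mr(vm, pml4_page, shm_hpa, shm_gpa, shm_size,
               EPT_RD | EPT_WR | EPT_WB | EPT_IGNORE_PAT);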
Tracked-On: #6389
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Create the virtual ACPI table of TPM2 based on the raw data if the TPM2
device is present and the passthrough TPM2 is enabled.
Refine the arguments of bin_gen.py: --board and --scenario take the
path to the XMLs as the argument. The allocation.xml is needed for
bin_gen.py to generate the TPM2 ACPI table.
Refine the condition of tpm2_acpi_gen: the TPM2 device "MSFT0101" can be
present in the device id or compatible_id (CID), so check both attributes
and the child node of the TPM2 device.
Tracked-On: #6320
Signed-off-by: Yang,Yu-chu <yu-chu.yang@intel.com>
Relocate the ACPI address to 0x7fe00000 and ACPI NVS to 0x7ff00000
correspondingly. This way we can include the TPM event log region
[0x7ffb0000, 0x80000000) in ACPI NVS.
Tracked-On: #6320
Signed-off-by: Fei Li <fei1.li@intel.com>
ACRN used to prepare the vTPM2 ACPI Table for the pre-launched VM at the
build stage using the config tools. This is OK if the TPM2 ACPI Table never
changes. However, the TPM2 ACPI Table may change under some conditions:
a BIOS configuration change or a BIOS update.
This patch does a TPM2 fixup to update the vTPM2 ACPI Table and the TPM2
MMIO resource configuration according to the physical TPM2 ACPI Table.
Tracked-On: #6366
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Signed-off-by: Fei Li <fei1.li@intel.com>
1. add a name field to indicate what the MMIO device is.
2. add two more MMIO resources to the acrn_mmiodev data structure.
Tracked-On: #6366
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Signed-off-by: Fei Li <fei1.li@intel.com>
ACRN can run without the XSAVE capability, so remove the XSAVE dependence
to support more (hardware or virtual) platforms.
Tracked-On: #6287
Signed-off-by: Fei Li <fei1.li@intel.com>
Check whether the condition is met before checking whether time is out
after iommu_read32. This is because iommu_read32 can cause a timeout on
some virtual platforms even though the current DMAR status already meets
the precondition.
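A sketch of the polling order (the shape is approximate):

    /* Sketch: test the condition first, so that a slow emulated read
     * cannot turn an already-met condition into a timeout. */
    start = rdtsc();
    do {
            status = iommu_read32(dmar_unit, offset);
            if ((status & mask) == pre_condition) {
                    break;
            }
            if ((rdtsc() - start) > timeout) {
                    panic("DMAR operation timeout");
            }
    } while (1);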
Tracked-On: #6371
Signed-off-by: Fei Li <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If the HV enables triggering #GP for UC lock and is about to emulate guest
UC-lock instructions, it should trap the guest #GP: a guest UC-lock
instruction triggers #GP, which causes a VM exit for #GP; the HV handles
this VM exit and emulates the UC-lock instruction.
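A sketch of the interception (names approximate; vector 13 is #GP):

    /* Sketch: set bit 13 (#GP) in the exception bitmap so a guest #GP
     * causes a VM exit. */
    value32 = exec_vmread32(VMX_EXCEPTION_BITMAP);
    value32 |= (1U << IDT_GP);
    exec_vmwrite32(VMX_EXCEPTION_BITMAP, value32);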
Tracked-On: #6299
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Users can run the "make targz-pkg" command to generate a tar package in the
build directory, which helps simplify the process of installing the ACRN
hypervisor on the target board. The user needs to copy the tarball package
to the target board and extract it to the "/" directory.
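A usage sketch (the package file name is illustrative):

    $ make targz-pkg
    $ scp build/acrn-<version>.tar.gz root@<target>:
    (on the target)
    # tar xzf acrn-<version>.tar.gz -C /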
Tracked-On: #6355
Signed-off-by: liu hang1 <hang1.liu@intel.com>
Reviewed-by: VanCutsem, Geoffroy <geoffroy.vancutsem@intel.com>
Acked-by: Wang, Yu1 <yu1.wang@intel.com>
Currently the HV console does not support a PCI UART with a 64-bit BAR, but
in the case that the BAR is 64-bit and the BAR space is below 4GB (i.e. the
high 32 bits of the 64-bit BAR address are zero), the HV should be able to
support it.
Tracked-On: #6334
Signed-off-by: Victor Sun <victor.sun@intel.com>
When a guest kernel has multiple loading segments, like an ELF format image,
defining just one load address in the sw_kernel_info struct is meaningless.
The patch removes the kernel_load_addr member from struct sw_kernel_info;
the load address should be parsed in each specified format's image
processing.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
The previous code did not load the bzImage starting from its protected-mode
part. This left the protected-mode part unaligned with the kernel_alignment
field and caused kernel decompression to start from a later aligned address.
In this case we had to enlarge the needed size of the bzImage kernel to
kernel_init_size plus double the size of kernel_alignment.
With the loading issue of the bzImage protected-mode part fixed, the needed
kernel size is corrected in this patch.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
When LaaG boots with a bzImage module file, only the protected-mode code
needs to be loaded to guest space, since the VM will boot from protected
mode directly. Furthermore, per the Linux boot protocol, the protected-mode
code had better be aligned with the kernel_alignment field in the zeropage;
otherwise the kernel will take time to do a "rep movs" to the aligned
address.
In the previous code, the bzImage was loaded to the address aligned with
kernel_alignment, which made the protected-mode code unaligned with
kernel_alignment. If the kernel is configured with CONFIG_RELOCATABLE=n,
the guest would not boot. This patch fixes the issue.
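Per the Linux boot protocol, the protected-mode code starts at
(setup_sects + 1) * 512 within the bzImage. A sketch of locating it
(variable names approximate):

    /* Sketch: skip the real-mode setup part and load only the
     * protected-mode code at an aligned address. */
    prot_offset = ((uint64_t)zeropage->hdr.setup_sects + 1UL) * 512UL;
    prot_size   = kernel_size - prot_offset;
    /* the load address must be aligned to zeropage->hdr.kernel_alignment */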
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
This patch moves get_bzimage_kernel_load_addr() from init_vm_sw_load() to
the vm_sw_loader() stage, so the kernel load address of a bzImage type
kernel is set in vm_bzimage_loader() in vm_load.c.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
This patch moves the get_initrd_load_addr() API from init_vm_sw_load() to
the vm_sw_loader() stage. The patch assumes that the kernel image has
already been loaded to guest space.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
In the load_sw_modules() implementation, we always assumed the guest kernel
module has one load address and the whole kernel image is loaded to guest
space from that address. This is not true when the guest kernel has
multiple load addresses, like an ELF format kernel image.
This patch removes the load_sw_modules() API; the loading method of each
format of kernel image is specified in its prepare_loading_xxximage() API.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>