- Add PAGING_REQUEST_TYPE_MODIFY_MT memory map request type
- Update map_mem_region() to allow modifying the memory type related
fields in a page table entry
- Add ept_update_mt()
- Add modify_mem_mt() for both EPT and MMU (a sketch of the idea follows)
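The sketch below shows the general idea, assuming an illustrative PTE layout;
the bit masks and the helper name are not the actual ACRN definitions:

    #include <stdint.h>

    #define PTE_PRESENT  (1UL << 0)
    #define PTE_MT_MASK  (7UL << 3)   /* assumed position of the memory-type bits */

    /* rewrite only the memory-type field of an existing entry, leaving
     * the address and permission bits untouched */
    static int modify_entry_mt(uint64_t *pte, uint64_t new_mt_bits)
    {
        if ((*pte & PTE_PRESENT) == 0UL) {
            return -1;                /* nothing mapped here */
        }
        *pte = (*pte & ~PTE_MT_MASK) | (new_mt_bits & PTE_MT_MASK);
        return 0;
    }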
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Export start_cpus to start/online APs.
- Add stop_cpus to offline APs.
- Update cpu_dead to decrement the running CPU count and do cleanup
for AP down (sketched below).
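The sketch below illustrates one way such an offline flow can look; the flag
array, the counter and the timeout-free busy wait are assumptions for
illustration, not the real implementation:

    #include <stdint.h>

    #define MAX_PCPUS 8U

    static uint32_t running_cpus = MAX_PCPUS;        /* maintained with atomics */
    static volatile uint8_t stop_requested[MAX_PCPUS];

    /* AP side: called when an AP is asked to go offline */
    void cpu_dead(uint32_t pcpu_id)
    {
        (void)pcpu_id;                               /* per-cpu cleanup would go here */
        __atomic_sub_fetch(&running_cpus, 1U, __ATOMIC_SEQ_CST);
        for (;;) {
            __asm__ volatile ("hlt");                /* park the AP */
        }
    }

    /* BSP side: request all APs to stop, then wait until only the BSP runs */
    void stop_cpus(void)
    {
        uint32_t i;

        for (i = 1U; i < MAX_PCPUS; i++) {
            stop_requested[i] = 1U;                  /* APs poll this flag and call cpu_dead() */
        }
        while (__atomic_load_n(&running_cpus, __ATOMIC_SEQ_CST) > 1U) {
            __asm__ volatile ("pause");
        }
    }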
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
To handle cpu down/up dynamically, acrn needs to support turning vmx off/on
dynamically. The following changes are introduced:
vmx_off is used when offlining an AP. It does:
- vmclear the mapped vcpu
- turn off vmx.
exec_vmxon_instr is updated to handle both initial startup and AP online. It does:
- if vmx was already on for this AP, load the saved vmxon_region; otherwise,
allocate a vmxon_region.
- if there is a mapped vcpu, vmptrld the mapped vcpu.
A sketch of this flow follows.
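In the sketch below, the VMX instruction wrappers and the saved-region /
mapped-vcpu bookkeeping are illustrative stand-ins, not the actual ACRN code:

    #include <stdint.h>
    #include <stddef.h>

    struct vcpu { uint64_t vmcs_pa; };

    static uint8_t vmxon_page[4096] __attribute__((aligned(4096)));
    static void *saved_vmxon_region;          /* kept across vmx off/on cycles */
    static struct vcpu *mapped_vcpu;          /* vcpu currently loaded on this pcpu */

    /* stand-ins for the real VMX instruction wrappers */
    static void do_vmxon(void *region)  { (void)region; }
    static void do_vmxoff(void)         { }
    static void do_vmclear(uint64_t pa) { (void)pa; }
    static void do_vmptrld(uint64_t pa) { (void)pa; }

    void vmx_off(void)
    {
        if (mapped_vcpu != NULL) {
            do_vmclear(mapped_vcpu->vmcs_pa); /* flush VMCS state to memory */
        }
        do_vmxoff();                          /* leave VMX operation */
    }

    void exec_vmxon_instr(void)
    {
        if (saved_vmxon_region == NULL) {
            saved_vmxon_region = vmxon_page;  /* first time: set up the vmxon region */
        }
        do_vmxon(saved_vmxon_region);         /* otherwise reuse the saved region */

        if (mapped_vcpu != NULL) {
            do_vmptrld(mapped_vcpu->vmcs_pa); /* reload the vcpu that was on this pcpu */
        }
    }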
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
For performance reasons, we don't flush the vcpu context when doing a
vcpu switch (because it only switches between a vcpu and idle).
But when entering S3, we need to call vmclear against all vcpus attached
to APs, so we need to know which vcpu is attached to which pcpu.
This patch introduces an API to get the vcpu mapped to a specific pcpu.
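A sketch of such a lookup; the array name and the update hook are assumptions
for illustration only:

    #include <stdint.h>
    #include <stddef.h>

    #define MAX_PCPUS 8U

    struct vcpu { uint32_t vcpu_id; };

    static struct vcpu *per_pcpu_vcpu[MAX_PCPUS];   /* vcpu last loaded on each pcpu */

    /* updated when a vcpu is attached to / launched on a pcpu */
    void set_pcpu_vcpu(uint32_t pcpu_id, struct vcpu *v)
    {
        per_pcpu_vcpu[pcpu_id] = v;
    }

    /* S3 path walks all pcpus with this and vmclears each returned vcpu */
    struct vcpu *get_pcpu_vcpu(uint32_t pcpu_id)
    {
        return (pcpu_id < MAX_PCPUS) ? per_pcpu_vcpu[pcpu_id] : NULL;
    }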
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
current_vcpu is not correct when there are multiple vcpus in one VM;
using it is incorrect, so remove it.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Retrieve dseed from the SeedList HOB (Hand-Off Block).
SBL passes the SeedList HOB to ACRN via MBI modules.
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Reviewed-by: Zhu Bing <bing.zhu@intel.com>
Reviewed-by: Wang Kai <kai.z.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The current implementation of per_cpu relies on several non-C99 features,
and in addition involves arbitrary pointer arithmetic, which is not
MISRA C friendly.
This patch introduces struct per_cpu_region, which holds all the per_cpu
variables. Allocation of per_cpu data regions and access to per_cpu
variables are greatly simplified, at the cost of making all per_cpu
variables accessible in all files.
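A minimal sketch of the scheme; the member names and the accessor macro are
illustrative, not the exact ACRN definitions:

    #include <stdint.h>

    #define MAX_PCPUS 8U

    /* one named member per per_cpu variable */
    struct per_cpu_region {
        uint64_t vmxon_region_pa;
        uint8_t  stack[8192];
        uint32_t lapic_id;
    };

    /* one region per pcpu, indexed by pcpu id */
    static struct per_cpu_region per_cpu_data[MAX_PCPUS];

    /* access a per_cpu variable on a given pcpu,
     * e.g. per_cpu(lapic_id, 2U) = 0x10U; */
    #define per_cpu(name, pcpu_id)  (per_cpu_data[(pcpu_id)].name)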
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Enable VMX vpid ctrl and assign a unique vpid to each vcpu
so that VMX transitions are not required to invalidate any
linear mappings or combined mappings.
SDM Vol 3 - 28.3.3.3
If EPT is in use, the logical processor associates all mappings
it creates with the value of bits 51:12 of current EPTP.
If a VMM uses different EPTP values for different guests, it may
use the same VPID for those guests. Doing so cannot result in one
guest using translations that pertain to the other.
In our UOS, the trusty world and the normal world use different
EPTPs, so we can use the same VPID for them.
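A minimal sketch of per-vcpu VPID assignment, assuming a simple monotonic
allocator (not the real assignment scheme); VPID 0 stays reserved for VMX
root operation, i.e. the hypervisor itself:

    #include <stdint.h>

    static uint16_t next_vpid = 1U;   /* 0 is reserved for the hypervisor */

    /* called once per vcpu at creation time; callers must stay below 65536 vpids */
    uint16_t allocate_vpid(void)
    {
        return next_vpid++;
    }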
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We need to know which TLB to flush: EPT or VPID.
1. Error handling for invept:
it's the same as the invvpid error handling;
change its name to be consistent with vpid.
2. Adjust the macro name for the flush-EPT-TLB request.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the current implementation, on the sbl platform, the vm0 bsp
starts in 64-bit mode, and the hv needs to prepare an init
page table for it.
In this patch series, on the sbl platform, the vm0 bsp starts
in non-paging protected mode.
This patch prepares an init gdt for the vm0 bsp on the sbl
platform.
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Translate gva2gpa in different paging modes.
Change the definition of gva2gpa:
- Return value carries the error status.
- Add a parameter for the error code on page fault.
Change the definition of vm_gva2gpa:
- Return value carries the error status.
- Add a parameter for the error code on page fault.
The resulting call shape is sketched below.
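A sketch of the new call shape; the parameter names and the stub caller are
illustrative, not the actual prototypes:

    #include <stdint.h>

    struct vcpu;   /* opaque here */

    /* status comes back in the return value; the translated GPA and a
     * page-fault error code come back through out-parameters */
    int gva2gpa(struct vcpu *vcpu, uint64_t gva, uint64_t *gpa, uint32_t *err_code);

    static int example_caller(struct vcpu *vcpu, uint64_t gva)
    {
        uint64_t gpa = 0UL;
        uint32_t err_code = 0U;
        int ret = gva2gpa(vcpu, gva, &gpa, &err_code);

        if (ret != 0) {
            return ret;    /* translation failed; err_code can feed a #PF injection */
        }
        /* ... use gpa ... */
        return 0;
    }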
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Use the number of paging levels to identify the paging mode
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
In the current implementation, the cr0/cr4 host mask values are set
according to the fixed0/fixed1 values of cr0/cr4.
In fact, the host mask only needs to cover the bits that must be trapped.
This patch adds code to support exiting long mode in CR0 write handling,
and adds some checks when modifying CR0/CR4:
- CR0_PG, CR0_PE, CR0_WP, CR0_NE are trapped for CR0.
PG and PE are trapped to track vcpu mode switches.
WP is trapped for protection info during page walks.
NE is an always-on bit.
- CR4_PSE, CR4_PAE, CR4_VMXE are trapped for CR4.
PSE and PAE are trapped to track the paging mode.
VMXE is an always-on bit.
- Reserved bits and always-off bits are not allowed to be set by the guest.
If the guest tries to set these bits, a #GP is injected on vmexit.
The resulting trap masks are sketched below.
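A sketch of the masks implied by the list above; the bit positions follow the
SDM, while the macro and variable names are illustrative:

    #include <stdint.h>

    #define CR0_PE   (1UL << 0)
    #define CR0_NE   (1UL << 5)
    #define CR0_WP   (1UL << 16)
    #define CR0_PG   (1UL << 31)

    #define CR4_PSE  (1UL << 4)
    #define CR4_PAE  (1UL << 5)
    #define CR4_VMXE (1UL << 13)

    /* only these bits are trapped; all other guest writes go through */
    static const uint64_t cr0_host_mask = CR0_PE | CR0_NE | CR0_WP | CR0_PG;
    static const uint64_t cr4_host_mask = CR4_PSE | CR4_PAE | CR4_VMXE;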
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
_Static_assert is only supported starting with the C11 standard;
see N1570 (the C11 manual), 6.4.1.
Replace _Static_assert with ASSERT.
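For illustration, the commented line below is the C11-only form, and the
function shows a C99-compatible run-time check; the ASSERT macro here is a
stand-in for the project's own macro:

    /* C11 only; the code base targets C99, hence the replacement below:
     *   _Static_assert(sizeof(uint64_t) == 8U, "unexpected uint64_t size");
     */
    #include <assert.h>
    #include <stdint.h>

    /* illustrative stand-in for the project's own ASSERT macro */
    #define ASSERT(cond, msg)  assert((cond) && (msg))

    static void size_sanity_check(void)
    {
        ASSERT(sizeof(uint64_t) == 8U, "unexpected uint64_t size");
    }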
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
vm_attr_name, vm_state_info_privilege and vm_created are not used; remove them
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vm_state_info in struct vm_arch is not used, remove it
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Guest CR3 read/write operations are not trapped.
Remove CR3 handling in cr_access_vmexit_handler.
Also remove unused API vmx_read_cr3.
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Move the functions from vmexit.c to vmx.c.
Declare the functions in vmx.h.
Rename the functions with the vmx_ prefix.
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
When we create a UOS, we don't indicate the vmid,
so we can't get the vm description from the vm
description array.
Instead we use a temporary vm description to hold the data used to
fill the vm structure when creating a UOS. It's useless once the
UOS has been created, so we don't need to maintain a vm description
array for the UOS here.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Pointer arithmetic is currently used to calculate the address of a specific
Local Vector Table (LVT) register (except LVT_CMCI) in the lapic, since the
registers are placed contiguously with fixed padding in between. However, each of
these registers is declared as a single uint32_t in struct lapic, resulting in
pointer arithmetic on a non-array pointer, which violates MISRA C requirements.
This patch refactors struct lapic by converting the LVT register fields (again
except LVT_CMCI) to an array named lvt. The LVT indices are reordered to reflect
the order of the LVT registers in hardware, and reused to index this lvt array.
The code before and after the change is semantically equivalent.
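A simplified sketch of the resulting shape; the index names and the
(unpadded) register layout here are illustrative only:

    #include <stdint.h>

    /* indices follow the hardware ordering of the LVT registers */
    enum lvt_index {
        LVT_TIMER = 0,
        LVT_THERMAL,
        LVT_PMC,
        LVT_LINT0,
        LVT_LINT1,
        LVT_ERROR,
        LVT_MAX
    };

    struct lapic_regs_sketch {
        uint32_t lvt[LVT_MAX];   /* was: separate lvt_timer, lvt_lint0, ... fields */
    };

    static uint32_t read_lvt(const struct lapic_regs_sketch *l, enum lvt_index i)
    {
        return l->lvt[i];        /* plain array indexing, MISRA-friendly */
    }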
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Use vcpu_queue_exception in vcpu_inject_gp and exception_vmexit_handler.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Add vcpu_queue_exception to queue exceptions based on SDM Vol 3 Table 6-5;
a queued exception may escalate to #DF or a triple fault
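A sketch of the escalation rule that Table 6-5 encodes, with exceptions
simplified into benign, contributory and page-fault classes; the function
shape and names are illustrative, not the actual implementation:

    #include <stdint.h>

    #define VECTOR_DF 8U

    enum exc_class { EXC_BENIGN, EXC_CONTRIBUTORY, EXC_PAGE_FAULT };

    static enum exc_class classify(uint32_t vector)
    {
        switch (vector) {
        case 0U: case 10U: case 11U: case 12U: case 13U:
            return EXC_CONTRIBUTORY;      /* #DE, #TS, #NP, #SS, #GP */
        case 14U:
            return EXC_PAGE_FAULT;        /* #PF */
        default:
            return EXC_BENIGN;
        }
    }

    /* returns the vector to actually queue, or -1 for triple fault */
    static int queue_exception(int pending_vector, uint32_t new_vector)
    {
        if (pending_vector < 0) {
            return (int)new_vector;                    /* nothing pending */
        }
        if ((uint32_t)pending_vector == VECTOR_DF) {
            return -1;                                 /* #DF pending: triple fault */
        }

        enum exc_class prev = classify((uint32_t)pending_vector);
        enum exc_class next = classify(new_vector);

        if ((prev == EXC_CONTRIBUTORY && next == EXC_CONTRIBUTORY) ||
            (prev == EXC_PAGE_FAULT   && next != EXC_BENIGN)) {
            return (int)VECTOR_DF;                     /* escalate to #DF */
        }
        return (int)new_vector;                        /* handle serially */
    }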
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
pending_intr no longer serves only interrupts but also other kinds of
requests, including TLB and TMR updates, so rename the function and
related variables accordingly.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to the syntax defined in C99, each struct/union field must have an
identifier. This patch adds names to the previously unnamed fields for C99
compatibility.
Here is a summary of the names (marked with a pair of *stars*) added.
struct trusty_mem:
union {
struct {
struct key_info key_info;
struct trusty_startup_param startup_param;
} *data*;
uint8_t page[CPU_PAGE_SIZE];
} first_page;
struct ptdev_remapping_info:
union {
struct ptdev_msi_info msi;
struct ptdev_intx_info intx;
} *ptdev_intr_info*;
union code_segment_descriptor:
uint64_t value;
struct {
union {
...
} low32;
union {
...
} high32;
} *fields*;
Similar changes are made to the following structures:
* union data_segment_descriptor,
* union system_segment_descriptor,
* union tss_64_descriptor, and
* union idt_64_descriptor
struct trace_entry:
union {
struct {
uint32_t a, b, c, d;
} *fields_32*;
struct {
uint8_t a1, a2, a3, a4;
uint8_t b1, b2, b3, b4;
uint8_t c1, c2, c3, c4;
uint8_t d1, d2, d3, d4;
} *fields_8*;
struct {
uint64_t e;
uint64_t f;
} *fields_64*;
char str[16];
} *payload*;
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
- Move the vmexit handling into vmexit_handler
- Add error handling; on failure a #GP is injected
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There are data transfers between the guest (GPA) and the hv (HPA), especially
for hypercalls from the guest.
The guest should make sure these GPAs are contiguous, but the hv cannot
assure that the HPAs mapped to these GPAs are contiguous; for example,
after enabling hugetlb, a contiguous GPA range could come from two different
2M pages.
This patch handles such cases by doing a GPA page walk during
copy_from_vm & copy_to_vm, as sketched below.
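In the sketch below, gpa2hva() is an assumed stand-in for the hypervisor's
GPA-to-HVA translation; the loop shows the per-page copy idea:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE 4096UL

    struct vm;
    void *gpa2hva(struct vm *vm, uint64_t gpa);    /* assumed translation helper */

    int copy_from_vm_sketch(struct vm *vm, void *h_dst, uint64_t gpa, size_t len)
    {
        uint8_t *dst = h_dst;

        while (len > 0U) {
            size_t in_page = PAGE_SIZE - (gpa & (PAGE_SIZE - 1UL));
            size_t chunk = (len < in_page) ? len : in_page;
            void *hva = gpa2hva(vm, gpa);

            if (hva == NULL) {
                return -1;              /* unmapped guest page */
            }
            memcpy(dst, hva, chunk);    /* copy only within this guest page */
            dst += chunk;
            gpa += chunk;
            len -= chunk;
        }
        return 0;
    }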
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
rename intr_ctx to intr_excp_ctx
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
move intr_ctx to irq.h, delete intr_ctx.h
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Changes:
1. Move io request related functions from hypercall.c to io_request.c
since they are not hypercalls;
2. Remove acrn_insert_request_nowait() as it is never used;
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Allocate all pcpus to vm0 to handle the possible AP wakeup flow for all cpus,
as we pass the original ACPI table to VM0 - that means VM0 can see all CPUs.
SOS (VM0) starts the expected CPUs through the "maxcpus=" kernel cmdline option.
During the first hypercall from the SOS, vm_fixup is called to free the
vcpus that are not expected to be enabled from VM0.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
remove sipi_from_efi_boot_service_exit & efi_deferred_wakeup_pcpu workaround
for uefi boot flow
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently a single uint64_t is used to hold the descriptor table
for the GDT/IDT. This causes stack-protector corruption under -O2.
So the descriptor_table struct is added to configure the GDT/IDT of the VMCS.
V1->V2: Move descriptor_table into the vmx.h header file
and rename its type from dt_addr_t to descriptor_table.
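For reference, a descriptor-table pointer in 64-bit mode is a 16-bit limit
followed by a 64-bit base, which a packed struct captures; this is a sketch
of that shape rather than the exact ACRN declaration:

    #include <stdint.h>

    /* 10-byte GDTR/IDTR image: 16-bit limit + 64-bit base, no padding */
    struct descriptor_table {
        uint16_t limit;
        uint64_t base;
    } __attribute__((packed));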
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
RFLAGS is touched by some inline assembly (exec_vmxon/
RFLAGS_RESTORE), so the "cc" clobber should be added; otherwise
it won't be handled correctly under the -O2 option.
Registers used as "%%XXX" in the template should also be added to the
constraints; otherwise the code may be optimized incorrectly.
An example follows.
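A small illustrative example (not the actual ACRN code) showing the "cc"
clobber on an RFLAGS-modifying asm block:

    #include <stdint.h>

    static inline void restore_rflags(uint64_t rflags)
    {
        __asm__ volatile ("pushq %0\n\t"
                          "popfq"
                          :                    /* no outputs */
                          : "r" (rflags)       /* input register holding RFLAGS */
                          : "cc", "memory");   /* flags (and ordering) are changed */
    }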
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add vlapic_create_timer/vlapic_reset_timer to set up/reset a timer.
Add vlapic_update_lvtt to disarm the timer when the timer mode changes.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Keep the timer list ordered to speed up processing of expired timers
and finding the next timer event.
Adding a timer does not reprogram the hardware timer unless the new
timer is the next timer event; see the sketch below.
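A sketch of the ordered-insert idea; the list type and the program_hw_timer()
hook are illustrative stand-ins:

    #include <stdint.h>
    #include <stddef.h>

    struct timer {
        uint64_t expire_tsc;
        struct timer *next;
    };

    static struct timer *timer_list;                   /* sorted, earliest first */

    static void program_hw_timer(uint64_t tsc) { (void)tsc; }   /* stand-in */

    void add_timer_sketch(struct timer *t)
    {
        struct timer **pp = &timer_list;

        while (*pp != NULL && (*pp)->expire_tsc <= t->expire_tsc) {
            pp = &(*pp)->next;                         /* walk to the insertion point */
        }
        t->next = *pp;
        *pp = t;

        if (timer_list == t) {
            program_hw_timer(t->expire_tsc);           /* only if it is the next event */
        }
    }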
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
V3->V4: Updated function/variable names for accuracy
V2->V3: Changed a few function/variable names to make them less confusing
V1->V2: Removed the unnecessary cache flushing
- For UEFI boot, allocate memory for the trampoline code in ACRN EFI,
and pass the pointer to the HV through efi_ctx
- For other boot paths, scan E820 to allocate memory at HV run time
- update_trampoline_code_refs() updates all references that need the
absolute PA with the actual load address
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
V2->V3: Fixed the booting issue on the MRB board and removed the restriction
on allocating memory at address 0
1) Fix the MRB booting issue
-#define CONFIG_LOW_RAM_SIZE 0x000CF000
+#define CONFIG_LOW_RAM_SIZE 0x00010000
2) Changed e820_alloc_low_memory() to handle the corner case of unaligned e820 entries
and enable it to allocate memory at address 0
+ length = end > start ? (end - start) : 0;
- /* We don't want the first page */
- if ((length == size) && (start == 0))
- continue;
3) Changed emalloc_for_low_mem() to enable allocating memory at address 0
- /* We don't want the first page */
- if (start == 0)
- start = EFI_PAGE_SIZE;
V1->V2: Moved e820_alloc_low_memory() to guest.c and added the logic to
handle unaligned E820 entries
emalloc_for_low_mem() is used if CONFIG_EFI_STUB is defined.
e820_alloc_low_memory() is used for the other cases.
In either case, the allocated memory will be marked with E820_TYPE_RESERVED
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>