V3->V4: Updated function/variable names for accuracy
V2->V3: Changed a few function/variable names to make it less confusing
V1->V2: removed the unnecessary cache flushing
- For UEFI boot, allocate memory for the trampoline code in ACRN EFI,
and pass the pointer to the HV through efi_ctx
- For other boot paths, scan the E820 map to allocate memory at HV run time
- update_trampoline_code_refs() updates all the references that need an
absolute PA with the actual load address
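A minimal sketch of the reference-patching idea, in the spirit of
update_trampoline_code_refs(); the fixup-table layout and all names below are
illustrative assumptions, not the actual ACRN definitions:

#include <stdint.h>

/* Hypothetical fixup table: offsets, within the trampoline image, of every
 * location that stores an absolute PA, emitted as if the code were linked
 * at address 0. */
extern uint32_t tramp_fixup_offsets[];
extern uint32_t tramp_fixup_count;
extern void *trampoline_base_hva;   /* HVA of the allocated low memory */

static void patch_trampoline_refs(uint64_t dest_pa)
{
        uint32_t i;

        for (i = 0U; i < tramp_fixup_count; i++) {
                /* Each reference was built relative to PA 0, so adding the
                 * actual load address turns it into a real PA. */
                uint64_t *ref = (uint64_t *)((char *)trampoline_base_hva +
                                             tramp_fixup_offsets[i]);
                *ref += dest_pa;
        }
}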
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
V2->V3: Fixed the booting issue on the MRB board and removed the restriction
against allocating memory from address 0
1) Fix the booting issue on MRB
-#define CONFIG_LOW_RAM_SIZE 0x000CF000
+#define CONFIG_LOW_RAM_SIZE 0x00010000
2) changed e820_alloc_low_memory() to handle the corner case of unaligned e820
entries and to allow allocating memory at address 0 (sketched below)
+ length = end > start ? (end - start) : 0;
- /* We don't want the first page */
- if ((length == size) && (start == 0))
- continue;
3) changed emalloc_for_low_mem() to allow allocating memory at address 0
- /* We don't want the first page */
- if (start == 0)
- start = EFI_PAGE_SIZE;
V1->V2: moved e820_alloc_low_memory() to guest.c and added the logic to
handle unaligned E820 entries
emalloc_for_low_mem() is used if CONFIG_EFI_STUB is defined.
e820_alloc_low_memory() is used for other cases
In either case, the allocated memory will be marked with E820_TYPE_RESERVED
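A minimal sketch of the page-alignment handling in the e820_alloc_low_memory()
path described above; the entry layout, macros and return convention are
illustrative assumptions, not the actual ACRN code:

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE           4096UL
#define PAGE_ALIGN_UP(x)    (((x) + PAGE_SIZE - 1UL) & ~(PAGE_SIZE - 1UL))
#define PAGE_ALIGN_DOWN(x)  ((x) & ~(PAGE_SIZE - 1UL))
#define E820_TYPE_RAM       1U

/* Illustrative E820 entry layout. */
struct e820_entry {
        uint64_t baseaddr;
        uint64_t length;
        uint32_t type;
};

/* Shrink each entry's window inward to page boundaries, then take the first
 * window that still holds 'size' bytes; address 0 is an acceptable result. */
static uint64_t alloc_low_memory(const struct e820_entry *tab, size_t n,
                                 uint64_t size)
{
        size_t i;

        for (i = 0; i < n; i++) {
                uint64_t start = PAGE_ALIGN_UP(tab[i].baseaddr);
                uint64_t end = PAGE_ALIGN_DOWN(tab[i].baseaddr + tab[i].length);
                uint64_t length = (end > start) ? (end - start) : 0UL;

                if ((tab[i].type == E820_TYPE_RAM) && (length >= size))
                        return start;   /* caller marks it E820_TYPE_RESERVED */
        }
        return ~0UL;    /* no suitable entry */
}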
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
V1->V2: removed CONFIG_LOW_RAM_START and added ".org 0" to
cpu_secondary.S
The assumption is that the trampoline code is relocated while the HV is not, so:
the trampoline code is built at address 0, and the CS register is updated
by the SIPI to reflect the correct vector
in the real mode part, extra pointers were added for the page tables and the
long jump buffer so that HV code can patch in the relocation offset
in the long mode part, use absolute addressing when referring to HV symbols,
and relative addressing for symbols within the trampoline code
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Currently config_page_table_attr() treats MMU_MEM_ATTR_READ exactly like
MMU_MEM_ATTR_BIT_READ_WRITE for PTT_HOST, so even when MMU_MEM_ATTR_WRITE
is not requested, the R/W bit in the PTE is still set.
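A minimal sketch of the intended attribute translation; the flag values and
the function name are illustrative assumptions:

#include <stdint.h>

/* Illustrative values; the real ACRN flags differ. */
#define MMU_MEM_ATTR_READ      (1U << 0)
#define MMU_MEM_ATTR_WRITE     (1U << 1)
#define IA32E_PRESENT_BIT      (1UL << 0)
#define IA32E_READ_WRITE_BIT   (1UL << 1)

static uint64_t host_pte_attr(uint32_t attr)
{
        uint64_t entry = 0UL;

        /* A read-only request only makes the entry present ... */
        if ((attr & MMU_MEM_ATTR_READ) != 0U)
                entry |= IA32E_PRESENT_BIT;
        /* ... and only an explicit write request sets the R/W bit. */
        if ((attr & MMU_MEM_ATTR_WRITE) != 0U)
                entry |= IA32E_READ_WRITE_BIT;
        return entry;
}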
Signed-off-by: Zide Chen <zide.chen@intel.com>
- According to Intel SDM 24.9.2, Vol. 3, the validity of the
"VM-exit interruption information" field should be checked before
extracting the interrupt vector.
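A minimal sketch of the check; the macro and function names are illustrative,
while the valid bit (31) and the vector field (bits 7:0) follow SDM Vol. 3,
Section 24.9.2:

#include <stdint.h>

#define VMX_INT_INFO_VALID    (1U << 31)   /* bit 31: field holds valid data */
#define VMX_INT_INFO_VECTOR   0xFFU        /* bits 7:0: interrupt vector */

static void handle_exit_interrupt(uint32_t intr_info)
{
        /* Only extract the vector when the valid bit is set. */
        if ((intr_info & VMX_INT_INFO_VALID) != 0U) {
                uint32_t vector = intr_info & VMX_INT_INFO_VECTOR;
                /* ... dispatch based on 'vector' ... */
                (void)vector;
        }
}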
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
- rename it to 'io_shared_page' to keep it consistent
with the ACRN HDL foils.
- update the related code that references this data structure.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
- Add "DBG" or "REL" to indicate the DBG build or REL build explicityly;
- Change the build time format to "%F %T".
Example:
HV version 0.1-rc4-2018-04-28 14:20:32-b2d7282-dirty DBG build by like
Change-Id: Ib410064b0a6603e3c90f30dffa722237c07fc069
Signed-off-by: Yan, Like <like.yan@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Assert on the max Px count of boot cpu data, since an invalid count shouldn't
happen if the Px data is properly initialized during the boot process.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The initialization of the iobitmap pointer should be moved out of the loop,
since the address is incremented sequentially.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
With this patch the guest can access the idle I/O port and enter idle normally.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Each VM has its own Cx data; for now we copy it from boot_cpu_info.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The Cx data is hardcoded within the HV and loaded into boot_cpu_data when the HV boots.
The patch provides A3960 SoC Cx data as an example.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Currently, pass-thru devices are managed by per-VM remapping entries
which are virtual based:
- MSI entry is identified by virt_bdf+msix_index
- INTx entry is identified by virt_pin+vpin_src
It works, but it's not a good design for physical resource management; for
example, a physical IOAPIC pin could belong to different VMs' INTx entries,
so the Device Model must make sure there is no resource conflict at the
application level.
This patch changes the design from virtual to physical based:
- MSI entry is identified by phys_bdf+msix_index
- INTx entry is identified by phys_pin
The physical resource is directly managed in the hypervisor; a failure to add
an entry is detected by the hypervisor and an error is returned.
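A rough sketch of the physical-based keys described above; the structure
layout is an illustrative assumption, not the actual ACRN definition:

#include <stdint.h>
#include <stdbool.h>

/* MSI entries are keyed by the physical BDF plus the MSI-X table index;
 * INTx entries are keyed by the physical IOAPIC pin. */
struct ptdev_msi_key {
        uint16_t phys_bdf;
        uint32_t msix_index;
};

struct ptdev_intx_key {
        uint32_t phys_pin;
};

/* With physical keys, two VMs requesting the same physical pin collide
 * inside the hypervisor, so the conflict is reported there rather than
 * being left to the Device Model. */
static bool intx_keys_conflict(struct ptdev_intx_key a, struct ptdev_intx_key b)
{
        return a.phys_pin == b.phys_pin;
}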
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
add initialize_timer to initialize or reset a timer;
add_timer adds a timer to the corresponding physical CPU's timer list;
del_timer deletes a timer from the corresponding physical CPU's timer list.
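A hypothetical usage sketch of the three calls named above; the signatures
and parameters are assumptions for illustration only, not the actual ACRN
prototypes:

#include <stdint.h>
#include <stddef.h>

struct timer;                                    /* opaque here */
typedef void (*timer_handle_t)(void *data);

/* Assumed prototypes. */
void initialize_timer(struct timer *timer, timer_handle_t func, void *data,
                      uint64_t fire_tsc, uint64_t period_in_cycle);
int add_timer(struct timer *timer);    /* queue on this pcpu's timer list */
void del_timer(struct timer *timer);   /* remove from that list */

static void on_tick(void *data) { (void)data; }

static void timer_example(struct timer *t, uint64_t now_tsc)
{
        /* one-shot timer, one million TSC cycles from now */
        initialize_timer(t, on_tick, NULL, now_tsc + 1000000UL, 0UL);
        if (add_timer(t) == 0)
                del_timer(t);    /* cancel before tearing the owner down */
}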
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Get the TSC frequency from CPUID leaf 0x15 if it is supported; otherwise
calibrate the TSC with the PIT timer.
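A minimal sketch of the CPUID 0x15 path; the helper is an assumption, while
the leaf semantics (EAX = ratio denominator, EBX = ratio numerator, ECX =
crystal clock in Hz, any of which may be 0) come from the SDM:

#include <stdint.h>

static inline void cpuid_leaf(uint32_t leaf, uint32_t *a, uint32_t *b,
                              uint32_t *c, uint32_t *d)
{
        __asm__ volatile ("cpuid"
                          : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                          : "a"(leaf), "c"(0U));
}

/* Returns the TSC frequency in Hz, or 0 if leaf 0x15 can't provide it,
 * in which case the PIT-based calibration is used instead. */
static uint64_t tsc_hz_from_cpuid_0x15(void)
{
        uint32_t eax, ebx, ecx, edx;

        cpuid_leaf(0x15U, &eax, &ebx, &ecx, &edx);
        if ((eax == 0U) || (ebx == 0U) || (ecx == 0U))
                return 0UL;

        /* TSC Hz = crystal Hz * (numerator / denominator) */
        return ((uint64_t)ecx * ebx) / eax;
}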
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If a guest reboot is issued before the trusty init hypercall is issued,
we shouldn't destroy the EPT for trusty memory because the EPT is not
created yet.
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Using eax would truncate the high 32-bit part of a 64-bit virtual address.
And since the type of sync is unsigned long, use rbx instead of ebx.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Reviewed-by: Zhao Yakui <yakui.zhao@intel.com>
This cleans up the IOAPIC MMIO access: use the same API to
access the IOAPIC registers. It also helps to avoid compiler
optimization in direct access mode (volatile is already applied in
mmio_read_long/mmio_write_long).
V1->V2: Follow Fengwei's suggestion to use the mmio_read/write_long
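A minimal sketch of the indirect IOAPIC register access through shared MMIO
helpers; the 0x00/0x10 offsets are the standard IOREGSEL/IOWIN window, while
the helper definitions below are illustrative assumptions:

#include <stdint.h>

#define IOAPIC_REGSEL   0x00U   /* index register */
#define IOAPIC_WINDOW   0x10U   /* data register */

static inline uint32_t mmio_read_long(const volatile void *addr)
{
        return *(const volatile uint32_t *)addr;
}

static inline void mmio_write_long(uint32_t val, volatile void *addr)
{
        *(volatile uint32_t *)addr = val;
}

static uint32_t ioapic_read_reg(void *ioapic_base, uint8_t reg)
{
        /* select the register, then read it through the window */
        mmio_write_long(reg, (char *)ioapic_base + IOAPIC_REGSEL);
        return mmio_read_long((char *)ioapic_base + IOAPIC_WINDOW);
}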
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
For now, just add some basic feature/capability detection (not all). vAPIC
is not added here, because if we must support vAPIC, then the code paths for
the non-vAPIC case, like MMIO APIC r/w, must be removed.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Split pm.c out of cpu_state_tbl.c to hold guest power management related
functions; keep cpu_state_tbl.c for the host CPU state table and
related functions.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The timer list is operated per-CPU, and no interrupt
service routine operates on it either, so the spinlock is unnecessary.
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
adding "hugepagesz=1G" and "hugepages=X" into SOS cmdline, for X, current
strategy is making it equal
e820_mem.total_mem_size -CONFIG_REMAIN_1G_PAGES
if CONFIG_REMAIN_1G_PAGES is not set, it will use 3 by default.
CONFIG_CMA is added to indicate using cma cmdline option for SOS kernel,
by default system will use hugetlb cmdline option if no CONFIG_CMA defined.
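A small sketch of the cmdline construction described above; it assumes
total_mem_size is in bytes (converted to 1 GiB pages first), and the function
name is illustrative:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#ifndef CONFIG_REMAIN_1G_PAGES
#define CONFIG_REMAIN_1G_PAGES 3UL   /* default when not configured */
#endif

static void append_hugepage_args(char *buf, size_t len, uint64_t total_mem_size)
{
        uint64_t pages_1g = total_mem_size >> 30;    /* bytes -> 1 GiB pages */
        uint64_t x = (pages_1g > CONFIG_REMAIN_1G_PAGES) ?
                     (pages_1g - CONFIG_REMAIN_1G_PAGES) : 0UL;

        (void)snprintf(buf, len, " hugepagesz=1G hugepages=%llu",
                       (unsigned long long)x);
}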
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
change its input from map_params to page_table_type, and make it a
public API.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Before, only the AP could run the "rdtscp" instruction; running it on the BSP
would cause an "illegal instruction" fault. Now the BSP & AP are aligned.
Also remove duplicated code.
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
1. The exception vector and other information
can be extracted from the 'VM-Exit Interrupt-Information'
field of the VMCS only if bit 31 (Valid) is set.
- Intel SDM 24.9.2, Vol. 3
2. Rename 'exit_interrupt_info' to 'idt_vectoring_info'
in 'struct vcpu_arch', which is consistent with
SDM 24.9.3, Vol. 3.
3. The 'IDT-vectoring information' field in the VMCS is 32-bit
- Intel SDM 24.9.3, Vol. 3
Update the type of 'idt_vectoring_info' in
'struct vcpu_arch' from 'uint64_t' to 'uint32_t'.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Switch references to the physical addresses of devices such as the LAPIC,
IOAPIC, VT-d and UART over to virtual addresses.
Use the physical address of the PML4 to write CR3.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
SDM 24.9.1 Volume3:
- 'Exit reason' field in VMCS is 32 bits.
SDM 24.9.4 in Volume3
- 'VM-exit instruction length' field
in VMCS is 32 bits.
This patch redefines the data types of the above fields
in 'struct vcpu_arch' and updates the code using these
two fields.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Since we cache the CPU features/capabilities in boot_cpu_data at boot
initialization, there is no need to get them using cpuid again.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Update X86_FEATURE_OSXSAVE when enabled and replace is_xsave_supported
with cpu_has_cap(X86_FEATURE_OSXSAVE).
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add a cpu_has_cap API for CPU feature/capability detection instead of
adding a get_xxx_cap for each feature/capability.
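A minimal sketch of the feature-bitmap idea behind cpu_has_cap; the bitmap
layout and the feature encoding are assumptions for illustration:

#include <stdint.h>
#include <stdbool.h>

#define FEATURE_WORDS 4U
/* Hypothetical encoding: CPUID word index in the upper bits, bit index in
 * the low 5 bits, e.g. X86_FEATURE(word, bit). */
#define X86_FEATURE(word, bit)   (((word) << 5) + (bit))

static uint32_t cpu_caps[FEATURE_WORDS];   /* filled once from CPUID at boot */

static bool cpu_has_cap(uint32_t feature)
{
        uint32_t word = feature >> 5;
        uint32_t bit = feature & 0x1FU;

        if (word >= FEATURE_WORDS)
                return false;
        return (cpu_caps[word] & (1U << bit)) != 0U;
}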
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
remove the data definitions of mmio_addr_t, vaddr_t, paddr_t,
and ioport_t.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
This address may be invalid if a hostile address was set
in the hypercall 'HC_SET_IOREQ_BUFFER'. It should be validated
before use.
Update:
-- save the HVA of the guest OS's request buffer in the hypervisor
-- change the type of 'req_buf' from 'uint64_t' to 'void *'
-- remove the HPA to HVA translation code when using this addr.
-- use error numbers instead of -1 when returning error cases.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
This is a workaround patch to resolve the Disti performance issue.
In kernel 4.14, PAT initialization is skipped if MTRR is not enabled,
while the graphics driver needs to set WC on GGTT memory to accelerate memcpy;
if PAT is not initialized, the default PAT register treats UC- as
uncacheable, which impacts gfx performance. Change the PAT default
register value to treat UC- as WC to work around this problem.
Revert me when the PAT/MTRR strong correlation is removed in the kernel.
Signed-off-by: Fei Jiang <fei.jiang@intel.com>
Switch all the referenced virtual addresses to physical addresses,
including the EPT mapping, VMCS fields, vmxon, vmclear, and vmptrld.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Reviewed-by: Chen, Jason CJ <jason.cj.chen@intel.com>
Reviewed-by: Yakui, Zhao <yakui.zhao@intel.com>
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Before using a node of list, initialize it.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Reviewed-by: Yakui, Zhao <yakui.zhao@intel.com>
Reviewed-by: Chen, Jason CJ <jason.cj.chen@intel.com>
Guest OS rdtsc/rdtscp doesn't trap into hypervisor, so remove them.
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If allocation of the page structure for root_table or context_table fails,
ASSERT and return.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
add necessary HPA2HVA/HVA2HPA transition for context_table_addr
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
add necessary HPA2HVA/HVA2HPA transition for root_table_addr
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
When the host MMU is updated, the TLB & paging-structure caches should be
invalidated. Currently, no MMU update is done after any AP starts, so the
simplest way (to avoid a shootdown) is to just do invlpg on the BSP.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
- change the input param of check_page_table_present from struct map_params
to page_table_type
- check EPT present bits misconfiguration in check_page_table_present
- change the var "table_present" to the more suitable name "entry_present"
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Check the IA32_VMX_EPT_VPID_CAP MSR to see whether the EPT execute-only
capability is supported or not.
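A minimal sketch of the capability check; MSR index 0x48C and bit 0
(execute-only translations supported) come from the SDM, while the rdmsr
wrapper is an assumption:

#include <stdint.h>
#include <stdbool.h>

#define MSR_IA32_VMX_EPT_VPID_CAP   0x48CU
#define VMX_EPT_EXECUTE_ONLY        (1UL << 0)

static inline uint64_t msr_read(uint32_t msr)
{
        uint32_t lo, hi;

        __asm__ volatile ("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
}

static bool ept_execute_only_supported(void)
{
        return (msr_read(MSR_IA32_VMX_EPT_VPID_CAP) & VMX_EPT_EXECUTE_ONLY) != 0UL;
}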
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
cpu_halt actually means cpu dead in the current code, so change it to a
clearer name.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Use pcpu_active_bitmap to indicate which CPU is active.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
The APIC-access page address written into the VMCS should be an HPA.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The virtual-APIC page address written into the VMCS should be an HPA.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Guest writes TSC: cache the offset into run_context.tsc_offset;
guest reads TSC: use run_context.tsc_offset to calculate guest_tsc.
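A minimal sketch of the offset bookkeeping; the rdtsc wrapper and the
run_context layout are illustrative names based on the text above:

#include <stdint.h>

struct run_context {
        uint64_t tsc_offset;
};

static inline uint64_t rdtsc(void)
{
        uint32_t lo, hi;

        __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}

/* Guest writes the TSC MSR: remember only the delta against the host TSC. */
static void guest_write_tsc(struct run_context *ctx, uint64_t guest_tsc)
{
        ctx->tsc_offset = guest_tsc - rdtsc();
}

/* Guest reads the TSC: host TSC plus the cached offset. */
static uint64_t guest_read_tsc(const struct run_context *ctx)
{
        return rdtsc() + ctx->tsc_offset;
}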
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Before this patch, guest temporary page tables were hardcoded
at compile time, and the HV copied these page tables to the guest before guest
launch.
This patch creates temporary page tables at runtime for the range 0~4G,
and creates page tables to cover the new range (511G~511G+16M) required
by trusty.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Per the trusty design, trusty requires a monotonically increasing
secure tick (TSC) at run time. This secure tick will be used
to mitigate password/PIN brute-force attacks, control key expiration,
etc.
Currently, TSC_OFFSET is enabled, and the guest gets
(host_tsc + tsc_offset) when executing rdtsc/rdtscp/rdmsr to
acquire the TSC value. The host_tsc always keeps increasing
during runtime.
So initializing trusty's tsc_offset to 0 ensures the
secure tick feature.
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The eptp should be recorded as a PA.
This patch changes nworld_eptp, sworld_eptp and the m2p eptp to PA type;
the necessary HPA2HVA/HVA2HPA translations are used for them after the change.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- reads/writes of page table entries should use a VA, which is defined as "void *"
- the address data in page table entries should use a PA, which is defined as
"uint64_t"
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The CPU model name "Intel(R) Celeron(R) CPU J3455 @ 1.50GHz" is used for
the APL NUC, which is in the ACRN official support list.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Currently ACRN partitions CPUs between the SOS and UOS, so the default
policy is to allow the guest to manage CPU Px states. However, we do
not blindly pass through the perf_ctrl MSR to the guest. Instead, guest access
is always trapped and validated by the ACRN hypervisor before being forwarded
to the pCPU. Doing so leaves room for future power budget control in the
hypervisor, e.g. limiting the turbo percentage that a CPU can enter.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
VHM can use this interface to pass per-CPU power state data
to the guest per its request.
For now the vCPU power state is per-VM; this could be changed if
per-CPU power state support is required in the future.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The vm px info would be used for guest Pstate control.
Currently it is copied from host boot cpu.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The patch takes the Intel ATOM A3960 as an example: it hard codes all the Px
info needed for Px control into the ACRN HV and loads it during the boot process.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The CPU model name is used to determine which hard-coded data
needs to be loaded into boot_cpu_data.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
All callers of fetch_page_table_offset should already make sure
it will not reach an unknown table_level, so just panic here.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
- walk_paging_struct should return sub_table_addr; if something goes wrong,
it returns NULL
- update_page_table_entry should return adjusted_size; if something goes wrong,
it returns 0
The change matters for the release build, because there the ASSERT in
walk_paging_struct is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- remove the unused map_params in get_table_entry
- add error returns for both, which matters for the release build, because
there the ASSERT in get_table_entry is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The function break_page_table should return next_level_page_size; if
something goes wrong, it returns 0.
The change matters for the release build, because there the ASSERT()
in break_page_table is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The function map_mem_region should return mapped_size; if something goes wrong,
it returns 0.
The change matters for the release build, because there the ASSERT()
in map_mem_region is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add error returns for all of them, which matters for the release build,
because there the ASSERT in modify_paging is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
malloc: allocates a block of memory; the contents of the block are undefined.
calloc: allocates a block of memory for an array of num elements and initializes all its bits to zero.
Signed-off-by: yechunliang <yechunliangcn@163.com>
Add the check_continuous_hpa API:
when creating the secure world, if the physical
address range is not contiguous, assert.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch adds the API destroy_secure_world, which will:
-- clear the trusty memory space
-- restore the memory to the SOS EPT mapping
It is called when the VM is destroyed; furthermore, the EPT of the
Secure World is destroyed as well.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Remove TODO comments since the work has been done below the comments.
Typo fix: startup_info --> startup_param.
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. VMX is not supported for the guest OS.
2. For MSRs outside the controlled set, inject #GP to the guest OS.
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
--add the free_paging_struct API, used to free page tables;
it clears the memory before freeing.
--add HPA2HVA translation when freeing EPT memory
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Before this patch, when the HV accessed the PML4E of the secure world while
the PML4 did not exist, it would access a null pointer.
Fix as follows:
before copying the PDPTE, allocate memory and write the PML4E,
then copy the PDPTE.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Per SDM Vol. 2:
If CPUID.80000008H:EAX[7:0] is supported, the maximum physical address
number supported should come from this field.
This patch gets the maximum physical address number from CPUID leaf
0x80000008 and calculates the physical address mask when the leaf is
available.
Currently ACRN does not support platforms w/o this leaf and will panic
on such platforms.
Also call get_cpu_capabilities() earlier since the physical address mask
is required for initializing paging.
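A minimal sketch of the derivation; the cpuid helper is an assumption, while
CPUID.80000000H (highest extended leaf) and CPUID.80000008H:EAX[7:0]
(physical-address width) follow the SDM:

#include <stdint.h>

static inline void cpuid_leaf(uint32_t leaf, uint32_t *a, uint32_t *b,
                              uint32_t *c, uint32_t *d)
{
        __asm__ volatile ("cpuid"
                          : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                          : "a"(leaf), "c"(0U));
}

/* Returns a mask of the supported physical address bits, or 0 (ACRN would
 * panic) when leaf 0x80000008 is not available. */
static uint64_t physical_address_mask(void)
{
        uint32_t eax, ebx, ecx, edx;

        cpuid_leaf(0x80000000U, &eax, &ebx, &ecx, &edx);
        if (eax < 0x80000008U)
                return 0UL;

        cpuid_leaf(0x80000008U, &eax, &ebx, &ecx, &edx);
        return (1UL << (eax & 0xFFU)) - 1UL;   /* e.g. 39 bits -> 0x7FFFFFFFFF */
}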
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Per SDM:
When CPUID executes with EAX set to 80000000H, the processor returns
the highest value the processor recognizes for returning extended
processor information. The value is returned in the EAX register and is
processor specific.
This patch caches this value in the global cpuinfo_x86.cpuid_leaves. This
value will be used to check the availability of any CPUID extended
function.
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Enable TSC offsetting in VMX, so if the TSC MSR is changed by the guest OS,
a calculated value is written into the TSC-offset field and the host TSC is
not changed.
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Zhao Yakui <yakui.zhao@intel.com>
Reviewed-by: He, Min <min.he@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Enable the XSAVE feature and pass it through to guests.
Update based on v2:
- enable host xsave before exposing it to guests.
- add validation of the value to be set in 'xcr0' before calling xsetbv
when handling the xsetbv vmexit.
- tested in the SOS guest: created two threads doing different
FP calculations; the test code runs in user land of the SOS.
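A minimal sketch of the xcr0 validation on the xsetbv vmexit; the bit
positions (0 = x87, 1 = SSE, 2 = AVX) follow the SDM, and the supported-mask
argument would in practice come from CPUID.(EAX=0DH, ECX=0):

#include <stdint.h>
#include <stdbool.h>

#define XCR0_X87   (1UL << 0)
#define XCR0_SSE   (1UL << 1)
#define XCR0_AVX   (1UL << 2)

static bool xcr0_value_valid(uint64_t xcr0, uint64_t supported)
{
        if ((xcr0 & ~supported) != 0UL)    /* no unsupported bits */
                return false;
        if ((xcr0 & XCR0_X87) == 0UL)      /* x87 state must stay enabled */
                return false;
        if (((xcr0 & XCR0_AVX) != 0UL) && ((xcr0 & XCR0_SSE) == 0UL))
                return false;              /* AVX requires SSE */
        return true;
}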
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to C99:
The empty list in a function declarator that is not part of a definition of
that function specifies that no information about the number or types of the
parameters is supplied.
This means gcc is happy with the following code, which is undesirable.
void foo(); /* declaration with an empty parameter list */
void bar() {
foo(); /* OK */
foo(1); /* OK */
foo(1, 2); /* OK */
}
This patch fixes declarations of functions with empty parameter lists by adding
an unnamed parameter of type void, which is the standard way to specify that a
function has no parameters. The following coccinelle script is used.
@@
type T;
identifier f;
@@
-T f();
+T f(void);
New compilation errors are fixed accordingly.
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
A bugfix for saving secure world memory info.
There may be multiple UOSes; each VM has its own secure
world and normal world, so the memory info should be saved per VM.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
On some toolchain configurations direct struct assignments will
default to a memcpy operation which is not present in this
environment, so explicitly use the internal memcpy_s function.
Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
Save the efi_ctx pointer in the mi_drivers_addr field of the
multiboot structure and pass it to the hypervisor, instead of
saving it in register RDX (the third default parameter in the
64-bit calling convention).
With this method, we stay compatible with the original
32-bit boot parameter passing method and there is no need to
enlarge the boot_regs array in the hypervisor.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Currently, the serial log is printed through I/O port 0x3f8.
The Secure World also prints its serial log through port 0x3f8, so
remove the ASSERT for Secure World booting.
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
For trusty bring-up, key_info is needed.
Currently, the bootloader does not transfer key_info to the hypervisor,
so in this patch a dummy key_info is used temporarily.
Derive the vSeed from the dSeed before trusty startup; the vSeed is
bound to the UUID of each VM.
Remove key_info from the sworld_control structure.
UOS_Loader triggers the boot of Trusty-OS via HC_INITIALIZE_TRUSTY.
UOS_Loader loads the trusty image and allocates runtime memory for
trusty, then transfers this information, including the trusty runtime
memory base address, entry address and memory size, to the hypervisor
through the trusty_boot_param structure.
In the hypervisor, once HC_INITIALIZE_TRUSTY is received, it creates the
EPT for the Secure World, saves the Normal World vCPU context, initializes
the Secure World vCPU context and switches the world state to Secure World.
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
For ARM, the SMC instruction is used to generate a synchronous
exception that is handled by Secure Monitor code running in EL3.
In the ARM architecture, synchronous control is transferred between
the normal Non-secure state and the Secure state through Secure
Monitor Call exceptions. SMC exceptions are generated by the SMC
instruction and handled by the Secure Monitor. The operation of
the Secure Monitor is determined by the parameters that are passed
in through registers.
For ACRN, the hypervisor simulates SMC through a hypercall to switch the
vCPU state between the Normal World and the Secure World.
There are 4 registers (RDI, RSI, RDX, RBX) reserved for parameter
passing between the Normal World and the Secure World.
For the release version, the vuart is not used, so pin 4 is not used
by the hypervisor.
This patch adds a check of vm0->vuart to distinguish the two cases.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The functions ptdev_build_physical_rte & activate_physical_ioapic
don't need to get parameters like phys_irq, ptdev_intx_info or vector
from the caller; instead they can derive them from the entry.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
This patch is a preparation for changing the ptdev remapping entries from
virtual to physical based; it changes the ptdev_lock from per-VM to
global, as entries based on physical mode are a global resource.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
This patch is a preparation for changing the ptdev remapping entries from
virtual to physical based; it changes the ptdev_list from per-VM to
global, as entries based on physical mode are a global resource.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Generate all common virtual cpuid entries for flexible support of
guest VCPUID emulation, by decoupling from PCPUID.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Chen, Jason CJ <jason.cj.chen@intel.com>
Microcode update from the UOS is disabled.
Microcode version checking is available for both the SOS and UOS.
There are two TODOs for this patch:
1. This patch only updates the uCode on the pCPUs the SOS owns. For the
pCPUs not owned by the SOS, the uCode is not updated. To handle
this gap, we will have the SOS own all pCPUs at boot time, so that
all pCPUs can have their uCode updated. This will be handled
in the patch that enables the SOS to own all pCPUs at boot time.
2. gva2gpa currently doesn't check for possible page table walk failure.
The failure check will be added to gva2gpa in a different patch.
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Add a global boot_cpu_data to cache the common CPU capabilities/features
for CPU capability/feature detection.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch detects and enables only the APICv features which
are actually supported by the processor, instead of turning on
all features by default.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
-If "virtual-interrupt delivery" VM-execution control is 0,
Processor will causes an APIC-write VM exit if page offset
is 0xB0 (EOI), SDM Vol3, Chapter 29.4.3
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
This patch saves the native LAPIC configuration and restores it to vm0's vlapic
before it runs, then does the HPET timer interrupt injection through the vlapic
interface -- this will not mess up the vlapic, and we can see the HPET
timer interrupt coming continuously.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
We added a UEFI stub for the HV, and want vm0 to continue running under the
UEFI environment to boot another UEFI payload (osloader or bzImage).
During this, the UEFI timer IRQ needs to be handled elegantly.
There are 3 types of UEFI timer:
1. 8254 based on IRQ0 of the PIC
2. HPET based on the IOAPIC
3. HPET based on MSI
Currently, we only support type 3 (HPET+MSI). But we were following an
incorrect flow to handle this timer interrupt:
- we set VMX_ENTRY_INT_INFO_FIELD directly if a timer interrupt happened
before vCPU launch, which messes up its vlapic and finally
causes the HPET timer to stop.
This patch removes this incorrect approach; the new approach
follows in the next patch.
According to the explanation of pref_address
in Documentation/x86/boot.txt, a relocating bootloader
should attempt to load the kernel at pref_address if possible.
But since a non-relocatable kernel will unconditionally
move itself to run at pref_address, there is no need for the
bootloader to copy the kernel to pref_address.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
info->phys_pin needs to be used by ptdev_build_native_rte when updating the
entry.
TODO: currently the ptdev entry is virtual based; the better solution would
be physical based.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
It is possible that VM entry fails in the vmresume instruction under some
scenarios. Execution then falls through to the next instruction following
vmresume; in such a case vmlaunch is called again.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
1. refine multiboot related code, move to /boot.
2. firmware files and ramdisk can be stitched in iasImage;
and they will be loaded as multiboot modules.
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Add 'CPU_PAGE_MASK', used for address calculation.
Change IA32E_REF_MASK from 0x7ffffffffffff000 to 0x000ffffffffff000
for MMU/EPT entries: bits 62:52 are ignored and bit 63 is VE/XD, so
if we want to obtain the address from an MMU/EPT entry, we need to clear
bits 63:52 with IA32E_REF_MASK.
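A minimal sketch of the masking described above; the macro value is the one
from the text, and the helper name is illustrative:

#include <stdint.h>

#define IA32E_REF_MASK   0x000FFFFFFFFFF000UL   /* keeps bits 51:12 only */

/* Extract the physical address from an MMU/EPT entry: clearing bits 63:52
 * drops XD/VE and the ignored bits, clearing bits 11:0 drops the attribute
 * bits. */
static uint64_t entry_to_paddr(uint64_t entry)
{
        return entry & IA32E_REF_MASK;
}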
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
This patch prepares for enabling the secure world feature.
This API creates a new EPTP for the secure world, whose PDPT
entries are copied from the normal world; the PML4/PDPT for the secure
world are separate from the Normal World, while the PD/PT are shared
between the Secure World's EPT and the Normal World's EPT. The Secure
World can access the Normal World's memory, but the Normal World cannot
access the Secure World's memory.
This function implements:
-- Unmap the specific memory from the guest EPT mapping
-- Copy the PDPT from the Normal World to the Secure World
-- Map the specific memory for the Secure World
-- Unmap the specific memory from the SOS EPT mapping
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
On the APL NUC board (CPU family: 0x6, model: 92), the monitor is buggy;
we can't use a memory monitor to wake up a CPU core from mwait.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
add key info structure
add sworld_eptp in vm structure, and rename ept->nworld_eptp
add secure world control structure
Change-Id:
Tracked-On:220921
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>