Commit Graph

139 Commits

Author SHA1 Message Date
Yin Fengwei 0f9d9641d4 hv: add function to return to VM0
Emulate VM0 resume from S3 state:
 - reset BSP of VM0
 - set the BSP entry to saved VM0 wakeup vec and set BSP to real mode
 - start BSP

To match the trampoline_spinlock release on the ACRN Sx resume path,
acquire trampoline_spinlock if the ACRN Sx entry fails.
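
A minimal sketch of this flow (all helper and field names here are
hypothetical, not the actual ACRN API):

    /* emulate VM0 resume from S3: reset the BSP, enter at the wakeup vector */
    struct vcpu *bsp = get_primary_vcpu(vm0);   /* hypothetical lookup */

    reset_vcpu(bsp);                            /* reset BSP of VM0 */
    set_vcpu_entry(bsp, saved_wakeup_vec);      /* saved VM0 wakeup vector */
    set_vcpu_mode(bsp, CPU_MODE_REAL);          /* BSP starts in real mode */
    start_vcpu(bsp);                            /* start BSP */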

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-29 00:50:01 +08:00
Yin Fengwei d34700a1ae hv: prepare for Sx (S3/S5) support in ACRN.
A couple of small changes merged in this change:
 - export main_entry, trampoline_spinlock and stop_cpus.
 - change vm_resume() name to resume_vm()
 - change resume_console_enable() name to resume_console()
 - extend reset_vcpu to reset more fields of vcpu

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-29 00:50:01 +08:00
Junjie Mao 392542310f HV: treewide: convert suffix ULL to UL
It is already assumed that ''long'' is 8 bytes, and thus there is no
need to use ULL to indicate an 8-byte unsigned constant.

This patch changes all ULL suffixes found in the hypervisor to UL.
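
For illustration, a typical hunk of this conversion looks like the
following (the constant is made up):

    -#define MEM_4G    0x100000000ULL
    +#define MEM_4G    0x100000000UL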

Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-29 00:50:01 +08:00
Xiangyang Wu c585172492 Rename phy_cpu_num as phys_cpu_num
phys_cpu_num is more widely used than phy_cpu_num; update all
occurrences via command.

Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-22 16:12:52 +08:00
Yin Fengwei 3892bd0455 hv: refine the address used in sbl multiboot code
Update the structure definition to define the address type
(HVA vs HPA vs GPA) explicitly.

Convert the address to an HVA before accessing a GPA/HPA-typed address.
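
A sketch of the idea (field and helper names are assumptions): the
address space is spelled out per field, and only HVAs are dereferenced:

    struct image_info {
        void     *src_hva;    /* HVA: directly addressable by the HV */
        uint64_t  dst_gpa;    /* GPA: must be converted before access */
        uint64_t  base_hpa;   /* HPA: must be converted before access */
    };

    /* convert a GPA to an HVA before touching the memory */
    void *dst = HPA2HVA(gpa2hpa(vm, image.dst_gpa));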

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
2018-06-22 16:12:24 +08:00
Li, Fei1 2e535855ce hv: remove config_page_table_attr
Before we set the page table, we should know the attribute, so move
configuring the page table attribute out of modify_paging.

Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-22 16:12:01 +08:00
Huihuang Shi 218a0a8b5d modified struct to fix "negative shift"
The width member in struct e820_entries can be declared as uint32_t
(its value is never negative) to avoid a negative shift.
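
An illustrative hunk (struct layout simplified):

    struct e820_entries {
    -    int32_t  width;
    +    uint32_t width;   /* never negative, so (1UL << width) is well-defined */
    };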

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-22 12:18:45 +08:00
Huihuang Shi 58672cb562 fix "negative shift"
MISRA C does not allow negative shifts; change any potentially signed
value to an unsigned value.

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-22 12:18:45 +08:00
Xiangyang Wu 3027bfab10 HV: treewide: enforce unsignedness of pcpu_id
In the hypervisor, the physical cpu id is defined as "int" or
"uint32_t" in various places, so the static analysis tool reports
sign conversion issues about the physical cpu id (pcpu_id).
Sign conversion violates the rules of MISRA C:2012.

In this patch, define the physical cpu id as "uint16_t" for all
modules in the hypervisor and change the related code. The valid
range of pcpu_id is 0~65534; INVALID_PCPU_ID is defined as the
invalid pcpu_id for error detection, and BROADCAST_PCPU_ID is the
broadcast pcpu_id used to notify all valid pcpus.

The pcpu_id in struct vcpu and the vcpu_id are still of "int" type;
this will be fixed in another patch.

V1-->V2:
    *  Change the type of pcpu_id from uint32_t to uint16_t;
    *  Define INVALID_PCPU_ID for error detection;
    *  Define BROADCAST_PCPU_ID to notify all valid pcpu.

V2-->V3:
    *  Update comments for INVALID_PCPU_ID and BROADCAST_PCPU_ID;
    *  Update additional pcpu_id;
    *  Convert hexadecimals to unsigned to meet the type of pcpu_id;
    *  Clean up for MIN_PCPU_ID and MAX_PCPU_ID, they will be
       defined by configuration.
Note: fix a bug in init_lapic(): the pcpu_id shall be less than 8;
this is a constraint of the current init_lapic() implementation.
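
A sketch of the resulting definitions (the exact values are
illustrative):

    #define INVALID_PCPU_ID    0xffffU    /* for error detection */
    #define BROADCAST_PCPU_ID  0xfffeU    /* notify all valid pcpus */

    uint16_t pcpu_id = get_cpu_id();      /* pcpu_id is now uint16_t */
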
Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-21 16:59:21 +08:00
Junjie Mao aa505a28bb HV: treewide: convert hexadecimals used in bitops to unsigned
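An illustrative hunk of the conversion (the constant is made up):

    -    ctl |= 0x100;
    +    ctl |= 0x100U;
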
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2018-06-21 13:12:39 +08:00
Junjie Mao cdd38d0bc3 HV: msr: convert hexadecimals used in bitops to unsigned
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2018-06-21 13:12:39 +08:00
Junjie Mao 41a1035f9b HV: irq: convert hexadecimals used in bitops to unsigned
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2018-06-21 13:12:39 +08:00
Yonghua Huang 32fccb2f43 HV: 'vlapic_set_local_intr()' code cleanup
change the argument 'cpu_id' to 'vcpu_id'

Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2018-06-20 15:06:49 +08:00
Huihuang Shi cb56086239 HV:guest:fix "expression is not Boolean"
MISRA C explicitly requires that expressions in branch statements
(if, while, ...) be boolean.
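
A typical fix looks like this (the variable is made up):

    -    if (flags)
    +    if (flags != 0U)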

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-20 14:19:47 +08:00
Huihuang Shi be0f5e6c16 HV:treewide:fix "expression is not Boolean"
MISRA C explicitly requires that expressions in branch statements
(if, while, ...) be boolean.

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-20 14:19:47 +08:00
Huihuang Shi fe0314e8c3 HV:header:fix "expression is not Boolean"
MISRA C explicitly requires that expressions in branch statements
(if, while, ...) be boolean.

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-20 14:19:47 +08:00
Kaige Fu 1f8f1a4ecb HV: fix unused warning at RELEASE version
We get the following warnings when building acrn as a RELEASE version.
This patch fixes those warnings.

No functional change.

...
arch/x86/cpu.c: In function ‘stop_cpus’:
arch/x86/cpu.c:727:20: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   if (get_cpu_id() == i) /* avoid offline itself */
                    ^~
arch/x86/vtd.c: In function ‘dmar_enable_translation’:
arch/x86/vtd.c:84:12: warning: unused variable ‘start’ [-Wunused-variable]
   uint64_t start = rdtsc();                       \
...
arch/x86/guest/instr_emul.c: In function ‘get_gla’:
arch/x86/guest/instr_emul.c:615:6: warning: variable ‘error’ set but not used [-Wunused-but-set-variable]
  int error;
...
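
For example, the sign-compare warning above disappears once the loop
index matches the unsigned type returned by get_cpu_id() (a sketch,
not the exact hunk):

    -    int i;
    +    uint16_t i;
         ...
         if (get_cpu_id() == i)  /* avoid offline itself */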

Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-20 12:27:58 +08:00
Huihuang Shi 977c4b20b5 fix part of "missing for discarded return value"
MISRA C requires that return values be used; where a return value is
intentionally discarded, add a "(void)" prefix before the function call.
Some functions can be declared without a return value to avoid this problem.
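
For example:

    -    memset(&ctx, 0U, sizeof(ctx));
    +    (void)memset(&ctx, 0U, sizeof(ctx));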

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-19 16:21:45 +08:00
Li, Fei1 46f64b55b4 hv: vlapic_timer: add vlapic one-shot/periodic timer support
Enable guest LAPIC one-shot/periodic timer support.

Change-Id: I368e28beaa81d6566de2626bbe26c9f8972f0891
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2018-06-15 17:10:28 +08:00
Yan, Like d8c8403561 hv: replace vlapic_init by vlapic_reset in vcpu_reset
This change fixes a guest vm hang issue at vm reset, most easily
seen on a watchdog timeout reset.
vlapic_init creates and initializes vlapic.vlapic_timer without
deleting the timer from the cpu_times list, which breaks the list and
leaves a timer whose callback points to an invalid location.

Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Yan, Like <like.yan@intel.com>
2018-06-14 15:44:09 +08:00
Kaige Fu 359b93f4cc HV: Remove misused __unused
There are some __unused attributes attached to variables that are
actually used by their functions.

This patch removes them. No functional change.

Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
2018-06-14 13:42:42 +08:00
Victor Sun 719e07fb8f HV: fix a print typo in create_vcpu
*d is a typo of %d; fix it.

Signed-off-by: Victor Sun <victor.sun@intel.com>
2018-06-14 13:42:11 +08:00
Victor Sun 9a56024b49 HV: load host pm S state data while create vm0
The pm S state data is from host ACPI info and needed for S3/S5
implementation.

Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-13 15:02:03 +08:00
Yin Fengwei 5414d57ac4 hv: Fix typo of trampline with trampoline
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-12 15:54:16 +08:00
Junjie Mao 8c4a5987e3 irq: convert irq/vector numbers to unsigned
Currently irq and vector numbers are used inconsistently.

    * Sometimes vector or irq ids are used in bit operations, indicating
      that they should be unsigned (which is required by MISRA C).

    * At the same time we use -1 to indicate an unknown irq (in
      common_register_handler()) or unavailable irq (in
      alloc_irq()). Also (irq < 0) or (vector < 0) are used for error
      checking. These indicate that irq or vector ids should be signed.

This patch converts irq and vector numbers to unsigned 32-bit integers
and replaces the previous -1 with IRQ_INVALID or VECTOR_INVALID. The
branch conditions are updated accordingly.
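
A sketch of the new convention (the sentinel values are illustrative):

    #define IRQ_INVALID     0xffffffffU
    #define VECTOR_INVALID  0xffffffffU

    uint32_t irq = alloc_irq();
    if (irq == IRQ_INVALID) {   /* previously: if (irq < 0) */
        /* no irq available */
    }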

Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-12 10:21:58 +08:00
Zide Chen 2a1a6ad0af hv: Other preparation for trampoline code relocation
- For UEFI boot, allocate memory for trampoline code in ACRN EFI,
  and pass the pointer to HV through efi_ctx
- Correct LOW_RAM_SIZE and LOW_RAM_START in Kconfig and bsp_cfg.h
- Use trampline_start16_paddr instead of the hardcoded
  CONFIG_LOW_RAM_START for initial guest GDT and page tables

Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-11 12:15:28 +08:00
Zide Chen 77580edff0 hv: add memory allocation functions for trampoline code relocation
emalloc_for_low_mem() is used if CONFIG_EFI_STUB is defined;
e820_alloc_low_memory() is used in all other cases.

In either case, the allocated memory is marked as E820_TYPE_RESERVED.
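
A sketch of the call site (the signatures are assumptions):

    uint64_t dest_pa;

    #ifdef CONFIG_EFI_STUB
        dest_pa = emalloc_for_low_mem(size, align);
    #else
        dest_pa = e820_alloc_low_memory(size);
    #endif
    /* either allocator marks the range as E820_TYPE_RESERVED */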

Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-11 12:15:28 +08:00
Jason Chen CJ 571fb33158 rename copy_from/to_vm to copy_from/to_gpa
The name copy_from/to_gpa is more suitable.

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-11 12:14:43 +08:00
Jason Chen CJ 8d35d8752b instr_emul: remove vm_gva2gpa
- vm_gva2gpa is the same as gva2gpa, so replace it with gva2gpa directly.
- remove dead usage of vm_gva2gpa in emulate_movs.

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-11 12:14:43 +08:00
Jason Chen CJ 51528d4a81 ucode: refine acrn_update_ucode with copy_from_gva
Use copy_from_gva to refine the function acrn_update_ucode.

v2:
- inject #PF if copy_from_gva meets -EFAULT
- remove VCPU_RETAIN_RIP when injecting #PF
- refine MACRO GET_DATA_SIZE
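
A sketch of the resulting error handling (the signature and injector
name are assumptions):

    err = copy_from_gva(vcpu, &uhdr, gva, sizeof(uhdr), &err_code);
    if (err == -EFAULT) {
        vcpu_inject_pf(vcpu, gva, err_code);   /* hypothetical injector */
        return;
    }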

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-11 12:14:43 +08:00
Jason Chen CJ 48de7efa26 instr_emul: remove vm_restart_instruction and use VCPU_RETAIN_RIP
There is no need for the wrapper function vm_restart_instruction; we
can use VCPU_RETAIN_RIP directly.

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
2018-06-11 12:14:43 +08:00
Jason Chen CJ 0d6218f980 instr_emul: remove unnecessary params in __decode_instruction
Remove the unused vcpu & gla params.

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
2018-06-11 12:14:43 +08:00
Jason Chen CJ 570aef648a instr_emul: refine decode_instruction with copy_from_gva
Use copy_from_gva in vie_init; if copy_from_gva meets -EFAULT, inject
#PF. For decode_instruction, if it returns -EFAULT, the caller should
keep the return path with a successful status.

v2:
- remove vm_restart_instruction when injecting #PF

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
2018-06-11 12:14:43 +08:00
Jason Chen CJ 88758dfe57 add copy_from_gva/copy_to_gva functions
There are data transfers between guest virtual address space (GVA) and
the hypervisor (HVA), for example guest rip fetching during instruction
decoding.

A GVA range is contiguous, but its GPA may be contiguous only within a
4K page. This patch adds copy_from_gva & copy_to_gva functions that do
a page walk of the GVA range to avoid breaking accesses across pages.
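
A sketch of the page-walk copy (simplified; names and signatures are
assumptions):

    /* copy in chunks so no access crosses a 4K page boundary */
    while (size > 0U) {
        uint64_t gpa;
        uint32_t off = (uint32_t)(gva & (PAGE_SIZE - 1UL));
        uint32_t len = ((PAGE_SIZE - off) < size) ? (PAGE_SIZE - off) : size;

        if (gva2gpa(vcpu, gva, &gpa, &err_code) != 0)
            return -EFAULT;                    /* caller may inject #PF */

        (void)memcpy_s(h_ptr, len, HPA2HVA(gpa2hpa(vcpu->vm, gpa)), len);
        gva += len;
        h_ptr += len;
        size -= len;
    }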

v2:
- modify API interface based on new gva2gpa function, err_code added
- combine similar code with inline function _copy_gpa
- change API name from vcopy_from/to_vm to copy_from/to_gva

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
2018-06-11 12:14:43 +08:00
Huihuang Shi 6be8283334 fix MISRA C:"Statement with no side effect"
V2->V3 modified the description
V1->V2 add __unused to handler_private_data

When MISRA C analyzes a callback invocation written with extra
parentheses and an inner star (example: (*foo)()), it sees a pointer
dereference plus an implicit address-of; the explicit dereference
should be removed.
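
The fix is mechanical (names illustrative):

    -    (*desc->irq_handler)(desc, handler_data);
    +    desc->irq_handler(desc, handler_data);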

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-11 12:14:16 +08:00
Yin Fengwei f3831cdc80 hv: don't combine the trampoline code with AP start
Clean up "cpu_secondary_xx" in the symbol/section/function/variable
names in the trampoline code.

One item is left: the default C entry is the AP start C entry. Before
ACRN enters S3, the C entry will be updated to the high-level S3 C
entry, so S3 resume will take the S3 resume path instead of the AP
startup path.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
2018-06-08 13:45:02 +08:00
Zide Chen 4bb5e60de5 hv: enable MTRR virtualization
- unmask MTRR from guest CPUID to enable MTRR
- MTRR virtualization can be disabled by commenting out CONFIG_MTRR_ENABLED

Signed-off-by: bliu11 <baohong.liu@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-08 12:06:15 +08:00
Zide Chen a41267e184 hv: change rdmsr/wrmsr policy for MTRR registers
-rdmsr: emulate all MTRR registers except the variable range MTRRs
-wrmsr: emulate all MTRR registers except the variable range MTRRs and MTRRCAP

Signed-off-by: bliu11 <baohong.liu@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-08 12:06:15 +08:00
Yin Fengwei fbeafd500a hv: add API to get the vcpu mapped to a specific pcpu.
For performance reasons, we don't flush the vcpu context when doing a
vcpu switch (because we only switch between a vcpu and idle).

But when entering S3, we need to call vmclear against all vcpus attached
to APs, so we need to know which vcpu is attached to which pcpu.

This patch introduces an API to get the vcpu mapped to a specific pcpu.
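
A sketch of such a lookup (the per-cpu bookkeeping field is an
assumption):

    struct vcpu *get_vcpu_on_pcpu(uint16_t pcpu_id)   /* hypothetical name */
    {
        /* vcpu most recently run on this pcpu, or NULL */
        return per_cpu(vcpu, pcpu_id);
    }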

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
2018-06-07 15:36:46 +08:00
Minggui Cao 66d283d0c4 add lock for vcpu state access
Keep the global variable accesses exclusive in vcpu pause & resume.

Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-05 17:28:10 +08:00
Huihuang Shi e591315a65 HV:treewide:C99-friendly per_cpu implementation
The current implementation of per_cpu relies on several non-C99 features
and in addition involves arbitrary pointer arithmetic, which is not
MISRA C friendly.

This patch introduces struct per_cpu_region, which holds all the per_cpu
variables. Allocation of per_cpu data regions and access to per_cpu
variables are greatly simplified, at the cost of making all per_cpu
variables accessible in all files.
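
A sketch of the approach (the fields shown are illustrative):

    struct per_cpu_region {
        uint8_t  vmxon_region[PAGE_SIZE];
        uint64_t irq_count;
        /* ... every other per-cpu variable ... */
    } __aligned(PAGE_SIZE);

    extern struct per_cpu_region per_cpu_data[];

    #define per_cpu(name, pcpu_id)  (per_cpu_data[(pcpu_id)].name)
    #define get_cpu_var(name)       per_cpu(name, get_cpu_id())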

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
2018-06-05 17:09:00 +08:00
Li, Fei1 84f4cf3c1d hv: vmx: add vpid support
Enable VMX vpid ctrl and assign a unique vpid to each vcpu
so that VMX transitions are not required to invalidate any
linear mappings or combined mappings.

SDM Vol 3 - 28.3.3.3
If EPT is in use, the logical processor associates all mappings
it creates with the value of bits 51:12 of current EPTP.
If a VMM uses different EPTP values for different guests, it may
use the same VPID for those guests. Doing so cannot result in one
guest using translations that pertain to the other.

In our UOS, the trusty world and normal world use different
EPTPs, so we can use the same VPID for both.
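
A sketch of a unique assignment (the formula and names are assumptions;
vpid 0 is reserved, as it tags VMX root operation):

    /* hypothetical: derive a non-zero vpid unique per vcpu */
    vcpu->arch_vcpu.vpid = 1U + (vm->attr.id * MAX_VCPUS_PER_VM) + vcpu->vcpu_id;
    exec_vmwrite(VMX_VPID, vcpu->arch_vcpu.vpid);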

Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-04 17:11:15 +08:00
Binbin Wu bed6f0b99e hv: set start mode of vcpu
In the current code, the sos/uos bsp can only start in 64-bit mode.

For the sbl platform:
This patch starts the sos bsp in protected mode by default.
CONFIG_START_VM0_BSP_64BIT is defined to allow starting the sos bsp
in 64-bit mode: if it is defined in the config file, the sos bsp
will start in 64-bit mode.
This patch starts the uos bsp in real mode, which needs the integration
of the virtual bootloader (vsbl).

For the uefi platform:
This patch sets the sos bsp vcpu mode according to the uefi context.
This patch starts the uos bsp in protected mode, because vsbl is not
ready to be published for the uefi platform yet. After vsbl is ready,
this can be changed to start the uos bsp in real mode.
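
A sketch of the sbl-side mode selection (field and macro names are
assumptions):

    #ifdef CONFIG_START_VM0_BSP_64BIT
        vcpu->arch_vcpu.cpu_mode = CPU_MODE_64BIT;
    #else
        vcpu->arch_vcpu.cpu_mode = CPU_MODE_PROTECTED;  /* default */
    #endif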

Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-01 19:14:13 +08:00
Binbin Wu 881eaa6104 hv: create gdt for guest to start from protected mode
In the current implementation on the sbl platform, the vm0 bsp
starts in 64-bit mode, and the hv needs to prepare an init
page table for it.

In this patch series, on the sbl platform, the vm0 bsp starts
in non-paging protected mode.
This patch prepares an init gdt for the vm0 bsp on the sbl
platform.
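
A sketch of such a flat init gdt (standard descriptor encodings; the
symbol name is made up):

    static const uint64_t vm0_init_gdt[] = {
        0x0UL,                  /* null descriptor */
        0x00CF9B000000FFFFUL,   /* 32-bit flat code segment */
        0x00CF93000000FFFFUL,   /* 32-bit flat data segment */
    };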

Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-01 19:14:13 +08:00
Binbin Wu 9e7179c950 hv: support gva2gpa in different paging modes
Translate GVA to GPA in different paging modes.
Change the definition of gva2gpa (sketched below):
- return value for error status
- add a parameter for the error code on a paging fault.
Change the definition of vm_gva2gpa:
- return value for error status
- add a parameter for the error code on a paging fault.
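
The new shape of the API, roughly (the exact parameter order is an
assumption):

    /* returns 0 on success, non-zero on error; *err_code holds the
     * #PF-style error code when the walk faults */
    int gva2gpa(struct vcpu *vcpu, uint64_t gva, uint64_t *gpa,
                uint32_t *err_code);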

Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-01 19:14:13 +08:00
Binbin Wu dd14d8e1b0 hv: add API to get vcpu paging mode
Use the number of paging levels to identify the paging mode.
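
A sketch of the idea (enumerator names are assumptions):

    enum vm_paging_mode {
        PAGING_MODE_0_LEVEL = 0U,   /* paging disabled */
        PAGING_MODE_2_LEVEL,        /* 32-bit paging */
        PAGING_MODE_3_LEVEL,        /* PAE paging */
        PAGING_MODE_4_LEVEL,        /* IA-32e paging */
    };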

Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-01 19:14:13 +08:00
Binbin Wu fb09f9daca hv: update vcpu mode when vmexit
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-01 19:14:13 +08:00
Binbin Wu 5c7f120d96 hv: refine guest control register handling
In the current implementation, the cr0/cr4 host mask values are set
according to the fixed0/fixed1 values of cr0/cr4.
In fact, the host mask only needs to cover the bits that need to be
trapped.

This patch adds code to support exiting long mode in CR0 write handling,
and adds checks when modifying CR0/CR4 (see the sketch after the list
below).

- CR0_PG, CR0_PE, CR0_WP, CR0_NE are trapped for CR0.
  PG, PE are trapped to track vcpu mode switches.
  WP is trapped for protection info during page walks.
  NE is an always-on bit.
- CR4_PSE, CR4_PAE, CR4_VMXE are trapped for CR4.
  PSE, PAE are trapped to track the paging mode.
  VMXE is an always-on bit.
- Reserved bits and always-off bits are not allowed to be set by the
  guest. If the guest tries to set these bits, a #GP is injected on
  the vmexit.
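
A sketch of the trap configuration (macro and field names are
assumptions):

    uint64_t cr0_host_mask = CR0_PG | CR0_PE | CR0_WP | CR0_NE;
    uint64_t cr4_host_mask = CR4_PSE | CR4_PAE | CR4_VMXE;

    /* guest writes to any masked bit cause a vmexit */
    exec_vmwrite(VMX_CR0_MASK, cr0_host_mask);
    exec_vmwrite(VMX_CR4_MASK, cr4_host_mask);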

Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-01 19:14:13 +08:00
huihuang shi 14b2e1d395 fix "ISO C99 does not support '_Static_assert'"
_Static_assert is only supported starting with the C11 standard;
see N1570 (the C11 manual) 6.4.1.
Replace _Static_assert with ASSERT.
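
The replacement pattern (the struct name is made up):

    -_Static_assert(sizeof(struct boot_ctx) == 64U, "unexpected size");
    +ASSERT(sizeof(struct boot_ctx) == 64U, "unexpected size");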

Signed-off-by: huihuang shi <huihuang.shi@intel.com>
2018-06-01 16:39:28 +08:00
David B. Kinder f4122d99c5 license: Replace license text with SPDX tag
Replace the BSD-3-Clause boilerplate license text with an SPDX tag.
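
Each file header shrinks from the full license text to a single line:

    /* SPDX-License-Identifier: BSD-3-Clause */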

Fixes: #189

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-06-01 10:43:06 +08:00