Commit Graph

209 Commits

Huihuang Shi 58672cb562 fix "negative shift"
MISRA C does not allow negative shifts; any potentially signed value used
as a shift operand is changed to an unsigned value.

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-22 12:18:45 +08:00
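
A minimal sketch of the kind of change described above; the helper name and
mask are illustrative, not taken from the patch:

    #include <stdint.h>

    /* A signed shift count could be negative, which is undefined behavior
     * and disallowed by MISRA C; an unsigned count cannot be negative. */
    static inline uint32_t bit_mask(uint32_t pos)
    {
        return 1U << (pos & 0x1FU);   /* mask keeps the count below 32 */
    }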
Junjie Mao f52a25db7e HV: ptdev: convert vectors in msi_info to unsigned integers
Vectors are unsigned integers now. This patch converts the vectors in struct
ptdev_msi_info to uint32_t so that all variables representing interrupt vectors
are aligned.

No other changes are needed except the type declarators, since the other
functions manipulating vectors already take/return uint32_t.

Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-21 17:00:53 +08:00
Xiangyang Wu 3027bfab10 HV: treewide: enforce unsignedness of pcpu_id
In the hypervisor, the physical cpu id is defined as either "int" or
"uint32_t". As a result, the static analysis tool reports several sign
conversion issues related to the physical cpu id (pcpu_id). Sign
conversion violates the rules of MISRA C:2012.

In this patch, define the physical cpu id as "uint16_t" for all
modules in the hypervisor and change the related code. The valid
range of pcpu_id is 0~65534; INVALID_PCPU_ID is defined as the
invalid pcpu_id for error detection, and BROADCAST_PCPU_ID is the
broadcast pcpu_id used to notify all valid pcpus.

The pcpu_id in struct vcpu, as well as vcpu_id, is still of type "int";
this will be fixed in another patch.

V1-->V2:
    *  Change the type of pcpu_id from uint32_t to uint16_t;
    *  Define INVALID_PCPU_ID for error detection;
    *  Define BROADCAST_PCPU_ID to notify all valid pcpu.

V2-->V3:
    *  Update comments for INVALID_PCPU_ID and BROADCAST_PCPU_ID;
    *  Update additional pcpu_id usages;
    *  Convert hexadecimals to unsigned to meet the type of pcpu_id;
    *  Clean up for MIN_PCPU_ID and MAX_PCPU_ID, they will be
       defined by configuration.
Note: fix a bug in init_lapic(): the pcpu_id shall be less than 8, a
constraint imposed by the implementation of init_lapic().
Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-21 16:59:21 +08:00
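
A hypothetical sketch of the convention described above; the macro values
below are assumptions, not the actual ACRN definitions:

    #include <stdint.h>

    /* Assumed sentinel values for illustration only. */
    #define INVALID_PCPU_ID    0xFFFFU   /* returned/checked for error detection */
    #define BROADCAST_PCPU_ID  0xFFFEU   /* used to notify all valid pcpus */

    static uint16_t current_pcpu_id;     /* pcpu ids are small unsigned values */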
Cai Yulong 2922a657c9 hv: fix compile error
A function definition in a header file must be declared as static inline.

Signed-off-by: Cai Yulong <yulongc@hwtc.com.cn>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-21 13:13:04 +08:00
Junjie Mao aa505a28bb HV: treewide: convert hexadecimals used in bitops to unsigned
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2018-06-21 13:12:39 +08:00
Junjie Mao cdd38d0bc3 HV: msr: convert hexadecimals used in bitops to unsigned
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2018-06-21 13:12:39 +08:00
Junjie Mao d705970eb2 HV: vmx: convert hexadecimals used in bitops to unsigned
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2018-06-21 13:12:39 +08:00
Junjie Mao 41a1035f9b HV: irq: convert hexadecimals used in bitops to unsigned
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2018-06-21 13:12:39 +08:00
Junjie Mao f4bd0798e0 HV: mmu: convert hexadecimals used in bitops to unsigned
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2018-06-21 13:12:39 +08:00
Junjie Mao 7b548e87db HV: cpu: convert hexadecimals used in bitops to unsigned
Per MISRA C, operands of bit-wise operations should have unsigned
types. However, C99 gives signed integer types priority for hexadecimal
constants without the 'U' suffix, leading to many bit operations on
signed integers.

This patch series adds the 'U' suffix to the constants used in bit
operations, and adds the intended width of these integers when
applicable (i.e. when the target value is at least 32-bit wide) to avoid
functional differences due to signed vs. unsigned extension. The rule of
thumb is:

    '0' for signed char/short/int
    '0U' for unsigned char/short/int
    '0L' for signed long (should be 64-bit)
    '0UL' for unsigned long (should be 64-bit)

Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2018-06-21 13:12:39 +08:00
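
An illustration of the suffix rule of thumb above (constants and names are
made up for the example):

    #include <stdint.h>

    #define FEATURE_BIT    (1U << 5U)      /* unsigned int constant      */
    #define PAGE_MASK_4K   (0xFFFUL)       /* unsigned long, 64-bit wide */

    static inline uint64_t page_align_down(uint64_t addr)
    {
        return addr & ~PAGE_MASK_4K;       /* every operand is unsigned  */
    }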
Yonghua Huang 32fccb2f43 HV: 'vlapic_set_local_intr()' code cleanup
change the argument 'cpu_id' to 'vcpu_id'

Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2018-06-20 15:06:49 +08:00
Huihuang Shi fe0314e8c3 HV:header:fix "expression is not Boolean"
MISRA C explicitly requires that the controlling expression of
branch statements (if, while, ...) be Boolean.

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-20 14:19:47 +08:00
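
A small illustration of the rule; the variables and function are hypothetical:

    #include <stddef.h>
    #include <stdint.h>

    static void drain(uint32_t pending, const uint8_t *buf)
    {
        while (pending != 0U) {     /* instead of: while (pending) */
            pending--;
        }
        if (buf == NULL) {          /* instead of: if (!buf) */
            return;
        }
    }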
Jason Chen CJ e84d4dee19 trusty: init & switch world fix
- at init, cr0 & cr4 should be read from the VMCS
- at world switch, the cr0/cr4 read shadows should also be saved/restored

v2:
- use context->vmx_cr0/cr4 to save/restore VMX_GUEST_CR0/CR4
- use context->cr0/cr4 to save/restore VMX_CR0/CR4_READ_SHADOW

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-20 11:30:28 +08:00
Huihuang Shi 977c4b20b5 fix part of "missing for discarded return value"
MISRA C requires that return values be used; where a return value is
intentionally discarded, a "(void)" cast should be added before the
function call. Some functions can be declared without a return value to
avoid this problem.

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-19 16:21:45 +08:00
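
A minimal example of the "(void)" convention (the call shown is just an
illustration):

    #include <stdio.h>

    static void log_banner(void)
    {
        /* the return value of printf is intentionally discarded */
        (void)printf("hello\n");
    }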
Yonghua Huang 098c2e6788 HV: enable SMEP in hypervisor
- this patch enables SMEP in the hypervisor. SMEP protects guests'
   memory from supervisor-mode instruction fetches; in other words, the
   hypervisor, which operates in supervisor mode, cannot fetch
   instructions from linear addresses (guests' memory) that are
   accessible in user mode.

Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2018-06-15 17:11:03 +08:00
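
A hedged sketch of enabling SMEP by setting CR4 bit 20; this is illustrative
inline assembly, not the ACRN implementation:

    #include <stdint.h>

    #define CR4_SMEP    (1UL << 20U)    /* CR4.SMEP, per SDM Vol. 3 */

    static inline void enable_smep(void)
    {
        uint64_t cr4;

        asm volatile ("mov %%cr4, %0" : "=r" (cr4));
        cr4 |= CR4_SMEP;
        asm volatile ("mov %0, %%cr4" : : "r" (cr4) : "memory");
    }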
Edwin Zhai 8202ba0a70 HV: move common stuff from assign.c
Move common stuff, such as the ptdev entry and softirq handling, to the new ptdev.c.

Signed-off-by: Edwin Zhai <edwin.zhai@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-15 17:10:41 +08:00
Yan, Like d8c8403561 hv: replace vlapic_init by vlapic_reset in vcpu_reset
This change fixes a guest VM hang at VM reset, which is especially easy
to see when the reset is caused by a watchdog timeout.
vlapic_init creates and initializes vlapic.vlapic_timer without deleting
the timer from the cpu_times list, which breaks the list and leaves a
timer whose callback points to an invalid location.

Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Yan, Like <like.yan@intel.com>
2018-06-14 15:44:09 +08:00
Yin Fengwei feed38f5ae hv: add suspend/resume callback for console
To handle S3 enter/exit for the console.

Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-14 13:41:45 +08:00
Yin Fengwei 8eaf4d2ab6 hv: Add suspend/resume callback for vtd
To handle S3 enter/exit for vtd.

Signed-off-by: Edwin Zhai <edwin.zhai@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-14 13:41:45 +08:00
Yin Fengwei d2ea4546c3 hv: Add suspend/resume callback for ioapic
These two functions will be called when ACRN enters/exits S3.

Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Signed-off-by: Yan Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-14 13:41:45 +08:00
Yin Fengwei ddd03d6252 hv: add suspend/resume callback for lapic.
They will be called when ACRN enters S3.
NOTE: this is only needed for the native BSP because all APs are offline.

Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-14 13:41:45 +08:00
Zheng, Gen 8f3b36b224 HV: add volatile declaration to pointer parameter
Add a volatile qualifier to a pointer parameter to prevent the compiler
from optimizing accesses by reusing an old value cached in a register
instead of reading system memory.

Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-14 13:41:12 +08:00
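
An illustration of the effect of the volatile qualifier on a pointer
parameter (the function and variable names are made up):

    #include <stdint.h>

    /* Without volatile, the compiler may keep *flag in a register and
     * loop forever; volatile forces a memory read on every iteration. */
    static void wait_for_flag(volatile const uint32_t *flag)
    {
        while (*flag == 0U) {
            /* busy-wait until another agent sets the flag in memory */
        }
    }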
Victor Sun 4c5835673e HV: make cpu state table static const
The hardcoded CPU Px/Cx tables should be read-only, so mark them static
and const for safety.

Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-14 13:39:59 +08:00
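
A sketch of the static const convention; the struct layout and values below
are assumptions, not the actual ACRN Px table:

    #include <stdint.h>

    struct cpu_px_data_example {
        uint64_t core_frequency;    /* MHz */
        uint64_t power;             /* mW  */
    };

    /* read-only and file-local: the table cannot be modified elsewhere */
    static const struct cpu_px_data_example px_table[] = {
        { 2400U, 15000U },
        { 1800U,  9000U },
    };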
Victor Sun 9a56024b49 HV: load host pm S state data while create vm0
The PM S-state data comes from the host ACPI info and is needed for the
S3/S5 implementation.

Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-13 15:02:03 +08:00
Victor Sun 88e1c4975c HV: add bsp acpi info support
On some occasions the HV relies on host ACPI info; we can use a C file
to store this data. The data can be hardcoded, or an offline tool can
first be run on the target to generate the file automatically.

Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-13 15:02:03 +08:00
Yin Fengwei 5414d57ac4 hv: Fix typo of trampline with trampoline
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-12 15:54:16 +08:00
Junjie Mao 8c4a5987e3 irq: convert irq/vector numbers to unsigned
Currently irq and vector numbers are used inconsistently.

    * Sometimes vector or irq ids are used in bit operations, indicating
      that they should be unsigned (which is required by MISRA C).

    * At the same time we use -1 to indicate an unknown irq (in
      common_register_handler()) or unavailable irq (in
      alloc_irq()). Also (irq < 0) or (vector < 0) are used for error
      checking. These indicate that irq or vector ids should be signed.

This patch converts irq and vector numbers to unsigned 32-bit integers, and
replaces the previous -1 with IRQ_INVALID or VECTOR_INVALID. The branch
conditions are updated accordingly.

Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-12 10:21:58 +08:00
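
An illustration of the unsigned sentinel convention described above; the
values and helper are assumptions, not the ACRN definitions:

    #include <stdint.h>

    #define IRQ_INVALID      0xFFFFFFFFU
    #define VECTOR_INVALID   0xFFFFFFFFU

    static uint32_t pick_irq(uint32_t candidate)
    {
        /* error checks compare against the sentinel, not against < 0 */
        if (candidate == IRQ_INVALID) {
            return IRQ_INVALID;
        }
        return candidate;
    }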
Mingqiang Chi 5e2c83f395 hv:replace unsigned long long with uint64_t
unsigned long long --> uint64_t
long long --> int64_t

Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2018-06-12 10:21:19 +08:00
Wang, Hongbo f757d49ead
Merge pull request #322 from dbkinder/api-spell
doc: fix API documentation misspellings
2018-06-12 07:45:21 +08:00
Zide Chen 48b0894d3d hv: relocate trampoline code to the dynamically allocated memory
- Also update all the references that need the absolute HPA with the
  actual load addresses
- Save the trampoline code address to trampline_start16_paddr

Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-11 12:15:28 +08:00
Zide Chen 2a1a6ad0af hv: Other preparation for trampoline code relocation
- For UEFI boot, allocate memory for trampoline code in ACRN EFI,
  and pass the pointer to HV through efi_ctx
- Correct LOW_RAM_SIZE and LOW_RAM_START in Kconfig and bsp_cfg.h
- use trampline_start16_paddr instead of the hardcoded
  CONFIG_LOW_RAM_START for initial guest GDT and page tables

Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-11 12:15:28 +08:00
Zide Chen 40c8c4d3c3 hv: Prepare trampline.S for trampoline code relocation
In the real-mode part, add extra pointers for the page tables and the
long jump buffer so that HV code can patch the relocation offset.

In the long-mode part, use absolute addressing when referring to HV
symbols, and relative addressing for symbols within the trampoline code.

Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-11 12:15:28 +08:00
Zide Chen 77580edff0 hv: add memory allocation functions for trampoline code relocation
emalloc_for_low_mem() is used if CONFIG_EFI_STUB is defined.
e820_alloc_low_memory() is used for other cases

In either case, the allocated memory will be marked with E820_TYPE_RESERVED

Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
2018-06-11 12:15:28 +08:00
Jason Chen CJ 571fb33158 rename copy_from/to_vm to copy_from/to_gpa
the names copy_from/to_gpa are more suitable.

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-11 12:14:43 +08:00
Jason Chen CJ 8d35d8752b instr_emul: remove vm_gva2gpa
- vm_gva2gpa is the same as gva2gpa, so replace it with gva2gpa directly.
- remove dead usage of vm_gva2gpa in emulate_movs.

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-11 12:14:43 +08:00
Jason Chen CJ 88758dfe57 add copy_from_gva/copy_to_gva functions
There are data transfers between the guest virtual address space (GVA)
and the hypervisor (HVA), for example guest RIP fetching during
instruction decoding.

A GVA range is contiguous, but its GPA mapping may be contiguous only
within a 4K page. This patch adds the copy_from_gva & copy_to_gva
functions, which copy the GVA range page by page (walking the guest page
tables) so that accesses do not break across page boundaries.

v2:
- modify API interface based on new gva2gpa function, err_code added
- combine similar code with inline function _copy_gpa
- change API name from vcopy_from/to_vm to copy_from/to_gva

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
2018-06-11 12:14:43 +08:00
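
A hedged sketch of the page-by-page copy described above; the translation
helper and all signatures are assumptions, not the ACRN API:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE_4K   4096UL

    struct vcpu;    /* opaque here */

    /* Assumed helper: translate one GVA to a host virtual address,
     * valid at most to the end of the enclosing 4K page. */
    extern void *gva_to_hva_page(struct vcpu *vcpu, uint64_t gva,
                                 uint32_t *err_code);

    static int32_t copy_from_gva_sketch(struct vcpu *vcpu, void *dst,
                                        uint64_t gva, uint64_t size,
                                        uint32_t *err_code)
    {
        uint64_t done = 0UL;

        while (done < size) {
            uint64_t offset = gva & (PAGE_SIZE_4K - 1UL);
            uint64_t len = PAGE_SIZE_4K - offset;
            void *src = gva_to_hva_page(vcpu, gva, err_code);

            if (src == NULL) {
                return -1;                     /* translation fault */
            }
            if (len > (size - done)) {
                len = size - done;
            }
            (void)memcpy((uint8_t *)dst + done, src, len);
            gva += len;
            done += len;
        }
        return 0;
    }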
Huihuang Shi 8940c896be fix MISRA C "Literal zero used in pointer context"
MISRA C requires that a literal zero used in a pointer context be
replaced with NULL.

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
2018-06-11 12:13:43 +08:00
Junjie Mao c849bff850 HV: config: adapt to the generated config.h
This patch drops "#include <bsp_cfg.h>" and includes the generated config.h
in CFLAGS for the configuration data.

Also make sure that all configuration data have the 'CONFIG_' prefix.

v4 -> v5:

    * No changes.

v3 -> v4:

    * Add '-include config.h' to hypervisor/bsp/uefi/efi/Makefile.
    * Update comments mentioning bsp_cfg.h.

v2 -> v3:

    * Include config.h on the command line instead of in any header or source to
      avoid including config.h multiple times.
    * Add config.h as an additional dependency for source compilation.

v1 -> v2:

    * No changes.

Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Zhao Yakui <yakui.zhao@intel.com>
2018-06-08 17:21:13 +08:00
Yin Fengwei f3831cdc80 hv: don't combine the trampline code with AP start
Cleanup "cpu_secondary_xx" in the symbols/section/functions/variables
name in trampline code.

There is item left: the default C entry is Ap start c entry. Before
ACRN enter S3, the c entry will be updated to high level S3 C entry.
So s3 resume will go s3 resume path instead of AP startup path.

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
2018-06-08 13:45:02 +08:00
Zide Chen c2283743f0 hv: basic MTRR virtualization
Linux commit edfe63ec97ed ("x86/mtrr: Fix Xorg crashes in Qemu sessions")
disables the PAT feature if MTRR is not enabled. This patch does partial
emulation of MTRR to prevent this from happening: enable fixed-range
MTRRs and disable variable-range MTRRs.

By default the IA32_PAT MSR (SDM Vol. 3 11.12.4, Table 11-12) doesn't
include the 'WC' type. If MTRR is disabled in the guests, Linux doesn't
allow writing the IA32_PAT MSR, so the WC type can't be enabled. This
creates performance issues for certain applications that rely on the WC
memory type.

Implementation summary:
- Enable MTRR feature: MTRRdefType.E=1
- Enable fixed range MTRRs: MTRRCAP.fix=1, MTRRdefType.FE=1
- For simplicity, disable variable range MTRRs: MTRRCAP.vcnt=0.
  It's expected that this bit is honored by the guests and they won't
  change the guest memory type through variable MTRRs.

Signed-off-by: bliu11 <baohong.liu@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-08 12:06:15 +08:00
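
A hedged sketch of the virtualized MSR values implied by the summary above
(bit positions per SDM Vol. 3, 11.11; this is an illustration, not the patch):

    #include <stdint.h>

    #define MTRR_CAP_FIX          (1UL << 8U)    /* fixed-range MTRRs supported */
    #define MTRR_CAP_VCNT_MASK    (0xFFUL)       /* variable-range MTRR count   */
    #define MTRR_DEF_TYPE_FE      (1UL << 10U)   /* fixed-range MTRRs enabled   */
    #define MTRR_DEF_TYPE_E       (1UL << 11U)   /* MTRRs enabled               */

    static uint64_t guest_mtrr_cap(void)
    {
        /* fixed ranges advertised; variable-range count (VCNT) left at 0 */
        return MTRR_CAP_FIX;
    }

    static uint64_t guest_mtrr_def_type(void)
    {
        return MTRR_DEF_TYPE_E | MTRR_DEF_TYPE_FE;
    }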
Zide Chen 5d2ab4d9ef hv: add APIs to allow updating EPT mem type
- Add PAGING_REQUEST_TYPE_MODIFY_MT memory map request type
- Update map_mem_region() to allow modifying the memory type related
  fields in a page table entry
- Add ept_update_mt()
- add modify_mem_mt() for both EPT and MMU

Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-08 12:06:15 +08:00
Yin Fegnwei f741b014f8 hv: prepare for down/up APs dynamically.
- Export start_cpus to start/online the APs.
- Add stop_cpus to offline the APs.
- Update cpu_dead to decrement the running CPU count and do cleanup
  when an AP goes down.

Signed-off-by: Yin Fegnwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
2018-06-07 15:36:46 +08:00
Yin Fegnwei 7a71422a6d hv: handle cpu offline request in idle thread
Change the need_scheduled field of the schedule context to flags
because it is not only used for the need_schedule check.

Add two functions to request/handle cpu offline.

The cpu offline request is handled only in the idle thread because the
vcpu running on the target pcpu must be paused first; the target pcpu
can therefore only receive the offline request while it is in the idle
thread.

Signed-off-by: Yin Fegnwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
2018-06-07 15:36:46 +08:00
Yin Fegnwei 08139c34f7 hv: add vmx_off and update exec_vmxon_instr
To handle cpu down/up dynamically, ACRN needs to support turning VMX
off/on dynamically. The following changes are introduced:
  vmx_off will be used when bringing an AP down. It does:
    - vmclear the mapped vcpu
    - turn off VMX.

  exec_vmxon_instr is updated to handle both AP start and AP up. It does:
    - if VMX was already on for the AP, load the saved vmxon_region;
      otherwise, allocate a vmxon_region.
    - if there is a mapped vcpu, vmptrld the mapped vcpu.

Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Signed-off-by: Yin Fegnwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
2018-06-07 15:36:46 +08:00
Yin Fegnwei fbeafd500a hv: add API to get the vcpu mapped to specific pcpu.
For performance reasons, we don't flush the vcpu context when doing a
vcpu switch (because we only switch between a vcpu and idle).

But when entering S3, we need to call vmclear against all vcpus attached
to the APs, so we need to know which vcpu is attached to which pcpu.

This patch introduces an API to get the vcpu mapped to a specific pcpu.

Signed-off-by: Yin Fegnwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
2018-06-07 15:36:46 +08:00
Jason Chen CJ a9ee6da0d9 vm: remove current_vcpu from vm structure
current_vcpu is not correct when there are multiple vcpus in one VM, so
using it is incorrect; remove it.

Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
2018-06-07 12:28:18 +08:00
Qi Yadong 03f5cbdd7a HV: Parse SeedList HOB
Retrieve the dseed from the SeedList HOB (Hand-Off Block).
SBL passes the SeedList HOB to ACRN via MBI modules.

Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Reviewed-by: Zhu Bing <bing.zhu@intel.com>
Reviewed-by: Wang Kai <kai.z.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-07 09:55:29 +08:00
Huihuang Shi e591315a65 HV: treewide: C99-friendly per_cpu implementation: change the per_cpu method
The current implementation of per_cpu relies on several non-C99
features and in addition involves arbitrary pointer arithmetic, which is
not MISRA C friendly.

This patch introduces struct per_cpu_region, which holds all the per_cpu
variables. Allocation of per_cpu data regions and access to per_cpu
variables are greatly simplified, at the cost of making all per_cpu
variables accessible in all files.

Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
2018-06-05 17:09:00 +08:00
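
A hedged sketch of the per_cpu_region approach; the member names, array size,
and accessor macro are illustrative, not the actual ACRN definitions:

    #include <stdint.h>

    #define MAX_PCPU_NUM    8U

    struct per_cpu_region {
        uint64_t vmxon_region_pa;
        uint32_t lapic_id;
        /* ... one member per per-cpu variable ... */
    };

    static struct per_cpu_region per_cpu_data[MAX_PCPU_NUM];

    /* access a per-cpu variable by name and pcpu_id */
    #define per_cpu(name, pcpu_id)   (per_cpu_data[(pcpu_id)].name)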
Li, Fei1 84f4cf3c1d hv: vmx: add vpid support
Enable the VMX vpid ctrl and assign a unique vpid to each vcpu so that
VMX transitions are not required to invalidate any linear mappings or
combined mappings.

SDM Vol 3 - 28.3.3.3
If EPT is in use, the logical processor associates all mappings
it creates with the value of bits 51:12 of current EPTP.
If a VMM uses different EPTP values for different guests, it may
use the same VPID for those guests. Doing so cannot result in one
guest using translations that pertain to the other.

In our UOS, the trusty world and the normal world use different EPTPs,
so we can use the same VPID for them.

Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-04 17:11:15 +08:00
Li, Fei1 c34f72a0bc hv: minor modification to flush EPT TLB, compatible with vpid
We need to know which TLB to flush: EPT or VPID.
1. Error handling for invept:
   it is the same as the invvpid error handling;
   rename it to be consistent with vpid.
2. Rename the macro for the flush-EPT-TLB request.

Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2018-06-04 17:11:15 +08:00