zephyr/arch/x86_64/core/xuk-stub16.c


arch/x86_64: New architecture added (committed 2018-08-20 03:24:48 +08:00)

This patch adds an x86_64 architecture and qemu_x86_64 board to Zephyr.
Only the basic architecture support needed to run 64 bit code is added;
no drivers are included, though a low-level console exists and is wired
to printk(). The support is built on top of an "X86 underkernel" layer,
which can be built in isolation as a unit test on a Linux host.

Limitations:

+ Right now the SDK lacks an x86_64 toolchain. The build will fall back
  to a host toolchain if it finds no cross compiler defined; this is
  tested to work on gcc 8.2.1 right now.

+ No x87/SSE/AVX usage is allowed. This is a stronger limitation than
  on other architectures, where the instructions work from one thread
  even if the context switch code doesn't support them. We are passing
  -mno-sse to prevent gcc from automatically generating SSE
  instructions for non-floating-point purposes, which has the side
  effect of changing the ABI. Future work to handle the FPU registers
  will need to be combined with an "application" ABI distinct from the
  kernel one (or simply require USERSPACE).

+ Paging is enabled (it has to be in long mode), but it is a 1:1
  mapping of all memory. There is no MMU/USERSPACE support yet.

+ We are building with -mno-red-zone for stack size reasons, but the
  red zone is a valuable optimization. Enabling it requires automatic
  stack switching, which requires a TSS, which means it has to happen
  after MMU support.

+ The OS runs in 64 bit mode, but for compatibility reasons it is
  compiled to the 32 bit "X32" ABI. So while the full 64 bit registers
  and instruction set are available, C pointers are 32 bits long and
  Zephyr is constrained to run in the bottom 4G of memory.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
/*
* Copyright (c) 2018 Intel Corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
#include "serial.h"
#include "x86_64-hw.h"
#include "shared-page.h"
/*
* 16 bit boot stub. This code gets copied into a low memory page and
* used as the bootstrap code for SMP processors, which always start
* in real mode. It is compiled with gcc's -m16 switch, which is a
* wrapper around the assembler's .code16gcc directive which cleverly
* takes 32 bit assembly and "fixes" it with appropriate address size
* prefixes to run in real mode on a 386.
*
* It is just code! We have the .text segment and NOTHING ELSE. No
* static or global variables can be used, nor const read-only data.
* Neither is the linker run, so nothing can be relocated and all
* symbolic references need to be to addresses within this file. In
* fact, any relocations that do sneak in will be left at zero at
* runtime!
*/
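
/* For orientation, a sketch of how the BSP side is assumed to use
 * this stub (the names and addresses below are illustrative, not part
 * of this file): the stub gets copied to a page-aligned address below
 * 1M, and that page number becomes the SIPI startup vector, roughly:
 *
 *   memcpy((void *)(vector << 12), stub16_start,
 *          stub16_end - stub16_start);
 *   ...send INIT, then SIPI with "vector", to the target LAPIC...
 */
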
__asm__(" cli\n"
" xor %ax, %ax\n"
" mov %ax, %ss\n"
" mov %ax, %ds\n"
" mov $80000, %esp\n" /* FIXME: put stack someplace officiallerish */
" jmp _start16\n");
void _start16(void)
{
#ifdef XUK_DEBUG
	serial_putc('1'); serial_putc('6'); serial_putc('\n');
#endif

	/* First, serialize on a simple spinlock.  Note there's a
	 * theoretical flaw here in that we are on a stack shared with
	 * the other CPUs and we don't *technically* know that "oldlock"
	 * does not get written to the (clobberable!) stack memory.  But
	 * in practice the compiler does the right thing here and we spin
	 * in registers until exiting the loop, at which point we are the
	 * only users of the stack, and thus safe.
	 */
	int oldlock;

	do {
		/* Atomically swap 1 into the lock word; a nonzero
		 * result means another CPU got there first.
		 */
		__asm__ volatile("pause; mov $1, %%eax; xchg %%eax, (%1)"
				 : "=a"(oldlock) : "m"(_shared.smpinit_lock));
	} while (oldlock);
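
	/* For reference, the loop above is a test-and-set spinlock.
	 * In ordinary C it would read roughly like the sketch below
	 * (not usable here: this environment can't rely on
	 * compiler-generated atomics or relocated symbols):
	 *
	 *   while (__atomic_exchange_n(&_shared.smpinit_lock, 1,
	 *                              __ATOMIC_ACQUIRE) != 0) {
	 *   }
	 */
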
	/* Put a red banner at the top of the screen to announce our
	 * presence
	 */
	volatile unsigned short *vga = (unsigned short *)0xb8000;

	for (int i = 0; i < 240; i++) {
		vga[i] = 0xcc20;
	}
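
	/* A VGA text cell is 16 bits: the low byte is the character
	 * and the high byte the attribute (bits 0-3 foreground, 4-6
	 * background, 7 blink).  So 0xcc20 is a blank on a red
	 * background, and 240 cells cover the top three 80-column rows.
	 */
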
	/* Spin again waiting on the BSP processor to give us a stack.
	 * We won't use it until the entry code of stub32, but we want
	 * to make sure it's there before we jump.
	 */
	while (!_shared.smpinit_stack) {
	}
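
	/* (The handoff assumed here: the BSP publishes a fresh stack
	 * pointer with a plain store to _shared.smpinit_stack, and only
	 * one AP at a time proceeds because of the lock taken above.)
	 */
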
	/* Load the GDT that CPU0 already prepared for us */
	__asm__ volatile ("lgdtw (%0)\n" : : "r"(_shared.gdt16_addr));
	/* Enter protected mode by setting the bottom bit (PE) of CR0 */
	int cr0;

	__asm__ volatile ("mov %%cr0, %0\n" : "=r"(cr0));
	cr0 |= 1;
	__asm__ volatile ("mov %0, %%cr0\n" : : "r"(cr0));

	/* Set up data and stack segments */
	short ds = GDT_SELECTOR(2);

	__asm__ volatile ("mov %0, %%ds; mov %0, %%ss" : : "r"(ds));
	/* Far jump to the 32 bit entry point, passing a cookie in EAX
	 * to tell it what we're doing
	 */
	int magic = BOOT_MAGIC_STUB16;

	__asm__ volatile ("ljmpl $0x8,$0x100000" : : "a"(magic));
	/* Not reached: the ljmpl above does not return.  Park the CPU
	 * just in case.
	 */
	while (1) {
		__asm__("hlt");
	}
}