[PATCH] i386: load_tls() fix

Subtle fix: the load_TLS() call has been moved to after %fs and %gs are
saved, to avoid creating non-reversible segments (selectors that, once the
TLS descriptors beneath them have changed, can no longer be reloaded to
recover their hidden state).  The old ordering could conceivably cause a bug
if the kernel ever needed to save and restore %fs/%gs from the NMI handler.
It currently does not, but this is the safest approach to avoiding %fs/%gs
corruption.  SMIs are safe, since an SMI saves the hidden descriptor state.
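
For context, savesegment() stores only the 16-bit selector; the base/limit
it implies live in the register's hidden state, backed by the GDT descriptor
that load_TLS() rewrites.  A minimal sketch of the hazard follows, quoting
the i386 savesegment() macro from memory (the exact definition in
include/asm-i386/system.h may differ slightly):

	/*
	 * Sketch only -- quoted from memory, not part of this patch.
	 * savesegment() saves just the 16-bit selector; the descriptor
	 * it points at stays behind in the GDT.
	 */
	#define savesegment(seg, value) \
		asm volatile("mov %%" #seg ",%0" : "=m" (value))

	/*
	 * Hazard in the old ordering (illustrative):
	 *
	 *	load_TLS(next, cpu);        // GDT TLS slots now describe next
	 *	// An NMI that saved and reloaded %gs here would refill its
	 *	// hidden state from next's descriptor -- prev's %gs base is
	 *	// gone, and no selector value can bring it back.
	 *	savesegment(gs, prev->gs);
	 *
	 * Saving %fs/%gs first keeps every live selector consistent with
	 * the GDT, so a save/reload round-trip at any point is a no-op.
	 */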

Signed-off-by: Zachary Amsden <zach@vmware.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

commit e7a2ff593c
parent 2f2984eb4a
Author:    Zachary Amsden <zach@vmware.com>
Date:      2005-09-03 15:56:39 -07:00
Committed: Linus Torvalds
1 file changed, 13 insertions(+), 8 deletions(-)

@@ -678,22 +678,27 @@ struct task_struct fastcall * __switch_to(struct task_struct *prev_p, struct tas
 	__unlazy_fpu(prev_p);
 
 	/*
-	 * Reload esp0, LDT and the page table pointer:
+	 * Reload esp0.
 	 */
 	load_esp0(tss, next);
 
 	/*
+	 * Save away %fs and %gs. No need to save %es and %ds, as
+	 * those are always kernel segments while inside the kernel.
+	 * Doing this before setting the new TLS descriptors avoids
+	 * the situation where we temporarily have non-reloadable
+	 * segments in %fs and %gs. This could be an issue if the
+	 * NMI handler ever used %fs or %gs (it does not today), or
+	 * if the kernel is running inside of a hypervisor layer.
+	 */
+	savesegment(fs, prev->fs);
+	savesegment(gs, prev->gs);
+
+	/*
 	 * Load the per-thread Thread-Local Storage descriptor.
 	 */
 	load_TLS(next, cpu);
 
 	/*
-	 * Save away %fs and %gs. No need to save %es and %ds, as
-	 * those are always kernel segments while inside the kernel.
-	 */
-	asm volatile("mov %%fs,%0":"=m" (prev->fs));
-	asm volatile("mov %%gs,%0":"=m" (prev->gs));
-
-	/*
 	 * Restore %fs and %gs if needed.
 	 *