acrn-kernel/kernel/sched
Peter Zijlstra 8620933c3c sched: Fix stop_one_cpu_nowait() vs hotplug
[ Upstream commit f0498d2a54e7966ce23cd7c7ff42c64fa0059b07 ]

Kuyo reported sporadic failures on a sched_setaffinity() vs CPU
hotplug stress-test -- notably affine_move_task() remains stuck in
wait_for_completion(), leading to a hung-task detector warning.

Specifically, it was reported that stop_one_cpu_nowait(.fn =
migration_cpu_stop) returns false -- this stopper is responsible for
the matching complete().
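
A minimal sketch of that pairing (simplified; field and argument names
as in kernel/sched/core.c around this commit, surrounding context
elided):

	/* waiter side: affine_move_task() */
	stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
			    &pending->arg, &pending->stop_work);
	...
	wait_for_completion(&pending->done); /* hangs if the stopper never runs */

	/* stopper side: migration_cpu_stop() */
	if (complete)
		complete_all(&pending->done);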

The race scenario is:

	CPU0					CPU1

					// doing _cpu_down()

  __set_cpus_allowed_ptr()
    task_rq_lock();
					takedown_cpu()
					  stop_machine_cpuslocked(take_cpu_down..)

					<PREEMPT: cpu_stopper_thread()
					  MULTI_STOP_PREPARE
					  ...
    __set_cpus_allowed_ptr_locked()
      affine_move_task()
        task_rq_unlock();

  <PREEMPT: cpu_stopper_thread()
    ack_state()
					  MULTI_STOP_RUN
					    take_cpu_down()
					      __cpu_disable();
					      stop_machine_park();
						stopper->enabled = false;
					 />
   />
	stop_one_cpu_nowait(.fn = migration_cpu_stop);
          if (stopper->enabled) // false!!!

That is, by doing stop_one_cpu_nowait() after dropping rq-lock, the
stopper thread gets a chance to preempt us and allows the cpu-down for
the target CPU to complete.

OTOH, since stop_one_cpu_nowait() / cpu_stop_queue_work() needs to
issue a wakeup, it must not be run under the scheduler locks.
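
Roughly, cpu_stop_queue_work() does the following (simplified sketch;
helper names per kernel/stop_machine.c, exact details may differ per
tree), which explains both the false return above and the wakeup
constraint:

	preempt_disable();
	raw_spin_lock_irqsave(&stopper->lock, flags);
	enabled = stopper->enabled;
	if (enabled)
		__cpu_stop_queue_work(stopper, work, &wakeq); /* queue work, add stopper to wakeq */
	else if (work->done)
		cpu_stop_signal_done(work->done);
	raw_spin_unlock_irqrestore(&stopper->lock, flags);

	wake_up_q(&wakeq);	/* the wakeup that must not happen under scheduler locks */
	preempt_enable();

	return enabled;		/* false once stop_machine_park() cleared ->enabled */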

Solve this apparent contradiction by keeping preemption disabled over
the unlock + queue_stopper combination:

	preempt_disable();
	task_rq_unlock(...);
	if (!stop_pending)
	  stop_one_cpu_nowait(...)
	preempt_enable();

This respects the lock ordering constraints while still avoiding the
above race. That is, if we find the CPU is online under rq-lock, the
targeted stop_one_cpu_nowait() must succeed.

Apply this pattern to all similar stop_one_cpu_nowait() invocations.
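
For illustration, the affine_move_task() call site then looks roughly
like this (simplified sketch, arguments as in kernel/sched/core.c;
surrounding error handling omitted):

	preempt_disable();
	task_rq_unlock(rq, p, rf);
	if (!stop_pending) {
		stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
				    &pending->arg, &pending->stop_work);
	}
	preempt_enable();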

Fixes: 6d337eab04 ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
Reported-by: "Kuyo Chang (張建文)" <Kuyo.Chang@mediatek.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: "Kuyo Chang (張建文)" <Kuyo.Chang@mediatek.com>
Link: https://lkml.kernel.org/r/20231010200442.GA16515@noisy.programming.kicks-ass.net
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-11-20 11:51:50 +01:00
Makefile
autogroup.c
autogroup.h
build_policy.c
build_utility.c
clock.c
completion.c
core.c sched: Fix stop_one_cpu_nowait() vs hotplug 2023-11-20 11:51:50 +01:00
core_sched.c
cpuacct.c
cpudeadline.c
cpudeadline.h
cpufreq.c
cpufreq_schedutil.c cpufreq: schedutil: Update next_freq when cpufreq_limits change 2023-10-25 12:03:11 +02:00
cpupri.c sched/rt: Fix live lock between select_fallback_rq() and RT push 2023-10-06 14:57:02 +02:00
cpupri.h
cputime.c
deadline.c sched: Fix stop_one_cpu_nowait() vs hotplug 2023-11-20 11:51:50 +01:00
debug.c
fair.c sched: Fix stop_one_cpu_nowait() vs hotplug 2023-11-20 11:51:50 +01:00
features.h
idle.c kernel/sched: Modify initial boot task idle setup 2023-10-06 14:57:02 +02:00
isolation.c
loadavg.c
membarrier.c
pelt.c
pelt.h
psi.c sched/psi: use kernfs polling functions for PSI trigger polling 2023-07-27 08:50:38 +02:00
rt.c sched: Fix stop_one_cpu_nowait() vs hotplug 2023-11-20 11:51:50 +01:00
sched-pelt.h
sched.h sched/deadline: Create DL BW alloc, free & check overflow interface 2023-08-30 16:11:11 +02:00
smp.h
stats.c
stats.h
stop_task.c
swait.c
topology.c
wait.c
wait_bit.c