mm/kmemleak: skip unlikely objects in kmemleak_scan() without taking lock
There are 3 RCU-based object iteration loops in kmemleak_scan(). Because of the need to take the RCU read lock, we can't insert cond_resched() into the loops like other parts of the function. As there can be millions of objects to be scanned, it takes a while to iterate over all of them. The kmemleak functionality is usually enabled in a debug kernel, which is much slower than a non-debug kernel. With a sufficient number of kmemleak objects, the time to iterate over all of them may exceed 22s, causing a soft lockup.

  watchdog: BUG: soft lockup - CPU#3 stuck for 22s! [kmemleak:625]

In this particular bug report, the soft lockup happens in the 2nd iteration loop. In the 2nd and 3rd loops, most of the objects are checked and then skipped under the object lock; only a select few are modified. Those objects certainly need lock protection. However, the lock/unlock operation is slow, especially with the interrupt disabling and enabling included.

We can actually do a basic check like color_white() without taking the lock and skip the object accordingly (see the sketch below the commit metadata). Of course, this kind of check is racy and may miss objects that are being modified concurrently. The cost of a missed object, however, is just that it will be discovered in the next scan instead. The advantage of doing so is that the iteration can be done much faster, especially with LOCKDEP enabled in a debug kernel.

With a debug kernel running on a 2-socket 96-thread x86-64 system (HZ=1000), the speedup of the 2nd and 3rd iteration loops with this patch, measured on the first kmemleak_scan() call after bootup, is shown in the table below.

                  Before patch                  After patch
  Loop #   # of objects  Elapsed time   # of objects  Elapsed time
  ------   ------------  ------------   ------------  ------------
    2       2,599,850       2.392s       2,596,364       0.266s
    3       2,600,176       2.171s       2,597,061       0.260s

This patch reduces the iteration time of these loops by about 88%, which greatly reduces the chance of a soft lockup happening in the 2nd or 3rd iteration loop.

Even though the first loop runs a little bit faster, it can still be problematic if there are many kmemleak objects. As the object count has to be modified in every object, we cannot avoid taking the object lock there, so another way to prevent soft lockups will be needed for that loop.

Link: https://lkml.kernel.org/r/20220614220359.59282-3-longman@redhat.com
Signed-off-by: Waiman Long <longman@redhat.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit 64977918c2 (parent 00c155066e)
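Before the hunks below, here is a minimal userspace sketch of the lockless pre-check pattern this patch applies: read the object's state without the lock to filter out uninteresting objects, then re-check under the lock before acting. This is an illustration only, not the kernel code. struct obj, is_white() and scan_object() are hypothetical stand-ins for kmemleak's struct kmemleak_object, color_white() and the loop bodies in kmemleak_scan(), and a plain pthread mutex stands in for the raw spinlock.

/* Illustrative userspace analogue, not mm/kmemleak.c. */
#include <pthread.h>
#include <stdbool.h>

struct obj {
	pthread_mutex_t lock;
	int count;		/* incremented during scanning, like object->count */
	int min_count;
};

/* Analogue of color_white(): object not yet proven referenced. */
static bool is_white(const struct obj *o)
{
	return o->count < o->min_count;
}

static void scan_object(struct obj *o)
{
	/*
	 * Racy fast path: read the state without the lock. A concurrent
	 * writer may change o->count under us, so a qualifying object can
	 * be missed, but it will simply be caught in the next scan.
	 */
	if (!is_white(o))
		return;

	pthread_mutex_lock(&o->lock);
	/* Re-check under the lock before doing the real (slow) work. */
	if (is_white(o)) {
		/* ... expensive per-object processing goes here ... */
	}
	pthread_mutex_unlock(&o->lock);
}

The two hunks below instantiate this pattern with color_white() as the unlocked filter and raw_spin_lock_irq()/raw_spin_unlock_irq() as the slow path, which is where the saved overhead comes from.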
mm/kmemleak.c

@@ -1576,6 +1576,13 @@ static void kmemleak_scan(void)
 	 */
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
+		/*
+		 * This is racy but we can save the overhead of lock/unlock
+		 * calls. The missed objects, if any, should be caught in
+		 * the next scan.
+		 */
+		if (!color_white(object))
+			continue;
 		raw_spin_lock_irq(&object->lock);
 		if (color_white(object) && (object->flags & OBJECT_ALLOCATED) &&
 		    update_checksum(object) && get_object(object)) {

@@ -1603,6 +1610,13 @@ static void kmemleak_scan(void)
 	 */
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
+		/*
+		 * This is racy but we can save the overhead of lock/unlock
+		 * calls. The missed objects, if any, should be caught in
+		 * the next scan.
+		 */
+		if (!color_white(object))
+			continue;
 		raw_spin_lock_irq(&object->lock);
 		if (unreferenced_object(object) &&
 		    !(object->flags & OBJECT_REPORTED)) {