net: fec: Use page_pool_put_full_page when freeing rx buffers
page_pool_release_page() was used when freeing rx buffers, but this
function only unmaps the page (if it is mapped) and does not recycle
it back to the pool. So after hundreds of down/up cycles of eth0, the
system runs out of memory. For more details, please refer to the
reproduce steps and bug logs below. To fix this issue, and following
the page pool documentation, use page_pool_put_full_page() instead of
page_pool_release_page(), since that API attempts to recycle the page
when its refcount equals 1. After 20000 test iterations the issue can
no longer be reproduced (previously it occurred after about 391
iterations on an i.MX8MN-EVK).
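For illustration only, a minimal sketch of the corrected cleanup loop
(the helper name fec_enet_free_rx_pages is hypothetical; the fields
match those in the hunk at the end of this message):

	/* Hypothetical helper, sketched for illustration; the real
	 * change lives inline in fec_enet_free_buffers() (see the
	 * hunk below). page_pool_put_full_page() unmaps the page and,
	 * when its refcount is 1, recycles it into the pool instead
	 * of leaving it counted as "inflight" forever.
	 */
	static void fec_enet_free_rx_pages(struct fec_enet_priv_rx_q *rxq)
	{
		int i;

		for (i = 0; i < rxq->bd.ring_size; i++)
			page_pool_put_full_page(rxq->page_pool,
						rxq->rx_skb_info[i].page,
						false /* not in NAPI context */);
	}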
Reproduce steps:
Create and run the following test script:
LOOPS=20000
i=1
while [ $i -le $LOOPS ]
do
	echo "TINFO:ENET $curface up and down test $i times"
	org_macaddr=$(cat /sys/class/net/eth0/address)
	ifconfig eth0 down
	ifconfig eth0 hw ether $org_macaddr up
	i=$(expr $i + 1)
done
sleep 5
if cat /sys/class/net/eth0/operstate | grep 'up'; then
	echo "TEST PASS"
else
	echo "TEST FAIL"
fi
Bug detail logs:
TINFO:ENET up and down test 391 times
[ 850.471205] Qualcomm Atheros AR8031/AR8033 30be0000.ethernet-1:00: attached PHY driver (mii_bus:phy_addr=30be0000.ethernet-1:00, irq=POLL)
[ 853.535318] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 853.541694] fec 30be0000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 870.590531] page_pool_release_retry() stalled pool shutdown 199 inflight 60 sec
[ 931.006557] page_pool_release_retry() stalled pool shutdown 199 inflight 120 sec
TINFO:ENET up and down test 392 times
[ 991.426544] page_pool_release_retry() stalled pool shutdown 192 inflight 181 sec
[ 1051.838531] page_pool_release_retry() stalled pool shutdown 170 inflight 241 sec
[ 1093.751217] Qualcomm Atheros AR8031/AR8033 30be0000.ethernet-1:00: attached PHY driver (mii_bus:phy_addr=30be0000.ethernet-1:00, irq=POLL)
[ 1096.446520] page_pool_release_retry() stalled pool shutdown 308 inflight 60 sec
[ 1096.831245] fec 30be0000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 1096.839092] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1112.254526] page_pool_release_retry() stalled pool shutdown 103 inflight 302 sec
[ 1156.862533] page_pool_release_retry() stalled pool shutdown 308 inflight 120 sec
[ 1172.674516] page_pool_release_retry() stalled pool shutdown 103 inflight 362 sec
[ 1217.278532] page_pool_release_retry() stalled pool shutdown 308 inflight 181 sec
TINFO:ENET up and down test 393 times
[ 1233.086535] page_pool_release_retry() stalled pool shutdown 103 inflight 422 sec
[ 1277.698513] page_pool_release_retry() stalled pool shutdown 308 inflight 241 sec
[ 1293.502525] page_pool_release_retry() stalled pool shutdown 86 inflight 483 sec
[ 1338.110518] page_pool_release_retry() stalled pool shutdown 308 inflight 302 sec
[ 1353.918540] page_pool_release_retry() stalled pool shutdown 32 inflight 543 sec
[ 1361.179205] Qualcomm Atheros AR8031/AR8033 30be0000.ethernet-1:00: attached PHY driver (mii_bus:phy_addr=30be0000.ethernet-1:00, irq=POLL)
[ 1364.255298] fec 30be0000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[ 1364.263189] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1371.998532] page_pool_release_retry() stalled pool shutdown 310 inflight 60 sec
[ 1398.530542] page_pool_release_retry() stalled pool shutdown 308 inflight 362 sec
[ 1414.334539] page_pool_release_retry() stalled pool shutdown 16 inflight 604 sec
[ 1432.414520] page_pool_release_retry() stalled pool shutdown 310 inflight 120 sec
[ 1458.942523] page_pool_release_retry() stalled pool shutdown 308 inflight 422 sec
[ 1474.750521] page_pool_release_retry() stalled pool shutdown 16 inflight 664 sec
TINFO:ENET up and down test 394 times
[ 1492.830522] page_pool_release_retry() stalled pool shutdown 310 inflight 181 sec
[ 1519.358519] page_pool_release_retry() stalled pool shutdown 308 inflight 483 sec
[ 1535.166545] page_pool_release_retry() stalled pool shutdown 2 inflight 724 sec
[ 1537.090278] eth_test2.sh invoked oom-killer: gfp_mask=0x400dc0(GFP_KERNEL_ACCOUNT|__GFP_ZERO), order=0, oom_score_adj=0
[ 1537.101192] CPU: 3 PID: 2379 Comm: eth_test2.sh Tainted: G C 6.1.1+g56321e101aca #1
[ 1537.110249] Hardware name: NXP i.MX8MNano EVK board (DT)
[ 1537.115561] Call trace:
[ 1537.118005] dump_backtrace.part.0+0xe0/0xf0
[ 1537.122289] show_stack+0x18/0x40
[ 1537.125608] dump_stack_lvl+0x64/0x80
[ 1537.129276] dump_stack+0x18/0x34
[ 1537.132592] dump_header+0x44/0x208
[ 1537.136083] oom_kill_process+0x2b4/0x2c0
[ 1537.140097] out_of_memory+0xe4/0x594
[ 1537.143766] __alloc_pages+0xb68/0xd00
[ 1537.147521] alloc_pages+0xac/0x160
[ 1537.151013] __get_free_pages+0x14/0x40
[ 1537.154851] pgd_alloc+0x1c/0x30
[ 1537.158082] mm_init+0xf8/0x1d0
[ 1537.161228] mm_alloc+0x48/0x60
[ 1537.164368] alloc_bprm+0x7c/0x240
[ 1537.167777] do_execveat_common.isra.0+0x70/0x240
[ 1537.172486] __arm64_sys_execve+0x40/0x54
[ 1537.176502] invoke_syscall+0x48/0x114
[ 1537.180255] el0_svc_common.constprop.0+0xcc/0xec
[ 1537.184964] do_el0_svc+0x2c/0xd0
[ 1537.188280] el0_svc+0x2c/0x84
[ 1537.191340] el0t_64_sync_handler+0xf4/0x120
[ 1537.195613] el0t_64_sync+0x18c/0x190
[ 1537.199334] Mem-Info:
[ 1537.201620] active_anon:342 inactive_anon:10343 isolated_anon:0
[ 1537.201620] active_file:54 inactive_file:112 isolated_file:0
[ 1537.201620] unevictable:0 dirty:0 writeback:0
[ 1537.201620] slab_reclaimable:2620 slab_unreclaimable:7076
[ 1537.201620] mapped:1489 shmem:2473 pagetables:466
[ 1537.201620] sec_pagetables:0 bounce:0
[ 1537.201620] kernel_misc_reclaimable:0
[ 1537.201620] free:136672 free_pcp:96 free_cma:129241
[ 1537.240419] Node 0 active_anon:1368kB inactive_anon:41372kB active_file:216kB inactive_file:5052kB unevictable:0kB isolated(anon):0kB isolated(file):0kB s
[ 1537.271422] Node 0 DMA free:541636kB boost:0kB min:30000kB low:37500kB high:45000kB reserved_highatomic:0KB active_anon:1368kB inactive_anon:41372kB actiB
[ 1537.300219] lowmem_reserve[]: 0 0 0 0
[ 1537.303929] Node 0 DMA: 1015*4kB (UMEC) 743*8kB (UMEC) 417*16kB (UMEC) 235*32kB (UMEC) 116*64kB (UMEC) 25*128kB (UMEC) 4*256kB (UC) 2*512kB (UC) 0*1024kBB
[ 1537.323938] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[ 1537.332708] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=32768kB
[ 1537.341292] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 1537.349776] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=64kB
[ 1537.358087] 2939 total pagecache pages
[ 1537.361876] 0 pages in swap cache
[ 1537.365229] Free swap = 0kB
[ 1537.368147] Total swap = 0kB
[ 1537.371065] 516096 pages RAM
[ 1537.373959] 0 pages HighMem/MovableOnly
[ 1537.377834] 17302 pages reserved
[ 1537.381103] 163840 pages cma reserved
[ 1537.384809] 0 pages hwpoisoned
[ 1537.387902] Tasks state (memory values in pages):
[ 1537.392652] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[ 1537.401356] [ 201] 993 201 1130 72 45056 0 0 rpcbind
[ 1537.409772] [ 202] 0 202 4529 1640 77824 0 -250 systemd-journal
[ 1537.418861] [ 222] 0 222 4691 801 69632 0 -1000 systemd-udevd
[ 1537.427787] [ 248] 994 248 20914 130 65536 0 0 systemd-timesyn
[ 1537.436884] [ 497] 0 497 620 31 49152 0 0 atd
[ 1537.444938] [ 500] 0 500 854 77 53248 0 0 crond
[ 1537.453165] [ 503] 997 503 1470 160 49152 0 -900 dbus-daemon
[ 1537.461908] [ 505] 0 505 633 24 40960 0 0 firmwared
[ 1537.470491] [ 513] 0 513 2507 180 61440 0 0 ofonod
[ 1537.478800] [ 514] 990 514 69640 137 81920 0 0 parsec
[ 1537.487120] [ 533] 0 533 599 39 40960 0 0 syslogd
[ 1537.495518] [ 534] 0 534 4546 148 65536 0 0 systemd-logind
[ 1537.504560] [ 535] 0 535 690 24 45056 0 0 tee-supplicant
[ 1537.513564] [ 540] 996 540 2769 168 61440 0 0 systemd-network
[ 1537.522680] [ 566] 0 566 3878 228 77824 0 0 connmand
[ 1537.531168] [ 645] 998 645 1538 133 57344 0 0 avahi-daemon
[ 1537.540004] [ 646] 998 646 1461 64 57344 0 0 avahi-daemon
[ 1537.548846] [ 648] 992 648 781 41 45056 0 0 rpc.statd
[ 1537.557415] [ 650] 64371 650 590 23 45056 0 0 ninfod
[ 1537.565754] [ 653] 61563 653 555 24 45056 0 0 rdisc
[ 1537.573971] [ 655] 0 655 374569 2999 290816 0 -999 containerd
[ 1537.582621] [ 658] 0 658 1311 20 49152 0 0 agetty
[ 1537.590922] [ 663] 0 663 1529 97 49152 0 0 login
[ 1537.599138] [ 666] 0 666 3430 202 69632 0 0 wpa_supplicant
[ 1537.608147] [ 667] 0 667 2344 96 61440 0 0 systemd-userdbd
[ 1537.617240] [ 677] 0 677 2964 314 65536 0 100 systemd
[ 1537.625651] [ 679] 0 679 3720 646 73728 0 100 (sd-pam)
[ 1537.634138] [ 687] 0 687 1289 403 45056 0 0 sh
[ 1537.642108] [ 789] 0 789 970 93 45056 0 0 eth_test2.sh
[ 1537.650955] [ 2355] 0 2355 2346 94 61440 0 0 systemd-userwor
[ 1537.660046] [ 2356] 0 2356 2346 94 61440 0 0 systemd-userwor
[ 1537.669137] [ 2358] 0 2358 2346 95 57344 0 0 systemd-userwor
[ 1537.678258] [ 2379] 0 2379 970 93 45056 0 0 eth_test2.sh
[ 1537.687098] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-0.slice/user@0.service,tas0
[ 1537.703009] Out of memory: Killed process 679 ((sd-pam)) total-vm:14880kB, anon-rss:2584kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:72kB oom_score_ad0
[ 1553.246526] page_pool_release_retry() stalled pool shutdown 310 inflight 241 sec
Fixes: 95698ff617 ("net: fec: using page pool to manage RX buffers")
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: shenwei wang <Shenwei.wang@nxp.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
@@ -3191,7 +3191,7 @@ static void fec_enet_free_buffers(struct net_device *ndev)
 	for (q = 0; q < fep->num_rx_queues; q++) {
 		rxq = fep->rx_queue[q];
 		for (i = 0; i < rxq->bd.ring_size; i++)
-			page_pool_release_page(rxq->page_pool, rxq->rx_skb_info[i].page);
+			page_pool_put_full_page(rxq->page_pool, rxq->rx_skb_info[i].page, false);
 
 		for (i = 0; i < XDP_STATS_TOTAL; i++)
 			rxq->stats[i] = 0;
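Note on the third argument: allow_direct is false here because direct
recycling into the page pool's lockless cache is only safe from the
NAPI (softirq) context that owns the pool, and fec_enet_free_buffers()
runs from the ifdown path; the pages are therefore returned through
the pool's normal ptr_ring path.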