bnxt: fill data page pool with frags if PAGE_SIZE > BNXT_RX_PAGE_SIZE

The data page pool always fills the HW rx ring with pages. On arm64 with
64K pages, this will waste _at least_ 32K of memory per entry in the rx
ring.

Fix by fragmenting the pages if PAGE_SIZE > BNXT_RX_PAGE_SIZE. This
makes the data page pool the same as the header pool.

Tested with iperf3 with a small (64 entries) rx ring to encourage buffer
circulation.

Fixes: cd1fafe7da ("eth: bnxt: add support rx side device memory TCP")
Reviewed-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David Wei <dw@davidwei.uk>
Link: https://patch.msgid.link/20250812182907.1540755-1-dw@davidwei.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

@@ -926,15 +926,21 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
 static netmem_ref __bnxt_alloc_rx_netmem(struct bnxt *bp, dma_addr_t *mapping,
 					 struct bnxt_rx_ring_info *rxr,
+					 unsigned int *offset,
 					 gfp_t gfp)
 {
 	netmem_ref netmem;
 
-	netmem = page_pool_alloc_netmems(rxr->page_pool, gfp);
+	if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
+		netmem = page_pool_alloc_frag_netmem(rxr->page_pool, offset,
+						     BNXT_RX_PAGE_SIZE, gfp);
+	} else {
+		netmem = page_pool_alloc_netmems(rxr->page_pool, gfp);
+		*offset = 0;
+	}
 	if (!netmem)
 		return 0;
 
-	*mapping = page_pool_get_dma_addr_netmem(netmem);
+	*mapping = page_pool_get_dma_addr_netmem(netmem) + *offset;
 	return netmem;
 }
@@ -1029,7 +1035,7 @@ static int bnxt_alloc_rx_netmem(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
 	dma_addr_t mapping;
 	netmem_ref netmem;
 
-	netmem = __bnxt_alloc_rx_netmem(bp, &mapping, rxr, gfp);
+	netmem = __bnxt_alloc_rx_netmem(bp, &mapping, rxr, &offset, gfp);
 	if (!netmem)
 		return -ENOMEM;