path: root/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
author    Maciej Fijalkowski <maciej.fijalkowski@intel.com>  2022-01-25 17:04:41 +0100
committer Daniel Borkmann <daniel@iogearbox.net>             2022-01-27 17:25:32 +0100
commit    3876ff525de70ae850f3aa9b7c295e9cf7253b0e (patch)
tree      42ccc09ea9d1d08c8d13fb996e24c1ed2a0be870 /drivers/net/ethernet/intel/ice/ice_txrx_lib.c
parent    296f13ff3854535009a185aaf8e3603266d39d94 (diff)
download  linux-3876ff525de70ae850f3aa9b7c295e9cf7253b0e.tar.gz
linux-3876ff525de70ae850f3aa9b7c295e9cf7253b0e.tar.bz2
linux-3876ff525de70ae850f3aa9b7c295e9cf7253b0e.zip
ice: xsk: Handle SW XDP ring wrap and bump tail more often
Currently, if ice_clean_rx_irq_zc() processed the whole ring and next_to_use != 0, ice_alloc_rx_buf_zc() would not refill the whole ring even if the XSK buffer pool had enough free entries (either from the fill ring or the internal recycle mechanism), because the ring wrap is not handled.

Improve the logic in ice_alloc_rx_buf_zc() to address the problem above. Do not clamp the count of buffers passed to xsk_buff_alloc_batch() when next_to_use + buffer count >= rx_ring->count; instead, split the request into two calls to that function - one for the part up to the wrap and one for the part after the wrap.

Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Link: https://lore.kernel.org/bpf/20220125160446.78976-4-maciej.fijalkowski@intel.com
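The split at the wrap can be illustrated with a small, self-contained sketch. This is not the driver code: RING_SIZE, ring_fill() and ring_alloc() are made-up stand-ins for the descriptor count and the xsk_buff_alloc_batch()-backed refill path, and the sketch assumes the buffer pool always has enough free entries.

    /* Standalone sketch of splitting a refill request into two batches
     * when it would cross the end of the ring. All names are illustrative. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 16  /* stand-in for rx_ring->count */

    /* Stand-in for one batched fill of 'count' descriptors starting at
     * index 'ntu'; returns how many descriptors were actually filled. */
    static uint16_t ring_fill(uint16_t ntu, uint16_t count)
    {
        printf("fill %u descriptors starting at index %u\n", count, ntu);
        return count;  /* assume the pool had enough free entries */
    }

    /* Refill 'count' buffers, splitting the batch at the ring wrap. */
    static bool ring_alloc(uint16_t *next_to_use, uint16_t count)
    {
        uint16_t ntu = *next_to_use;
        uint16_t filled_extra = 0, filled;
        uint16_t total = count;

        if (ntu + count >= RING_SIZE) {
            /* First call: only the descriptors up to the wrap. */
            filled_extra = ring_fill(ntu, RING_SIZE - ntu);
            count -= RING_SIZE - ntu;
            ntu = 0;               /* wrap the software index */
        }

        /* Second call: the remainder (or the whole batch if no wrap). */
        filled = ring_fill(ntu, count);
        ntu += filled;
        if (ntu == RING_SIZE)
            ntu = 0;

        *next_to_use = ntu;        /* HW tail bump would happen here */
        return filled_extra + filled == total;
    }

    int main(void)
    {
        uint16_t ntu = 12;

        /* 12 + 8 >= 16, so the request is split into 4 + 4. */
        ring_alloc(&ntu, 8);
        printf("next_to_use is now %u\n", ntu);
        return 0;
    }

With next_to_use at 12 and a request for 8 buffers in a 16-entry ring, the sketch issues one call for the 4 descriptors up to the wrap and a second call for the remaining 4 starting at index 0, rather than clamping the request to 4 and leaving the rest of the ring unfilled.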
Diffstat (limited to 'drivers/net/ethernet/intel/ice/ice_txrx_lib.c')
0 files changed, 0 insertions, 0 deletions