author | Maxim Mikityanskiy <maximmi@nvidia.com> | 2022-09-30 09:28:50 -0700 |
---|---|---|
committer | Jakub Kicinski <kuba@kernel.org> | 2022-10-01 13:30:19 -0700 |
commit | a064c609849bf71adc7484b030539568cd2a5155 (patch) | |
tree | 6d1c22fd363997a3c8a17e96d7d6b151ff87ede8 /tools/perf/util/scripting-engines/trace-event-python.c | |
parent | 8cbcafcee1910ece54990f9aebae78fcbdb93913 (diff) | |
net/mlx5e: Introduce wqe_index_mask for legacy RQ
When fragments of different WQEs share the same page, mlx5e_post_rx_wqes
must wait until the old WQE stops using the page; only then can the new
WQE allocate a new page. Essentially, this means that if WQE index i is
still in use, the allocation must stop before `i % bulk`, where bulk is
the number of WQEs that may share the same page.
As bulk is always a power of two, `i % bulk = i & (bulk - 1)`, and the
new wqe_index_mask field will be equal to `bulk - 1`.
At the same time, wqe_bulk remains for optimization purposes and stores
`max(bulk, 8)`, which allows skipping the allocation until at least
8 WQEs are free.
Signed-off-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'tools/perf/util/scripting-engines/trace-event-python.c')
0 files changed, 0 insertions, 0 deletions