author:    Alexei Starovoitov <ast@kernel.org>  2023-02-15 15:40:06 -0800
committer: Alexei Starovoitov <ast@kernel.org>  2023-02-15 15:40:06 -0800
commit:    3538a0fbbd81bc131afe48b4cf02895735944359 (patch)
tree:      c36bec94b00c337da6266687758715ec9eaa0f92 /tools/testing/selftests/bpf/prog_tests/recursion.c
parent:    b2d9002ee9a65aa0eaf01ae494876fe16389d535 (diff)
parent:    f88da2d46cc9a19b0c233285339659cae36c5d9a (diff)
Merge branch 'Use __GFP_ZERO in bpf memory allocator'
Hou Tao says:
====================
From: Hou Tao <houtao1@huawei.com>
Hi,
This patchset fixes the hard-lockup problem found while checking how htab
handles element reuse in the bpf memory allocator. Immediate reuse of a
freed element reinitializes special fields (e.g., bpf_spin_lock) in the
htab map value, which may corrupt a lookup performed with the BPF_F_LOCK
flag (it acquires the bpf_spin_lock while copying the value) and lead to a
hard-lockup, as shown in patch #2. Patch #1 fixes this by using __GFP_ZERO
when allocating the object from slab, making the behavior similar to the
preallocated hash-table case. Please see the individual patches for more
details. Comments are always welcome.
Regards,
Change Log:
v1:
* Use __GFP_ZERO instead of ctor to avoid retpoline overhead (from Alexei)
* Add comments for check_and_init_map_value() (from Alexei)
* Split the __GFP_ZERO patches out of the original patchset to unblock
the development work of others.
RFC: https://lore.kernel.org/bpf/20221230041151.1231169-1-houtao@huaweicloud.com
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'tools/testing/selftests/bpf/prog_tests/recursion.c')
0 files changed, 0 insertions, 0 deletions