
Commit 3174603

Alexei Starovoitov authored and anakryiko committed
bpf: Introduce bpf_arena.
Introduce bpf_arena, which is a sparse shared memory region between the bpf program and user space.

Use cases:

1. User space mmap-s bpf_arena and uses it as a traditional mmap-ed anonymous region, like memcached or any key/value storage. The bpf program implements an in-kernel accelerator. An XDP prog can search for a key in bpf_arena and return a value without going to user space.

2. The bpf program builds arbitrary data structures in bpf_arena (hash tables, rb-trees, sparse arrays), while user space consumes them.

3. bpf_arena is a "heap" of memory from the bpf program's point of view. User space may mmap it, but the bpf program will not convert pointers to the user base at run-time, in order to improve bpf program speed.

Initially, the kernel vm_area and user vma are not populated. User space can fault in pages within the range. While servicing a page fault, bpf_arena logic will insert a new page into the kernel and user vmas. The bpf program can allocate pages from that region via bpf_arena_alloc_pages(). This kernel function will insert pages into the kernel vm_area. The subsequent fault-in from user space will populate that page into the user vma. The BPF_F_SEGV_ON_FAULT flag at arena creation time can be used to prevent fault-in from user space. In such a case, if a page is not allocated by the bpf program and not present in the kernel vm_area, the user process will segfault. This is useful for use cases 2 and 3 above.

bpf_arena_alloc_pages() is similar to user space mmap(). It allocates pages either at a specific address within the arena or allocates a range with the maple tree. bpf_arena_free_pages() is analogous to munmap(), which frees pages and removes the range from the kernel vm_area and from user process vmas.

bpf_arena can be used as a bpf program "heap" of up to 4GB. This is use case 3, where the speed of the bpf program is more important than ease of sharing with user space. In such a case, the BPF_F_NO_USER_CONV flag is recommended. It tells the verifier to treat the rX = bpf_arena_cast_user(rY) instruction as a 32-bit move wX = wY, which improves bpf prog performance. Otherwise, bpf_arena_cast_user is translated by the JIT to conditionally add the upper 32 bits of user vm_start (if the pointer is not NULL) to arena pointers before they are stored into memory. This way, user space sees them as valid 64-bit pointers.

Diff llvm/llvm-project#84410 enables the LLVM BPF backend to generate the bpf_addr_space_cast() instruction to cast pointers between address_space(1), which is reserved for bpf_arena pointers, and the default address space zero. All arena pointers in a bpf program written in C are tagged as __attribute__((address_space(1))). Hence, clang provides helpful diagnostics when pointers cross address spaces. Libbpf and the kernel support only address_space == 1. All other address space identifiers are reserved.

rX = bpf_addr_space_cast(rY, /* dst_as */ 1, /* src_as */ 0) tells the verifier that rX->type = PTR_TO_ARENA. Any further operations on a PTR_TO_ARENA register have to be in the 32-bit domain. The verifier will mark loads/stores through PTR_TO_ARENA with PROBE_MEM32. The JIT will generate them as kern_vm_start + 32bit_addr memory accesses. The behavior is similar to copy_from_kernel_nofault() except that no address checks are necessary: the address is guaranteed to be in the 4GB range. If the page is not present, the destination register is zeroed on read, and the operation is ignored on write.
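Purely for illustration (not part of this patch), the sketch below shows how a bpf program might declare an arena map and allocate a page from it. The __arena macro, the map definition, and the bpf_arena_alloc_pages() declaration are assumptions modeled on the selftests that accompany this series; the allocation kfuncs themselves land in a follow-up patch.

/* Hypothetical bpf program sketch; declarations below are assumptions. */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

/* Arena pointers are tagged with address_space(1), as described above. */
#define __arena __attribute__((address_space(1)))

struct {
	__uint(type, BPF_MAP_TYPE_ARENA);
	__uint(map_flags, BPF_F_MMAPABLE);
	__uint(max_entries, 10); /* number of pages backing the arena */
} arena SEC(".maps");

/* Assumed kfunc declaration for the page allocator described above. */
void __arena *bpf_arena_alloc_pages(void *map, void __arena *addr, __u32 page_cnt,
				    int node_id, __u64 flags) __ksym;

SEC("syscall")
int alloc_one_page(void *ctx)
{
	int __arena *page;

	/* Allocate one page anywhere within the arena's 4GB range. */
	page = bpf_arena_alloc_pages(&arena, NULL, 1, -1 /* NUMA_NO_NODE */, 0);
	if (!page)
		return 1;

	/* Loads/stores through an arena pointer are marked PROBE_MEM32 by the
	 * verifier and emitted by the JIT as kern_vm_start + 32bit_addr accesses.
	 */
	page[0] = 42;
	return 0;
}

char _license[] SEC("license") = "GPL";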
rX = bpf_addr_space_cast(rY, 0, 1) tells the verifier that rX->type = unknown scalar. If arena->map_flags has BPF_F_NO_USER_CONV set, then the verifier converts such cast instructions to mov32. Otherwise, the JIT will emit native code equivalent to:

rX = (u32)rY;
if (rY)
	rX |= clear_lo32_bits(arena->user_vm_start); /* replace hi32 bits in rX */

After such conversion, the pointer becomes a valid user pointer within the bpf_arena range. The user process can access data structures created in bpf_arena without any additional computations. For example, a linked list built by a bpf program can be walked natively by user space.

Signed-off-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Andrii Nakryiko <[email protected]>
Reviewed-by: Barret Rhoden <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
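As a companion sketch of the user-space side (again not part of this patch; bpf_map_create() and mmap() are existing APIs, but their use here with BPF_MAP_TYPE_ARENA is an assumption based on the description above):

/* Hypothetical user-space sketch: create and mmap a bpf_arena. */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <bpf/bpf.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	__u32 nr_pages = 100; /* max_entries = number of pages in the arena */

	/* map_extra is left 0 here; a non-zero, page-aligned value would name
	 * the address where user space intends to mmap() the arena.
	 */
	LIBBPF_OPTS(bpf_map_create_opts, opts,
		    .map_flags = BPF_F_MMAPABLE);
	int fd = bpf_map_create(BPF_MAP_TYPE_ARENA, "arena", 0, 0, nr_pages, &opts);
	if (fd < 0)
		return 1;

	char *base = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, 0);
	if (base == MAP_FAILED)
		return 1;

	/* Touching a page faults it into both the user vma and the kernel
	 * vm_area (unless the arena was created with BPF_F_SEGV_ON_FAULT).
	 * Without BPF_F_NO_USER_CONV, pointer values a bpf program stores in
	 * this region are valid user pointers and can be followed directly.
	 */
	base[0] = 1;
	printf("arena mapped at %p\n", base);
	return 0;
}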
1 parent 365c2b3 commit 3174603

9 files changed: 635 additions, 2 deletions


include/linux/bpf.h

Lines changed: 5 additions & 2 deletions
@@ -37,6 +37,7 @@ struct perf_event;
 struct bpf_prog;
 struct bpf_prog_aux;
 struct bpf_map;
+struct bpf_arena;
 struct sock;
 struct seq_file;
 struct btf;
@@ -528,8 +529,8 @@ void bpf_list_head_free(const struct btf_field *field, void *list_head,
 			struct bpf_spin_lock *spin_lock);
 void bpf_rb_root_free(const struct btf_field *field, void *rb_root,
 		      struct bpf_spin_lock *spin_lock);
-
-
+u64 bpf_arena_get_kern_vm_start(struct bpf_arena *arena);
+u64 bpf_arena_get_user_vm_start(struct bpf_arena *arena);
 int bpf_obj_name_cpy(char *dst, const char *src, unsigned int size);
 
 struct bpf_offload_dev;
@@ -2215,6 +2216,8 @@ int generic_map_delete_batch(struct bpf_map *map,
 struct bpf_map *bpf_map_get_curr_or_next(u32 *id);
 struct bpf_prog *bpf_prog_get_curr_or_next(u32 *id);
 
+int bpf_map_alloc_pages(const struct bpf_map *map, gfp_t gfp, int nid,
+			unsigned long nr_pages, struct page **page_array);
 #ifdef CONFIG_MEMCG_KMEM
 void *bpf_map_kmalloc_node(const struct bpf_map *map, size_t size, gfp_t flags,
 			   int node);

include/linux/bpf_types.h

Lines changed: 1 addition & 0 deletions
@@ -132,6 +132,7 @@ BPF_MAP_TYPE(BPF_MAP_TYPE_STRUCT_OPS, bpf_struct_ops_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_RINGBUF, ringbuf_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_BLOOM_FILTER, bloom_filter_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_USER_RINGBUF, user_ringbuf_map_ops)
+BPF_MAP_TYPE(BPF_MAP_TYPE_ARENA, arena_map_ops)
 
 BPF_LINK_TYPE(BPF_LINK_TYPE_RAW_TRACEPOINT, raw_tracepoint)
 BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING, tracing)

include/uapi/linux/bpf.h

Lines changed: 10 additions & 0 deletions
@@ -1009,6 +1009,7 @@ enum bpf_map_type {
 	BPF_MAP_TYPE_BLOOM_FILTER,
 	BPF_MAP_TYPE_USER_RINGBUF,
 	BPF_MAP_TYPE_CGRP_STORAGE,
+	BPF_MAP_TYPE_ARENA,
 	__MAX_BPF_MAP_TYPE
 };
 
@@ -1396,6 +1397,12 @@ enum {
 
 	/* BPF token FD is passed in a corresponding command's token_fd field */
 	BPF_F_TOKEN_FD = (1U << 16),
+
+	/* When user space page faults in bpf_arena send SIGSEGV instead of inserting new page */
+	BPF_F_SEGV_ON_FAULT = (1U << 17),
+
+	/* Do not translate kernel bpf_arena pointers to user pointers */
+	BPF_F_NO_USER_CONV = (1U << 18),
 };
 
 /* Flags for BPF_PROG_QUERY. */
@@ -1467,6 +1474,9 @@ union bpf_attr {
 	 * BPF_MAP_TYPE_BLOOM_FILTER - the lowest 4 bits indicate the
 	 * number of hash functions (if 0, the bloom filter will default
 	 * to using 5 hash functions).
+	 *
+	 * BPF_MAP_TYPE_ARENA - contains the address where user space
+	 * is going to mmap() the arena. It has to be page aligned.
 	 */
 	__u64 map_extra;

kernel/bpf/Makefile

Lines changed: 3 additions & 0 deletions
@@ -15,6 +15,9 @@ obj-${CONFIG_BPF_LSM} += bpf_inode_storage.o
 obj-$(CONFIG_BPF_SYSCALL) += disasm.o mprog.o
 obj-$(CONFIG_BPF_JIT) += trampoline.o
 obj-$(CONFIG_BPF_SYSCALL) += btf.o memalloc.o
+ifeq ($(CONFIG_MMU)$(CONFIG_64BIT),yy)
+obj-$(CONFIG_BPF_SYSCALL) += arena.o
+endif
 obj-$(CONFIG_BPF_JIT) += dispatcher.o
 ifeq ($(CONFIG_NET),y)
 obj-$(CONFIG_BPF_SYSCALL) += devmap.o
