
Commit b2eed9b

Ard Biesheuvel authored and wildea01 committed
arm64/kernel: kaslr: reduce module randomization range to 2 GB
Commit 7290d58 ("module: use relative references for __ksymtab entries") updated the ksymtab handling of some KASLR-capable architectures so that ksymtab entries are emitted as pairs of 32-bit relative references. This reduces the size of the entries, but more importantly, it gets rid of statically assigned absolute addresses, which require fixing up at boot time if the kernel is self-relocating (which takes a 24-byte RELA entry for each member of the ksymtab struct).

Since ksymtab entries are always part of the same module as the symbol they export, it was assumed at the time that a 32-bit relative reference is always sufficient to capture the offset between a ksymtab entry and its target symbol.

Unfortunately, this is not always true: in the case of per-CPU variables, a per-CPU variable's base address (which usually differs from the actual address of any of its per-CPU copies) is allocated in the vicinity of the .data..percpu section in the core kernel (i.e., in the per-CPU reserved region which follows the section containing the core kernel's statically allocated per-CPU variables).

Since we randomize the module space over a 4 GB window covering the core kernel (based on the -/+ 4 GB range of an ADRP/ADD pair), we may end up putting the core kernel out of the -/+ 2 GB range of 32-bit relative references of module ksymtab entries that refer to per-CPU variables.

So reduce the module randomization range a bit further. We lose 1 bit of randomization this way, but this is something we can tolerate.

Cc: <[email protected]> # v4.19+
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
1 parent 969f5ea commit b2eed9b

File tree

2 files changed: +4 −4 lines changed


arch/arm64/kernel/kaslr.c

Lines changed: 3 additions & 3 deletions

@@ -145,15 +145,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) {
 		/*
-		 * Randomize the module region over a 4 GB window covering the
+		 * Randomize the module region over a 2 GB window covering the
 		 * kernel. This reduces the risk of modules leaking information
 		 * about the address of the kernel itself, but results in
 		 * branches between modules and the core kernel that are
 		 * resolved via PLTs. (Branches between modules will be
 		 * resolved normally.)
 		 */
-		module_range = SZ_4G - (u64)(_end - _stext);
-		module_alloc_base = max((u64)_end + offset - SZ_4G,
+		module_range = SZ_2G - (u64)(_end - _stext);
+		module_alloc_base = max((u64)_end + offset - SZ_2G,
 					(u64)MODULES_VADDR);
 	} else {
 		/*

arch/arm64/kernel/module.c

Lines changed: 1 addition & 1 deletion

@@ -56,7 +56,7 @@ void *module_alloc(unsigned long size)
 	 * can simply omit this fallback in that case.
 	 */
 	p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
-				 module_alloc_base + SZ_4G, GFP_KERNEL,
+				 module_alloc_base + SZ_2G, GFP_KERNEL,
 				 PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
 				 __builtin_return_address(0));
