Marco Elver aebc7b0d8d list: Introduce CONFIG_LIST_HARDENED
Numerous production kernel configs (see [1, 2]) are choosing to enable
CONFIG_DEBUG_LIST, which is also being recommended by KSPP for hardened
configs [3]. The motivation behind this is that the option can be used
as a security hardening feature (e.g. CVE-2019-2215 and CVE-2019-2025
are mitigated by the option [4]).

The feature was never designed with performance in mind, yet common
list manipulation happens in hot paths all over the kernel.

Introduce CONFIG_LIST_HARDENED, which performs list pointer checking
inline, and only upon list corruption calls the reporting slow path.
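
A rough sketch of the pattern (simplified, not the exact upstream code;
the inline helper is named after the existing __list_add_valid() check in
<linux/list.h>, and the slow path follows the __list_*_valid_or_report()
naming mentioned below):

  /*
   * Inline fast path for list_add() (sketch).  The cheap checks run
   * inline at every call site; only a detected corruption takes the
   * out-of-line reporting slow path, and the result is then known to
   * be false because the inline checks are a subset of its checks.
   */
  static __always_inline bool __list_add_valid(struct list_head *new,
                                               struct list_head *prev,
                                               struct list_head *next)
  {
          if (likely(next->prev == prev && prev->next == next &&
                     new != prev && new != next))
                  return true;

          __list_add_valid_or_report(new, prev, next);
          return false;
  }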

To generate optimal machine code with CONFIG_LIST_HARDENED:

  1. Elide checking for pointer values which upon dereference would
     result in an immediate access fault (i.e. minimal hardening
     checks).  The trade-off is lower-quality error reports.

  2. Use the __preserve_most function attribute (available with Clang,
     but not yet with GCC) to minimize the code footprint for calling
     the reporting slow path. As a result, the function size of callers
     shrinks, because they no longer need to save and restore registers
     around the rarely taken call to the reporting slow path (see the
     sketch after this list).

     Note that all TUs in lib/Makefile already disable function tracing,
     including list_debug.c, and __preserve_most's implied notrace has
     no effect in this case.

  3. Because the inline checks are a subset of the full set of checks in
     __list_*_valid_or_report(), always return false if the inline
     checks failed.  This avoids a redundant compare and conditional
     branch right after returning from the slow path.
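
A sketch of how the attribute ties into the slow-path declaration
(simplified; the exact macro placement in the upstream headers may
differ):

  /*
   * With a supporting compiler, __preserve_most makes the callee save
   * and restore nearly all registers, so the many inlined call sites
   * stay small.  notrace matches the tracing note above; __cold keeps
   * the rarely taken report out of the hot text.
   */
  #if __has_attribute(__preserve_most__)
  # define __preserve_most notrace __attribute__((__preserve_most__))
  #else
  # define __preserve_most
  #endif

  extern bool __cold __preserve_most
  __list_add_valid_or_report(struct list_head *new,
                             struct list_head *prev,
                             struct list_head *next);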

As a side-effect of the checks being inline, if the compiler can prove
some condition to always be true, it can completely elide some checks.
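
For example (illustrative only, not taken from the patch), when adding to
a freshly initialized on-stack list head, the pointer-consistency checks
can be proven from the initializer and dropped:

  #include <linux/list.h>

  /* Illustrative helper; the function and its caller are made up. */
  static void add_first(struct list_head *item)
  {
          LIST_HEAD(q);   /* q.next == q.prev == &q */

          /*
           * The checks are inlined right here, so the compiler can see
           * q's initial state and prove prev->next == next as well as
           * next->prev == prev, letting it drop those comparisons.
           */
          list_add(item, &q);
  }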

Since DEBUG_LIST is functionally a superset of LIST_HARDENED, the
Kconfig variables are changed to reflect that: DEBUG_LIST selects
LIST_HARDENED, whereas LIST_HARDENED itself has no dependency on
DEBUG_LIST.

Running netperf with CONFIG_LIST_HARDENED (using a Clang compiler with
"preserve_most") shows throughput improvements, in my case of ~7% on
average (up to 20-30% on some test cases).

Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Link: https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html [4]
Signed-off-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20230811151847.1594958-3-elver@google.com
Signed-off-by: Kees Cook <keescook@chromium.org>
2023-08-15 14:57:25 -07:00