mm/slab: remove HAVE_HARDENED_USERCOPY_ALLOCATOR

With SLOB removed, both remaining allocators support hardened usercopy,
so remove the config and associated #ifdef.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
commit d2e527f0d8 (parent 8040cbf5e1)
Author: Vlastimil Babka <vbabka@suse.cz>
Date:   2023-05-23 09:26:35 +02:00

 3 files changed, 18 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig

@@ -221,7 +221,6 @@ choice
 config SLAB
 	bool "SLAB"
 	depends on !PREEMPT_RT
-	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	  The regular slab allocator that is established and known to work
 	  well in all environments. It organizes cache hot objects in
@@ -229,7 +228,6 @@ config SLAB
 
 config SLUB
 	bool "SLUB (Unqueued Allocator)"
-	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	  SLUB is a slab allocator that minimizes cache line usage
 	  instead of managing queues of cached objects (SLAB approach).

diff --git a/mm/slab.h b/mm/slab.h

@@ -832,16 +832,8 @@ struct kmem_obj_info {
 void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab);
 #endif
 
-#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
 void __check_heap_object(const void *ptr, unsigned long n,
 			 const struct slab *slab, bool to_user);
-#else
-static inline
-void __check_heap_object(const void *ptr, unsigned long n,
-			 const struct slab *slab, bool to_user)
-{
-}
-#endif
 
 #ifdef CONFIG_SLUB_DEBUG
 void skip_orig_size_check(struct kmem_cache *s, const void *object);
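
For context, the declaration that remains is the hook the hardened usercopy
code in mm/usercopy.c calls when a copy targets a slab object. A simplified
sketch of that pre-existing call site (error paths and large-folio handling
elided) looks roughly like:

static void check_heap_object(const void *ptr, unsigned long n, bool to_user)
{
	struct folio *folio;

	if (!virt_addr_valid(ptr))
		return;

	folio = virt_to_folio(ptr);
	if (folio_test_slab(folio))
		/* Ask the slab allocator to validate the copy range. */
		__check_heap_object(ptr, n, folio_slab(folio), to_user);
}

Since both SLAB and SLUB implement __check_heap_object(), the empty
!CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR stub removed above has no
remaining user.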

diff --git a/security/Kconfig b/security/Kconfig

@@ -127,16 +127,8 @@ config LSM_MMAP_MIN_ADDR
 	  this low address space will need the permission specific to the
 	  systems running LSM.
 
-config HAVE_HARDENED_USERCOPY_ALLOCATOR
-	bool
-	help
-	  The heap allocator implements __check_heap_object() for
-	  validating memory ranges against heap object sizes in
-	  support of CONFIG_HARDENED_USERCOPY.
-
 config HARDENED_USERCOPY
 	bool "Harden memory copies between kernel and userspace"
-	depends on HAVE_HARDENED_USERCOPY_ALLOCATOR
 	imply STRICT_DEVMEM
 	help
 	  This option checks for obviously wrong memory regions when
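
With the intermediate HAVE_ symbol gone, CONFIG_HARDENED_USERCOPY is the only
remaining compile-time gate. For reference, the gate itself (unchanged by this
patch) lives in include/linux/thread_info.h and looks roughly like:

#ifdef CONFIG_HARDENED_USERCOPY
extern void __check_object_size(const void *ptr, unsigned long n,
				bool to_user);

static __always_inline void check_object_size(const void *ptr,
					      unsigned long n, bool to_user)
{
	/* Compile-time-constant sizes are checked statically instead. */
	if (!__builtin_constant_p(n))
		__check_object_size(ptr, n, to_user);
}
#else
static inline void check_object_size(const void *ptr, unsigned long n,
				     bool to_user)
{ }
#endif /* CONFIG_HARDENED_USERCOPY */

__check_object_size() in turn reaches the check_heap_object() path sketched
above.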