// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
/*
 * Copyright (C) 2017-2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
 * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All rights reserved.
 *
 * This driver produces cryptographically secure pseudorandom data. It is divided
 * into roughly six sections, each with a section header:
 *
 *   - Initialization and readiness waiting.
 *   - Fast key erasure RNG, the "crng".
 *   - Entropy accumulation and extraction routines.
 *   - Entropy collection routines.
 *   - Userspace reader/writer interfaces.
 *   - Sysctl interface.
 *
 * The high level overview is that there is one input pool, into which
 * various pieces of data are hashed. Prior to initialization, some of that
 * data is then "credited" as having a certain number of bits of entropy.
 * When enough bits of entropy are available, the hash is finalized and
 * handed as a key to a stream cipher that expands it indefinitely for
 * various consumers. This key is periodically refreshed as the various
 * entropy collectors, described below, add data to the input pool.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/utsname.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/major.h>
#include <linux/string.h>
#include <linux/fcntl.h>
#include <linux/slab.h>
#include <linux/random.h>
#include <linux/poll.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/genhd.h>
#include <linux/interrupt.h>
#include <linux/mm.h>
#include <linux/nodemask.h>
#include <linux/spinlock.h>
#include <linux/kthread.h>
#include <linux/percpu.h>
#include <linux/ptrace.h>
#include <linux/workqueue.h>
#include <linux/irq.h>
#include <linux/ratelimit.h>
#include <linux/syscalls.h>
#include <linux/completion.h>
#include <linux/uuid.h>
#include <linux/uaccess.h>
#include <linux/siphash.h>
#include <linux/uio.h>
#include <crypto/chacha.h>
#include <crypto/blake2s.h>
#include <asm/processor.h>
#include <asm/irq.h>
#include <asm/irq_regs.h>
#include <asm/io.h>

/*********************************************************************
 *
 * Initialization and readiness waiting.
 *
 * Much of the RNG infrastructure is devoted to various dependencies
 * being able to wait until the RNG has collected enough entropy and
 * is ready for safe consumption.
 *
 *********************************************************************/

/*
 * crng_init is protected by base_crng->lock, and only increases
 * its value (from empty->early->ready).
 */
static enum {
	CRNG_EMPTY = 0, /* Little to no entropy collected */
	CRNG_EARLY = 1, /* At least POOL_EARLY_BITS collected */
	CRNG_READY = 2  /* Fully initialized with POOL_READY_BITS collected */
} crng_init __read_mostly = CRNG_EMPTY;
#define crng_ready() (likely(crng_init >= CRNG_READY))
/* Various types of waiters for crng_init->CRNG_READY transition. */
static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
static struct fasync_struct *fasync;
static DEFINE_SPINLOCK(random_ready_chain_lock);
static RAW_NOTIFIER_HEAD(random_ready_chain);

/* Control how we warn userspace. */
static struct ratelimit_state urandom_warning =
	RATELIMIT_STATE_INIT_FLAGS("urandom_warning", HZ, 3, RATELIMIT_MSG_ON_RELEASE);
static int ratelimit_disable __read_mostly =
	IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM);
module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);
MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");

/*
 * Returns whether or not the input pool has been seeded and thus guaranteed
 * to supply cryptographically secure random numbers. This applies to: the
 * /dev/urandom device, the get_random_bytes function, and the get_random_{u32,
 * u64,int,long} family of functions.
 *
 * Returns: true if the input pool has been seeded.
 *          false if the input pool has not been seeded.
 */
bool rng_is_initialized(void)
{
	return crng_ready();
}
EXPORT_SYMBOL(rng_is_initialized);

/* Used by wait_for_random_bytes(), and considered an entropy collector, below. */
static void try_to_generate_entropy(void);

/*
 * Wait for the input pool to be seeded and thus guaranteed to supply
 * cryptographically secure random numbers. This applies to: the /dev/urandom
 * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
 * family of functions. Using any of these functions without first calling
 * this function forfeits the guarantee of security.
 *
 * Returns: 0 if the input pool has been seeded.
 *          -ERESTARTSYS if the function was interrupted by a signal.
 */
int wait_for_random_bytes(void)
{
	while (!crng_ready()) {
		int ret;

		try_to_generate_entropy();
		ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ);
		if (ret)
			return ret > 0 ? 0 : ret;
	}
	return 0;
}
EXPORT_SYMBOL(wait_for_random_bytes);
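
/*
 * Example usage (an illustrative sketch of a hypothetical in-kernel
 * consumer, not part of this driver): block until the input pool is
 * seeded, then draw key material. Skipping wait_for_random_bytes()
 * forfeits the security guarantee described above.
 */
static int __maybe_unused example_generate_session_key(u8 key[32])
{
	int ret = wait_for_random_bytes(); /* 0, or -ERESTARTSYS on signal */

	if (ret)
		return ret;
	get_random_bytes(key, 32); /* pool is now guaranteed to be seeded */
	return 0;
}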

/*
 * Add a callback function that will be invoked when the input
 * pool is initialised.
 *
 * returns: 0 if callback is successfully added
 *	    -EALREADY if pool is already initialised (callback not called)
 */
int __cold register_random_ready_notifier(struct notifier_block *nb)
{
	unsigned long flags;
	int ret = -EALREADY;

	if (crng_ready())
		return ret;

	spin_lock_irqsave(&random_ready_chain_lock, flags);
	if (!crng_ready())
		ret = raw_notifier_chain_register(&random_ready_chain, nb);
	spin_unlock_irqrestore(&random_ready_chain_lock, flags);
	return ret;
}
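
/*
 * Example usage (an illustrative sketch, not part of this driver): a
 * hypothetical driver that wants to be notified once the RNG is ready,
 * treating -EALREADY as "already seeded, proceed immediately".
 */
static int __maybe_unused example_rng_ready_cb(struct notifier_block *nb,
					       unsigned long action, void *data)
{
	/* get_random_bytes() and friends are fully seeded from here on. */
	return NOTIFY_DONE;
}

static struct notifier_block example_rng_nb __maybe_unused = {
	.notifier_call = example_rng_ready_cb,
};
/* register_random_ready_notifier(&example_rng_nb) returns 0 or -EALREADY. */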

/*
 * Delete a previously registered readiness callback function.
 */
int __cold unregister_random_ready_notifier(struct notifier_block *nb)
{
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&random_ready_chain_lock, flags);
	ret = raw_notifier_chain_unregister(&random_ready_chain, nb);
	spin_unlock_irqrestore(&random_ready_chain_lock, flags);
	return ret;
}

static void __cold process_random_ready_list(void)
{
	unsigned long flags;

	spin_lock_irqsave(&random_ready_chain_lock, flags);
	raw_notifier_call_chain(&random_ready_chain, 0, NULL);
	spin_unlock_irqrestore(&random_ready_chain_lock, flags);
}

#define warn_unseeded_randomness() \
	if (IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) && !crng_ready()) \
		printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", \
				__func__, (void *)_RET_IP_, crng_init)


/*********************************************************************
 *
 * Fast key erasure RNG, the "crng".
 *
 * These functions expand entropy from the entropy extractor into
 * long streams for external consumption using the "fast key erasure"
 * RNG described at <https://blog.cr.yp.to/20170723-random.html>.
 *
 * There are a few exported interfaces for use by other drivers:
 *
 *	void get_random_bytes(void *buf, size_t len)
 *	u32 get_random_u32()
 *	u64 get_random_u64()
 *	unsigned int get_random_int()
 *	unsigned long get_random_long()
 *
 * These interfaces will return the requested number of random bytes
 * into the given buffer or as a return value. This is equivalent to
 * a read from /dev/urandom. The u32, u64, int, and long family of
 * functions may be higher performance for one-off random integers,
 * because they do a bit of buffering and do not invoke reseeding
 * until the buffer is emptied.
 *
 *********************************************************************/

enum {
	CRNG_RESEED_START_INTERVAL = HZ,
	CRNG_RESEED_INTERVAL = 60 * HZ
};

static struct {
	u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long));
	unsigned long birth;
	unsigned long generation;
	spinlock_t lock;
} base_crng = {
	.lock = __SPIN_LOCK_UNLOCKED(base_crng.lock)
};

struct crng {
	u8 key[CHACHA_KEY_SIZE];
	unsigned long generation;
};

static DEFINE_PER_CPU(struct crng, crngs) = {
	.generation = ULONG_MAX
};

/* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
static void extract_entropy(void *buf, size_t len);

/* This extracts a new crng key from the input pool. */
static void crng_reseed(void)
{
	unsigned long flags;
	unsigned long next_gen;
	u8 key[CHACHA_KEY_SIZE];

	extract_entropy(key, sizeof(key));

	/*
	 * We copy the new key into the base_crng, overwriting the old one,
	 * and update the generation counter. We avoid hitting ULONG_MAX,
	 * because the per-cpu crngs are initialized to ULONG_MAX, so this
	 * forces new CPUs that come online to always initialize.
	 */
	spin_lock_irqsave(&base_crng.lock, flags);
	memcpy(base_crng.key, key, sizeof(base_crng.key));
	next_gen = base_crng.generation + 1;
	if (next_gen == ULONG_MAX)
		++next_gen;
	WRITE_ONCE(base_crng.generation, next_gen);
	WRITE_ONCE(base_crng.birth, jiffies);
	if (!crng_ready())
		crng_init = CRNG_READY;
	spin_unlock_irqrestore(&base_crng.lock, flags);
	memzero_explicit(key, sizeof(key));
}

/*
 * This generates a ChaCha block using the provided key, and then
 * immediately overwrites that key with half the block. It returns
 * the resultant ChaCha state to the user, along with the second
 * half of the block containing 32 bytes of random data that may
 * be used; random_data_len may not be greater than 32.
 *
 * The returned ChaCha state contains within it a copy of the old
 * key value, at index 4, so the state should always be zeroed out
 * immediately after using in order to maintain forward secrecy.
 * If the state cannot be erased in a timely manner, then it is
 * safer to set the random_data parameter to &chacha_state[4] so
 * that this function overwrites it before returning.
 */
static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE],
				  u32 chacha_state[CHACHA_BLOCK_SIZE / sizeof(u32)],
				  u8 *random_data, size_t random_data_len)
{
	u8 first_block[CHACHA_BLOCK_SIZE];

	BUG_ON(random_data_len > 32);

	chacha_init_consts(chacha_state);
	memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE);
	memset(&chacha_state[12], 0, sizeof(u32) * 4);
	chacha20_block(chacha_state, first_block);

	memcpy(key, first_block, CHACHA_KEY_SIZE);
	memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len);
	memzero_explicit(first_block, sizeof(first_block));
}

/*
 * Return whether the crng seed is considered to be sufficiently old
 * that a reseeding is needed. This happens if the last reseeding
 * was CRNG_RESEED_INTERVAL ago, or during early boot, at an interval
 * proportional to the uptime.
 */
static bool crng_has_old_seed(void)
{
	static bool early_boot = true;
	unsigned long interval = CRNG_RESEED_INTERVAL;

	if (unlikely(READ_ONCE(early_boot))) {
		time64_t uptime = ktime_get_seconds();

		if (uptime >= CRNG_RESEED_INTERVAL / HZ * 2)
			WRITE_ONCE(early_boot, false);
		else
			interval = max_t(unsigned int, CRNG_RESEED_START_INTERVAL,
					 (unsigned int)uptime / 2 * HZ);
	}
	return time_is_before_jiffies(READ_ONCE(base_crng.birth) + interval);
}

/*
 * This function returns a ChaCha state that you may use for generating
 * random data. It also returns up to 32 bytes on its own of random data
 * that may be used; random_data_len may not be greater than 32.
 */
static void crng_make_state(u32 chacha_state[CHACHA_BLOCK_SIZE / sizeof(u32)],
			    u8 *random_data, size_t random_data_len)
{
	unsigned long flags;
	struct crng *crng;

	BUG_ON(random_data_len > 32);

	/*
	 * For the fast path, we check whether we're ready, unlocked first, and
	 * then re-check once locked later. In the case where we're really not
	 * ready, we do fast key erasure with the base_crng directly, extracting
	 * when crng_init is CRNG_EMPTY.
	 */
	if (!crng_ready()) {
		bool ready;

		spin_lock_irqsave(&base_crng.lock, flags);
		ready = crng_ready();
		if (!ready) {
			if (crng_init == CRNG_EMPTY)
				extract_entropy(base_crng.key, sizeof(base_crng.key));
			crng_fast_key_erasure(base_crng.key, chacha_state,
					      random_data, random_data_len);
		}
		spin_unlock_irqrestore(&base_crng.lock, flags);
		if (!ready)
			return;
	}

	/*
	 * If the base_crng is old enough, we reseed, which in turn bumps the
	 * generation counter that we check below.
	 */
	if (unlikely(crng_has_old_seed()))
		crng_reseed();

	local_irq_save(flags);
	crng = raw_cpu_ptr(&crngs);

	/*
	 * If our per-cpu crng is older than the base_crng, then it means
	 * somebody reseeded the base_crng. In that case, we do fast key
	 * erasure on the base_crng, and use its output as the new key
	 * for our per-cpu crng. This brings us up to date with base_crng.
	 */
	if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) {
		spin_lock(&base_crng.lock);
		crng_fast_key_erasure(base_crng.key, chacha_state,
				      crng->key, sizeof(crng->key));
		crng->generation = base_crng.generation;
		spin_unlock(&base_crng.lock);
	}

	/*
	 * Finally, when we've made it this far, our per-cpu crng has an up
	 * to date key, and we can do fast key erasure with it to produce
	 * some random data and a ChaCha state for the caller. All other
	 * branches of this function are "unlikely", so most of the time we
	 * should wind up here immediately.
	 */
	crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
	local_irq_restore(flags);
}

static void _get_random_bytes(void *buf, size_t len)
{
	u32 chacha_state[CHACHA_BLOCK_SIZE / sizeof(u32)];
	u8 tmp[CHACHA_BLOCK_SIZE];
	size_t first_block_len;

	if (!len)
		return;

	first_block_len = min_t(size_t, 32, len);
	crng_make_state(chacha_state, buf, first_block_len);
	len -= first_block_len;
	buf += first_block_len;

	while (len) {
		if (len < CHACHA_BLOCK_SIZE) {
			chacha20_block(chacha_state, tmp);
			memcpy(buf, tmp, len);
			memzero_explicit(tmp, sizeof(tmp));
			break;
		}

		chacha20_block(chacha_state, buf);
		if (unlikely(chacha_state[12] == 0))
			++chacha_state[13];
		len -= CHACHA_BLOCK_SIZE;
		buf += CHACHA_BLOCK_SIZE;
	}

	memzero_explicit(chacha_state, sizeof(chacha_state));
}

/*
 * This function is the exported kernel interface. It returns some
 * number of good random numbers, suitable for key generation, seeding
 * TCP sequence numbers, etc. It does not rely on the hardware random
 * number generator. For random bytes direct from the hardware RNG
 * (when available), use get_random_bytes_arch(). In order to ensure
 * that the randomness provided by this function is okay, the function
 * wait_for_random_bytes() should be called and return 0 at least once
 * at any point prior.
 */
void get_random_bytes(void *buf, size_t len)
{
	warn_unseeded_randomness();
	_get_random_bytes(buf, len);
}
EXPORT_SYMBOL(get_random_bytes);

static ssize_t get_random_bytes_user(struct iov_iter *iter)
{
	u32 chacha_state[CHACHA_BLOCK_SIZE / sizeof(u32)];
	u8 block[CHACHA_BLOCK_SIZE];
	size_t ret = 0, copied;

	if (unlikely(!iov_iter_count(iter)))
		return 0;

	/*
	 * Immediately overwrite the ChaCha key at index 4 with random
	 * bytes, in case userspace causes copy_to_iter() below to sleep
	 * forever, so that we still retain forward secrecy in that case.
	 */
	crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);
	/*
	 * However, if we're doing a read of len <= 32, we don't need to
	 * use chacha_state after, so we can simply return those bytes to
	 * the user directly.
	 */
	if (iov_iter_count(iter) <= CHACHA_KEY_SIZE) {
		ret = copy_to_iter(&chacha_state[4], CHACHA_KEY_SIZE, iter);
		goto out_zero_chacha;
	}

	for (;;) {
		chacha20_block(chacha_state, block);
		if (unlikely(chacha_state[12] == 0))
			++chacha_state[13];

		copied = copy_to_iter(block, sizeof(block), iter);
		ret += copied;
		if (!iov_iter_count(iter) || copied != sizeof(block))
			break;

		BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0);
		if (ret % PAGE_SIZE == 0) {
			if (signal_pending(current))
				break;
			cond_resched();
		}
	}

	memzero_explicit(block, sizeof(block));
out_zero_chacha:
	memzero_explicit(chacha_state, sizeof(chacha_state));
	return ret ? ret : -EFAULT;
}

/*
 * Batched entropy returns random integers. The quality of the random
 * numbers is as good as that of /dev/urandom. In order to ensure that
 * the randomness provided by this function is okay, the function
 * wait_for_random_bytes() should be called and return 0 at least once
 * at any point prior.
 */

#define DEFINE_BATCHED_ENTROPY(type)						\
struct batch_ ##type {								\
	/*									\
	 * We make this 1.5x a ChaCha block, so that we get the		\
	 * remaining 32 bytes from fast key erasure, plus one full		\
	 * block from the detached ChaCha state. We can increase		\
	 * the size of this later if needed so long as we keep the		\
	 * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE.		\
	 */									\
	type entropy[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(type))];		\
	unsigned long generation;						\
	unsigned int position;							\
};										\
										\
static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = {	\
	.position = UINT_MAX							\
};										\
										\
type get_random_ ##type(void)							\
{										\
	type ret;								\
	unsigned long flags;							\
	struct batch_ ##type *batch;						\
	unsigned long next_gen;							\
										\
	warn_unseeded_randomness();						\
										\
	if (!crng_ready()) {							\
		_get_random_bytes(&ret, sizeof(ret));				\
		return ret;							\
	}									\
										\
	local_irq_save(flags);							\
	batch = raw_cpu_ptr(&batched_entropy_##type);				\
										\
	next_gen = READ_ONCE(base_crng.generation);				\
	if (batch->position >= ARRAY_SIZE(batch->entropy) ||			\
	    next_gen != batch->generation) {					\
		_get_random_bytes(batch->entropy, sizeof(batch->entropy));	\
		batch->position = 0;						\
		batch->generation = next_gen;					\
	}									\
										\
	ret = batch->entropy[batch->position];					\
	batch->entropy[batch->position] = 0;					\
	++batch->position;							\
	local_irq_restore(flags);						\
	return ret;								\
}										\
EXPORT_SYMBOL(get_random_ ##type);

DEFINE_BATCHED_ENTROPY(u64)
DEFINE_BATCHED_ENTROPY(u32)
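
/*
 * Example usage (an illustrative sketch, not part of this driver): a
 * one-off random integer drawn from the per-cpu batches defined above.
 * The modulo introduces a tiny bias, tolerated here for brevity.
 */
static unsigned long __maybe_unused example_random_backoff_jiffies(void)
{
	return HZ + (get_random_u32() % HZ); /* roughly one to two seconds */
}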

#ifdef CONFIG_SMP
/*
 * This function is called when the CPU is coming up, with entry
 * CPUHP_RANDOM_PREPARE, which comes before CPUHP_WORKQUEUE_PREP.
 */
int __cold random_prepare_cpu(unsigned int cpu)
{
	/*
	 * When the cpu comes back online, immediately invalidate both
	 * the per-cpu crng and all batches, so that we serve fresh
	 * randomness.
	 */
	per_cpu_ptr(&crngs, cpu)->generation = ULONG_MAX;
	per_cpu_ptr(&batched_entropy_u32, cpu)->position = UINT_MAX;
	per_cpu_ptr(&batched_entropy_u64, cpu)->position = UINT_MAX;
	return 0;
}
#endif

/*
 * This function will use the architecture-specific hardware random
 * number generator if it is available. It is not recommended for
 * use. Use get_random_bytes() instead. It returns the number of
 * bytes filled in.
 */
size_t __must_check get_random_bytes_arch(void *buf, size_t len)
{
	size_t left = len;
	u8 *p = buf;

	while (left) {
		unsigned long v;
		size_t block_len = min_t(size_t, left, sizeof(unsigned long));

		if (!arch_get_random_long(&v))
			break;

		memcpy(p, &v, block_len);
		p += block_len;
		left -= block_len;
	}

	return len - left;
}
EXPORT_SYMBOL(get_random_bytes_arch);


/**********************************************************************
 *
 * Entropy accumulation and extraction routines.
 *
 * Callers may add entropy via:
 *
 *	static void mix_pool_bytes(const void *buf, size_t len)
 *
 * After which, if added entropy should be credited:
 *
 *	static void credit_init_bits(size_t bits)
 *
 * Finally, extract entropy via:
 *
 *	static void extract_entropy(void *buf, size_t len)
 *
 **********************************************************************/

enum {
	POOL_BITS = BLAKE2S_HASH_SIZE * 8,
	POOL_READY_BITS = POOL_BITS, /* When crng_init->CRNG_READY */
	POOL_EARLY_BITS = POOL_READY_BITS / 2 /* When crng_init->CRNG_EARLY */
};

static struct {
	struct blake2s_state hash;
	spinlock_t lock;
	unsigned int init_bits;
} input_pool = {
	.hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE),
		    BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4,
		    BLAKE2S_IV5, BLAKE2S_IV6, BLAKE2S_IV7 },
	.hash.outlen = BLAKE2S_HASH_SIZE,
	.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
};

static void _mix_pool_bytes(const void *buf, size_t len)
{
	blake2s_update(&input_pool.hash, buf, len);
}

/*
 * This function adds bytes into the input pool. It does not
 * update the initialization bit counter; the caller should call
 * credit_init_bits if this is appropriate.
 */
static void mix_pool_bytes(const void *buf, size_t len)
{
	unsigned long flags;

	spin_lock_irqsave(&input_pool.lock, flags);
	_mix_pool_bytes(buf, len);
	spin_unlock_irqrestore(&input_pool.lock, flags);
}

/*
 * This is an HKDF-like construction for using the hashed collected entropy
 * as a PRF key, that's then expanded block-by-block.
 */
static void extract_entropy(void *buf, size_t len)
{
	unsigned long flags;
	u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE];
	struct {
		unsigned long rdseed[32 / sizeof(long)];
		size_t counter;
	} block;
	size_t i;

	for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) {
		if (!arch_get_random_seed_long(&block.rdseed[i]) &&
		    !arch_get_random_long(&block.rdseed[i]))
			block.rdseed[i] = random_get_entropy();
	}

	spin_lock_irqsave(&input_pool.lock, flags);

	/* seed = HASHPRF(last_key, entropy_input) */
	blake2s_final(&input_pool.hash, seed);

	/* next_key = HASHPRF(seed, RDSEED || 0) */
	block.counter = 0;
	blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed));
	blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key));

	spin_unlock_irqrestore(&input_pool.lock, flags);
	memzero_explicit(next_key, sizeof(next_key));

	while (len) {
		i = min_t(size_t, len, BLAKE2S_HASH_SIZE);
		/* output = HASHPRF(seed, RDSEED || ++counter) */
		++block.counter;
		blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed));
		len -= i;
		buf += i;
	}

	memzero_explicit(seed, sizeof(seed));
	memzero_explicit(&block, sizeof(block));
}

#define credit_init_bits(bits) if (!crng_ready()) _credit_init_bits(bits)

static void __cold _credit_init_bits(size_t bits)
{
	unsigned int new, orig, add;
	unsigned long flags;

	if (!bits)
		return;

	add = min_t(size_t, bits, POOL_BITS);

	do {
		orig = READ_ONCE(input_pool.init_bits);
		new = min_t(unsigned int, POOL_BITS, orig + add);
	} while (cmpxchg(&input_pool.init_bits, orig, new) != orig);

	if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
		crng_reseed(); /* Sets crng_init to CRNG_READY under base_crng.lock. */
		process_random_ready_list();
		wake_up_interruptible(&crng_init_wait);
		kill_fasync(&fasync, SIGIO, POLL_IN);
		pr_notice("crng init done\n");
		if (urandom_warning.missed)
			pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
				  urandom_warning.missed);
	} else if (orig < POOL_EARLY_BITS && new >= POOL_EARLY_BITS) {
		spin_lock_irqsave(&base_crng.lock, flags);
		/* Check if crng_init is CRNG_EMPTY, to avoid race with crng_reseed(). */
		if (crng_init == CRNG_EMPTY) {
			extract_entropy(base_crng.key, sizeof(base_crng.key));
			crng_init = CRNG_EARLY;
		}
		spin_unlock_irqrestore(&base_crng.lock, flags);
	}
}


/**********************************************************************
 *
 * Entropy collection routines.
 *
 * The following exported functions are used for pushing entropy into
 * the above entropy accumulation routines:
 *
 *	void add_device_randomness(const void *buf, size_t len);
 *	void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
 *	void add_bootloader_randomness(const void *buf, size_t len);
 *	void add_interrupt_randomness(int irq);
 *	void add_input_randomness(unsigned int type, unsigned int code, unsigned int value);
 *	void add_disk_randomness(struct gendisk *disk);
 *
 * add_device_randomness() adds data to the input pool that
 * is likely to differ between two devices (or possibly even per boot).
 * This would be things like MAC addresses or serial numbers, or the
 * read-out of the RTC. This does *not* credit any actual entropy to
 * the pool, but it initializes the pool to different values for devices
 * that might otherwise be identical and have very little entropy
 * available to them (particularly common in the embedded world).
 *
 * add_hwgenerator_randomness() is for true hardware RNGs, and will credit
 * entropy as specified by the caller. If the entropy pool is full it will
 * block until more entropy is needed.
 *
 * add_bootloader_randomness() is called by bootloader drivers, such as EFI
 * and device tree, and credits its input depending on whether or not the
 * configuration option CONFIG_RANDOM_TRUST_BOOTLOADER is set.
 *
 * add_interrupt_randomness() uses the interrupt timing as random
 * inputs to the entropy pool. Using the cycle counters and the irq source
 * as inputs, it feeds the input pool roughly once a second or after 64
 * interrupts, crediting 1 bit of entropy for whichever comes first.
 *
 * add_input_randomness() uses the input layer interrupt timing, as well
 * as the event type information from the hardware.
 *
 * add_disk_randomness() uses what amounts to the seek time of block
 * layer request events, on a per-disk_devt basis, as input to the
 * entropy pool. Note that high-speed solid state drives with very low
 * seek times do not make for good sources of entropy, as their seek
 * times are usually fairly consistent.
 *
 * The last two routines try to estimate how many bits of entropy
 * to credit. They do this by keeping track of the first and second
 * order deltas of the event timings.
 *
 **********************************************************************/

static bool trust_cpu __initdata = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
static bool trust_bootloader __initdata = IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER);
static int __init parse_trust_cpu(char *arg)
{
	return kstrtobool(arg, &trust_cpu);
}
static int __init parse_trust_bootloader(char *arg)
{
	return kstrtobool(arg, &trust_bootloader);
}
early_param("random.trust_cpu", parse_trust_cpu);
early_param("random.trust_bootloader", parse_trust_bootloader);

/*
 * The first collection of entropy occurs at system boot while interrupts
 * are still turned off. Here we push in latent entropy, RDSEED, a timestamp,
 * utsname(), and the command line. Depending on the above configuration knob,
 * RDSEED may be considered sufficient for initialization. Note that much
 * earlier setup may already have pushed entropy into the input pool by the
 * time we get here.
 */
int __init random_init(const char *command_line)
{
	ktime_t now = ktime_get_real();
	unsigned int i, arch_bits;
	unsigned long entropy;

#if defined(LATENT_ENTROPY_PLUGIN)
	static const u8 compiletime_seed[BLAKE2S_BLOCK_SIZE] __initconst __latent_entropy;
	_mix_pool_bytes(compiletime_seed, sizeof(compiletime_seed));
#endif

	for (i = 0, arch_bits = BLAKE2S_BLOCK_SIZE * 8;
	     i < BLAKE2S_BLOCK_SIZE; i += sizeof(entropy)) {
		if (!arch_get_random_seed_long_early(&entropy) &&
		    !arch_get_random_long_early(&entropy)) {
			entropy = random_get_entropy();
			arch_bits -= sizeof(entropy) * 8;
		}
		_mix_pool_bytes(&entropy, sizeof(entropy));
	}
	_mix_pool_bytes(&now, sizeof(now));
	_mix_pool_bytes(utsname(), sizeof(*(utsname())));
	_mix_pool_bytes(command_line, strlen(command_line));
	add_latent_entropy();

	if (crng_ready())
		crng_reseed();
	else if (trust_cpu)
		_credit_init_bits(arch_bits);

	return 0;
}

/*
 * Add device- or boot-specific data to the input pool to help
 * initialize it.
 *
 * None of this adds any entropy; it is meant to avoid the problem of
 * the entropy pool having similar initial state across largely
 * identical devices.
 */
void add_device_randomness(const void *buf, size_t len)
{
	unsigned long entropy = random_get_entropy();
	unsigned long flags;

	spin_lock_irqsave(&input_pool.lock, flags);
	_mix_pool_bytes(&entropy, sizeof(entropy));
	_mix_pool_bytes(buf, len);
	spin_unlock_irqrestore(&input_pool.lock, flags);
}
EXPORT_SYMBOL(add_device_randomness);
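
/*
 * Example usage (an illustrative sketch, not part of this driver): a
 * hypothetical network driver mixing in its MAC address at probe time.
 * No entropy is credited; this only perturbs pools that would otherwise
 * start out identical on identical devices.
 */
static void __maybe_unused example_seed_from_mac(const u8 *mac, size_t mac_len)
{
	add_device_randomness(mac, mac_len);
}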

/*
 * Interface for in-kernel drivers of true hardware RNGs.
 * Those devices may produce endless random bits and will be throttled
 * when our pool is full.
 */
void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy)
{
	mix_pool_bytes(buf, len);
	credit_init_bits(entropy);

	/*
	 * Throttle writing to once every CRNG_RESEED_INTERVAL, unless
	 * we're not yet initialized.
	 */
	if (!kthread_should_stop() && crng_ready())
		schedule_timeout_interruptible(CRNG_RESEED_INTERVAL);
}
EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
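
/*
 * Example usage (an illustrative sketch, not part of this driver): a
 * feeder kthread for a hypothetical hardware TRNG. example_trng_read()
 * stands in for whatever bus access the real device would need. The
 * entropy argument is in bits; the call itself sleeps once the pool is
 * full, which provides the throttling described above.
 */
extern void example_trng_read(u8 *buf, size_t len); /* hypothetical */

static int __maybe_unused example_trng_feed(void *unused)
{
	u8 buf[32];

	while (!kthread_should_stop()) {
		example_trng_read(buf, sizeof(buf));
		add_hwgenerator_randomness(buf, sizeof(buf), sizeof(buf) * 8);
	}
	return 0;
}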

/*
 * Handle random seed passed by bootloader, and credit it if
 * CONFIG_RANDOM_TRUST_BOOTLOADER is set.
 */
void __init add_bootloader_randomness(const void *buf, size_t len)
{
	mix_pool_bytes(buf, len);
	if (trust_bootloader)
		credit_init_bits(len * 8);
}

struct fast_pool {
	unsigned long pool[4];
	unsigned long last;
	unsigned int count;
	struct timer_list mix;
};

static void mix_interrupt_randomness(struct timer_list *work);

static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
#ifdef CONFIG_64BIT
#define FASTMIX_PERM SIPHASH_PERMUTATION
	.pool = { SIPHASH_CONST_0, SIPHASH_CONST_1, SIPHASH_CONST_2, SIPHASH_CONST_3 },
#else
#define FASTMIX_PERM HSIPHASH_PERMUTATION
	.pool = { HSIPHASH_CONST_0, HSIPHASH_CONST_1, HSIPHASH_CONST_2, HSIPHASH_CONST_3 },
#endif
	.mix = __TIMER_INITIALIZER(mix_interrupt_randomness, 0)
};

/*
 * This is [Half]SipHash-1-x, starting from an empty key. Because
 * the key is fixed, it assumes that its inputs are non-malicious,
 * and therefore this has no security on its own. s represents the
 * four-word SipHash state, while v represents a two-word input.
 */
static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2)
{
	s[3] ^= v1;
	FASTMIX_PERM(s[0], s[1], s[2], s[3]);
	s[0] ^= v1;
	s[3] ^= v2;
	FASTMIX_PERM(s[0], s[1], s[2], s[3]);
	s[0] ^= v2;
}

#ifdef CONFIG_SMP
/*
 * This function is called when the CPU has just come online, with
 * entry CPUHP_AP_RANDOM_ONLINE, just after CPUHP_AP_WORKQUEUE_ONLINE.
 */
int __cold random_online_cpu(unsigned int cpu)
{
	/*
	 * During CPU shutdown and before CPU onlining, add_interrupt_
	 * randomness() may schedule mix_interrupt_randomness(), and
	 * set the MIX_INFLIGHT flag. However, because the worker can
	 * be scheduled on a different CPU during this period, that
	 * flag will never be cleared. For that reason, we zero out
	 * the flag here, which runs just after workqueues are onlined
	 * for the CPU again. This also has the effect of setting the
	 * irq randomness count to zero so that new accumulated irqs
	 * are fresh.
	 */
	per_cpu_ptr(&irq_randomness, cpu)->count = 0;
	return 0;
}
#endif

static void mix_interrupt_randomness(struct timer_list *work)
{
	struct fast_pool *fast_pool = container_of(work, struct fast_pool, mix);
	/*
	 * The size of the copied stack pool is explicitly 2 longs so that we
	 * only ever ingest half of the siphash output each time, retaining
	 * the other half as the next "key" that carries over. The entropy is
	 * supposed to be sufficiently dispersed between bits so on average
	 * we don't wind up "losing" some.
	 */
	unsigned long pool[2];
	unsigned int count;

	/* Check to see if we're running on the wrong CPU due to hotplug. */
	local_irq_disable();
	if (fast_pool != this_cpu_ptr(&irq_randomness)) {
		local_irq_enable();
		return;
	}

	/*
	 * Copy the pool to the stack so that the mixer always has a
	 * consistent view, before we reenable irqs again.
	 */
	memcpy(pool, fast_pool->pool, sizeof(pool));
	count = fast_pool->count;
	fast_pool->count = 0;
	fast_pool->last = jiffies;
	local_irq_enable();

	mix_pool_bytes(pool, sizeof(pool));
	credit_init_bits(clamp_t(unsigned int, (count & U16_MAX) / 64, 1, sizeof(pool) * 8));

	memzero_explicit(pool, sizeof(pool));
}

void add_interrupt_randomness(int irq)
{
	enum { MIX_INFLIGHT = 1U << 31 };
	unsigned long entropy = random_get_entropy();
	struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
	struct pt_regs *regs = get_irq_regs();
	unsigned int new_count;

	fast_mix(fast_pool->pool, entropy,
		 (regs ? instruction_pointer(regs) : _RET_IP_) ^ swab(irq));
	new_count = ++fast_pool->count;

	if (new_count & MIX_INFLIGHT)
		return;

	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
return;
|
|
|
|
|
2024-06-12 13:13:20 +08:00
|
|
|
fast_pool->count |= MIX_INFLIGHT;
|
|
|
|
if (!timer_pending(&fast_pool->mix)) {
|
|
|
|
fast_pool->mix.expires = jiffies;
|
|
|
|
add_timer_on(&fast_pool->mix, raw_smp_processor_id());
|
random: try to actively add entropy rather than passively wait for it
For 5.3 we had to revert a nice ext4 IO pattern improvement, because it
caused a bootup regression due to lack of entropy at bootup together
with arguably broken user space that was asking for secure random
numbers when it really didn't need to.
See commit 72dbcf721566 (Revert "ext4: make __ext4_get_inode_loc plug").
This aims to solve the issue by actively generating entropy noise using
the CPU cycle counter when waiting for the random number generator to
initialize. This only works when you have a high-frequency time stamp
counter available, but that's the case on all modern x86 CPU's, and on
most other modern CPU's too.
What we do is to generate jitter entropy from the CPU cycle counter
under a somewhat complex load: calling the scheduler while also
guaranteeing a certain amount of timing noise by also triggering a
timer.
I'm sure we can tweak this, and that people will want to look at other
alternatives, but there's been a number of papers written on jitter
entropy, and this should really be fairly conservative by crediting one
bit of entropy for every timer-induced jump in the cycle counter. Not
because the timer itself would be all that unpredictable, but because
the interaction between the timer and the loop is going to be.
Even if (and perhaps particularly if) the timer actually happens on
another CPU, the cacheline interaction between the loop that reads the
cycle counter and the timer itself firing is going to add perturbations
to the cycle counter values that get mixed into the entropy pool.
As Thomas pointed out, with a modern out-of-order CPU, even quite simple
loops show a fair amount of hard-to-predict timing variability even in
the absense of external interrupts. But this tries to take that further
by actually having a fairly complex interaction.
This is not going to solve the entropy issue for architectures that have
no CPU cycle counter, but it's not clear how (and if) that is solvable,
and the hardware in question is largely starting to be irrelevant. And
by doing this we can at least avoid some of the even more contentious
approaches (like making the entropy waiting time out in order to avoid
the possibly unbounded waiting).
Cc: Ahmed Darwish <darwish.07@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Nicholas Mc Guire <hofrat@opentech.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Willy Tarreau <w@1wt.eu>
Cc: Alexander E. Patrakov <patrakov@gmail.com>
Cc: Lennart Poettering <mzxreary@0pointer.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-09-29 07:53:52 +08:00
|
|
|
}
|
2017-06-08 07:58:56 +08:00
|
|
|
}
|
2024-06-12 13:13:20 +08:00
|
|
|
EXPORT_SYMBOL_GPL(add_interrupt_randomness);
|
2017-06-08 07:58:56 +08:00
|
|
|
|
2024-06-12 13:13:20 +08:00
|
|
|
/* There is one of these per entropy source */
struct timer_rand_state {
	unsigned long last_time;
	long last_delta, last_delta2;
};

/*
 * This function adds entropy to the entropy "pool" by using timing
 * delays. It uses the timer_rand_state structure to make an estimate
 * of how many bits of entropy this call has added to the pool. The
 * value "num" is also added to the pool; it should somehow describe
 * the type of event that just happened.
 */
static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
{
	unsigned long entropy = random_get_entropy(), now = jiffies, flags;
	long delta, delta2, delta3;
	unsigned int bits;

	/*
	 * If we're in a hard IRQ, add_interrupt_randomness() will be called
	 * sometime after, so mix into the fast pool.
	 */
	if (in_irq()) {
		fast_mix(this_cpu_ptr(&irq_randomness)->pool, entropy, num);
	} else {
		spin_lock_irqsave(&input_pool.lock, flags);
		_mix_pool_bytes(&entropy, sizeof(entropy));
		_mix_pool_bytes(&num, sizeof(num));
		spin_unlock_irqrestore(&input_pool.lock, flags);
	}

	if (crng_ready())
		return;

	/*
	 * Calculate number of bits of randomness we probably added.
	 * We take into account the first, second and third-order deltas
	 * in order to make our estimate.
	 */
	delta = now - READ_ONCE(state->last_time);
	WRITE_ONCE(state->last_time, now);

	delta2 = delta - READ_ONCE(state->last_delta);
	WRITE_ONCE(state->last_delta, delta);

	delta3 = delta2 - READ_ONCE(state->last_delta2);
	WRITE_ONCE(state->last_delta2, delta2);

	if (delta < 0)
		delta = -delta;
	if (delta2 < 0)
		delta2 = -delta2;
	if (delta3 < 0)
		delta3 = -delta3;
	if (delta > delta2)
		delta = delta2;
	if (delta > delta3)
		delta = delta3;

	/*
	 * delta is now minimum absolute delta. Round down by 1 bit
	 * on general principles, and limit entropy estimate to 11 bits.
	 */
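	/*
	 * Worked example (illustrative): a minimum absolute delta of 96
	 * jiffies gives fls(96 >> 1) = fls(48) = 6 credited bits, while any
	 * delta of 4096 or more is capped at the 11-bit maximum.
	 */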
	bits = min(fls(delta >> 1), 11);

	/*
	 * As mentioned above, if we're in a hard IRQ, add_interrupt_randomness()
	 * will run after this, which uses a different crediting scheme of 1 bit
	 * per every 64 interrupts. In order to let that function do accounting
	 * close to the one in this function, we credit a full 64/64 bit per bit,
	 * and then subtract one to account for the extra one added.
	 */
	if (in_irq())
		this_cpu_ptr(&irq_randomness)->count += max(1u, bits * 64) - 1;
	else
		_credit_init_bits(bits);
}
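
/*
 * Entropy source for input devices (keyboards, mice and the like). The
 * event's type/code/value tuple and its timing are fed to
 * add_timer_randomness(); repeated identical values, such as autorepeat,
 * are ignored.
 */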
void add_input_randomness(unsigned int type, unsigned int code, unsigned int value)
{
	static unsigned char last_value;
	static struct timer_rand_state input_timer_state = { INITIAL_JIFFIES };

	/* Ignore autorepeat and the like. */
	if (value == last_value)
		return;

	last_value = value;
	add_timer_randomness(&input_timer_state,
			     (type << 4) ^ code ^ (code >> 4) ^ value);
}
EXPORT_SYMBOL_GPL(add_input_randomness);

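/*
 * Block-device entropy source: the timing of disk events is fed to
 * add_timer_randomness() using the per-disk timer_rand_state allocated by
 * rand_initialize_disk().
 */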
#ifdef CONFIG_BLOCK
void add_disk_randomness(struct gendisk *disk)
{
	if (!disk || !disk->random)
		return;
	/* First major is 1, so we get >= 0x200 here. */
	add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
}
EXPORT_SYMBOL_GPL(add_disk_randomness);

void __cold rand_initialize_disk(struct gendisk *disk)
{
	struct timer_rand_state *state;

	/*
	 * If kzalloc returns null, we just won't use that entropy
	 * source.
	 */
	state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
	if (state) {
		state->last_time = INITIAL_JIFFIES;
		disk->random = state;
	}
}
#endif

/*
 * Each time the timer fires, we expect that we got an unpredictable
 * jump in the cycle counter. Even if the timer is running on another
 * CPU, the timer activity will be touching the stack of the CPU that is
 * generating entropy.
 *
 * Note that we don't re-arm the timer in the timer itself - we are
 * happy to be scheduled away, since that just makes the load more
 * complex, but we do not want the timer to keep ticking unless the
 * entropy loop is running.
 *
 * So the re-arming always happens in the entropy loop itself.
 */
static void __cold entropy_timer(struct timer_list *t)
{
	credit_init_bits(1);
}

/*
 * If we have an actual cycle counter, see if we can
 * generate enough entropy with timing noise.
 */
static void __cold try_to_generate_entropy(void)
{
	struct {
		unsigned long entropy;
		struct timer_list timer;
	} stack;

	stack.entropy = random_get_entropy();

	/* Slow counter - or none. Don't even bother. */
	if (stack.entropy == random_get_entropy())
		return;

	timer_setup_on_stack(&stack.timer, entropy_timer, 0);
	while (!crng_ready() && !signal_pending(current)) {
		if (!timer_pending(&stack.timer))
			mod_timer(&stack.timer, jiffies + 1);
		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
		schedule();
		stack.entropy = random_get_entropy();
	}

	del_timer_sync(&stack.timer);
	destroy_timer_on_stack(&stack.timer);
	mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
}


/**********************************************************************
 *
 * Userspace reader/writer interfaces.
 *
 * getrandom(2) is the primary modern interface into the RNG and should
 * be used in preference to anything else.
 *
 * Reading from /dev/random has the same functionality as calling
 * getrandom(2) with flags=0. In earlier versions, however, it had
 * vastly different semantics and should therefore be avoided, to
 * prevent backwards compatibility issues.
 *
 * Reading from /dev/urandom has the same functionality as calling
 * getrandom(2) with flags=GRND_INSECURE. Because it does not block
 * waiting for the RNG to be ready, it should not be used.
 *
 * Writing to either /dev/random or /dev/urandom adds entropy to
 * the input pool but does not credit it.
 *
 * Polling on /dev/random indicates when the RNG is initialized, on
 * the read side, and when it wants new entropy, on the write side.
 *
 * Both /dev/random and /dev/urandom have the same set of ioctls for
 * adding entropy, getting the entropy count, zeroing the count, and
 * reseeding the crng.
 *
 **********************************************************************/

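/*
 * Illustrative userspace sketch of the preferred interface; this is not
 * kernel code and assumes a libc that exposes the getrandom(3) wrapper
 * from <sys/random.h>:
 *
 *	#include <stdio.h>
 *	#include <sys/random.h>
 *
 *	int main(void)
 *	{
 *		unsigned char key[32];
 *
 *		// Blocks only until the kernel RNG is initialized.
 *		if (getrandom(key, sizeof(key), 0) != sizeof(key)) {
 *			perror("getrandom");
 *			return 1;
 *		}
 *		// key[] now holds 32 cryptographically secure bytes.
 *		return 0;
 *	}
 */
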
SYSCALL_DEFINE3(getrandom, char __user *, ubuf, size_t, len, unsigned int, flags)
{
	struct iov_iter iter;
	struct iovec iov;
	int ret;

	if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
		return -EINVAL;

	/*
	 * Requesting insecure and blocking randomness at the same time makes
	 * no sense.
	 */
	if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
		return -EINVAL;

	if (!crng_ready() && !(flags & GRND_INSECURE)) {
		if (flags & GRND_NONBLOCK)
			return -EAGAIN;
		ret = wait_for_random_bytes();
		if (unlikely(ret))
			return ret;
	}

	ret = import_single_range(READ, ubuf, len, &iov, &iter);
	if (unlikely(ret))
		return ret;
	return get_random_bytes_user(&iter);
}

static __poll_t random_poll(struct file *file, poll_table *wait)
{
	poll_wait(file, &crng_init_wait, wait);
	return crng_ready() ? EPOLLIN | EPOLLRDNORM : EPOLLOUT | EPOLLWRNORM;
}

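/*
 * Hash user-supplied bytes into the input pool one BLAKE2s block at a
 * time, without crediting any entropy. Returns the number of bytes mixed
 * in, or -EFAULT if nothing could be copied from userspace.
 */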
static ssize_t write_pool_user(struct iov_iter *iter)
{
	u8 block[BLAKE2S_BLOCK_SIZE];
	ssize_t ret = 0;
	size_t copied;

	if (unlikely(!iov_iter_count(iter)))
		return 0;

	for (;;) {
		copied = copy_from_iter(block, sizeof(block), iter);
		ret += copied;
		mix_pool_bytes(block, copied);
		if (!iov_iter_count(iter) || copied != sizeof(block))
			break;

		BUILD_BUG_ON(PAGE_SIZE % sizeof(block) != 0);
		if (ret % PAGE_SIZE == 0) {
			if (signal_pending(current))
				break;
			cond_resched();
		}
	}

	memzero_explicit(block, sizeof(block));
	return ret ? ret : -EFAULT;
}

static ssize_t random_write_iter(struct kiocb *kiocb, struct iov_iter *iter)
{
	return write_pool_user(iter);
}

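/*
 * /dev/urandom never blocks. If the CRNG is not yet initialized, emit a
 * ratelimited warning (unless warnings are disabled) and produce output
 * anyway.
 */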
static ssize_t urandom_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
{
	static int maxwarn = 10;

	if (!crng_ready()) {
		if (!ratelimit_disable && maxwarn <= 0)
			++urandom_warning.missed;
		else if (ratelimit_disable || __ratelimit(&urandom_warning)) {
			--maxwarn;
			pr_notice("%s: uninitialized urandom read (%zu bytes read)\n",
				  current->comm, iov_iter_count(iter));
		}
	}

	return get_random_bytes_user(iter);
}

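/*
 * /dev/random waits for the CRNG to be initialized, unless the caller
 * asked for non-blocking behaviour via O_NONBLOCK or IOCB_NOWAIT, in
 * which case -EAGAIN is returned.
 */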
static ssize_t random_read_iter(struct kiocb *kiocb, struct iov_iter *iter)
{
	int ret;

	if (!crng_ready() &&
	    ((kiocb->ki_flags & IOCB_NOWAIT) ||
	     (kiocb->ki_filp->f_flags & O_NONBLOCK)))
		return -EAGAIN;

	ret = wait_for_random_bytes();
	if (ret != 0)
		return ret;
	return get_random_bytes_user(iter);
}

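/*
 * ioctls shared by /dev/random and /dev/urandom. RNDADDENTROPY takes a
 * struct rand_pool_info from userspace: an entropy count in bits, a
 * buffer length, and then the buffer itself, which is mixed in and
 * credited.
 */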
static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
{
	int __user *p = (int __user *)arg;
	int ent_count;

	switch (cmd) {
	case RNDGETENTCNT:
		/* Inherently racy, no point locking. */
		if (put_user(input_pool.init_bits, p))
			return -EFAULT;
		return 0;
	case RNDADDTOENTCNT:
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		if (get_user(ent_count, p))
			return -EFAULT;
		if (ent_count < 0)
			return -EINVAL;
		credit_init_bits(ent_count);
		return 0;
	case RNDADDENTROPY: {
		struct iov_iter iter;
		struct iovec iov;
		ssize_t ret;
		int len;

		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		if (get_user(ent_count, p++))
			return -EFAULT;
		if (ent_count < 0)
			return -EINVAL;
		if (get_user(len, p++))
			return -EFAULT;
		ret = import_single_range(WRITE, p, len, &iov, &iter);
		if (unlikely(ret))
			return ret;
		ret = write_pool_user(&iter);
		if (unlikely(ret < 0))
			return ret;
		/* Since we're crediting, enforce that it was all written into the pool. */
		if (unlikely(ret != len))
			return -EFAULT;
		credit_init_bits(ent_count);
		return 0;
	}
	case RNDZAPENTCNT:
	case RNDCLEARPOOL:
		/* No longer has any effect. */
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		return 0;
	case RNDRESEEDCRNG:
		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		if (!crng_ready())
			return -ENODATA;
		crng_reseed();
		return 0;
	default:
		return -EINVAL;
	}
}

static int random_fasync(int fd, struct file *filp, int on)
{
	return fasync_helper(fd, filp, on, &fasync);
}

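/*
 * The two device nodes share all of the operations below except the read
 * implementation, and only /dev/random supports poll().
 */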
const struct file_operations random_fops = {
	.read_iter = random_read_iter,
	.write_iter = random_write_iter,
	.poll = random_poll,
	.unlocked_ioctl = random_ioctl,
	.compat_ioctl = compat_ptr_ioctl,
	.fasync = random_fasync,
	.llseek = noop_llseek,
	.splice_read = generic_file_splice_read,
	.splice_write = iter_file_splice_write,
};

const struct file_operations urandom_fops = {
	.read_iter = urandom_read_iter,
	.write_iter = random_write_iter,
	.unlocked_ioctl = random_ioctl,
	.compat_ioctl = compat_ptr_ioctl,
	.fasync = random_fasync,
	.llseek = noop_llseek,
	.splice_read = generic_file_splice_read,
	.splice_write = iter_file_splice_write,
};
random: introduce getrandom(2) system call
The getrandom(2) system call was requested by the LibreSSL Portable
developers. It is analoguous to the getentropy(2) system call in
OpenBSD.
The rationale of this system call is to provide resiliance against
file descriptor exhaustion attacks, where the attacker consumes all
available file descriptors, forcing the use of the fallback code where
/dev/[u]random is not available. Since the fallback code is often not
well-tested, it is better to eliminate this potential failure mode
entirely.
The other feature provided by this new system call is the ability to
request randomness from the /dev/urandom entropy pool, but to block
until at least 128 bits of entropy have been accumulated in the
/dev/urandom entropy pool. Historically, the emphasis in the
/dev/urandom development has been to ensure that urandom pool is
initialized as quickly as possible after system boot, and preferably
before the init scripts start execution.
This is because changing /dev/urandom reads to block represents an
interface change that could potentially break userspace, which is not
acceptable. In practice, on most x86 desktop and server systems, the
entropy pool can be initialized before it is needed (and
in modern kernels, we will printk a warning message if not). However,
on an embedded system, this may not be the case. And so with this new
interface, we can provide the functionality of blocking until the
urandom pool has been initialized. Any userspace program which uses
this new functionality must take care to ensure that, if it is used
during the boot process, it will not cause the init scripts or
other portions of the system startup to hang indefinitely.
SYNOPSIS
#include <linux/random.h>
int getrandom(void *buf, size_t buflen, unsigned int flags);
DESCRIPTION
The system call getrandom() fills the buffer pointed to by buf
with up to buflen random bytes which can be used to seed user
space random number generators (i.e., DRBGs) or for other
cryptographic uses. It should not be used for Monte Carlo
simulations or other programs/algorithms which are doing
probabilistic sampling.
If the GRND_RANDOM bit is set in flags, then data is drawn from the
/dev/random pool instead of the /dev/urandom pool. The
/dev/random pool is limited based on the entropy that can be
obtained from environmental noise, so if there is insufficient
entropy, the requested number of bytes may not be returned.
If there is no entropy available at all, getrandom(2) will
either block, or return an error with errno set to EAGAIN if
the GRND_NONBLOCK bit is set in flags.
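As a concrete sketch of the non-blocking case just described (the seed
buffer and the raw syscall(2) invocation are assumptions for illustration,
useful where the C library does not yet provide a getrandom() wrapper):

    #include <errno.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/random.h>   /* GRND_RANDOM, GRND_NONBLOCK */

    unsigned char seed[16];
    long n = syscall(SYS_getrandom, seed, sizeof(seed),
                     GRND_RANDOM | GRND_NONBLOCK);

    if (n < 0 && errno == EAGAIN) {
        /* not enough entropy in the blocking pool yet; retry later */
    }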
If the GRND_RANDOM bit is not set, then the /dev/urandom pool
will be used. Unlike using read(2) to fetch data from
/dev/urandom, if the urandom pool has not been sufficiently
initialized, getrandom(2) will block (or return -1 with
errno set to EAGAIN if the GRND_NONBLOCK bit is set in flags).
The getentropy(2) system call in OpenBSD can be emulated using
the following function:
    int getentropy(void *buf, size_t buflen)
    {
        int ret;

        /* getentropy() is defined for requests of at most 256 bytes */
        if (buflen > 256)
            goto failure;
        ret = getrandom(buf, buflen, 0);
        if (ret < 0)
            return ret;
        /* a small !GRND_RANDOM request is filled completely once the
           pool is initialized */
        if ((size_t)ret == buflen)
            return 0;
    failure:
        errno = EIO;
        return -1;
    }
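A minimal caller of the emulation above might look as follows; the key
buffer and its 32-byte size are illustrative assumptions, and err(3)
requires <err.h>:

    unsigned char key[32];

    if (getentropy(key, sizeof(key)) != 0)
        err(1, "getentropy"); /* errno comes from getrandom(2), or is EIO */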
RETURN VALUE
On success, the number of bytes that were copied into buf is
returned. This may not be all the bytes requested by the
caller via buflen if insufficient entropy was present in the
/dev/random pool, or if the system call was interrupted by a
signal.
On error, -1 is returned, and errno is set appropriately.
ERRORS
EINVAL An invalid flag was passed to getrandom(2)
EFAULT buf is outside the accessible address space.
EAGAIN The requested entropy was not available, and
getrandom(2) would have blocked if the
GRND_NONBLOCK flag was not set.
EINTR While blocked waiting for entropy, the call was
interrupted by a signal handler; see the description
of how interrupted read(2) calls on "slow" devices
are handled with and without the SA_RESTART flag
in the signal(7) man page.
NOTES
For small requests (buflen <= 256) getrandom(2) will not
return EINTR when reading from the urandom pool once the
entropy pool has been initialized, and it will return all of
the bytes that have been requested. This is the recommended
way to use getrandom(2), and is designed for compatibility
with OpenBSD's getentropy() system call.
However, if you are using GRND_RANDOM, then getrandom(2) may
block until the entropy accounting determines that sufficient
environmental noise has been gathered such that getrandom(2)
will be operating as an NRBG instead of a DRBG for those people
who are working in the NIST SP 800-90 regime. Since it may
block for a long time, the above guarantees do *not* apply. The
user may want to interrupt a hanging process using a signal,
so blocking until all of the requested bytes are returned
would be unfriendly.
For this reason, the user of getrandom(2) MUST always check
the return value, in case it returns some error, or if fewer
bytes than requested were returned. In the case of
!GRND_RANDOM and a small request, the latter should never
happen, but careful userspace code (and all crypto code
should be careful) should check for this anyway!
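To make the defensive usage concrete, here is a hedged sketch of a helper
that loops on partial reads and EINTR; the name fill_random() and the
direct use of syscall(2) are illustrative assumptions, not part of this
patch:

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Fill buf with len random bytes from the urandom pool.
       Returns 0 on success, -1 on error with errno set. */
    static int fill_random(void *buf, size_t len)
    {
        unsigned char *p = buf;

        while (len > 0) {
            long ret = syscall(SYS_getrandom, p, len, 0);

            if (ret < 0) {
                if (errno == EINTR)
                    continue;  /* interrupted by a signal; retry */
                return -1;     /* genuine error */
            }
            p += ret;
            len -= ret;        /* handle a short return */
        }
        return 0;
    }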
Finally, unless you are doing long-term key generation (and
perhaps not even then), you probably shouldn't be using
GRND_RANDOM. The cryptographic algorithms used for
/dev/urandom are quite conservative, and so should be
sufficient for all purposes. The disadvantage of GRND_RANDOM
is that it can block, and that it adds the complexity required to
deal with partially fulfilled getrandom(2) requests.
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Zach Brown <zab@zabbo.net>
2014-07-17 16:13:05 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/********************************************************************
|
|
|
|
*
|
2024-06-12 13:13:20 +08:00
|
|
|
* Sysctl interface.
|
|
|
|
*
|
|
|
|
* These are partly unused legacy knobs with dummy values to not break
|
|
|
|
* userspace and partly still useful things. They are usually accessible
|
|
|
|
* in /proc/sys/kernel/random/ and are as follows:
|
|
|
|
*
|
|
|
|
* - boot_id - a UUID representing the current boot.
|
|
|
|
*
|
|
|
|
* - uuid - a random UUID, different each time the file is read.
|
|
|
|
*
|
|
|
|
* - poolsize - the number of bits of entropy that the input pool can
|
|
|
|
* hold, tied to the POOL_BITS constant.
|
|
|
|
*
|
|
|
|
* - entropy_avail - the number of bits of entropy currently in the
|
|
|
|
* input pool. Always <= poolsize.
|
|
|
|
*
|
|
|
|
* - write_wakeup_threshold - the amount of entropy in the input pool
|
|
|
|
* below which write polls to /dev/random will unblock, requesting
|
|
|
|
* more entropy, tied to the POOL_READY_BITS constant. It is writable
|
|
|
|
* to avoid breaking old userspaces, but writing to it does not
|
|
|
|
* change any behavior of the RNG.
|
|
|
|
*
|
|
|
|
* - urandom_min_reseed_secs - fixed to the value CRNG_RESEED_INTERVAL.
|
|
|
|
* It is writable to avoid breaking old userspaces, but writing
|
|
|
|
* to it does not change any behavior of the RNG.
|
2005-04-17 06:20:36 +08:00
|
|
|
*
|
|
|
|
********************************************************************/
|
|
|
|
|
|
|
|
#ifdef CONFIG_SYSCTL
|
|
|
|
|
|
|
|
#include <linux/sysctl.h>
|
|
|
|
|
2024-06-12 13:13:20 +08:00
|
|
|
static int sysctl_random_min_urandom_seed = CRNG_RESEED_INTERVAL / HZ;
|
|
|
|
static int sysctl_random_write_wakeup_bits = POOL_READY_BITS;
|
|
|
|
static int sysctl_poolsize = POOL_BITS;
|
|
|
|
static u8 sysctl_bootid[UUID_SIZE];
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
2013-11-30 03:58:16 +08:00
|
|
|
* This function is used to return both the bootid UUID and the random
|
2024-06-12 13:13:20 +08:00
|
|
|
* UUID. The difference is in whether table->data is NULL; if it is,
|
2005-04-17 06:20:36 +08:00
|
|
|
* then a new UUID is generated and returned to the user.
|
|
|
|
*/
|
2024-06-12 13:13:20 +08:00
|
|
|
static int proc_do_uuid(struct ctl_table *table, int write, void __user *buf,
|
|
|
|
size_t *lenp, loff_t *ppos)
|
|
|
|
{
|
|
|
|
u8 tmp_uuid[UUID_SIZE], *uuid;
|
|
|
|
char uuid_string[UUID_STRING_LEN + 1];
|
|
|
|
struct ctl_table fake_table = {
|
|
|
|
.data = uuid_string,
|
|
|
|
.maxlen = UUID_STRING_LEN
|
|
|
|
};
|
|
|
|
|
|
|
|
if (write)
|
|
|
|
return -EPERM;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
uuid = table->data;
|
|
|
|
if (!uuid) {
|
|
|
|
uuid = tmp_uuid;
|
|
|
|
generate_random_uuid(uuid);
|
2012-04-13 03:49:12 +08:00
|
|
|
} else {
|
|
|
|
static DEFINE_SPINLOCK(bootid_spinlock);
|
|
|
|
|
|
|
|
spin_lock(&bootid_spinlock);
|
|
|
|
if (!uuid[8])
|
|
|
|
generate_random_uuid(uuid);
|
|
|
|
spin_unlock(&bootid_spinlock);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2024-06-12 13:13:20 +08:00
|
|
|
snprintf(uuid_string, sizeof(uuid_string), "%pU", uuid);
|
|
|
|
return proc_dostring(&fake_table, 0, buf, lenp, ppos);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2024-06-12 13:13:20 +08:00
|
|
|
/* The same as proc_dointvec, but writes don't change anything. */
|
|
|
|
static int proc_do_rointvec(struct ctl_table *table, int write, void __user *buf,
|
|
|
|
size_t *lenp, loff_t *ppos)
|
2013-09-11 11:16:17 +08:00
|
|
|
{
|
2024-06-12 13:13:20 +08:00
|
|
|
return write ? 0 : proc_dointvec(table, 0, buf, lenp, ppos);
|
2013-09-11 11:16:17 +08:00
|
|
|
}
|
|
|
|
|
2013-06-14 10:37:35 +08:00
|
|
|
extern struct ctl_table random_table[];
|
|
|
|
struct ctl_table random_table[] = {
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
.procname = "poolsize",
|
|
|
|
.data = &sysctl_poolsize,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0444,
|
2009-11-16 19:11:48 +08:00
|
|
|
.proc_handler = proc_dointvec,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "entropy_avail",
|
2024-06-12 13:13:20 +08:00
|
|
|
.data = &input_pool.init_bits,
|
2005-04-17 06:20:36 +08:00
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0444,
|
2024-06-12 13:13:20 +08:00
|
|
|
.proc_handler = proc_dointvec,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "write_wakeup_threshold",
|
2024-06-12 13:13:20 +08:00
|
|
|
.data = &sysctl_random_write_wakeup_bits,
|
2005-04-17 06:20:36 +08:00
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
2024-06-12 13:13:20 +08:00
|
|
|
.proc_handler = proc_do_rointvec,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
2013-09-23 03:14:32 +08:00
|
|
|
{
|
|
|
|
.procname = "urandom_min_reseed_secs",
|
2024-06-12 13:13:20 +08:00
|
|
|
.data = &sysctl_random_min_urandom_seed,
|
2013-09-23 03:14:32 +08:00
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
2024-06-12 13:13:20 +08:00
|
|
|
.proc_handler = proc_do_rointvec,
|
2013-09-23 03:14:32 +08:00
|
|
|
},
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
.procname = "boot_id",
|
|
|
|
.data = &sysctl_bootid,
|
|
|
|
.mode = 0444,
|
2009-11-16 19:11:48 +08:00
|
|
|
.proc_handler = proc_do_uuid,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "uuid",
|
|
|
|
.mode = 0444,
|
2009-11-16 19:11:48 +08:00
|
|
|
.proc_handler = proc_do_uuid,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
2009-11-06 06:34:02 +08:00
|
|
|
{ }
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
2024-06-12 13:13:20 +08:00
|
|
|
#endif /* CONFIG_SYSCTL */
|