// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
/*
 * Copyright (C) 2017-2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
 * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
 * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All
 * rights reserved.
 */

/*
 * Exported interfaces ---- output
 * ===============================
 *
 * There are four exported interfaces; two for use within the kernel,
 * and two for use from userspace.
 *
 * Exported interfaces ---- userspace output
 * -----------------------------------------
 *
 * The userspace interfaces are two character devices /dev/random and
 * /dev/urandom. /dev/random is suitable for use when very high
 * quality randomness is desired (for example, for key generation or
 * one-time pads), as it will only return a maximum of the number of
 * bits of randomness (as estimated by the random number generator)
 * contained in the entropy pool.
 *
 * The /dev/urandom device does not have this limit, and will return
 * as many bytes as are requested. As more and more random bytes are
 * requested without giving time for the entropy pool to recharge,
 * this will result in random numbers that are merely cryptographically
 * strong. For many applications, however, this is acceptable.
 *
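 * As an illustrative sketch (not part of this driver), a userspace
 * consumer that wants key material would typically just read the
 * device, or use the getrandom(2) system call. For example, assuming
 * <fcntl.h> and <unistd.h>:
 *
 *	unsigned char key[32];
 *	int fd = open("/dev/urandom", O_RDONLY);
 *
 *	if (fd < 0 || read(fd, key, sizeof(key)) != (ssize_t)sizeof(key))
 *		handle_error();		// hypothetical error helper
 *	close(fd);
 *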
 * Exported interfaces ---- kernel output
 * --------------------------------------
 *
 * The primary kernel interfaces are:
 *
 *	void get_random_bytes(void *buf, size_t nbytes);
 *	u32 get_random_u32()
 *	u64 get_random_u64()
 *	unsigned int get_random_int()
 *	unsigned long get_random_long()
 *
 * These interfaces will return the requested number of random bytes
 * into the given buffer or as a return value. This is equivalent to a
 * read from /dev/urandom. The get_random_{u32,u64,int,long}() family
 * of functions may be higher performance for one-off random integers,
 * because they do a bit of buffering.
 *
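 * As a brief illustration (a sketch, not a function in this file), a
 * driver needing a one-off key and a small random value might do:
 *
 *	u8 session_key[32];
 *	unsigned long delay_ms;
 *
 *	get_random_bytes(session_key, sizeof(session_key));
 *	delay_ms = get_random_u32() % 100;	// 0..99; slight bias is tolerable here
 *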
 * prandom_u32()
 * -------------
 *
 * For even weaker applications, see the pseudorandom generator
 * prandom_u32(), prandom_u32_max(), and prandom_bytes(). If the random
 * numbers aren't security-critical at all, these are *far* cheaper.
 * Useful for self-tests, random error simulation, randomized backoffs,
 * and any other application where you trust that nobody is trying to
 * maliciously mess with you by guessing the "random" numbers.
 *
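 * A short sketch of that kind of non-cryptographic use (illustrative
 * only; prandom_u32_max() comes from <linux/prandom.h>):
 *
 *	// Randomized retry backoff: sleep between 100 and 199 ms.
 *	msleep(100 + prandom_u32_max(100));
 *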
 * Exported interfaces ---- input
 * ==============================
 *
 * The current exported interfaces for gathering environmental noise
 * from the devices are:
 *
 *	void add_device_randomness(const void *buf, size_t size);
 *	void add_input_randomness(unsigned int type, unsigned int code,
 *	                          unsigned int value);
 *	void add_interrupt_randomness(int irq);
 *	void add_disk_randomness(struct gendisk *disk);
 *	void add_hwgenerator_randomness(const void *buffer, size_t count,
 *	                                size_t entropy);
 *	void add_bootloader_randomness(const void *buf, size_t size);
 *
 * add_device_randomness() is for adding data to the random pool that
 * is likely to differ between two devices (or possibly even per boot).
 * This would be things like MAC addresses or serial numbers, or the
 * read-out of the RTC. This does *not* add any actual entropy to the
 * pool, but it initializes the pool to different values for devices
 * that might otherwise be identical and have very little entropy
 * available to them (particularly common in the embedded world).
 *
 * add_input_randomness() uses the input layer interrupt timing, as well as
 * the event type information from the hardware.
 *
 * add_interrupt_randomness() uses the interrupt timing as random
 * inputs to the entropy pool. Using the cycle counters and the irq source
 * as inputs, it feeds the randomness roughly once a second.
 *
 * add_disk_randomness() uses what amounts to the seek time of block
 * layer request events, on a per-disk_devt basis, as input to the
 * entropy pool. Note that high-speed solid state drives with very low
 * seek times do not make for good sources of entropy, as their seek
 * times are usually fairly consistent.
 *
 * All of these routines try to estimate how many bits of randomness a
 * particular randomness source contains. They do this by keeping track of the
 * first and second order deltas of the event timings.
 *
 * add_hwgenerator_randomness() is for true hardware RNGs, and will credit
 * entropy as specified by the caller. If the entropy pool is full it will
 * block until more entropy is needed.
 *
 * add_bootloader_randomness() is the same as add_hwgenerator_randomness() or
 * add_device_randomness(), depending on whether or not the configuration
 * option CONFIG_RANDOM_TRUST_BOOTLOADER is set.
 *
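 * As a hedged illustration (not code from this file), a network driver
 * with a unique MAC address might contribute it at probe time, where
 * "netdev" stands for the driver's struct net_device:
 *
 *	// The MAC is not secret, so no entropy is credited, but it does
 *	// personalize the pool on otherwise-identical devices.
 *	add_device_randomness(netdev->dev_addr, ETH_ALEN);
 *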
 * Ensuring unpredictability at system startup
 * ============================================
 *
 * When any operating system starts up, it will go through a sequence
 * of actions that are fairly predictable by an adversary, especially
 * if the start-up does not involve interaction with a human operator.
 * This reduces the actual number of bits of unpredictability in the
 * entropy pool below the value in entropy_count. In order to
 * counteract this effect, it helps to carry information in the
 * entropy pool across shut-downs and start-ups. To do this, put the
 * following lines in an appropriate script which is run during the boot
 * sequence:
 *
 *	echo "Initializing random number generator..."
 *	random_seed=/var/run/random-seed
 *	# Carry a random seed from start-up to start-up
 *	# Load and then save the whole entropy pool
 *	if [ -f $random_seed ]; then
 *		cat $random_seed >/dev/urandom
 *	else
 *		touch $random_seed
 *	fi
 *	chmod 600 $random_seed
 *	dd if=/dev/urandom of=$random_seed count=1 bs=512
 *
 * and the following lines in an appropriate script which is run as
 * the system is shut down:
 *
 *	# Carry a random seed from shut-down to start-up
 *	# Save the whole entropy pool
 *	echo "Saving random seed..."
 *	random_seed=/var/run/random-seed
 *	touch $random_seed
 *	chmod 600 $random_seed
 *	dd if=/dev/urandom of=$random_seed count=1 bs=512
 *
 * For example, on most modern systems using the System V init
 * scripts, such code fragments would be found in
 * /etc/rc.d/init.d/random. On older Linux systems, the correct script
 * location might be in /etc/rc.d/rc.local or /etc/rc.d/rc.0.
 *
 * Effectively, these commands cause the contents of the entropy pool
 * to be saved at shut-down time and reloaded into the entropy pool at
 * start-up. (The 'dd' in the addition to the bootup script is to
 * make sure that $random_seed is different for every start-up,
 * even if the system crashes without executing rc.0.) Even with
 * complete knowledge of the start-up activities, predicting the state
 * of the entropy pool requires knowledge of the previous history of
 * the system.
 *
 * Configuring the /dev/random driver under Linux
 * ==============================================
 *
 * The /dev/random driver under Linux uses minor numbers 8 and 9 of
 * the /dev/mem major number (#1). So if your system does not have
 * /dev/random and /dev/urandom created already, they can be created
 * by using the commands:
 *
 *	mknod /dev/random c 1 8
 *	mknod /dev/urandom c 1 9
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/utsname.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/major.h>
#include <linux/string.h>
#include <linux/fcntl.h>
#include <linux/slab.h>
#include <linux/random.h>
#include <linux/poll.h>
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/genhd.h>
#include <linux/interrupt.h>
#include <linux/mm.h>
#include <linux/nodemask.h>
#include <linux/spinlock.h>
#include <linux/kthread.h>
#include <linux/percpu.h>
#include <linux/ptrace.h>
#include <linux/workqueue.h>
#include <linux/irq.h>
#include <linux/ratelimit.h>
#include <linux/syscalls.h>
#include <linux/completion.h>
#include <linux/uuid.h>
#include <linux/uaccess.h>
#include <crypto/chacha.h>
#include <crypto/blake2s.h>
#include <asm/processor.h>
#include <asm/irq.h>
#include <asm/irq_regs.h>
#include <asm/io.h>

/*********************************************************************
 *
 * Initialization and readiness waiting.
 *
 * Much of the RNG infrastructure is devoted to various dependencies
 * being able to wait until the RNG has collected enough entropy and
 * is ready for safe consumption.
 *
 *********************************************************************/

/*
 * crng_init = 0 --> Uninitialized
 *             1 --> Initialized
 *             2 --> Initialized from input_pool
 *
 * crng_init is protected by base_crng->lock, and only increases
 * its value (from 0->1->2).
 */
static int crng_init = 0;
#define crng_ready() (likely(crng_init > 1))
/* Various types of waiters for crng_init->2 transition. */
static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
static struct fasync_struct *fasync;
static DEFINE_SPINLOCK(random_ready_list_lock);
static LIST_HEAD(random_ready_list);

/* Control how we warn userspace. */
static struct ratelimit_state unseeded_warning =
	RATELIMIT_STATE_INIT("warn_unseeded_randomness", HZ, 3);
static struct ratelimit_state urandom_warning =
	RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3);
static int ratelimit_disable __read_mostly;
module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);
MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");
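
/*
 * Illustrative note (not from the original file): because this code is
 * built in under the "random" module name, the ratelimit knob above is
 * usually flipped from the kernel command line, e.g.:
 *
 *	random.ratelimit_disable=1
 */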

/*
 * Returns whether or not the input pool has been seeded and thus guaranteed
 * to supply cryptographically secure random numbers. This applies to: the
 * /dev/urandom device, the get_random_bytes function, and the
 * get_random_{u32,u64,int,long} family of functions.
 *
 * Returns: true if the input pool has been seeded.
 *          false if the input pool has not been seeded.
 */
bool rng_is_initialized(void)
{
	return crng_ready();
}
EXPORT_SYMBOL(rng_is_initialized);

/* Used by wait_for_random_bytes(), and considered an entropy collector, below. */
static void try_to_generate_entropy(void);

/*
 * Wait for the input pool to be seeded and thus guaranteed to supply
 * cryptographically secure random numbers. This applies to: the /dev/urandom
 * device, the get_random_bytes function, and the get_random_{u32,u64,int,long}
 * family of functions. Using any of these functions without first calling
 * this function forfeits the guarantee of security.
 *
 * Returns: 0 if the input pool has been seeded.
 *          -ERESTARTSYS if the function was interrupted by a signal.
 */
int wait_for_random_bytes(void)
{
	if (likely(crng_ready()))
		return 0;

	do {
		int ret;
		ret = wait_event_interruptible_timeout(crng_init_wait, crng_ready(), HZ);
		if (ret)
			return ret > 0 ? 0 : ret;

		try_to_generate_entropy();
	} while (!crng_ready());

	return 0;
}
EXPORT_SYMBOL(wait_for_random_bytes);
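
/*
 * Usage sketch (illustrative only, not code from this file): a caller
 * that must not draw from the RNG before it is seeded pairs the two
 * calls, propagating the error if a signal arrives while waiting:
 *
 *	u8 key[32];
 *	int ret = wait_for_random_bytes();
 *
 *	if (ret)
 *		return ret;		// -ERESTARTSYS: interrupted by a signal
 *	get_random_bytes(key, sizeof(key));
 */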

/*
 * Add a callback function that will be invoked when the input
 * pool is initialised.
 *
 * returns: 0 if callback is successfully added
 *          -EALREADY if pool is already initialised (callback not called)
 *          -ENOENT if module for callback is not alive
 */
int add_random_ready_callback(struct random_ready_callback *rdy)
{
	struct module *owner;
	unsigned long flags;
	int err = -EALREADY;

	if (crng_ready())
		return err;

	owner = rdy->owner;
	if (!try_module_get(owner))
		return -ENOENT;

	spin_lock_irqsave(&random_ready_list_lock, flags);
	if (crng_ready())
		goto out;

	owner = NULL;

	list_add(&rdy->list, &random_ready_list);
	err = 0;

out:
	spin_unlock_irqrestore(&random_ready_list_lock, flags);

	module_put(owner);

	return err;
}
EXPORT_SYMBOL(add_random_ready_callback);
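
/*
 * A hedged usage sketch (not part of this file): a module that wants to
 * be notified once the pool is ready registers a callback and treats
 * -EALREADY as "already seeded, proceed immediately". The names below
 * (my_rng_ready, my_rdy) are hypothetical:
 *
 *	static void my_rng_ready(struct random_ready_callback *rdy)
 *	{
 *		// Safe to call get_random_bytes() from here on.
 *	}
 *
 *	static struct random_ready_callback my_rdy = {
 *		.func  = my_rng_ready,
 *		.owner = THIS_MODULE,
 *	};
 *
 *	err = add_random_ready_callback(&my_rdy);
 *	if (err && err != -EALREADY)
 *		goto fail;
 *	if (err == -EALREADY)
 *		my_rng_ready(&my_rdy);
 */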

/*
 * Delete a previously registered readiness callback function.
 */
void del_random_ready_callback(struct random_ready_callback *rdy)
{
	unsigned long flags;
	struct module *owner = NULL;

	spin_lock_irqsave(&random_ready_list_lock, flags);
	if (!list_empty(&rdy->list)) {
		list_del_init(&rdy->list);
		owner = rdy->owner;
	}
	spin_unlock_irqrestore(&random_ready_list_lock, flags);

	module_put(owner);
}
EXPORT_SYMBOL(del_random_ready_callback);

static void process_random_ready_list(void)
{
	unsigned long flags;
	struct random_ready_callback *rdy, *tmp;

	spin_lock_irqsave(&random_ready_list_lock, flags);
	list_for_each_entry_safe(rdy, tmp, &random_ready_list, list) {
		struct module *owner = rdy->owner;

		list_del_init(&rdy->list);
		rdy->func(rdy);
		module_put(owner);
	}
	spin_unlock_irqrestore(&random_ready_list_lock, flags);
}

#define warn_unseeded_randomness(previous) \
	_warn_unseeded_randomness(__func__, (void *)_RET_IP_, (previous))

static void _warn_unseeded_randomness(const char *func_name, void *caller, void **previous)
{
#ifdef CONFIG_WARN_ALL_UNSEEDED_RANDOM
	const bool print_once = false;
#else
	static bool print_once __read_mostly;
#endif

	if (print_once || crng_ready() ||
	    (previous && (caller == READ_ONCE(*previous))))
		return;
	WRITE_ONCE(*previous, caller);
#ifndef CONFIG_WARN_ALL_UNSEEDED_RANDOM
	print_once = true;
#endif
	if (__ratelimit(&unseeded_warning))
		printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n",
				func_name, caller, crng_init);
}

/*********************************************************************
 *
 * Fast key erasure RNG, the "crng".
 *
 * These functions expand entropy from the entropy extractor into
 * long streams for external consumption using the "fast key erasure"
 * RNG described at <https://blog.cr.yp.to/20170723-random.html>.
 *
 * There are a few exported interfaces for use by other drivers:
 *
 *	void get_random_bytes(void *buf, size_t nbytes)
 *	u32 get_random_u32()
 *	u64 get_random_u64()
 *	unsigned int get_random_int()
 *	unsigned long get_random_long()
 *
 * These interfaces will return the requested number of random bytes
 * into the given buffer or as a return value. This is equivalent to
 * a read from /dev/urandom. The integer family of functions may be
 * higher performance for one-off random integers, because they do a
 * bit of buffering.
 *
 *********************************************************************/
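
/*
 * In outline (an illustrative sketch of the scheme described above, not a
 * function defined here), "fast key erasure" means: expand the current
 * 256-bit key into one 64-byte ChaCha block, immediately replace the key
 * with the first half of that block, and hand out only the second half:
 *
 *	u8 block[CHACHA_BLOCK_SIZE];
 *
 *	chacha20_block(chacha_state, block);	// one 64-byte block
 *	memcpy(key, block, 32);			// new key: backtrack protection
 *	memcpy(out, block + 32, 32);		// bytes given to the caller
 *	memzero_explicit(block, sizeof(block));
 */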

enum {
	CRNG_RESEED_INTERVAL = 300 * HZ,
	CRNG_INIT_CNT_THRESH = 2 * CHACHA_KEY_SIZE
};

static struct {
	u8 key[CHACHA_KEY_SIZE] __aligned(__alignof__(long));
	unsigned long birth;
	unsigned long generation;
	spinlock_t lock;
} base_crng = {
	.lock = __SPIN_LOCK_UNLOCKED(base_crng.lock)
};

struct crng {
	u8 key[CHACHA_KEY_SIZE];
	unsigned long generation;
	local_lock_t lock;
};

static DEFINE_PER_CPU(struct crng, crngs) = {
	.generation = ULONG_MAX,
	.lock = INIT_LOCAL_LOCK(crngs.lock),
};

/* Used by crng_reseed() to extract a new seed from the input pool. */
static bool drain_entropy(void *buf, size_t nbytes);

/*
 * This extracts a new crng key from the input pool, but only if there is a
 * sufficient amount of entropy available, in order to mitigate bruteforcing
 * of newly added bits.
 */
static void crng_reseed(void)
{
	unsigned long flags;
	unsigned long next_gen;
	u8 key[CHACHA_KEY_SIZE];
	bool finalize_init = false;

	/* Only reseed if we can, to prevent brute forcing a small amount of new bits. */
	if (!drain_entropy(key, sizeof(key)))
		return;

	/*
	 * We copy the new key into the base_crng, overwriting the old one,
	 * and update the generation counter. We avoid hitting ULONG_MAX,
	 * because the per-cpu crngs are initialized to ULONG_MAX, so this
	 * forces new CPUs that come online to always initialize.
	 */
	spin_lock_irqsave(&base_crng.lock, flags);
	memcpy(base_crng.key, key, sizeof(base_crng.key));
	next_gen = base_crng.generation + 1;
	if (next_gen == ULONG_MAX)
		++next_gen;
	WRITE_ONCE(base_crng.generation, next_gen);
	WRITE_ONCE(base_crng.birth, jiffies);
	if (crng_init < 2) {
		crng_init = 2;
		finalize_init = true;
	}
	spin_unlock_irqrestore(&base_crng.lock, flags);
	memzero_explicit(key, sizeof(key));
	if (finalize_init) {
		process_random_ready_list();
		wake_up_interruptible(&crng_init_wait);
		kill_fasync(&fasync, SIGIO, POLL_IN);
		pr_notice("crng init done\n");
		if (unseeded_warning.missed) {
			pr_notice("%d get_random_xx warning(s) missed due to ratelimiting\n",
				  unseeded_warning.missed);
			unseeded_warning.missed = 0;
		}
		if (urandom_warning.missed) {
			pr_notice("%d urandom warning(s) missed due to ratelimiting\n",
				  urandom_warning.missed);
			urandom_warning.missed = 0;
		}
	}
}
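
/*
 * A hedged sketch of how the state above is meant to be consumed (the
 * actual consumer lives elsewhere in this file): before drawing output,
 * check whether the base key has grown stale and reseed if so:
 *
 *	if (time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL))
 *		crng_reseed();
 */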
|
|
|
|
|
random: use simpler fast key erasure flow on per-cpu keys
Rather than the clunky NUMA full ChaCha state system we had prior, this
commit is closer to the original "fast key erasure RNG" proposal from
<https://blog.cr.yp.to/20170723-random.html>, by simply treating ChaCha
keys on a per-cpu basis.
All entropy is extracted to a base crng key of 32 bytes. This base crng
has a birthdate and a generation counter. When we go to take bytes from
the crng, we first check if the birthdate is too old; if it is, we
reseed per usual. Then we start working on a per-cpu crng.
This per-cpu crng makes sure that it has the same generation counter as
the base crng. If it doesn't, it does fast key erasure with the base
crng key and uses the output as its new per-cpu key, and then updates
its local generation counter. Then, using this per-cpu state, we do
ordinary fast key erasure. Half of this first block is used to overwrite
the per-cpu crng key for the next call -- this is the fast key erasure
RNG idea -- and the other half, along with the ChaCha state, is returned
to the caller. If the caller desires more than this remaining half, it
can generate more ChaCha blocks, unlocked, using the now detached ChaCha
state that was just returned. Crypto-wise, this is more or less what we
were doing before, but this simply makes it more explicit and ensures
that we always have backtrack protection by not playing games with a
shared block counter.
The flow looks like this:
──extract()──► base_crng.key ◄──memcpy()───┐
│ │
└──chacha()──────┬─► new_base_key
└─► crngs[n].key ◄──memcpy()───┐
│ │
└──chacha()───┬─► new_key
└─► random_bytes
│
└────►
There are a few hairy details around early init. Just as was done
before, prior to having gathered enough entropy, crng_fast_load() and
crng_slow_load() dump bytes directly into the base crng, and when we go
to take bytes from the crng, in that case, we're doing fast key erasure
with the base crng rather than the fast unlocked per-cpu crngs. This is
fine as that's only the state of affairs during very early boot; once
the crng initializes we never use these paths again.
In the process of all this, the APIs into the crng become a bit simpler:
we have get_random_bytes(buf, len) and get_random_bytes_user(buf, len),
which both do what you'd expect. All of the details of fast key erasure
and per-cpu selection happen only in a very short critical section of
crng_make_state(), which selects the right per-cpu key, does the fast
key erasure, and returns a local state to the caller's stack. So, we no
longer have a need for a separate backtrack function, as this happens
all at once here. The API then allows us to extend backtrack protection
to batched entropy without really having to do much at all.
The result is a bit simpler than before and has fewer foot guns. The
init time state machine also gets a lot simpler as we don't need to wait
for workqueues to come online and do deferred work. And the multi-core
performance should be increased significantly, by virtue of having hardly
any locking on the fast path.
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-02-07 22:08:49 +08:00
|
|
|
/*
|
2022-02-11 19:53:34 +08:00
|
|
|
* This generates a ChaCha block using the provided key, and then
|
|
|
|
* immediately overwites that key with half the block. It returns
|
|
|
|
* the resultant ChaCha state to the user, along with the second
|
|
|
|
* half of the block containing 32 bytes of random data that may
|
|
|
|
* be used; random_data_len may not be greater than 32.
|
 */
static void crng_fast_key_erasure(u8 key[CHACHA_KEY_SIZE],
				  u32 chacha_state[CHACHA_STATE_WORDS],
				  u8 *random_data, size_t random_data_len)
{
	u8 first_block[CHACHA_BLOCK_SIZE];

	BUG_ON(random_data_len > 32);

	chacha_init_consts(chacha_state);
	memcpy(&chacha_state[4], key, CHACHA_KEY_SIZE);
	memset(&chacha_state[12], 0, sizeof(u32) * 4);
	chacha20_block(chacha_state, first_block);

	memcpy(key, first_block, CHACHA_KEY_SIZE);
	memcpy(random_data, first_block + CHACHA_KEY_SIZE, random_data_len);
	memzero_explicit(first_block, sizeof(first_block));
}

/*
 * This function returns a ChaCha state that you may use for generating
 * random data. It also returns up to 32 bytes on its own of random data
 * that may be used; random_data_len may not be greater than 32.
 */
static void crng_make_state(u32 chacha_state[CHACHA_STATE_WORDS],
			    u8 *random_data, size_t random_data_len)
{
	unsigned long flags;
	struct crng *crng;

	BUG_ON(random_data_len > 32);

	/*
	 * For the fast path, we check whether we're ready, unlocked first, and
	 * then re-check once locked later. In the case where we're really not
	 * ready, we do fast key erasure with the base_crng directly, because
	 * this is what crng_{fast,slow}_load mutate during early init.
	 */
	if (unlikely(!crng_ready())) {
		bool ready;

		spin_lock_irqsave(&base_crng.lock, flags);
		ready = crng_ready();
		if (!ready)
			crng_fast_key_erasure(base_crng.key, chacha_state,
					      random_data, random_data_len);
		spin_unlock_irqrestore(&base_crng.lock, flags);
		if (!ready)
			return;
	}

	/*
	 * If the base_crng is more than 5 minutes old, we reseed, which
	 * in turn bumps the generation counter that we check below.
	 */
	if (unlikely(time_after(jiffies, READ_ONCE(base_crng.birth) + CRNG_RESEED_INTERVAL)))
		crng_reseed();

	local_lock_irqsave(&crngs.lock, flags);
	crng = raw_cpu_ptr(&crngs);

	/*
	 * If our per-cpu crng is older than the base_crng, then it means
	 * somebody reseeded the base_crng. In that case, we do fast key
	 * erasure on the base_crng, and use its output as the new key
	 * for our per-cpu crng. This brings us up to date with base_crng.
	 */
	if (unlikely(crng->generation != READ_ONCE(base_crng.generation))) {
		spin_lock(&base_crng.lock);
		crng_fast_key_erasure(base_crng.key, chacha_state,
				      crng->key, sizeof(crng->key));
		crng->generation = base_crng.generation;
		spin_unlock(&base_crng.lock);
	}

	/*
	 * Finally, when we've made it this far, our per-cpu crng has an up
	 * to date key, and we can do fast key erasure with it to produce
	 * some random data and a ChaCha state for the caller. All other
	 * branches of this function are "unlikely", so most of the time we
	 * should wind up here immediately.
	 */
	crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
	local_unlock_irqrestore(&crngs.lock, flags);
}

/*
 * This function is for crng_init == 0 only.
 *
 * crng_fast_load() can be called by code in the interrupt service
 * path. So we can't afford to dilly-dally. Returns the number of
 * bytes processed from cp.
 */
static size_t crng_fast_load(const void *cp, size_t len)
{
	static int crng_init_cnt = 0;
	unsigned long flags;
	const u8 *src = (const u8 *)cp;
	size_t ret = 0;

	if (!spin_trylock_irqsave(&base_crng.lock, flags))
		return 0;
	if (crng_init != 0) {
		spin_unlock_irqrestore(&base_crng.lock, flags);
		return 0;
	}
	while (len > 0 && crng_init_cnt < CRNG_INIT_CNT_THRESH) {
		base_crng.key[crng_init_cnt % sizeof(base_crng.key)] ^= *src;
		src++; crng_init_cnt++; len--; ret++;
	}
	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
		++base_crng.generation;
		crng_init = 1;
	}
	spin_unlock_irqrestore(&base_crng.lock, flags);
	if (crng_init == 1)
		pr_notice("fast init done\n");
	return ret;
}

/*
 * This function is for crng_init == 0 only.
 *
 * crng_slow_load() is called by add_device_randomness, which has two
 * attributes. (1) We can't trust that the buffer passed to it is
 * guaranteed to be unpredictable (so it might not have any entropy at
 * all), and (2) it doesn't have the performance constraints of
 * crng_fast_load().
 *
 * So, we simply hash the contents in with the current key. Finally,
 * we do *not* advance crng_init_cnt, since the buffer we get may be
 * something like a fixed DMI table (for example), which might very
 * well be unique to the machine, but is otherwise unvarying.
 */
static void crng_slow_load(const void *cp, size_t len)
{
	unsigned long flags;
	struct blake2s_state hash;

	blake2s_init(&hash, sizeof(base_crng.key));

	if (!spin_trylock_irqsave(&base_crng.lock, flags))
		return;
	if (crng_init != 0) {
		spin_unlock_irqrestore(&base_crng.lock, flags);
		return;
	}

	blake2s_update(&hash, base_crng.key, sizeof(base_crng.key));
	blake2s_update(&hash, cp, len);
	blake2s_final(&hash, base_crng.key);

	spin_unlock_irqrestore(&base_crng.lock, flags);
}

static void _get_random_bytes(void *buf, size_t nbytes)
{
	u32 chacha_state[CHACHA_STATE_WORDS];
	u8 tmp[CHACHA_BLOCK_SIZE];
	size_t len;

	if (!nbytes)
		return;

	len = min_t(size_t, 32, nbytes);
	crng_make_state(chacha_state, buf, len);
	nbytes -= len;
	buf += len;

	while (nbytes) {
		if (nbytes < CHACHA_BLOCK_SIZE) {
			chacha20_block(chacha_state, tmp);
			memcpy(buf, tmp, nbytes);
			memzero_explicit(tmp, sizeof(tmp));
			break;
		}

		chacha20_block(chacha_state, buf);
		if (unlikely(chacha_state[12] == 0))
			++chacha_state[13];
		nbytes -= CHACHA_BLOCK_SIZE;
		buf += CHACHA_BLOCK_SIZE;
	}

	memzero_explicit(chacha_state, sizeof(chacha_state));
}

/*
 * This function is the exported kernel interface. It returns some
 * number of good random numbers, suitable for key generation, seeding
 * TCP sequence numbers, etc. It does not rely on the hardware random
 * number generator. For random bytes direct from the hardware RNG
 * (when available), use get_random_bytes_arch(). In order to ensure
 * that the randomness provided by this function is okay, the function
 * wait_for_random_bytes() should be called and return 0 at least once
 * at any point prior.
 */
void get_random_bytes(void *buf, size_t nbytes)
{
	static void *previous;

	warn_unseeded_randomness(&previous);
	_get_random_bytes(buf, nbytes);
}
EXPORT_SYMBOL(get_random_bytes);

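A minimal usage sketch for the interface above (a hypothetical caller, not part of this file), assuming the caller can sleep and wants to be sure the CRNG is seeded first via wait_for_random_bytes():

/* Hypothetical example, not part of this file. */
static int example_generate_session_key(u8 key[32])
{
	int ret = wait_for_random_bytes();	/* may sleep until the CRNG is ready */

	if (ret)
		return ret;
	get_random_bytes(key, 32);
	return 0;
}
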
static ssize_t get_random_bytes_user(void __user *buf, size_t nbytes)
{
	bool large_request = nbytes > 256;
	ssize_t ret = 0;
	size_t len;
	u32 chacha_state[CHACHA_STATE_WORDS];
	u8 output[CHACHA_BLOCK_SIZE];

	if (!nbytes)
		return 0;

	len = min_t(size_t, 32, nbytes);
	crng_make_state(chacha_state, output, len);

	if (copy_to_user(buf, output, len))
		return -EFAULT;
	nbytes -= len;
	buf += len;
	ret += len;

	while (nbytes) {
		if (large_request && need_resched()) {
			if (signal_pending(current))
				break;
			schedule();
		}

		chacha20_block(chacha_state, output);
		if (unlikely(chacha_state[12] == 0))
			++chacha_state[13];

		len = min_t(size_t, nbytes, CHACHA_BLOCK_SIZE);
		if (copy_to_user(buf, output, len)) {
			ret = -EFAULT;
			break;
		}

		nbytes -= len;
		buf += len;
		ret += len;
	}

	memzero_explicit(chacha_state, sizeof(chacha_state));
	memzero_explicit(output, sizeof(output));
	return ret;
}

/*
 * Batched entropy returns random integers. The quality of the random
 * number is as good as /dev/urandom. In order to ensure that the randomness
 * provided by this function is okay, the function wait_for_random_bytes()
 * should be called and return 0 at least once at any point prior.
 */
struct batched_entropy {
	union {
		/*
		 * We make this 1.5x a ChaCha block, so that we get the
		 * remaining 32 bytes from fast key erasure, plus one full
		 * block from the detached ChaCha state. We can increase
		 * the size of this later if needed so long as we keep the
		 * formula of (integer_blocks + 0.5) * CHACHA_BLOCK_SIZE.
		 */
		u64 entropy_u64[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u64))];
		u32 entropy_u32[CHACHA_BLOCK_SIZE * 3 / (2 * sizeof(u32))];
	};
	local_lock_t lock;
	unsigned long generation;
	unsigned int position;
};

static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
	.lock = INIT_LOCAL_LOCK(batched_entropy_u64.lock),
	.position = UINT_MAX
};

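As a worked example of the sizing formula above (illustrative only, not part of the driver; the static_asserts assume the usual <linux/build_bug.h> semantics): with CHACHA_BLOCK_SIZE == 64, one and a half blocks is 96 bytes, so entropy_u64 holds 96 / 8 == 12 entries and entropy_u32 holds 96 / 4 == 24 entries per CPU.

/* Illustrative checks only: 1.5 * CHACHA_BLOCK_SIZE == 96 bytes per batch. */
static_assert(sizeof(((struct batched_entropy *)0)->entropy_u64) == CHACHA_BLOCK_SIZE * 3 / 2);
static_assert(sizeof(((struct batched_entropy *)0)->entropy_u32) == CHACHA_BLOCK_SIZE * 3 / 2);
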
u64 get_random_u64(void)
{
	u64 ret;
	unsigned long flags;
	struct batched_entropy *batch;
	static void *previous;
	unsigned long next_gen;

	warn_unseeded_randomness(&previous);

	local_lock_irqsave(&batched_entropy_u64.lock, flags);
	batch = raw_cpu_ptr(&batched_entropy_u64);

	next_gen = READ_ONCE(base_crng.generation);
	if (batch->position >= ARRAY_SIZE(batch->entropy_u64) ||
	    next_gen != batch->generation) {
		_get_random_bytes(batch->entropy_u64, sizeof(batch->entropy_u64));
		batch->position = 0;
		batch->generation = next_gen;
	}

	ret = batch->entropy_u64[batch->position];
	batch->entropy_u64[batch->position] = 0;
	++batch->position;
	local_unlock_irqrestore(&batched_entropy_u64.lock, flags);
	return ret;
}
EXPORT_SYMBOL(get_random_u64);

static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
	.lock = INIT_LOCAL_LOCK(batched_entropy_u32.lock),
	.position = UINT_MAX
};

u32 get_random_u32(void)
{
	u32 ret;
	unsigned long flags;
	struct batched_entropy *batch;
	static void *previous;
	unsigned long next_gen;

	warn_unseeded_randomness(&previous);

	local_lock_irqsave(&batched_entropy_u32.lock, flags);
	batch = raw_cpu_ptr(&batched_entropy_u32);

	next_gen = READ_ONCE(base_crng.generation);
	if (batch->position >= ARRAY_SIZE(batch->entropy_u32) ||
	    next_gen != batch->generation) {
		_get_random_bytes(batch->entropy_u32, sizeof(batch->entropy_u32));
		batch->position = 0;
		batch->generation = next_gen;
	}

	ret = batch->entropy_u32[batch->position];
	batch->entropy_u32[batch->position] = 0;
	++batch->position;
	local_unlock_irqrestore(&batched_entropy_u32.lock, flags);
	return ret;
}
EXPORT_SYMBOL(get_random_u32);

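For callers that only need a small integer, the batched interfaces above avoid paying for a full ChaCha block on every call. A hypothetical caller (illustrative only; example_resend_jitter() is not in the kernel) might use it like this:

/* Hypothetical example, not part of this file: spread out retransmit
 * timers by up to three extra jiffies so peers don't retry in lockstep. */
static unsigned long example_resend_jitter(unsigned long base)
{
	return base + (get_random_u32() % 4);
}
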
/**
 * randomize_page - Generate a random, page aligned address
 * @start:	The smallest acceptable address the caller will take.
 * @range:	The size of the area, starting at @start, within which the
 *		random address must fall.
 *
 * If @start + @range would overflow, @range is capped.
 *
 * NOTE: Historical use of randomize_range, which this replaces, presumed that
 * @start was already page aligned. We now align it regardless.
 *
 * Return: A page aligned address within [start, start + range). On error,
 * @start is returned.
 */
unsigned long randomize_page(unsigned long start, unsigned long range)
{
	if (!PAGE_ALIGNED(start)) {
		range -= PAGE_ALIGN(start) - start;
		start = PAGE_ALIGN(start);
	}

	if (start > ULONG_MAX - range)
		range = ULONG_MAX - start;

	range >>= PAGE_SHIFT;

	if (range == 0)
		return start;

	return start + (get_random_long() % range << PAGE_SHIFT);
}

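A hypothetical use of randomize_page() (illustrative only; example_pick_mmap_base() is not in the kernel): pick a page-aligned base address somewhere within a 256 MiB window above a caller-supplied floor.

/* Hypothetical example, not part of this file. */
static unsigned long example_pick_mmap_base(unsigned long low)
{
	/* Page-aligned address in [low, low + 256 MiB), or low on error. */
	return randomize_page(low, 256UL << 20);
}
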
/*
 * This function will use the architecture-specific hardware random
 * number generator if it is available. It is not recommended for
 * use. Use get_random_bytes() instead. It returns the number of
 * bytes filled in.
 */
size_t __must_check get_random_bytes_arch(void *buf, size_t nbytes)
{
	size_t left = nbytes;
	u8 *p = buf;

	while (left) {
		unsigned long v;
		size_t chunk = min_t(size_t, left, sizeof(unsigned long));

		if (!arch_get_random_long(&v))
			break;

		memcpy(p, &v, chunk);
		p += chunk;
		left -= chunk;
	}

	return nbytes - left;
}
EXPORT_SYMBOL(get_random_bytes_arch);

/**********************************************************************
 *
 * Entropy accumulation and extraction routines.
 *
 * Callers may add entropy via:
 *
 *     static void mix_pool_bytes(const void *in, size_t nbytes)
 *
 * After which, if added entropy should be credited:
 *
 *     static void credit_entropy_bits(size_t nbits)
 *
 * Finally, extract entropy via these two, with the latter one
 * setting the entropy count to zero and extracting only if there
 * is POOL_MIN_BITS entropy credited prior:
 *
 *     static void extract_entropy(void *buf, size_t nbytes)
 *     static bool drain_entropy(void *buf, size_t nbytes)
 *
 **********************************************************************/

enum {
	POOL_BITS = BLAKE2S_HASH_SIZE * 8,
	POOL_MIN_BITS = POOL_BITS /* No point in settling for less. */
};

/* For notifying userspace that it should write into /dev/random. */
static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);

static struct {
	struct blake2s_state hash;
	spinlock_t lock;
	unsigned int entropy_count;
} input_pool = {
	.hash.h = { BLAKE2S_IV0 ^ (0x01010000 | BLAKE2S_HASH_SIZE),
		    BLAKE2S_IV1, BLAKE2S_IV2, BLAKE2S_IV3, BLAKE2S_IV4,
		    BLAKE2S_IV5, BLAKE2S_IV6, BLAKE2S_IV7 },
	.hash.outlen = BLAKE2S_HASH_SIZE,
	.lock = __SPIN_LOCK_UNLOCKED(input_pool.lock),
};

static void _mix_pool_bytes(const void *in, size_t nbytes)
{
	blake2s_update(&input_pool.hash, in, nbytes);
}

/*
 * This function adds bytes into the entropy "pool". It does not
 * update the entropy estimate. The caller should call
 * credit_entropy_bits if this is appropriate.
 */
static void mix_pool_bytes(const void *in, size_t nbytes)
{
	unsigned long flags;

	spin_lock_irqsave(&input_pool.lock, flags);
	_mix_pool_bytes(in, nbytes);
	spin_unlock_irqrestore(&input_pool.lock, flags);
}

static void credit_entropy_bits(size_t nbits)
{
	unsigned int entropy_count, orig, add;

	if (!nbits)
		return;

	add = min_t(size_t, nbits, POOL_BITS);

	do {
		orig = READ_ONCE(input_pool.entropy_count);
		entropy_count = min_t(unsigned int, POOL_BITS, orig + add);
	} while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig);

	if (crng_init < 2 && entropy_count >= POOL_MIN_BITS)
		crng_reseed();
}

/*
 * This is an HKDF-like construction for using the hashed collected entropy
 * as a PRF key, that's then expanded block-by-block.
 */
static void extract_entropy(void *buf, size_t nbytes)
{
	unsigned long flags;
	u8 seed[BLAKE2S_HASH_SIZE], next_key[BLAKE2S_HASH_SIZE];
	struct {
		unsigned long rdseed[32 / sizeof(long)];
		size_t counter;
	} block;
	size_t i;

	for (i = 0; i < ARRAY_SIZE(block.rdseed); ++i) {
		if (!arch_get_random_seed_long(&block.rdseed[i]) &&
		    !arch_get_random_long(&block.rdseed[i]))
			block.rdseed[i] = random_get_entropy();
	}

	spin_lock_irqsave(&input_pool.lock, flags);

	/* seed = HASHPRF(last_key, entropy_input) */
	blake2s_final(&input_pool.hash, seed);

	/* next_key = HASHPRF(seed, RDSEED || 0) */
	block.counter = 0;
	blake2s(next_key, (u8 *)&block, seed, sizeof(next_key), sizeof(block), sizeof(seed));
	blake2s_init_key(&input_pool.hash, BLAKE2S_HASH_SIZE, next_key, sizeof(next_key));

	spin_unlock_irqrestore(&input_pool.lock, flags);
	memzero_explicit(next_key, sizeof(next_key));

	while (nbytes) {
		i = min_t(size_t, nbytes, BLAKE2S_HASH_SIZE);
		/* output = HASHPRF(seed, RDSEED || ++counter) */
		++block.counter;
		blake2s(buf, (u8 *)&block, seed, i, sizeof(block), sizeof(seed));
		nbytes -= i;
		buf += i;
	}

	memzero_explicit(seed, sizeof(seed));
	memzero_explicit(&block, sizeof(block));
}

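Summarizing the extraction above in one place (a descriptive restatement of the in-code HASHPRF comments, not additional kernel logic):

/*
 * Schematically:
 *
 *   seed     = BLAKE2s(key = last_key, message = accumulated pool input)
 *   next_key = BLAKE2s(key = seed,     message = RDSEED || 0)
 *   out_i    = BLAKE2s(key = seed,     message = RDSEED || i),  i = 1, 2, ...
 *
 * The pool is re-keyed with next_key, and the caller's buffer is filled
 * from out_1, out_2, ... in 32-byte chunks.
 */
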
/*
 * First we make sure we have POOL_MIN_BITS of entropy in the pool, and then we
 * set the entropy count to zero (but don't actually touch any data). Only then
 * can we extract a new key with extract_entropy().
 */
static bool drain_entropy(void *buf, size_t nbytes)
{
	unsigned int entropy_count;

	do {
		entropy_count = READ_ONCE(input_pool.entropy_count);
		if (entropy_count < POOL_MIN_BITS)
			return false;
	} while (cmpxchg(&input_pool.entropy_count, entropy_count, 0) != entropy_count);

	extract_entropy(buf, nbytes);
	wake_up_interruptible(&random_write_wait);
	kill_fasync(&fasync, SIGIO, POLL_OUT);
	return true;
}

struct fast_pool {
	union {
		u32 pool32[4];
		u64 pool64[2];
	};
	unsigned long last;
	u16 reg_idx;
	u8 count;
};

/*
 * This is a fast mixing routine used by the interrupt randomness
 * collector. It's hardcoded for a 128-bit pool and assumes that any
 * locks that might be needed are taken by the caller.
 */
static void fast_mix(u32 pool[4])
{
	u32 a = pool[0], b = pool[1];
	u32 c = pool[2], d = pool[3];

	a += b; c += d;
	b = rol32(b, 6); d = rol32(d, 27);
	d ^= a; b ^= c;

	a += b; c += d;
	b = rol32(b, 16); d = rol32(d, 14);
	d ^= a; b ^= c;

	a += b; c += d;
	b = rol32(b, 6); d = rol32(d, 27);
	d ^= a; b ^= c;
|
|
|
|
2022-02-11 19:53:34 +08:00
|
|
|
a += b; c += d;
|
|
|
|
b = rol32(b, 16); d = rol32(d, 14);
|
|
|
|
d ^= a; b ^= c;
|
2016-06-13 06:13:36 +08:00
|
|
|
|
2022-02-11 19:53:34 +08:00
|
|
|
pool[0] = a; pool[1] = b;
|
|
|
|
pool[2] = c; pool[3] = d;
|
|
|
|
}
|
2016-06-13 06:13:36 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*********************************************************************
|
|
|
|
*
|
|
|
|
* Entropy input management
|
|
|
|
*
|
|
|
|
*********************************************************************/
|
|
|
|
|
|
|
|
/* There is one of these per entropy source */
|
|
|
|
struct timer_rand_state {
|
|
|
|
cycles_t last_time;
|
2008-04-29 16:02:55 +08:00
|
|
|
long last_delta, last_delta2;
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
|
2013-11-04 05:40:53 +08:00
|
|
|
#define INIT_TIMER_RAND_STATE { INITIAL_JIFFIES, };
|
|
|
|
|
2012-07-04 23:16:01 +08:00
|
|
|
/*
|
2016-06-13 06:13:36 +08:00
|
|
|
* Add device- or boot-specific data to the input pool to help
|
|
|
|
* initialize it.
|
2012-07-04 23:16:01 +08:00
|
|
|
*
|
2016-06-13 06:13:36 +08:00
|
|
|
* None of this adds any entropy; it is meant to avoid the problem of
|
|
|
|
* the entropy pool having similar initial state across largely
|
|
|
|
* identical devices.
|
2012-07-04 23:16:01 +08:00
|
|
|
*/
|
2022-02-09 21:43:25 +08:00
|
|
|
void add_device_randomness(const void *buf, size_t size)
|
2012-07-04 23:16:01 +08:00
|
|
|
{
|
2013-09-22 01:58:22 +08:00
|
|
|
unsigned long time = random_get_entropy() ^ jiffies;
|
2013-09-13 02:27:22 +08:00
|
|
|
unsigned long flags;
|
2012-07-04 23:16:01 +08:00
|
|
|
|
2018-04-12 02:58:27 +08:00
|
|
|
if (!crng_ready() && size)
|
|
|
|
crng_slow_load(buf, size);
|
2017-07-13 05:34:04 +08:00
|
|
|
|
2013-09-13 02:27:22 +08:00
|
|
|
spin_lock_irqsave(&input_pool.lock, flags);
|
2022-01-13 00:18:08 +08:00
|
|
|
_mix_pool_bytes(buf, size);
|
|
|
|
_mix_pool_bytes(&time, sizeof(time));
|
2013-09-13 02:27:22 +08:00
|
|
|
spin_unlock_irqrestore(&input_pool.lock, flags);
|
2012-07-04 23:16:01 +08:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(add_device_randomness);
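A driver that wants to contribute device-specific state typically just calls this during probe. A minimal sketch, with hypothetical data sources (the serial and MAC parameters are made up; only add_device_randomness() is the real interface):
#include <linux/random.h>

/* Hypothetical probe-time helper: mix in a serial number and MAC address.
 * This seeds in device-unique state but credits no entropy. */
static void example_seed_from_device(const u8 *serial, size_t serial_len,
                                     const u8 mac[6])
{
    add_device_randomness(serial, serial_len);
    add_device_randomness(mac, 6);
}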
|
|
|
|
|
2013-11-04 05:40:53 +08:00
|
|
|
static struct timer_rand_state input_timer_state = INIT_TIMER_RAND_STATE;
|
2008-08-20 11:50:08 +08:00
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
|
|
|
* This function adds entropy to the entropy "pool" by using timing
|
|
|
|
* delays. It uses the timer_rand_state structure to make an estimate
|
|
|
|
* of how many bits of entropy this call has added to the pool.
|
|
|
|
*
|
|
|
|
* The number "num" is also added to the pool - it should somehow describe
|
|
|
|
* the type of event which just happened. This is currently 0-255 for
|
|
|
|
* keyboard scan codes, and 256 upwards for interrupts.
|
|
|
|
*
|
|
|
|
*/
|
2022-02-09 21:43:25 +08:00
|
|
|
static void add_timer_randomness(struct timer_rand_state *state, unsigned int num)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
struct {
|
|
|
|
long jiffies;
|
2022-01-10 00:48:58 +08:00
|
|
|
unsigned int cycles;
|
|
|
|
unsigned int num;
|
2005-04-17 06:20:36 +08:00
|
|
|
} sample;
|
|
|
|
long delta, delta2, delta3;
|
|
|
|
|
|
|
|
sample.jiffies = jiffies;
|
2013-09-22 01:58:22 +08:00
|
|
|
sample.cycles = random_get_entropy();
|
2005-04-17 06:20:36 +08:00
|
|
|
sample.num = num;
|
2022-01-13 00:18:08 +08:00
|
|
|
mix_pool_bytes(&sample, sizeof(sample));
|
2005-04-17 06:20:36 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Calculate number of bits of randomness we probably added.
|
|
|
|
* We take into account the first, second and third-order deltas
|
|
|
|
* in order to make our estimate.
|
|
|
|
*/
|
2020-02-26 00:27:04 +08:00
|
|
|
delta = sample.jiffies - READ_ONCE(state->last_time);
|
|
|
|
WRITE_ONCE(state->last_time, sample.jiffies);
|
2018-03-01 07:22:47 +08:00
|
|
|
|
2020-02-26 00:27:04 +08:00
|
|
|
delta2 = delta - READ_ONCE(state->last_delta);
|
|
|
|
WRITE_ONCE(state->last_delta, delta);
|
2018-03-01 07:22:47 +08:00
|
|
|
|
2020-02-26 00:27:04 +08:00
|
|
|
delta3 = delta2 - READ_ONCE(state->last_delta2);
|
|
|
|
WRITE_ONCE(state->last_delta2, delta2);
|
2018-03-01 07:22:47 +08:00
|
|
|
|
|
|
|
if (delta < 0)
|
|
|
|
delta = -delta;
|
|
|
|
if (delta2 < 0)
|
|
|
|
delta2 = -delta2;
|
|
|
|
if (delta3 < 0)
|
|
|
|
delta3 = -delta3;
|
|
|
|
if (delta > delta2)
|
|
|
|
delta = delta2;
|
|
|
|
if (delta > delta3)
|
|
|
|
delta = delta3;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2018-03-01 07:22:47 +08:00
|
|
|
/*
|
|
|
|
* delta is now minimum absolute delta.
|
|
|
|
* Round down by 1 bit on general principles,
|
2020-01-08 05:55:34 +08:00
|
|
|
* and limit the entropy estimate to 11 bits.
|
2018-03-01 07:22:47 +08:00
|
|
|
*/
|
2022-02-09 21:43:25 +08:00
|
|
|
credit_entropy_bits(min_t(unsigned int, fls(delta >> 1), 11));
|
2005-04-17 06:20:36 +08:00
|
|
|
}
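As a worked example of the estimator: if the first-order delta is 100 ticks, the second-order delta is 10 and the third-order delta is 6, the minimum absolute delta is 6, so fls(6 >> 1) = fls(3) = 2 and two bits are credited. The stand-alone sketch below mirrors that arithmetic in user space, modeling fls() with __builtin_clzl; it is an illustration of the math, not the kernel function:
#include <stdio.h>
#include <stdlib.h>

/* Take the smallest of the absolute first/second/third-order deltas,
 * halve it, and count its bits, capped at 11 as in the kernel code. */
static int estimate_bits(long delta, long delta2, long delta3)
{
    delta = labs(delta);
    delta2 = labs(delta2);
    delta3 = labs(delta3);
    if (delta > delta2)
        delta = delta2;
    if (delta > delta3)
        delta = delta3;
    delta >>= 1;
    if (delta == 0)
        return 0;
    int fls = 64 - __builtin_clzl((unsigned long)delta);
    return fls < 11 ? fls : 11;
}

int main(void)
{
    /* deltas of 100, 10 and 6 ticks -> min is 6 -> fls(3) = 2 bits */
    printf("%d\n", estimate_bits(100, 10, 6));
    return 0;
}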
|
|
|
|
|
2006-01-12 04:17:38 +08:00
|
|
|
void add_input_randomness(unsigned int type, unsigned int code,
|
2022-01-15 21:57:22 +08:00
|
|
|
unsigned int value)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
static unsigned char last_value;
|
|
|
|
|
|
|
|
/* ignore autorepeat and the like */
|
|
|
|
if (value == last_value)
|
|
|
|
return;
|
|
|
|
|
|
|
|
last_value = value;
|
|
|
|
add_timer_randomness(&input_timer_state,
|
|
|
|
(type << 4) ^ code ^ (code >> 4) ^ value);
|
|
|
|
}
|
2006-10-11 13:43:58 +08:00
|
|
|
EXPORT_SYMBOL_GPL(add_input_randomness);
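The input core is the intended caller, invoking this once per event it passes along. A hedged sketch of such a call site (the wrapper and the EV_KEY constant spelled out as 0x01 are for illustration; only add_input_randomness() itself is the real interface):
#include <linux/random.h>

/* Roughly what an input-event path does for each key event it reports:
 * feed the event type, code and value into the timer-randomness machinery. */
static void example_report_key(unsigned int keycode, unsigned int pressed)
{
    add_input_randomness(0x01 /* EV_KEY */, keycode, pressed);
}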
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2012-07-02 19:52:16 +08:00
|
|
|
static DEFINE_PER_CPU(struct fast_pool, irq_randomness);
|
|
|
|
|
2022-01-10 00:48:58 +08:00
|
|
|
static u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
|
2014-06-16 04:59:24 +08:00
|
|
|
{
|
2022-01-15 21:57:22 +08:00
|
|
|
u32 *ptr = (u32 *)regs;
|
2017-06-08 07:01:32 +08:00
|
|
|
unsigned int idx;
|
2014-06-16 04:59:24 +08:00
|
|
|
|
|
|
|
if (regs == NULL)
|
|
|
|
return 0;
|
2017-06-08 07:01:32 +08:00
|
|
|
idx = READ_ONCE(f->reg_idx);
|
2022-01-10 00:48:58 +08:00
|
|
|
if (idx >= sizeof(struct pt_regs) / sizeof(u32))
|
2017-06-08 07:01:32 +08:00
|
|
|
idx = 0;
|
|
|
|
ptr += idx++;
|
|
|
|
WRITE_ONCE(f->reg_idx, idx);
|
2017-04-30 15:49:21 +08:00
|
|
|
return *ptr;
|
2014-06-16 04:59:24 +08:00
|
|
|
}
|
|
|
|
|
2021-12-07 20:17:33 +08:00
|
|
|
void add_interrupt_randomness(int irq)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2022-01-15 21:57:22 +08:00
|
|
|
struct fast_pool *fast_pool = this_cpu_ptr(&irq_randomness);
|
|
|
|
struct pt_regs *regs = get_irq_regs();
|
|
|
|
unsigned long now = jiffies;
|
|
|
|
cycles_t cycles = random_get_entropy();
|
2008-08-20 11:50:08 +08:00
|
|
|
|
2014-06-16 04:59:24 +08:00
|
|
|
if (cycles == 0)
|
|
|
|
cycles = get_reg(fast_pool, regs);
|
2008-08-20 11:50:08 +08:00
|
|
|
|
2022-02-11 00:01:27 +08:00
|
|
|
if (sizeof(cycles) == 8)
|
|
|
|
fast_pool->pool64[0] ^= cycles ^ rol64(now, 32) ^ irq;
|
|
|
|
else {
|
|
|
|
fast_pool->pool32[0] ^= cycles ^ irq;
|
|
|
|
fast_pool->pool32[1] ^= now;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (sizeof(unsigned long) == 8)
|
|
|
|
fast_pool->pool64[1] ^= regs ? instruction_pointer(regs) : _RET_IP_;
|
|
|
|
else {
|
|
|
|
fast_pool->pool32[2] ^= regs ? instruction_pointer(regs) : _RET_IP_;
|
|
|
|
fast_pool->pool32[3] ^= get_reg(fast_pool, regs);
|
|
|
|
}
|
|
|
|
|
|
|
|
fast_mix(fast_pool->pool32);
|
|
|
|
++fast_pool->count;
|
2008-08-20 11:50:08 +08:00
|
|
|
|
2018-04-12 01:27:52 +08:00
|
|
|
if (unlikely(crng_init == 0)) {
|
2022-02-09 21:43:25 +08:00
|
|
|
if (fast_pool->count >= 64 &&
|
2022-02-11 00:01:27 +08:00
|
|
|
crng_fast_load(fast_pool->pool32, sizeof(fast_pool->pool32)) > 0) {
|
2016-06-13 06:13:36 +08:00
|
|
|
fast_pool->count = 0;
|
|
|
|
fast_pool->last = now;
|
2022-02-09 08:56:35 +08:00
|
|
|
if (spin_trylock(&input_pool.lock)) {
|
2022-02-11 00:01:27 +08:00
|
|
|
_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
|
2022-02-09 08:56:35 +08:00
|
|
|
spin_unlock(&input_pool.lock);
|
|
|
|
}
|
2016-06-13 06:13:36 +08:00
|
|
|
}
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2022-01-15 21:57:22 +08:00
|
|
|
if ((fast_pool->count < 64) && !time_after(now, fast_pool->last + HZ))
|
2005-04-17 06:20:36 +08:00
|
|
|
return;
|
|
|
|
|
2022-01-13 00:18:08 +08:00
|
|
|
if (!spin_trylock(&input_pool.lock))
|
2014-06-11 10:46:37 +08:00
|
|
|
return;
|
2014-03-18 07:36:28 +08:00
|
|
|
|
2014-06-11 10:46:37 +08:00
|
|
|
fast_pool->last = now;
|
2022-02-11 00:01:27 +08:00
|
|
|
_mix_pool_bytes(&fast_pool->pool32, sizeof(fast_pool->pool32));
|
2022-01-13 00:18:08 +08:00
|
|
|
spin_unlock(&input_pool.lock);
|
2014-03-18 07:36:28 +08:00
|
|
|
|
2014-06-16 04:59:24 +08:00
|
|
|
fast_pool->count = 0;
|
2014-03-18 07:36:28 +08:00
|
|
|
|
2014-06-16 04:59:24 +08:00
|
|
|
/* award one bit for the contents of the fast pool */
|
2022-01-13 00:18:08 +08:00
|
|
|
credit_entropy_bits(1);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2016-05-02 14:14:34 +08:00
|
|
|
EXPORT_SYMBOL_GPL(add_interrupt_randomness);
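The only subtle part above is how the timestamp, jiffies, IRQ number and instruction pointer are packed into the fast pool words depending on word size. The stand-alone sketch below replays the 64-bit branch with made-up values and a local demo_pool type, purely to show which bytes end up XORed together; it is not the kernel's fast_pool:
#include <stdint.h>
#include <stdio.h>

union demo_pool {
    uint32_t pool32[4];
    uint64_t pool64[2];
};

static uint64_t rol64(uint64_t x, unsigned int r)
{
    return (x << r) | (x >> (64 - r));
}

int main(void)
{
    union demo_pool p = { { 0 } };
    uint64_t cycles = 0x1122334455667788ULL;   /* pretend cycle counter */
    uint64_t now = 4000000000ULL;              /* pretend jiffies */
    unsigned int irq = 29;
    uint64_t ip = 0xffffffff81234567ULL;       /* pretend instruction pointer */

    /* 64-bit case: cycles, rotated jiffies and the IRQ share one word,
     * the instruction pointer gets the other. */
    p.pool64[0] ^= cycles ^ rol64(now, 32) ^ irq;
    p.pool64[1] ^= ip;

    printf("%016llx %016llx\n",
           (unsigned long long)p.pool64[0],
           (unsigned long long)p.pool64[1]);
    return 0;
}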
|
2005-04-17 06:20:36 +08:00
|
|
|
|
[PATCH] BLOCK: Make it possible to disable the block layer [try #6]
Make it possible to disable the block layer. Not all embedded devices require
it, some can make do with just JFFS2, NFS, ramfs, etc - none of which require
the block layer to be present.
This patch does the following:
(*) Introduces CONFIG_BLOCK to disable the block layer, buffering and blockdev
support.
(*) Adds dependencies on CONFIG_BLOCK to any configuration item that controls
an item that uses the block layer. This includes:
(*) Block I/O tracing.
(*) Disk partition code.
(*) All filesystems that are block based, eg: Ext3, ReiserFS, ISOFS.
(*) The SCSI layer. As far as I can tell, even SCSI chardevs use the
block layer to do scheduling. Some drivers that use SCSI facilities -
such as USB storage - end up disabled indirectly from this.
(*) Various block-based device drivers, such as IDE and the old CDROM
drivers.
(*) MTD blockdev handling and FTL.
(*) JFFS - which uses set_bdev_super(), something it could avoid doing by
taking a leaf out of JFFS2's book.
(*) Makes most of the contents of linux/blkdev.h, linux/buffer_head.h and
linux/elevator.h contingent on CONFIG_BLOCK being set. sector_div() is,
however, still used in places, and so is still available.
(*) Also made contingent are the contents of linux/mpage.h, linux/genhd.h and
parts of linux/fs.h.
(*) Makes a number of files in fs/ contingent on CONFIG_BLOCK.
(*) Makes mm/bounce.c (bounce buffering) contingent on CONFIG_BLOCK.
(*) set_page_dirty() doesn't call __set_page_dirty_buffers() if CONFIG_BLOCK
is not enabled.
(*) fs/no-block.c is created to hold out-of-line stubs and things that are
required when CONFIG_BLOCK is not set:
(*) Default blockdev file operations (to give error ENODEV on opening).
(*) Makes some /proc changes:
(*) /proc/devices does not list any blockdevs.
(*) /proc/diskstats and /proc/partitions are contingent on CONFIG_BLOCK.
(*) Makes some compat ioctl handling contingent on CONFIG_BLOCK.
(*) If CONFIG_BLOCK is not defined, makes sys_quotactl() return -ENODEV if
given command other than Q_SYNC or if a special device is specified.
(*) In init/do_mounts.c, no reference is made to the blockdev routines if
CONFIG_BLOCK is not defined. This does not prohibit NFS roots or JFFS2.
(*) The bdflush, ioprio_set and ioprio_get syscalls can now be absent (return
error ENOSYS by way of cond_syscall if so).
(*) The seclvl_bd_claim() and seclvl_bd_release() security calls do nothing if
CONFIG_BLOCK is not set, since they can't then happen.
Signed-Off-By: David Howells <dhowells@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2006-10-01 02:45:40 +08:00
|
|
|
#ifdef CONFIG_BLOCK
|
2005-04-17 06:20:36 +08:00
|
|
|
void add_disk_randomness(struct gendisk *disk)
|
|
|
|
{
|
|
|
|
if (!disk || !disk->random)
|
|
|
|
return;
|
|
|
|
/* first major is 1, so we get >= 0x200 here */
|
2008-09-03 15:01:48 +08:00
|
|
|
add_timer_randomness(disk->random, 0x100 + disk_devt(disk));
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2014-04-25 15:36:37 +08:00
|
|
|
EXPORT_SYMBOL_GPL(add_disk_randomness);
|
2006-10-01 02:45:40 +08:00
|
|
|
#endif
|
2005-04-17 06:20:36 +08:00
|
|
|
|
random: try to actively add entropy rather than passively wait for it
For 5.3 we had to revert a nice ext4 IO pattern improvement, because it
caused a bootup regression due to lack of entropy at bootup together
with arguably broken user space that was asking for secure random
numbers when it really didn't need to.
See commit 72dbcf721566 (Revert "ext4: make __ext4_get_inode_loc plug").
This aims to solve the issue by actively generating entropy noise using
the CPU cycle counter when waiting for the random number generator to
initialize. This only works when you have a high-frequency time stamp
counter available, but that's the case on all modern x86 CPUs, and on
most other modern CPUs too.
What we do is to generate jitter entropy from the CPU cycle counter
under a somewhat complex load: calling the scheduler while also
guaranteeing a certain amount of timing noise by also triggering a
timer.
I'm sure we can tweak this, and that people will want to look at other
alternatives, but there's been a number of papers written on jitter
entropy, and this should really be fairly conservative by crediting one
bit of entropy for every timer-induced jump in the cycle counter. Not
because the timer itself would be all that unpredictable, but because
the interaction between the timer and the loop is going to be.
Even if (and perhaps particularly if) the timer actually happens on
another CPU, the cacheline interaction between the loop that reads the
cycle counter and the timer itself firing is going to add perturbations
to the cycle counter values that get mixed into the entropy pool.
As Thomas pointed out, with a modern out-of-order CPU, even quite simple
loops show a fair amount of hard-to-predict timing variability even in
the absence of external interrupts. But this tries to take that further
by actually having a fairly complex interaction.
This is not going to solve the entropy issue for architectures that have
no CPU cycle counter, but it's not clear how (and if) that is solvable,
and the hardware in question is largely starting to be irrelevant. And
by doing this we can at least avoid some of the even more contentious
approaches (like making the entropy waiting time out in order to avoid
the possibly unbounded waiting).
Cc: Ahmed Darwish <darwish.07@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Nicholas Mc Guire <hofrat@opentech.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Willy Tarreau <w@1wt.eu>
Cc: Alexander E. Patrakov <patrakov@gmail.com>
Cc: Lennart Poettering <mzxreary@0pointer.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
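A rough user-space analog of the idea, for illustration only: arm a one-shot timer, spin reading a high-resolution clock, and watch the largest observed delta jump when the timer fires. It uses CLOCK_MONOTONIC and setitimer() as stand-ins for the kernel's random_get_entropy() and timer/scheduler interplay:
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

static volatile sig_atomic_t fired;

static void on_alarm(int sig)
{
    (void)sig;
    fired = 1;
}

static long long nsec_now(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
    struct itimerval it = { .it_value = { .tv_usec = 10000 } }; /* ~10 ms */
    long long prev, now, biggest = 0;

    signal(SIGALRM, on_alarm);
    setitimer(ITIMER_REAL, &it, NULL);

    prev = nsec_now();
    while (!fired) {
        now = nsec_now();
        if (now - prev > biggest)
            biggest = now - prev;   /* the timer-induced jump shows up here */
        prev = now;
    }
    printf("largest observed jump: %lld ns\n", biggest);
    return 0;
}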
2019-09-29 07:53:52 +08:00
|
|
|
/*
|
|
|
|
* Each time the timer fires, we expect that we got an unpredictable
|
|
|
|
* jump in the cycle counter. Even if the timer is running on another
|
|
|
|
* CPU, the timer activity will be touching the stack of the CPU that is
|
|
|
|
* generating entropy.
|
|
|
|
*
|
|
|
|
* Note that we don't re-arm the timer in the timer itself - we are
|
|
|
|
* happy to be scheduled away, since that just makes the load more
|
|
|
|
* complex, but we do not want the timer to keep ticking unless the
|
|
|
|
* entropy loop is running.
|
|
|
|
*
|
|
|
|
* So the re-arming always happens in the entropy loop itself.
|
|
|
|
*/
|
|
|
|
static void entropy_timer(struct timer_list *t)
|
|
|
|
{
|
2022-01-13 00:18:08 +08:00
|
|
|
credit_entropy_bits(1);
|
2019-09-29 07:53:52 +08:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If we have an actual cycle counter, see if we can
|
|
|
|
* generate enough entropy with timing noise
|
|
|
|
*/
|
|
|
|
static void try_to_generate_entropy(void)
|
|
|
|
{
|
|
|
|
struct {
|
|
|
|
unsigned long now;
|
|
|
|
struct timer_list timer;
|
|
|
|
} stack;
|
|
|
|
|
|
|
|
stack.now = random_get_entropy();
|
|
|
|
|
|
|
|
/* Slow counter - or none. Don't even bother */
|
|
|
|
if (stack.now == random_get_entropy())
|
|
|
|
return;
|
|
|
|
|
|
|
|
timer_setup_on_stack(&stack.timer, entropy_timer, 0);
|
|
|
|
while (!crng_ready()) {
|
|
|
|
if (!timer_pending(&stack.timer))
|
2022-01-15 21:57:22 +08:00
|
|
|
mod_timer(&stack.timer, jiffies + 1);
|
2022-01-13 00:18:08 +08:00
|
|
|
mix_pool_bytes(&stack.now, sizeof(stack.now));
|
2019-09-29 07:53:52 +08:00
|
|
|
schedule();
|
|
|
|
stack.now = random_get_entropy();
|
|
|
|
}
|
|
|
|
|
|
|
|
del_timer_sync(&stack.timer);
|
|
|
|
destroy_timer_on_stack(&stack.timer);
|
2022-01-13 00:18:08 +08:00
|
|
|
mix_pool_bytes(&stack.now, sizeof(stack.now));
|
2019-09-29 07:53:52 +08:00
|
|
|
}
|
|
|
|
|
2022-02-08 19:40:14 +08:00
|
|
|
static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
|
|
|
|
static int __init parse_trust_cpu(char *arg)
|
|
|
|
{
|
|
|
|
return kstrtobool(arg, &trust_cpu);
|
|
|
|
}
|
|
|
|
early_param("random.trust_cpu", parse_trust_cpu);
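In practice this means the CONFIG_RANDOM_TRUST_CPU default can be overridden on the kernel command line: booting with random.trust_cpu=off keeps the arch RNG contribution from marking the crng as initialized in rand_initialize() below, and random.trust_cpu=on does the opposite.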
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/*
|
2022-02-08 19:40:14 +08:00
|
|
|
* Note that setup_arch() may call add_device_randomness()
|
|
|
|
* long before we get here. This allows seeding of the pools
|
|
|
|
* with some platform dependent data very early in the boot
|
|
|
|
* process. But it limits our options here. We must use
|
|
|
|
* statically allocated structures that already have all
|
|
|
|
* initializations complete at compile time. We should also
|
|
|
|
* take care not to overwrite the precious per platform data
|
|
|
|
* we were given.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2022-02-08 19:40:14 +08:00
|
|
|
int __init rand_initialize(void)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2022-02-09 21:43:25 +08:00
|
|
|
size_t i;
|
2012-07-04 22:38:30 +08:00
|
|
|
ktime_t now = ktime_get_real();
|
2022-02-08 19:40:14 +08:00
|
|
|
bool arch_init = true;
|
2012-07-04 22:38:30 +08:00
|
|
|
unsigned long rv;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2022-02-09 21:43:25 +08:00
|
|
|
for (i = 0; i < BLAKE2S_BLOCK_SIZE; i += sizeof(rv)) {
|
2022-02-08 19:40:14 +08:00
|
|
|
if (!arch_get_random_seed_long_early(&rv) &&
|
|
|
|
!arch_get_random_long_early(&rv)) {
|
|
|
|
rv = random_get_entropy();
|
|
|
|
arch_init = false;
|
|
|
|
}
|
2022-02-08 19:44:28 +08:00
|
|
|
mix_pool_bytes(&rv, sizeof(rv));
|
2022-02-08 19:40:14 +08:00
|
|
|
}
|
2022-02-08 19:44:28 +08:00
|
|
|
mix_pool_bytes(&now, sizeof(now));
|
|
|
|
mix_pool_bytes(utsname(), sizeof(*(utsname())));
|
|
|
|
|
2022-02-07 22:08:49 +08:00
|
|
|
extract_entropy(base_crng.key, sizeof(base_crng.key));
|
2022-02-10 05:46:48 +08:00
|
|
|
++base_crng.generation;
|
|
|
|
|
2022-02-08 19:40:14 +08:00
|
|
|
if (arch_init && trust_cpu && crng_init < 2) {
|
|
|
|
crng_init = 2;
|
|
|
|
pr_notice("crng init done (trusting CPU's manufacturer)\n");
|
|
|
|
}
|
|
|
|
|
2018-04-25 13:12:32 +08:00
|
|
|
if (ratelimit_disable) {
|
|
|
|
urandom_warning.interval = 0;
|
|
|
|
unseeded_warning.interval = 0;
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
return 0;
|
|
|
|
}
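For scale: with 8-byte longs the seeding loop above runs BLAKE2S_BLOCK_SIZE / sizeof(rv) = 64 / 8 = 8 times (16 times with 4-byte longs), so one BLAKE2s block's worth of arch RNG output, or cycle-counter fallback values, is mixed in alongside the boot-time clock and utsname(); arch_init, and therefore the trust_cpu shortcut, survives only if every one of those words actually came from the architectural RNG.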
|
|
|
|
|
2006-10-01 02:45:40 +08:00
|
|
|
#ifdef CONFIG_BLOCK
|
2005-04-17 06:20:36 +08:00
|
|
|
void rand_initialize_disk(struct gendisk *disk)
|
|
|
|
{
|
|
|
|
struct timer_rand_state *state;
|
|
|
|
|
|
|
|
/*
|
2007-03-29 05:22:33 +08:00
|
|
|
* If kzalloc returns null, we just won't use that entropy
|
2005-04-17 06:20:36 +08:00
|
|
|
* source.
|
|
|
|
*/
|
2007-03-29 05:22:33 +08:00
|
|
|
state = kzalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
|
2013-11-04 05:40:53 +08:00
|
|
|
if (state) {
|
|
|
|
state->last_time = INITIAL_JIFFIES;
|
2005-04-17 06:20:36 +08:00
|
|
|
disk->random = state;
|
2013-11-04 05:40:53 +08:00
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2006-10-01 02:45:40 +08:00
|
|
|
#endif
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2022-01-15 21:57:22 +08:00
|
|
|
static ssize_t urandom_read(struct file *file, char __user *buf, size_t nbytes,
|
|
|
|
loff_t *ppos)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2016-06-13 22:10:51 +08:00
|
|
|
static int maxwarn = 10;
|
2013-11-03 19:54:51 +08:00
|
|
|
|
2016-06-13 06:13:36 +08:00
|
|
|
if (!crng_ready() && maxwarn > 0) {
|
2016-06-13 22:10:51 +08:00
|
|
|
maxwarn--;
|
2018-04-25 13:12:32 +08:00
|
|
|
if (__ratelimit(&urandom_warning))
|
2019-06-08 02:25:15 +08:00
|
|
|
pr_notice("%s: uninitialized urandom read (%zd bytes read)\n",
|
|
|
|
current->comm, nbytes);
|
2016-06-13 22:10:51 +08:00
|
|
|
}
|
2019-12-23 16:20:45 +08:00
|
|
|
|
2022-02-10 23:40:44 +08:00
|
|
|
return get_random_bytes_user(buf, nbytes);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
|
|
|
|
2022-01-15 21:57:22 +08:00
|
|
|
static ssize_t random_read(struct file *file, char __user *buf, size_t nbytes,
|
|
|
|
loff_t *ppos)
|
2019-12-23 16:20:48 +08:00
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = wait_for_random_bytes();
|
|
|
|
if (ret != 0)
|
|
|
|
return ret;
|
2022-02-10 23:40:44 +08:00
|
|
|
return get_random_bytes_user(buf, nbytes);
|
2019-12-23 16:20:48 +08:00
|
|
|
}
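From user space, the same wait-for-initialization behavior as random_read() is available through getrandom(2), which follows these semantics; a minimal sketch:
#include <stdio.h>
#include <sys/random.h>

int main(void)
{
    unsigned char buf[16];

    /* Blocks until the crng is initialized, like the blocking read above;
     * pass GRND_NONBLOCK to get -1/EAGAIN instead of waiting. */
    if (getrandom(buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
        perror("getrandom");
        return 1;
    }
    for (unsigned i = 0; i < sizeof(buf); i++)
        printf("%02x", buf[i]);
    printf("\n");
    return 0;
}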
|
|
|
|
|
2022-01-15 21:57:22 +08:00
|
|
|
static __poll_t random_poll(struct file *file, poll_table *wait)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2018-06-29 00:43:44 +08:00
|
|
|
__poll_t mask;
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2019-12-23 16:20:48 +08:00
|
|
|
poll_wait(file, &crng_init_wait, wait);
|
2018-06-29 00:43:44 +08:00
|
|
|
poll_wait(file, &random_write_wait, wait);
|
|
|
|
mask = 0;
|
2019-12-23 16:20:48 +08:00
|
|
|
if (crng_ready())
|
2018-02-12 06:34:03 +08:00
|
|
|
mask |= EPOLLIN | EPOLLRDNORM;
|
2022-02-05 21:00:58 +08:00
|
|
|
if (input_pool.entropy_count < POOL_MIN_BITS)
|
2018-02-12 06:34:03 +08:00
|
|
|
mask |= EPOLLOUT | EPOLLWRNORM;
|
2005-04-17 06:20:36 +08:00
|
|
|
return mask;
|
|
|
|
}
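For a consumer, the practical meaning of the mask is simple: POLLIN appears once the crng is ready and POLLOUT appears when the pool wants more writes. A minimal sketch that waits for readability:
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd;

    pfd.fd = open("/dev/random", O_RDONLY);
    if (pfd.fd < 0) {
        perror("open");
        return 1;
    }
    pfd.events = POLLIN;   /* becomes readable once the crng is ready */

    if (poll(&pfd, 1, -1) == 1 && (pfd.revents & POLLIN))
        printf("crng is initialized; reads will not block\n");

    close(pfd.fd);
    return 0;
}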
|
|
|
|
|
2022-02-09 21:43:25 +08:00
|
|
|
static int write_pool(const char __user *ubuf, size_t count)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2022-02-09 21:43:25 +08:00
|
|
|
size_t len;
|
2022-02-10 01:42:13 +08:00
|
|
|
int ret = 0;
|
2022-02-09 21:43:25 +08:00
|
|
|
u8 block[BLAKE2S_BLOCK_SIZE];
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2022-02-09 21:43:25 +08:00
|
|
|
while (count) {
|
|
|
|
len = min(count, sizeof(block));
|
2022-02-10 01:42:13 +08:00
|
|
|
if (copy_from_user(block, ubuf, len)) {
|
|
|
|
ret = -EFAULT;
|
|
|
|
goto out;
|
|
|
|
}
|
2022-02-09 21:43:25 +08:00
|
|
|
count -= len;
|
|
|
|
ubuf += len;
|
|
|
|
mix_pool_bytes(block, len);
|
2008-02-06 17:37:20 +08:00
|
|
|
cond_resched();
|
2005-04-17 06:20:36 +08:00
|
|
|
}
|
2007-05-30 10:58:10 +08:00
|
|
|
|
2022-02-10 01:42:13 +08:00
|
|
|
out:
|
|
|
|
memzero_explicit(block, sizeof(block));
|
|
|
|
return ret;
|
2007-05-30 10:58:10 +08:00
|
|
|
}
|
|
|
|
|
2008-04-29 16:02:55 +08:00
|
|
|
static ssize_t random_write(struct file *file, const char __user *buffer,
|
|
|
|
size_t count, loff_t *ppos)
|
2007-05-30 10:58:10 +08:00
|
|
|
{
|
2022-02-09 21:43:25 +08:00
|
|
|
int ret;
|
2007-05-30 10:58:10 +08:00
|
|
|
|
2022-01-13 00:18:08 +08:00
|
|
|
ret = write_pool(buffer, count);
|
2007-05-30 10:58:10 +08:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
return (ssize_t)count;
|
2005-04-17 06:20:36 +08:00
|
|
|
}
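Because write_pool() only mixes and never credits entropy, any user may write; the classic use is restoring a saved seed at boot. A minimal sketch (the seed.bin path is a placeholder):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    int in = open("seed.bin", O_RDONLY);      /* stand-in for a saved seed file */
    int out = open("/dev/random", O_WRONLY);
    ssize_t n;

    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, n);   /* mixed into the input pool, no credit given */

    close(in);
    close(out);
    return 0;
}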
|
|
|
|
|
2008-04-29 16:02:58 +08:00
|
|
|
static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
int size, ent_count;
|
|
|
|
int __user *p = (int __user *)arg;
|
|
|
|
int retval;
|
|
|
|
|
|
|
|
switch (cmd) {
|
|
|
|
case RNDGETENTCNT:
|
2008-04-29 16:02:58 +08:00
|
|
|
/* inherently racy, no point locking */
|
random: use linear min-entropy accumulation crediting
30e37ec516ae ("random: account for entropy loss due to overwrites")
assumed that adding new entropy to the LFSR pool probabilistically
cancelled out old entropy there, so entropy was credited asymptotically,
approximating Shannon entropy of independent sources (rather than a
stronger min-entropy notion) using 1/8th fractional bits and replacing
a constant 2-2/√𝑒 term (~0.786938) with 3/4 (0.75) to slightly
underestimate it. This wasn't superb, but it was perhaps better than
nothing, so that's what was done. Which entropy specifically was being
cancelled out and how much precisely each time is hard to tell, though
as I showed with the attack code in my previous commit, a motivated
adversary with sufficient information can actually cancel out
everything.
Since we're no longer using an LFSR for entropy accumulation, this
probabilistic cancellation is no longer relevant. Rather, we're now
using a computational hash function as the accumulator and we've
switched to working in the random oracle model, from which we can now
revisit the question of min-entropy accumulation, which is done in
detail in <https://eprint.iacr.org/2019/198>.
Consider a long input bit string that is built by concatenating various
smaller independent input bit strings. Each one of these inputs has a
designated min-entropy, which is what we're passing to
credit_entropy_bits(h). When we pass the concatenation of these to a
random oracle, it means that an adversary trying to receive back the
same reply as us would need to become certain about each part of the
concatenated bit string we passed in, which means becoming certain about
all of those h values. That means we can estimate the accumulation by
simply adding up the h values in calls to credit_entropy_bits(h);
there's no probabilistic cancellation at play like there was said to be
for the LFSR. Incidentally, this is also what other entropy accumulators
based on computational hash functions do as well.
So this commit replaces credit_entropy_bits(h) with essentially `total =
min(POOL_BITS, total + h)`, done with a cmpxchg loop as before.
What if we're wrong and the above is nonsense? It's not, but let's
assume we don't want the actual _behavior_ of the code to change much.
Currently that behavior is not extracting from the input pool until it
has 128 bits of entropy in it. With the old algorithm, we'd hit that
magic 128 number after roughly 256 calls to credit_entropy_bits(1). So,
we can retain more or less the old behavior by waiting to extract from
the input pool until it hits 256 bits of entropy using the new code. For
people concerned about this change, it means that there's not that much
practical behavioral change. And for folks actually trying to model
the behavior rigorously, it means that we have an even higher margin
against attacks.
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
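The saturating accumulation described here is easy to picture outside the kernel. The sketch below reproduces just the arithmetic, total = min(POOL_BITS, total + nbits), with a C11 compare-exchange loop standing in for the kernel's cmpxchg; the names mirror the kernel's for readability, but this is an illustration, not the driver code:
#include <stdatomic.h>
#include <stdio.h>

#define POOL_BITS 256   /* matches the 256-bit threshold discussed above */

static _Atomic unsigned int entropy_count;

/* Saturating add: total = min(POOL_BITS, total + nbits). */
static void credit_entropy_bits(unsigned int nbits)
{
    unsigned int orig, add;

    orig = atomic_load(&entropy_count);
    do {
        add = orig + nbits;
        if (add > POOL_BITS)
            add = POOL_BITS;
    } while (!atomic_compare_exchange_weak(&entropy_count, &orig, add));
}

int main(void)
{
    credit_entropy_bits(200);
    credit_entropy_bits(100);   /* saturates at 256 */
    printf("%u\n", atomic_load(&entropy_count));
    return 0;
}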
2022-02-03 20:28:06 +08:00
|
|
|
if (put_user(input_pool.entropy_count, p))
|
2005-04-17 06:20:36 +08:00
|
|
|
return -EFAULT;
|
|
|
|
return 0;
|
|
|
|
case RNDADDTOENTCNT:
|
|
|
|
if (!capable(CAP_SYS_ADMIN))
|
|
|
|
return -EPERM;
|
|
|
|
if (get_user(ent_count, p))
|
|
|
|
return -EFAULT;
|
2022-02-04 08:45:53 +08:00
|
|
|
if (ent_count < 0)
|
|
|
|
return -EINVAL;
|
|
|
|
credit_entropy_bits(ent_count);
|
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
case RNDADDENTROPY:
|
|
|
|
if (!capable(CAP_SYS_ADMIN))
|
|
|
|
return -EPERM;
|
|
|
|
if (get_user(ent_count, p++))
|
|
|
|
return -EFAULT;
|
|
|
|
if (ent_count < 0)
|
|
|
|
return -EINVAL;
|
|
|
|
if (get_user(size, p++))
|
|
|
|
return -EFAULT;
|
2022-01-13 00:18:08 +08:00
|
|
|
retval = write_pool((const char __user *)p, size);
|
2005-04-17 06:20:36 +08:00
|
|
|
if (retval < 0)
|
|
|
|
return retval;
|
2022-02-04 08:45:53 +08:00
|
|
|
credit_entropy_bits(ent_count);
|
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
case RNDZAPENTCNT:
|
|
|
|
case RNDCLEARPOOL:
|
2013-11-03 20:56:17 +08:00
|
|
|
/*
|
|
|
|
* Clear the entropy pool counters. We no longer clear
|
|
|
|
* the entropy pool, as that's silly.
|
|
|
|
*/
|
2005-04-17 06:20:36 +08:00
|
|
|
if (!capable(CAP_SYS_ADMIN))
|
|
|
|
return -EPERM;
|
2022-02-05 21:00:58 +08:00
|
|
|
if (xchg(&input_pool.entropy_count, 0)) {
|
2022-01-29 06:44:03 +08:00
|
|
|
wake_up_interruptible(&random_write_wait);
|
|
|
|
kill_fasync(&fasync, SIGIO, POLL_OUT);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
return 0;
|
2018-04-12 04:32:17 +08:00
|
|
|
case RNDRESEEDCRNG:
|
|
|
|
if (!capable(CAP_SYS_ADMIN))
|
|
|
|
return -EPERM;
|
|
|
|
if (crng_init < 2)
|
|
|
|
return -ENODATA;
|
2022-02-07 06:51:41 +08:00
|
|
|
crng_reseed();
|
2018-04-12 04:32:17 +08:00
|
|
|
return 0;
|
2005-04-17 06:20:36 +08:00
|
|
|
default:
|
|
|
|
return -EINVAL;
|
|
|
|
}
|
|
|
|
}
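From user space, RNDADDENTROPY is the only path that both contributes bytes and credits entropy, and it requires CAP_SYS_ADMIN. A minimal sketch (the 0xAB fill is a placeholder for data from a source the caller actually trusts):
#include <fcntl.h>
#include <linux/random.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    struct {
        struct rand_pool_info info;
        unsigned char data[16];
    } req;
    int fd = open("/dev/random", O_WRONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(req.data, 0xAB, sizeof(req.data));   /* placeholder payload */
    req.info.entropy_count = 64;                /* bits to credit */
    req.info.buf_size = sizeof(req.data);       /* bytes following the header */

    if (ioctl(fd, RNDADDENTROPY, &req.info) < 0) {
        perror("RNDADDENTROPY");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}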
|
|
|
|
|
random: add async notification support to /dev/random
Add async notification support to /dev/random.
A little test case is below. Without this patch, you get:
$ ./async-random
Drained the pool
Found more randomness
With it, you get:
$ ./async-random
Drained the pool
SIGIO
Found more randomness
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <errno.h>
#include <fcntl.h>
static void handler(int sig)
{
printf("SIGIO\n");
}
int main(int argc, char **argv)
{
int fd, n, err, flags;
if(signal(SIGIO, handler) < 0){
perror("setting SIGIO handler");
exit(1);
}
fd = open("/dev/random", O_RDONLY);
if(fd < 0){
perror("open");
exit(1);
}
flags = fcntl(fd, F_GETFL);
if (flags < 0){
perror("getting flags");
exit(1);
}
flags |= O_NONBLOCK;
if (fcntl(fd, F_SETFL, flags) < 0){
perror("setting flags");
exit(1);
}
while((err = read(fd, &n, sizeof(n))) > 0) ;
if(err == 0){
printf("random returned 0\n");
exit(1);
}
else if(errno != EAGAIN){
perror("read");
exit(1);
}
flags |= O_ASYNC;
if (fcntl(fd, F_SETFL, flags) < 0){
perror("setting flags");
exit(1);
}
if (fcntl(fd, F_SETOWN, getpid()) < 0) {
perror("Setting SIGIO");
exit(1);
}
printf("Drained the pool\n");
read(fd, &n, sizeof(n));
printf("Found more randomness\n");
return(0);
}
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-29 16:03:08 +08:00
|
|
|
static int random_fasync(int fd, struct file *filp, int on)
|
|
|
|
{
|
|
|
|
return fasync_helper(fd, filp, on, &fasync);
|
|
|
|
}
|
|
|
|
|
2007-02-12 16:55:32 +08:00
|
|
|
const struct file_operations random_fops = {
|
2022-01-15 21:57:22 +08:00
|
|
|
.read = random_read,
|
2005-04-17 06:20:36 +08:00
|
|
|
.write = random_write,
|
2022-01-15 21:57:22 +08:00
|
|
|
.poll = random_poll,
|
2008-04-29 16:02:58 +08:00
|
|
|
.unlocked_ioctl = random_ioctl,
|
2018-09-07 17:10:23 +08:00
|
|
|
.compat_ioctl = compat_ptr_ioctl,
|
2008-04-29 16:03:08 +08:00
|
|
|
.fasync = random_fasync,
|
llseek: automatically add .llseek fop
All file_operations should get a .llseek operation so we can make
nonseekable_open the default for future file operations without a
.llseek pointer.
The three cases that we can automatically detect are no_llseek, seq_lseek
and default_llseek. For cases where we can we can automatically prove that
the file offset is always ignored, we use noop_llseek, which maintains
the current behavior of not returning an error from a seek.
New drivers should normally not use noop_llseek but instead use no_llseek
and call nonseekable_open at open time. Existing drivers can be converted
to do the same when the maintainer knows for certain that no user code
relies on calling seek on the device file.
The generated code is often incorrectly indented and right now contains
comments that clarify for each added line why a specific variant was
chosen. In the version that gets submitted upstream, the comments will
be gone and I will manually fix the indentation, because there does not
seem to be a way to do that using coccinelle.
Some amount of new code is currently sitting in linux-next that should get
the same modifications, which I will do at the end of the merge window.
Many thanks to Julia Lawall for helping me learn to write a semantic
patch that does all this.
===== begin semantic patch =====
// This adds an llseek= method to all file operations,
// as a preparation for making no_llseek the default.
//
// The rules are
// - use no_llseek explicitly if we do nonseekable_open
// - use seq_lseek for sequential files
// - use default_llseek if we know we access f_pos
// - use noop_llseek if we know we don't access f_pos,
// but we still want to allow users to call lseek
//
@ open1 exists @
identifier nested_open;
@@
nested_open(...)
{
<+...
nonseekable_open(...)
...+>
}
@ open exists@
identifier open_f;
identifier i, f;
identifier open1.nested_open;
@@
int open_f(struct inode *i, struct file *f)
{
<+...
(
nonseekable_open(...)
|
nested_open(...)
)
...+>
}
@ read disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ read_no_fpos disable optional_qualifier exists @
identifier read_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
{
... when != off
}
@ write @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
expression E;
identifier func;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
<+...
(
*off = E
|
*off += E
|
func(..., off, ...)
|
E = *off
)
...+>
}
@ write_no_fpos @
identifier write_f;
identifier f, p, s, off;
type ssize_t, size_t, loff_t;
@@
ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
{
... when != off
}
@ fops0 @
identifier fops;
@@
struct file_operations fops = {
...
};
@ has_llseek depends on fops0 @
identifier fops0.fops;
identifier llseek_f;
@@
struct file_operations fops = {
...
.llseek = llseek_f,
...
};
@ has_read depends on fops0 @
identifier fops0.fops;
identifier read_f;
@@
struct file_operations fops = {
...
.read = read_f,
...
};
@ has_write depends on fops0 @
identifier fops0.fops;
identifier write_f;
@@
struct file_operations fops = {
...
.write = write_f,
...
};
@ has_open depends on fops0 @
identifier fops0.fops;
identifier open_f;
@@
struct file_operations fops = {
...
.open = open_f,
...
};
// use no_llseek if we call nonseekable_open
////////////////////////////////////////////
@ nonseekable1 depends on !has_llseek && has_open @
identifier fops0.fops;
identifier nso ~= "nonseekable_open";
@@
struct file_operations fops = {
... .open = nso, ...
+.llseek = no_llseek, /* nonseekable */
};
@ nonseekable2 depends on !has_llseek @
identifier fops0.fops;
identifier open.open_f;
@@
struct file_operations fops = {
... .open = open_f, ...
+.llseek = no_llseek, /* open uses nonseekable */
};
// use seq_lseek for sequential files
/////////////////////////////////////
@ seq depends on !has_llseek @
identifier fops0.fops;
identifier sr ~= "seq_read";
@@
struct file_operations fops = {
... .read = sr, ...
+.llseek = seq_lseek, /* we have seq_read */
};
// use default_llseek if there is a readdir
///////////////////////////////////////////
@ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier readdir_e;
@@
// any other fop is used that changes pos
struct file_operations fops = {
... .readdir = readdir_e, ...
+.llseek = default_llseek, /* readdir is present */
};
// use default_llseek if at least one of read/write touches f_pos
/////////////////////////////////////////////////////////////////
@ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read.read_f;
@@
// read fops use offset
struct file_operations fops = {
... .read = read_f, ...
+.llseek = default_llseek, /* read accesses f_pos */
};
@ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write.write_f;
@@
// write fops use offset
struct file_operations fops = {
... .write = write_f, ...
+ .llseek = default_llseek, /* write accesses f_pos */
};
// Use noop_llseek if neither read nor write accesses f_pos
///////////////////////////////////////////////////////////
@ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
identifier write_no_fpos.write_f;
@@
// write fops use offset
struct file_operations fops = {
...
.write = write_f,
.read = read_f,
...
+.llseek = noop_llseek, /* read and write both use no f_pos */
};
@ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier write_no_fpos.write_f;
@@
struct file_operations fops = {
... .write = write_f, ...
+.llseek = noop_llseek, /* write uses no f_pos */
};
@ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
identifier read_no_fpos.read_f;
@@
struct file_operations fops = {
... .read = read_f, ...
+.llseek = noop_llseek, /* read uses no f_pos */
};
@ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
identifier fops0.fops;
@@
struct file_operations fops = {
...
+.llseek = noop_llseek, /* no read or write fn */
};
===== End semantic patch =====
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Julia Lawall <julia@diku.dk>
Cc: Christoph Hellwig <hch@infradead.org>
2010-08-16 00:52:59 +08:00
|
|
|
.llseek = noop_llseek,
|
2005-04-17 06:20:36 +08:00
|
|
|
};
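For concreteness, here is a small hypothetical driver fragment (not taken from random.c; foo_open, foo_read and foo_fops are invented names) showing what the semantic patch quoted above would do to it: because the open method calls nonseekable_open(), rule nonseekable2 adds the explicit .llseek = no_llseek initializer.
#include <linux/fs.h>
#include <linux/module.h>

static int foo_open(struct inode *inode, struct file *file)
{
	/* marks the file nonseekable; this is what rules open1/nonseekable2 match */
	return nonseekable_open(inode, file);
}

static ssize_t foo_read(struct file *file, char __user *buf,
			size_t count, loff_t *ppos)
{
	return 0;	/* never touches *ppos */
}

static const struct file_operations foo_fops = {
	.owner  = THIS_MODULE,
	.open   = foo_open,
	.read   = foo_read,
	.llseek = no_llseek,	/* added by the patch: open uses nonseekable */
};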
|
|
|
|
|
2007-02-12 16:55:32 +08:00
|
|
|
const struct file_operations urandom_fops = {
|
2022-01-15 21:57:22 +08:00
|
|
|
.read = urandom_read,
|
2005-04-17 06:20:36 +08:00
|
|
|
.write = random_write,
|
2008-04-29 16:02:58 +08:00
|
|
|
.unlocked_ioctl = random_ioctl,
|
2019-12-18 01:24:55 +08:00
|
|
|
.compat_ioctl = compat_ptr_ioctl,
|
random: add async notification support to /dev/random
Add async notification support to /dev/random.
A little test case is below. Without this patch, you get:
$ ./async-random
Drained the pool
Found more randomness
With it, you get:
$ ./async-random
Drained the pool
SIGIO
Found more randomness
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
static void handler(int sig)
{
printf("SIGIO\n");
}
int main(int argc, char **argv)
{
int fd, n, err, flags;
if (signal(SIGIO, handler) == SIG_ERR) {
perror("setting SIGIO handler");
exit(1);
}
fd = open("/dev/random", O_RDONLY);
if(fd < 0){
perror("open");
exit(1);
}
flags = fcntl(fd, F_GETFL);
if (flags < 0){
perror("getting flags");
exit(1);
}
flags |= O_NONBLOCK;
if (fcntl(fd, F_SETFL, flags) < 0){
perror("setting flags");
exit(1);
}
while((err = read(fd, &n, sizeof(n))) > 0) ;
if(err == 0){
printf("random returned 0\n");
exit(1);
}
else if(errno != EAGAIN){
perror("read");
exit(1);
}
flags |= O_ASYNC;
if (fcntl(fd, F_SETFL, flags) < 0){
perror("setting flags");
exit(1);
}
if (fcntl(fd, F_SETOWN, getpid()) < 0) {
perror("Setting SIGIO");
exit(1);
}
printf("Drained the pool\n");
read(fd, &n, sizeof(n));
printf("Found more randomness\n");
return(0);
}
Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Matt Mackall <mpm@selenic.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-29 16:03:08 +08:00
|
|
|
.fasync = random_fasync,
|
llseek: automatically add .llseek fop
2010-08-16 00:52:59 +08:00
|
|
|
.llseek = noop_llseek,
|
2005-04-17 06:20:36 +08:00
|
|
|
};
|
|
|
|
|
2022-01-15 21:57:22 +08:00
|
|
|
SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count, unsigned int,
|
|
|
|
flags)
|
random: introduce getrandom(2) system call
The getrandom(2) system call was requested by the LibreSSL Portable
developers. It is analogous to the getentropy(2) system call in
OpenBSD.
The rationale of this system call is to provide resilience against
file descriptor exhaustion attacks, where the attacker consumes all
available file descriptors, forcing the use of the fallback code where
/dev/[u]random is not available. Since the fallback code is often not
well-tested, it is better to eliminate this potential failure mode
entirely.
The other feature provided by this new system call is the ability to
request randomness from the /dev/urandom entropy pool, but to block
until at least 128 bits of entropy have been accumulated in the
/dev/urandom entropy pool. Historically, the emphasis in the
/dev/urandom development has been to ensure that the urandom pool is
initialized as quickly as possible after system boot, and preferably
before the init scripts start execution.
This is because changing /dev/urandom reads to block represents an
interface change that could potentially break userspace which is not
acceptable. In practice, on most x86 desktop and server systems, in
general the entropy pool can be initialized before it is needed (and
in modern kernels, we will printk a warning message if not). However,
on an embedded system, this may not be the case. And so with this new
interface, we can provide the functionality of blocking until the
urandom pool has been initialized. Any userspace program which uses
this new functionality must take care to assure that if it is used
during the boot process, that it will not cause the init scripts or
other portions of the system startup to hang indefinitely.
SYNOPSIS
#include <linux/random.h>
int getrandom(void *buf, size_t buflen, unsigned int flags);
DESCRIPTION
The system call getrandom() fills the buffer pointed to by buf
with up to buflen random bytes which can be used to seed user
space random number generators (i.e., DRBG's) or for other
cryptographic uses. It should not be used for Monte Carlo
simulations or other programs/algorithms which are doing
probabilistic sampling.
If the GRND_RANDOM flags bit is set, then draw from the
/dev/random pool instead of the /dev/urandom pool. The
/dev/random pool is limited based on the entropy that can be
obtained from environmental noise, so if there is insufficient
entropy, the requested number of bytes may not be returned.
If there is no entropy available at all, getrandom(2) will
either block, or return an error with errno set to EAGAIN if
the GRND_NONBLOCK bit is set in flags.
If the GRND_RANDOM bit is not set, then the /dev/urandom pool
will be used. Unlike using read(2) to fetch data from
/dev/urandom, if the urandom pool has not been sufficiently
initialized, getrandom(2) will block (or return -1 with the
errno set to EAGAIN if the GRND_NONBLOCK bit is set in flags).
The getentropy(2) system call in OpenBSD can be emulated using
the following function:
int getentropy(void *buf, size_t buflen)
{
int ret;
if (buflen > 256)
goto failure;
ret = getrandom(buf, buflen, 0);
if (ret < 0)
return ret;
if (ret == buflen)
return 0;
failure:
errno = EIO;
return -1;
}
RETURN VALUE
On success, the number of bytes that were copied into buf is
returned. This may not be all the bytes requested by the
caller via buflen if insufficient entropy was present in the
/dev/random pool, or if the system call was interrupted by a
signal.
On error, -1 is returned, and errno is set appropriately.
ERRORS
EINVAL An invalid flag was passed to getrandom(2)
EFAULT buf is outside the accessible address space.
EAGAIN The requested entropy was not available, and
getentropy(2) would have blocked if the
GRND_NONBLOCK flag was not set.
EINTR While blocked waiting for entropy, the call was
interrupted by a signal handler; see the description
of how interrupted read(2) calls on "slow" devices
are handled with and without the SA_RESTART flag
in the signal(7) man page.
NOTES
For small requests (buflen <= 256) getrandom(2) will not
return EINTR when reading from the urandom pool once the
entropy pool has been initialized, and it will return all of
the bytes that have been requested. This is the recommended
way to use getrandom(2), and is designed for compatibility
with OpenBSD's getentropy() system call.
However, if you are using GRND_RANDOM, then getrandom(2) may
block until the entropy accounting determines that sufficient
environmental noise has been gathered such that getrandom(2)
will be operating as an NRBG instead of a DRBG for those people
who are working in the NIST SP 800-90 regime. Since it may
block for a long time, these guarantees do *not* apply. The
user may want to interrupt a hanging process using a signal,
so blocking until all of the requested bytes are returned
would be unfriendly.
For this reason, the user of getrandom(2) MUST always check
the return value, in case it returns an error, or if fewer
bytes than requested were returned. In the case of
!GRND_RANDOM and a small request, the latter should never
happen, but careful userspace code (and all crypto code
should be careful) should check for this anyway!
Finally, unless you are doing long-term key generation (and
perhaps not even then), you probably shouldn't be using
GRND_RANDOM. The cryptographic algorithms used for
/dev/urandom are quite conservative, and so should be
sufficient for all purposes. The disadvantages of GRND_RANDOM
are that it can block and that it adds the complexity of
dealing with partially fulfilled getrandom(2) requests.
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Zach Brown <zab@zabbo.net>
2014-07-17 16:13:05 +08:00
|
|
|
{
|
2022-01-15 21:57:22 +08:00
|
|
|
if (flags & ~(GRND_NONBLOCK | GRND_RANDOM | GRND_INSECURE))
|
2019-12-23 16:20:46 +08:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Requesting insecure and blocking randomness at the same time makes
|
|
|
|
* no sense.
|
|
|
|
*/
|
2022-01-15 21:57:22 +08:00
|
|
|
if ((flags & (GRND_INSECURE | GRND_RANDOM)) == (GRND_INSECURE | GRND_RANDOM))
|
random: introduce getrandom(2) system call
2014-07-17 16:13:05 +08:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (count > INT_MAX)
|
|
|
|
count = INT_MAX;
|
|
|
|
|
2019-12-23 16:20:46 +08:00
|
|
|
if (!(flags & GRND_INSECURE) && !crng_ready()) {
|
2022-02-09 21:43:25 +08:00
|
|
|
int ret;
|
|
|
|
|
random: introduce getrandom(2) system call
2014-07-17 16:13:05 +08:00
|
|
|
if (flags & GRND_NONBLOCK)
|
|
|
|
return -EAGAIN;
|
2017-06-08 07:58:56 +08:00
|
|
|
ret = wait_for_random_bytes();
|
|
|
|
if (unlikely(ret))
|
|
|
|
return ret;
|
random: introduce getrandom(2) system call
2014-07-17 16:13:05 +08:00
|
|
|
}
|
2022-02-10 23:40:44 +08:00
|
|
|
return get_random_bytes_user(buf, count);
|
random: introduce getrandom(2) system call
2014-07-17 16:13:05 +08:00
|
|
|
}
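As a userspace counterpart to the syscall implemented above, here is a minimal sketch, not part of the kernel tree and assuming a libc that exposes getrandom() through <sys/random.h> (glibc 2.25 or later), of the return-value checking that the getrandom(2) commit message quoted earlier insists on: retry on EINTR and loop on short reads.
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/random.h>

/* Fill buf with len bytes from the urandom pool; returns 0 on success, -1 on error. */
int fill_random(void *buf, size_t len)
{
	unsigned char *p = buf;

	while (len) {
		ssize_t ret = getrandom(p, len, 0);

		if (ret < 0) {
			if (errno == EINTR)
				continue;	/* interrupted by a signal handler; retry */
			return -1;		/* e.g. ENOSYS on pre-3.17 kernels */
		}
		p += ret;
		len -= (size_t)ret;		/* short read: loop for the remainder */
	}
	return 0;
}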
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
/********************************************************************
|
|
|
|
*
|
|
|
|
* Sysctl interface
|
|
|
|
*
|
|
|
|
********************************************************************/
|
|
|
|
|
|
|
|
#ifdef CONFIG_SYSCTL
|
|
|
|
|
|
|
|
#include <linux/sysctl.h>
|
|
|
|
|
2017-02-01 00:36:07 +08:00
|
|
|
static int random_min_urandom_seed = 60;
|
2022-02-05 21:00:58 +08:00
|
|
|
static int random_write_wakeup_bits = POOL_MIN_BITS;
|
|
|
|
static int sysctl_poolsize = POOL_BITS;
|
2005-04-17 06:20:36 +08:00
|
|
|
static char sysctl_bootid[16];
|
|
|
|
|
|
|
|
/*
|
2013-11-30 03:58:16 +08:00
|
|
|
* This function is used to return both the bootid UUID and a random
|
2005-04-17 06:20:36 +08:00
|
|
|
* UUID. The difference is in whether table->data is NULL; if it is,
|
|
|
|
* then a new UUID is generated and returned to the user.
|
|
|
|
*
|
2013-11-30 03:58:16 +08:00
|
|
|
* If the user accesses this via the proc interface, the UUID will be
|
|
|
|
* returned as an ASCII string in the standard UUID format; if via the
|
|
|
|
* sysctl system call, as 16 bytes of binary data.
|
2005-04-17 06:20:36 +08:00
|
|
|
*/
|
2022-01-15 21:57:22 +08:00
|
|
|
static int proc_do_uuid(struct ctl_table *table, int write, void *buffer,
|
|
|
|
size_t *lenp, loff_t *ppos)
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
2013-06-14 10:37:35 +08:00
|
|
|
struct ctl_table fake_table;
|
2005-04-17 06:20:36 +08:00
|
|
|
unsigned char buf[64], tmp_uuid[16], *uuid;
|
|
|
|
|
|
|
|
uuid = table->data;
|
|
|
|
if (!uuid) {
|
|
|
|
uuid = tmp_uuid;
|
|
|
|
generate_random_uuid(uuid);
|
2012-04-13 03:49:12 +08:00
|
|
|
} else {
|
|
|
|
static DEFINE_SPINLOCK(bootid_spinlock);
|
|
|
|
|
|
|
|
spin_lock(&bootid_spinlock);
|
|
|
|
if (!uuid[8])
|
|
|
|
generate_random_uuid(uuid);
|
|
|
|
spin_unlock(&bootid_spinlock);
|
|
|
|
}
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2009-12-15 10:01:11 +08:00
|
|
|
sprintf(buf, "%pU", uuid);
|
|
|
|
|
2005-04-17 06:20:36 +08:00
|
|
|
fake_table.data = buf;
|
|
|
|
fake_table.maxlen = sizeof(buf);
|
|
|
|
|
2009-09-24 06:57:19 +08:00
|
|
|
return proc_dostring(&fake_table, write, buffer, lenp, ppos);
|
2005-04-17 06:20:36 +08:00
|
|
|
}
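Purely as an illustration of the proc behaviour described in the comment above (ordinary userspace code, not part of this file): reading the "uuid" entry returns a freshly generated ASCII UUID on every read, while "boot_id" stays constant for the lifetime of the boot.
#include <stdio.h>

int main(void)
{
	char line[64];
	FILE *f = fopen("/proc/sys/kernel/random/uuid", "r");

	if (!f)
		return 1;
	if (fgets(line, sizeof(line), f))
		printf("fresh uuid: %s", line);	/* a new UUID on every read */
	fclose(f);
	return 0;
}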
|
|
|
|
|
2022-01-22 14:12:18 +08:00
|
|
|
static struct ctl_table random_table[] = {
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
.procname = "poolsize",
|
|
|
|
.data = &sysctl_poolsize,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0444,
|
2009-11-16 19:11:48 +08:00
|
|
|
.proc_handler = proc_dointvec,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "entropy_avail",
|
random: use linear min-entropy accumulation crediting
30e37ec516ae ("random: account for entropy loss due to overwrites")
assumed that adding new entropy to the LFSR pool probabilistically
cancelled out old entropy there, so entropy was credited asymptotically,
approximating Shannon entropy of independent sources (rather than a
stronger min-entropy notion) using 1/8th fractional bits and replacing
a constant 2-2/√𝑒 term (~0.786938) with 3/4 (0.75) to slightly
underestimate it. This wasn't superb, but it was perhaps better than
nothing, so that's what was done. Which entropy specifically was being
cancelled out and how much precisely each time is hard to tell, though
as I showed with the attack code in my previous commit, a motivated
adversary with sufficient information can actually cancel out
everything.
Since we're no longer using an LFSR for entropy accumulation, this
probabilistic cancellation is no longer relevant. Rather, we're now
using a computational hash function as the accumulator and we've
switched to working in the random oracle model, from which we can now
revisit the question of min-entropy accumulation, which is done in
detail in <https://eprint.iacr.org/2019/198>.
Consider a long input bit string that is built by concatenating various
smaller independent input bit strings. Each one of these inputs has a
designated min-entropy, which is what we're passing to
credit_entropy_bits(h). When we pass the concatenation of these to a
random oracle, it means that an adversary trying to receive back the
same reply as us would need to become certain about each part of the
concatenated bit string we passed in, which means becoming certain about
all of those h values. That means we can estimate the accumulation by
simply adding up the h values in calls to credit_entropy_bits(h);
there's no probabilistic cancellation at play like there was said to be
for the LFSR. Incidentally, this is what other entropy accumulators
based on computational hash functions do as well.
So this commit replaces credit_entropy_bits(h) with essentially `total =
min(POOL_BITS, total + h)`, done with a cmpxchg loop as before.
What if we're wrong and the above is nonsense? It's not, but let's
assume we don't want the actual _behavior_ of the code to change much.
Currently that behavior is not extracting from the input pool until it
has 128 bits of entropy in it. With the old algorithm, we'd hit that
magic 128 number after roughly 256 calls to credit_entropy_bits(1). So,
we can retain more or less the old behavior by waiting to extract from
the input pool until it hits 256 bits of entropy using the new code. For
people concerned about this change, it means that there's not that much
practical behavioral change. And for folks actually trying to model
the behavior rigorously, it means that we have an even higher margin
against attacks.
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-02-03 20:28:06 +08:00
|
|
|
.data = &input_pool.entropy_count,
|
2005-04-17 06:20:36 +08:00
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0444,
|
random: use linear min-entropy accumulation crediting
30e37ec516ae ("random: account for entropy loss due to overwrites")
assumed that adding new entropy to the LFSR pool probabilistically
cancelled out old entropy there, so entropy was credited asymptotically,
approximating Shannon entropy of independent sources (rather than a
stronger min-entropy notion) using 1/8th fractional bits and replacing
a constant 2-2/√𝑒 term (~0.786938) with 3/4 (0.75) to slightly
underestimate it. This wasn't superb, but it was perhaps better than
nothing, so that's what was done. Which entropy specifically was being
cancelled out and how much precisely each time is hard to tell, though
as I showed with the attack code in my previous commit, a motivated
adversary with sufficient information can actually cancel out
everything.
Since we're no longer using an LFSR for entropy accumulation, this
probabilistic cancellation is no longer relevant. Rather, we're now
using a computational hash function as the accumulator and we've
switched to working in the random oracle model, from which we can now
revisit the question of min-entropy accumulation, which is done in
detail in <https://eprint.iacr.org/2019/198>.
Consider a long input bit string that is built by concatenating various
smaller independent input bit strings. Each one of these inputs has a
designated min-entropy, which is what we're passing to
credit_entropy_bits(h). When we pass the concatenation of these to a
random oracle, it means that an adversary trying to receive back the
same reply as us would need to become certain about each part of the
concatenated bit string we passed in, which means becoming certain about
all of those h values. That means we can estimate the accumulation by
simply adding up the h values in calls to credit_entropy_bits(h);
there's no probabilistic cancellation at play like there was said to be
for the LFSR. Incidentally, this is also what other entropy accumulators
based on computational hash functions do as well.
So this commit replaces credit_entropy_bits(h) with essentially `total =
min(POOL_BITS, total + h)`, done with a cmpxchg loop as before.
What if we're wrong and the above is nonsense? It's not, but let's
assume we don't want the actual _behavior_ of the code to change much.
Currently that behavior is not extracting from the input pool until it
has 128 bits of entropy in it. With the old algorithm, we'd hit that
magic 128 number after roughly 256 calls to credit_entropy_bits(1). So,
we can retain more or less the old behavior by waiting to extract from
the input pool until it hits 256 bits of entropy using the new code. For
people concerned about this change, it means that there's not that much
practical behavioral change. And for folks actually trying to model
the behavior rigorously, it means that we have an even higher margin
against attacks.
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Jean-Philippe Aumasson <jeanphilippe.aumasson@gmail.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-02-03 20:28:06 +08:00
|
|
|
.proc_handler = proc_dointvec,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "write_wakeup_threshold",
|
2013-12-07 10:28:03 +08:00
|
|
|
.data = &random_write_wakeup_bits,
|
2005-04-17 06:20:36 +08:00
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
2022-02-05 21:00:58 +08:00
|
|
|
.proc_handler = proc_dointvec,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
2013-09-23 03:14:32 +08:00
|
|
|
{
|
|
|
|
.procname = "urandom_min_reseed_secs",
|
|
|
|
.data = &random_min_urandom_seed,
|
|
|
|
.maxlen = sizeof(int),
|
|
|
|
.mode = 0644,
|
|
|
|
.proc_handler = proc_dointvec,
|
|
|
|
},
|
2005-04-17 06:20:36 +08:00
|
|
|
{
|
|
|
|
.procname = "boot_id",
|
|
|
|
.data = &sysctl_bootid,
|
|
|
|
.maxlen = 16,
|
|
|
|
.mode = 0444,
|
2009-11-16 19:11:48 +08:00
|
|
|
.proc_handler = proc_do_uuid,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
|
|
|
{
|
|
|
|
.procname = "uuid",
|
|
|
|
.maxlen = 16,
|
|
|
|
.mode = 0444,
|
2009-11-16 19:11:48 +08:00
|
|
|
.proc_handler = proc_do_uuid,
|
2005-04-17 06:20:36 +08:00
|
|
|
},
|
2009-11-06 06:34:02 +08:00
|
|
|
{ }
|
2005-04-17 06:20:36 +08:00
|
|
|
};
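Returning to the "random: use linear min-entropy accumulation crediting" commit message quoted above beside the entropy_avail entry: a simplified sketch of the saturating credit it describes, essentially total = min(POOL_BITS, total + nbits) done in a cmpxchg loop, might look like the following. It assumes the surrounding random.c context (input_pool, POOL_BITS) and is not the exact code in this file.
static void credit_entropy_bits_sketch(int nbits)
{
	int orig, entropy_count;

	if (nbits <= 0)
		return;

	do {
		orig = READ_ONCE(input_pool.entropy_count);
		/* saturate at POOL_BITS rather than crediting asymptotically */
		entropy_count = min_t(int, POOL_BITS, orig + nbits);
	} while (cmpxchg(&input_pool.entropy_count, orig, entropy_count) != orig);
}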
|
2022-01-22 14:12:18 +08:00
|
|
|
|
|
|
|
/*
|
|
|
|
* rand_initialize() is called before sysctl_init(),
|
|
|
|
* so we cannot call register_sysctl_init() in rand_initialize()
|
|
|
|
*/
|
|
|
|
static int __init random_sysctls_init(void)
|
|
|
|
{
|
|
|
|
register_sysctl_init("kernel/random", random_table);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
device_initcall(random_sysctls_init);
|
2022-01-15 21:57:22 +08:00
|
|
|
#endif /* CONFIG_SYSCTL */
|
2005-04-17 06:20:36 +08:00
|
|
|
|
2014-06-15 11:38:36 +08:00
|
|
|
/* Interface for in-kernel drivers of true hardware RNGs.
|
|
|
|
* Those devices may produce endless random bits and will be throttled
|
|
|
|
* when our pool is full.
|
|
|
|
*/
|
2022-02-09 21:43:25 +08:00
|
|
|
void add_hwgenerator_randomness(const void *buffer, size_t count,
|
2014-06-15 11:38:36 +08:00
|
|
|
size_t entropy)
|
|
|
|
{
|
2018-04-12 01:27:52 +08:00
|
|
|
if (unlikely(crng_init == 0)) {
|
2021-12-30 05:10:05 +08:00
|
|
|
size_t ret = crng_fast_load(buffer, count);
|
2022-01-13 00:18:08 +08:00
|
|
|
mix_pool_bytes(buffer, ret);
|
2021-12-30 05:10:05 +08:00
|
|
|
count -= ret;
|
|
|
|
buffer += ret;
|
|
|
|
if (!count || crng_init == 0)
|
|
|
|
return;
|
2016-06-13 06:11:51 +08:00
|
|
|
}
|
2016-06-13 06:13:36 +08:00
|
|
|
|
2022-01-26 04:14:57 +08:00
|
|
|
/* Throttle writing if we're above the trickle threshold.
|
2022-02-05 21:00:58 +08:00
|
|
|
* We'll be woken up again once below POOL_MIN_BITS, when
|
|
|
|
* the calling thread is about to terminate, or once
|
|
|
|
* CRNG_RESEED_INTERVAL has elapsed.
|
2016-06-13 06:13:36 +08:00
|
|
|
*/
|
2022-01-26 04:14:57 +08:00
|
|
|
wait_event_interruptible_timeout(random_write_wait,
|
random: fix crash on multiple early calls to add_bootloader_randomness()
Currently, if CONFIG_RANDOM_TRUST_BOOTLOADER is enabled, multiple calls
to add_bootloader_randomness() are broken and can cause a NULL pointer
dereference, as noted by Ivan T. Ivanov. This is not only a hypothetical
problem, as qemu on arm64 may provide bootloader entropy via EFI and via
devicetree.
On the first call to add_hwgenerator_randomness(), crng_fast_load() is
executed, and if the seed is long enough, crng_init will be set to 1.
On subsequent calls to add_bootloader_randomness() and then to
add_hwgenerator_randomness(), crng_fast_load() will be skipped. Instead,
wait_event_interruptible() and then credit_entropy_bits() will be called.
If the entropy count for that second seed is large enough, that proceeds
to crng_reseed().
However, both wait_event_interruptible() and crng_reseed() depends
(at least in numa_crng_init()) on workqueues. Therefore, test whether
system_wq is already initialized, which is a sufficient indicator that
workqueue_init_early() has progressed far enough.
If we wind up hitting the !system_wq case, we later want to do what
would have been done there when wqs are up, so set a flag, and do that
work later from the rand_initialize() call.
Reported-by: Ivan T. Ivanov <iivanov@suse.de>
Fixes: 18b915ac6b0a ("efi/random: Treat EFI_RNG_PROTOCOL output as bootloader randomness")
Cc: stable@vger.kernel.org
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
[Jason: added crng_need_done state and related logic.]
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2021-12-30 05:10:03 +08:00
|
|
|
!system_wq || kthread_should_stop() ||
|
2022-02-05 21:00:58 +08:00
|
|
|
input_pool.entropy_count < POOL_MIN_BITS,
|
2022-01-26 04:14:57 +08:00
|
|
|
CRNG_RESEED_INTERVAL);
|
2022-01-13 00:18:08 +08:00
|
|
|
mix_pool_bytes(buffer, count);
|
|
|
|
credit_entropy_bits(entropy);
|
2014-06-15 11:38:36 +08:00
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
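A hypothetical caller-side sketch of the interface above (not part of random.c; my_hw_read_bytes() stands in for a device-specific FIFO read, and the entropy estimate is an arbitrary conservative example): a driver kthread feeds bytes in via add_hwgenerator_randomness(), which blocks while the pool is full, so the loop throttles itself.
#include <linux/kthread.h>
#include <linux/random.h>
#include <linux/types.h>

size_t my_hw_read_bytes(void *buf, size_t len);	/* hypothetical hardware helper */

static int my_hwrng_thread(void *unused)
{
	u8 buf[32];

	while (!kthread_should_stop()) {
		size_t n = my_hw_read_bytes(buf, sizeof(buf));

		if (n)
			/* credit a deliberately conservative 4 bits per byte */
			add_hwgenerator_randomness(buf, n, n * 4);
	}
	return 0;
}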
|
2019-08-23 14:24:51 +08:00
|
|
|
|
|
|
|
/* Handle random seed passed by bootloader.
|
|
|
|
* If the seed is trustworthy, it is treated like input from a hardware RNG. Otherwise
|
|
|
|
* it is treated as device data.
|
|
|
|
* The decision is controlled by CONFIG_RANDOM_TRUST_BOOTLOADER.
|
|
|
|
*/
|
2022-02-09 21:43:25 +08:00
|
|
|
void add_bootloader_randomness(const void *buf, size_t size)
|
2019-08-23 14:24:51 +08:00
|
|
|
{
|
|
|
|
if (IS_ENABLED(CONFIG_RANDOM_TRUST_BOOTLOADER))
|
|
|
|
add_hwgenerator_randomness(buf, size, size * 8);
|
|
|
|
else
|
|
|
|
add_device_randomness(buf, size);
|
|
|
|
}
|
2019-10-02 01:50:23 +08:00
|
|
|
EXPORT_SYMBOL_GPL(add_bootloader_randomness);
|