Commit Graph

155 Commits

Author SHA1 Message Date
Hannes Frederic Sowa 79a8468747 random: check for increase of entropy_count because of signed conversion
The expression entropy_count -= ibytes << (ENTROPY_SHIFT + 3) could
actually increase entropy_count: because ibytes is unsigned, the RHS of
the -= is evaluated as an unsigned value, which is then reduced modulo
2^width(int) when it is assigned back to the signed entropy_count.
Trinity found this.

[ Commit modified by tytso to add an additional safety check for a
  negative entropy_count -- which should never happen, and to also add
  an additional paranoia check to prevent overly large count values
  from being passed into urandom_read().  ]
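
For illustration, a standalone userspace sketch of the conversion at
issue (assuming a 32-bit int and 64-bit size_t; the variable names
mirror the driver but this is not the kernel code) -- a read of roughly
64 MB makes the subtraction wrap to a small positive value:

    #include <stdio.h>
    #include <stddef.h>

    #define ENTROPY_SHIFT 3

    int main(void)
    {
            int entropy_count = 10;      /* ten eighth-bits in the pool */
            size_t ibytes = 67108862;    /* huge read, as Trinity might issue */

            /* ibytes << 6 == 2^32 - 128; the subtraction is done in size_t,
             * and truncating the unsigned result back into the signed int
             * yields +138, so the "consumption" grew the counter. */
            entropy_count -= ibytes << (ENTROPY_SHIFT + 3);

            printf("entropy_count = %d\n", entropy_count);   /* prints 138 */
            return 0;
    }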

Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
2014-07-19 01:42:13 -04:00
Linus Torvalds 5ee22beeb2 random: fix entropy accounting bug introduced in v3.15

Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random

Pull randomness bugfix from Ted Ts'o:
 "random: fix entropy accounting bug introduced in v3.15"

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: fix nasty entropy accounting bug
2014-06-17 14:23:14 -10:00
Theodore Ts'o e33ba5fa7a random: fix nasty entropy accounting bug
Commit 0fb7a01af5 "random: simplify accounting code", introduced in
v3.15, has a very nasty accounting problem when the entropy pool has
fewer bytes of entropy than the number of requested reserved
bytes.  In that case, "have_bytes - reserved" goes negative, and since
size_t is unsigned, the expression:

       ibytes = min_t(size_t, ibytes, have_bytes - reserved);

... does not do the right thing.  This is rather bad, because it
defeats the catastrophic reseeding feature in the
xfer_secondary_pool() path.

It also can cause the "BUG: spinlock trylock failure on UP" for some
kernel configurations when prandom_reseed() calls get_random_bytes()
in the early init, since when the entropy count gets corrupted,
credit_entropy_bits() erroneously believes that the nonblocking pool
has been fully initialized (when in fact it is not), and so it calls
prandom_reseed(true) recursively leading to the spinlock BUG.

The logic is *not* the same as it was originally, but in the cases where
it matters, the behavior is the same, and the resulting code is
hopefully easier to read and understand.
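
A standalone sketch of the wraparound (the min_t() here is a simplified
stand-in for the kernel macro; the byte counts are made up):

    #include <stdio.h>
    #include <stddef.h>

    /* simplified stand-in for the kernel's min_t() */
    #define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

    int main(void)
    {
            size_t ibytes     = 32;   /* bytes the caller asked for        */
            size_t have_bytes = 8;    /* bytes of entropy actually present */
            size_t reserved   = 16;   /* bytes reserved for /dev/random    */

            /* 8 - 16 wraps to SIZE_MAX - 7, so the "minimum" is still 32:
             * we hand out entropy we do not have, and the catastrophic
             * reseed limit is not enforced. */
            ibytes = min_t(size_t, ibytes, have_bytes - reserved);

            printf("ibytes = %zu\n", ibytes);   /* prints 32 */
            return 0;
    }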

Fixes: 0fb7a01af5 "random: simplify accounting code"
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: Greg Price <price@mit.edu>
Cc: stable@vger.kernel.org  #v3.15
2014-06-15 21:04:32 -04:00
Joe Perches 5eb10d912e random: convert use of typedef ctl_table to struct ctl_table
This typedef is unnecessary and should just be removed.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-06-06 16:08:15 -07:00
Linus Torvalds 681a289548 Merge branch 'for-3.16/core' of git://git.kernel.dk/linux-block into next
Pull block core updates from Jens Axboe:
 "It's a big(ish) round this time, lots of development effort has gone
  into blk-mq in the last 3 months.  Generally we're heading to where
  3.16 will be a feature complete and performant blk-mq.  scsi-mq is
  progressing nicely and will hopefully be in 3.17.  A nvme port is in
  progress, and the Micron pci-e flash driver, mtip32xx, is converted
  and will be sent in with the driver pull request for 3.16.

  This pull request contains:

   - Lots of prep and support patches for scsi-mq have been integrated.
     All from Christoph.

   - API and code cleanups for blk-mq from Christoph.

   - Lots of good corner case and error handling cleanup fixes for
     blk-mq from Ming Lei.

   - A slew of blk-mq updates from me:

     * Provide strict mappings so that the driver can rely on the CPU
       to queue mapping.  This enables optimizations in the driver.

     * Provided bitmap tagging instead of percpu_ida, which never
       really worked well for blk-mq.  percpu_ida relies on the fact
       that we have a lot more tags available than we really need; it
       fails miserably for cases where we exhaust (or are close to
       exhausting) the tag space.

     * Provide sane support for shared tag maps, as utilized by scsi-mq

     * Various fixes for IO timeouts.

     * API cleanups, and lots of perf tweaks and optimizations.

   - Remove 'buffer' from struct request.  This is ancient code, from
     when requests were always virtually mapped.  Kill it, to reclaim
     some space in struct request.  From me.

   - Remove 'magic' from blk_plug.  Since we store these on the stack
     and since we've never caught any actual bugs with this, lets just
     get rid of it.  From me.

   - Only call part_in_flight() once for IO completion, as it includes
     two atomic reads.  Hopefully we'll get a better implementation soon, as
     the part IO stats are now one of the more expensive parts of doing
     IO on blk-mq.  From me.

   - File migration of block code from {mm,fs}/ to block/.  This
     includes bio.c, bio-integrity.c, bounce.c, and ioprio.c.  From me,
     from a discussion on lkml.

  That should describe the meat of the pull request.  Also has various
  little fixes and cleanups from Dave Jones, Shaohua Li, Duan Jiong,
  Fengguang Wu, Fabian Frederick, Randy Dunlap, Robert Elliott, and Sam
  Bradshaw"

* 'for-3.16/core' of git://git.kernel.dk/linux-block: (100 commits)
  blk-mq: push IPI or local end_io decision to __blk_mq_complete_request()
  blk-mq: remember to start timeout handler for direct queue
  block: ensure that the timer is always added
  blk-mq: blk_mq_unregister_hctx() can be static
  blk-mq: make the sysfs mq/ layout reflect current mappings
  blk-mq: blk_mq_tag_to_rq should handle flush request
  block: remove dead code in scsi_ioctl:blk_verify_command
  blk-mq: request initialization optimizations
  block: add queue flag for disabling SG merging
  block: remove 'magic' from struct blk_plug
  blk-mq: remove alloc_hctx and free_hctx methods
  blk-mq: add file comments and update copyright notices
  blk-mq: remove blk_mq_alloc_request_pinned
  blk-mq: do not use blk_mq_alloc_request_pinned in blk_mq_map_request
  blk-mq: remove blk_mq_wait_for_tags
  blk-mq: initialize request in __blk_mq_alloc_request
  blk-mq: merge blk_mq_alloc_reserved_request into blk_mq_alloc_request
  blk-mq: add helper to insert requests from irq context
  blk-mq: remove stale comment for blk_mq_complete_request()
  blk-mq: allow non-softirq completions
  ...
2014-06-02 09:29:34 -07:00
Theodore Ts'o f9c6d4987b random: fix BUG_ON caused by accounting simplification
Commit ee1de406ba ("random: simplify accounting logic") simplified
things too much, in that it allows the following to trigger an
overflow that results in a BUG_ON crash:

dd if=/dev/urandom of=/dev/zero bs=67108707 count=1

Thanks to Peter Zijlstra for discovering the crash, and Hannes
Frederic Sowa for analyzing the root cause.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Cc: Greg Price <price@mit.edu>
2014-05-16 22:18:22 -04:00
Christoph Hellwig bdcfa3e57c random: export add_disk_randomness
This will be needed for pending changes to the scsi midlayer that now
calls lower level block APIs, as well as any blk-mq driver that wants to
contribute to the random pool.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Jens Axboe <axboe@fb.com>
2014-04-28 09:29:55 -06:00
H. Peter Anvin 7b878d4b48 random: Add arch_has_random[_seed]()
Add predicate functions for having arch_get_random[_seed]*().  The
only current use is to avoid the loop in arch_random_refill() when
arch_get_random_seed_long() is unavailable.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2014-03-19 22:24:08 -04:00
H. Peter Anvin 331c6490c7 random: If we have arch_get_random_seed*(), try it before blocking
If we have arch_get_random_seed*(), try to use it for emergency refill
of the entropy pool before giving up and blocking on /dev/random.  It
may or may not work in the moment, but if it does work, it will give
the user better service than blocking will.
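
A userspace sketch of that policy (try_hw_seed() is a made-up stand-in
for arch_get_random_seed_long(); the real driver mixes the result into
the input pool rather than printing):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for arch_get_random_seed_long(): like RDSEED, it is allowed
     * to fail transiently, so callers retry a bounded number of times. */
    static bool try_hw_seed(uint64_t *v)
    {
            static int calls;

            if (++calls % 3)
                    return false;
            *v = 0x1234abcdULL;    /* real hardware would return fresh bits */
            return true;
    }

    int main(void)
    {
            uint64_t seed;
            int i, refilled = 0;

            for (i = 0; i < 8; i++)
                    if (try_hw_seed(&seed))
                            refilled++;

            if (refilled)
                    printf("emergency refill: %d words, no need to block\n",
                           refilled);
            else
                    printf("no hardware seed, fall back to blocking\n");
            return 0;
    }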

Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2014-03-19 22:22:06 -04:00
H. Peter Anvin 83664a6928 random: Use arch_get_random_seed*() at init time and once a second
Use arch_get_random_seed*() in two places in the Linux random
driver (drivers/char/random.c):

1. During entropy pool initialization, use RDSEED in favor of RDRAND,
   with a fallback to the latter.  Entropy exhaustion is unlikely to
   happen there on physical hardware as the machine is single-threaded
   at that point, but could happen in a virtual machine.  In that
   case, the fallback to RDRAND will still provide more than adequate
   entropy pool initialization.

2. Once a second, issue RDSEED and, if successful, feed it to the
   entropy pool.  To ensure an extra layer of security, only credit
   half the entropy just in case.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2014-03-19 22:22:06 -04:00
Theodore Ts'o 46884442fc random: use the architectural HWRNG for the SHA's IV in extract_buf()
To help assuage the fears of those who think the NSA can introduce a
massive hack into the instruction decode and out of order execution
engine in the CPU without hundreds of Intel engineers knowing about
it (only one of which would need to have the conscience and courage of
Edward Snowden to spill the beans to the public), use the HWRNG to
initialize the SHA starting value, instead of xor'ing it in
afterwards.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:52 -04:00
Greg Price 2132a96f66 random: clarify bits/bytes in wakeup thresholds
These are a recurring cause of confusion, so rename them to
hopefully be clearer.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:51 -04:00
Greg Price 7d1b08c40c random: entropy_bytes is actually bits
The variable 'entropy_bytes' is set from an expression that actually
counts bits.  Fortunately it's also only compared to values that also
count bits.  Rename it accordingly.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:51 -04:00
Greg Price 0fb7a01af5 random: simplify accounting code
With this we handle "reserved" in just one place.  As a bonus the
code becomes less nested, and the "wakeup_write" flag variable
becomes unnecessary.  The variable "flags" was already unused.

This code behaves identically to the previous version except in
two pathological cases that don't occur.  If the argument "nbytes"
is already less than "min", then we didn't previously enforce
"min".  If r->limit is false while "reserved" is nonzero, then we
previously applied "reserved" in checking whether we had enough
bits, even though we don't apply it to actually limit how many we
take.  The callers of account() never exercise either of these cases.

Before the previous commit, it was possible for "nbytes" to be less
than "min" if userspace chose a pathological configuration, but no
longer.

Cc: Jiri Kosina <jkosina@suse.cz>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:51 -04:00
Greg Price 8c2aa3390e random: tighten bound on random_read_wakeup_thresh
We use this value in a few places other than its literal meaning,
in particular in _xfer_secondary_pool() as a minimum number of
bits to pull from the input pool at a time into either output
pool.  It doesn't make sense to pull more bits than the whole size
of an output pool.

We could and possibly should separate the quantities "how much
should the input pool have to have to wake up /dev/random readers"
and "how much should we transfer from the input to an output pool
at a time", but nobody is likely to be sad they can't set the first
quantity to more than 1024 bits, so for now just limit them both.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:51 -04:00
Greg Price a58aa4edc6 random: forget lock in lockless accounting
The only mutable data accessed here is ->entropy_count, but since
10b3a32d2 ("random: fix accounting race condition") we use cmpxchg to
protect our accesses to ->entropy_count here.  Drop the use of the
lock.

Cc: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:51 -04:00
Greg Price ee1de406ba random: simplify accounting logic
This logic is exactly equivalent to the old logic, but it should
be easier to see what it's doing.

The equivalence depends on one fact from outside this function:
when 'r->limit' is false, 'reserved' is zero.  (Well, two facts;
the other is that 'reserved' is never negative.)

Cc: Jiri Kosina <jkosina@suse.cz>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:51 -04:00
Greg Price 19fa5be1d9 random: fix comment on "account"
This comment didn't quite keep up as extract_entropy() was split into
four functions.  Put each bit by the function it describes.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:50 -04:00
Greg Price 12ff3a517a random: simplify loop in random_read
The loop condition never changes until just before a break, so we
might as well write it as a constant.  Also since a996996dd7
("random: drop weird m_time/a_time manipulation") we don't do anything
after the loop finishes, so the 'break's might as well return
directly.  Some other simplifications.

There should be no change in behavior introduced by this commit.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:50 -04:00
Greg Price 18e9cea749 random: fix description of get_random_bytes
After this remark was written, commit d2e7c96af added a use of
arch_get_random_long() inside the get_random_bytes codepath.
The main point stands, but it needs to be reworded.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:50 -04:00
Greg Price f22052b202 random: fix comment on proc_do_uuid
There's only one function here now, as uuid_strategy is long gone.
Also make the bit about "If accesses via ..." clearer.

Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:50 -04:00
Greg Price dfd38750db random: fix typos / spelling errors in comments
Signed-off-by: Greg Price <price@mit.edu>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2014-03-19 22:18:50 -04:00
Linus Torvalds 0891ad829d The /dev/random changes for 3.13 including a number of improvements in
the following areas: performance, avoiding waste of entropy, better
 tracking of entropy estimates, support for non-x86 platforms that have
 a register which can't be used for fine-grained timekeeping, but which
 might be good enough for the random driver.
 
 Also add some printk's so that we can see how quickly /dev/urandom can
 get initialized, and when programs try to use /dev/urandom before it
 is fully initialized (since this could be a security issue).  This
 shouldn't be an issue on x86 desktop/laptops --- a test on my Lenovo
 T430s laptop shows that /dev/urandom is getting fully initialized
 approximately two seconds before the root file system is mounted
 read/write --- this may be an issue with ARM and MIPS embedded/mobile
 systems, though.  These printk's will be a useful canary before
 potentially adding a future change to start blocking processes which
 try to read from /dev/urandom before it is initialized, which is
 something FreeBSD does already for security reasons, and which
 security folks have been agitating for Linux to also adopt.

Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random

Pull /dev/random changes from Ted Ts'o:
 "The /dev/random changes for 3.13 including a number of improvements in
  the following areas: performance, avoiding waste of entropy, better
  tracking of entropy estimates, support for non-x86 platforms that have
  a register which can't be used for fine-grained timekeeping, but which
  might be good enough for the random driver.

  Also add some printk's so that we can see how quickly /dev/urandom can
  get initialized, and when programs try to use /dev/urandom before it
  is fully initialized (since this could be a security issue).  This
  shouldn't be an issue on x86 desktop/laptops --- a test on my Lenovo
  T430s laptop shows that /dev/urandom is getting fully initialized
  approximately two seconds before the root file system is mounted
  read/write --- this may be an issue with ARM and MIPS embedded/mobile
  systems, though.  These printk's will be a useful canary before
  potentially adding a future change to start blocking processes which
  try to read from /dev/urandom before it is initialized, which is
  something FreeBSD does already for security reasons, and which
  security folks have been agitating for Linux to also adopt"

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: add debugging code to detect early use of get_random_bytes()
  random: initialize the last_time field in struct timer_rand_state
  random: don't zap entropy count in rand_initialize()
  random: printk notifications for urandom pool initialization
  random: make add_timer_randomness() fill the nonblocking pool first
  random: convert DEBUG_ENT to tracepoints
  random: push extra entropy to the output pools
  random: drop trickle mode
  random: adjust the generator polynomials in the mixing function slightly
  random: speed up the fast_mix function by a factor of four
  random: cap the rate which the /dev/urandom pool gets reseeded
  random: optimize the entropy_store structure
  random: optimize spinlock use in add_device_randomness()
  random: fix the tracepoint for get_random_bytes(_arch)
  random: account for entropy loss due to overwrites
  random: allow fractional bits to be tracked
  random: statically compute poolbitshift, poolbytes, poolbits
  random: mix in architectural randomness earlier in extract_buf()
2013-11-16 10:19:15 -08:00
Hannes Frederic Sowa 4af712e8df random32: add prandom_reseed_late() and call when nonblocking pool becomes initialized
The Tausworthe PRNG is initialized at late_initcall time. At that time the
entropy pool serving get_random_bytes is not filled sufficiently. This
patch adds an additional reseeding step as soon as the nonblocking pool
gets marked as initialized.

On some machines it might be possible that late_initcall gets called after
the pool has been initialized. In this situation we won't reseed again.

(A call to prandom_seed_late blocks later invocations of early reseed
attempts.)

Joint work with Daniel Borkmann.

Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
2013-11-11 14:32:14 -05:00
Theodore Ts'o 392a546dc8 random: add debugging code to detect early use of get_random_bytes()
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-11-03 18:24:08 -05:00
Theodore Ts'o 644008df89 random: initialize the last_time field in struct timer_rand_state
Since we initialize jiffies to wrap five minutes before boot (see
INITIAL_JIFFIES defined in include/linux/jiffies.h) it's important to
make sure the last_time field is initialized to INITIAL_JIFFIES.
Otherwise, the entropy estimator will overestimate the amount of
entropy resulting from the first call to add_timer_randomness(),
generally by about 8 bits.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-11-03 18:20:05 -05:00
Theodore Ts'o ae9ecd92dd random: don't zap entropy count in rand_initialize()
The rand_initialize() function was being run fairly late in the kernel
boot sequence.  This was unfortunate, since it zero'ed the entropy
counters, thus throwing away credit that was accumulated earlier in
the boot sequence, and it also meant that initcall functions run
before rand_initialize were using a minimally initialized pool.

To fix this, change init_std_data() so that it no longer zaps the
entropy counter (it wasn't necessary), and make rand_initialize() an
early initcall.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-11-03 18:18:49 -05:00
Theodore Ts'o 301f0595c0 random: printk notifications for urandom pool initialization
Print a notification to the console when the nonblocking pool is
initialized.  Also printk a warning when a process tries reading from
/dev/urandom before it is fully initialized.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-11-03 18:18:48 -05:00
Theodore Ts'o 40db23e533 random: make add_timer_randomness() fill the nonblocking pool first
Change add_timer_randomness() so that it directs incoming entropy to
the nonblocking pool first if it hasn't been fully initialized yet.
This matches the strategy we use in add_interrupt_randomness(), which
allows us to push the randomness where we need it the most during when
the system is first booting up, so that get_random_bytes() and
/dev/urandom become safe to use as soon as possible.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-11-03 18:18:47 -05:00
Linus Torvalds f715729ee4 These patches are designed to enable improvements to /dev/random for
non-x86 platforms, in particular MIPS and ARM.

Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random

Pull /dev/random changes from Ted Ts'o:
 "These patches are designed to enable improvements to /dev/random for
  non-x86 platforms, in particular MIPS and ARM"

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: allow architectures to optionally define random_get_entropy()
  random: run random_int_secret_init() run after all late_initcalls
2013-10-10 12:31:43 -07:00
Theodore Ts'o f80bbd8b92 random: convert DEBUG_ENT to tracepoints
Instead of using the random driver's ad-hoc DEBUG_ENT() mechanism, use
tracepoints instead.  This allows for a much more fine-grained control
of which debugging mechanisms a developer might need, and unifies
the debugging messages with all of the existing tracepoints.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-10-10 14:32:23 -04:00
Theodore Ts'o 6265e169cd random: push extra entropy to the output pools
As the input pool gets filled, start transferring entropy to the output
pools until they get filled.  This allows us to use the output pools
to store more system entropy.  Waste not, want not....

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-10-10 14:32:22 -04:00
Theodore Ts'o 95b709b6be random: drop trickle mode
add_timer_randomness() used to drop into trickle mode when the entropy
pool was estimated to be 87.5% full.  This was important when
add_timer_randomness() was used to sample interrupts.  It's not used
for this any more --- add_interrupt_randomness() now uses fast_mix()
instead.  By eliminating trickle mode, it allows us to fully utilize
entropy provided by add_input_randomness() and add_disk_randomness()
even when the input pool is above the old trickle threshold of 87.5%.

This helps to answer the criticism in [1] in their hypothetical
scenario where our entropy estimator was inaccurate, even though the
measurements in [2] seem to indicate that our entropy estimator given
real-life entropy collection is actually pretty good, albeit on the
conservative side (which was as it was designed).

[1] http://eprint.iacr.org/2013/338.pdf
[2] http://eprint.iacr.org/2012/251.pdf

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-10-10 14:32:21 -04:00
Theodore Ts'o 6e9fa2c8a6 random: adjust the generator polynomials in the mixing function slightly
Our mixing functions were analyzed by Lacharme, Roeck, Strubel, and
Videau in their paper, "The Linux Pseudorandom Number Generator
Revisited" (see: http://eprint.iacr.org/2012/251.pdf).

They suggested a slight change to improve our mixing functions
slightly.  I also adjusted the comments to better explain what is
going on, and to document why the polynomials were changed.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-10-10 14:32:21 -04:00
Theodore Ts'o 655b226470 random: speed up the fast_mix function by a factor of four
By mixing the entropy in chunks of 32-bit words instead of byte by
byte, we can speed up the fast_mix function significantly.  Since it
is called on every single interrupt, on systems with a very heavy
interrupt load, this can make a noticeable difference.

Also fix a compilation warning in add_interrupt_randomness() and avoid
xor'ing cycles and jiffies together just in case we have an
architecture which tries to define random_get_entropy() by returning
jiffies.
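
A simplified standalone mixer to illustrate the word-at-a-time idea
(this is not the driver's actual fast_mix(); the rotate constant and
pool size are only illustrative):

    #include <stdint.h>
    #include <stdio.h>

    struct fast_pool_demo {
            uint32_t pool[4];
    };

    static uint32_t rol32(uint32_t w, int s)
    {
            return (w << s) | (w >> (32 - s));
    }

    /* Fold the input in as four 32-bit words: one pass here replaces
     * sixteen byte-at-a-time iterations. */
    static void fast_mix_demo(struct fast_pool_demo *f, const uint32_t in[4])
    {
            int i;

            for (i = 0; i < 4; i++) {
                    f->pool[i] ^= in[i];
                    f->pool[i] = rol32(f->pool[i], 7) + f->pool[(i + 1) & 3];
            }
    }

    int main(void)
    {
            struct fast_pool_demo f = { { 1, 2, 3, 4 } };
            const uint32_t sample[4] = { 0xdeadbeef, 0, 0xcafef00d, 42 };

            fast_mix_demo(&f, sample);
            printf("%08x %08x %08x %08x\n",
                   f.pool[0], f.pool[1], f.pool[2], f.pool[3]);
            return 0;
    }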

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reported-by: Jörn Engel <joern@logfs.org>
2013-10-10 14:32:20 -04:00
Theodore Ts'o f5c2742c23 random: cap the rate which the /dev/urandom pool gets reseeded
In order to avoid draining the input pool of its entropy at too high
of a rate, enforce a minimum time interval between reseedings of the
urandom pool.  This is set to 60 seconds by default.
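
A standalone sketch of such a rate limit (plain wall-clock time() here,
instead of the driver's jiffies-based bookkeeping):

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    #define RESEED_MIN_INTERVAL 60   /* seconds */

    static time_t last_reseed;

    static bool reseed_allowed(void)
    {
            time_t now = time(NULL);

            if (last_reseed && now - last_reseed < RESEED_MIN_INTERVAL)
                    return false;   /* too soon: keep the current pool state */
            last_reseed = now;
            return true;
    }

    int main(void)
    {
            printf("first attempt:  %s\n", reseed_allowed() ? "reseed" : "skip");
            printf("second attempt: %s\n", reseed_allowed() ? "reseed" : "skip");
            return 0;
    }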

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-10-10 14:32:19 -04:00
Theodore Ts'o c59974aea4 random: optimize the entropy_store structure
Use smaller types to slightly shrink the size of the entropy store
structure.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-10-10 14:32:18 -04:00
Theodore Ts'o 3ef4cb2d65 random: optimize spinlock use in add_device_randomness()
The add_device_randomness() function calls mix_pool_bytes() twice for
the input pool and the non-blocking pool, for a total of four times.
By using _mix_pool_bytes() and taking the spinlock in
add_device_randomness(), we can halve the number of times we need
to take each pool's spinlock.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-10-10 14:32:17 -04:00
Theodore Ts'o 5910895f0e random: fix the tracepoint for get_random_bytes(_arch)
Fix a problem where get_random_bytes_arch() was calling the tracepoint
get_random_bytes().  So add a new tracepoint for
get_random_bytes_arch(), and make get_random_bytes() and
get_random_bytes_arch() call their correct tracepoint.

Also, add a new tracepoint for add_device_randomness()

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-10-10 14:32:16 -04:00
H. Peter Anvin 30e37ec516 random: account for entropy loss due to overwrites
When we write entropy into a non-empty pool, we currently don't
account at all for the fact that we will probabilistically overwrite
some of the entropy in that pool.  This means that unless the pool is
fully empty, we are currently *guaranteed* to overestimate the amount
of entropy in the pool!

Assuming Shannon entropy with zero correlations we end up with an
exponentially decaying value of new entropy added:

	entropy <- entropy + (pool_size - entropy) *
		(1 - exp(-add_entropy/pool_size))

However, calculations involving fractional exponentials are not
practical in the kernel, so apply a piecewise linearization:

	  For add_entropy <= pool_size/2 then

	  (1 - exp(-add_entropy/pool_size)) >= (add_entropy/pool_size)*0.7869...

	  ... so we can approximate the exponential with
	  3/4*add_entropy/pool_size and still be on the
	  safe side by adding at most pool_size/2 at a time.

In order for the loop not to take arbitrary amounts of time if a bad
ioctl is received, terminate if we are within one bit of full.  This
way the loop is guaranteed to terminate after no more than
log2(poolsize) iterations, no matter what the input value is.  The
vast majority of the time the loop will be executed exactly once.

The piecewise linearization is very conservative, approaching 3/4 of
the usable input value for small inputs, however, our entropy
estimation is pretty weak at best, especially for small values; we
have no handle on correlation; and the Shannon entropy measure (Rényi
entropy of order 1) is not the correct one to use in the first place,
but rather the correct entropy measure is the min-entropy, the Rényi
entropy of infinite order.

As such, this conservatism seems more than justified.

This does introduce fractional bit values.  I have left it to have 3
bits of fraction, so that with a pool of 2^12 bits the multiply in
credit_entropy_bits() can still fit into an int, as 2*(3+12) < 31.  It
is definitely possible to allow for more fractional accounting, but
that multiply then would have to be turned into a 32*32 -> 64 multiply.
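
A quick numerical check of the conservatism claimed above (double
precision, compile with -lm; the pool size and starting estimate are
arbitrary): for add_entropy up to pool_size/2, the 3/4 linear credit
never exceeds the exact exponential formula.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
            const double pool_size = 4096.0;   /* pool size in bits */
            const double entropy   = 1024.0;   /* current estimate  */
            double add;

            for (add = 256.0; add <= pool_size / 2; add += 256.0) {
                    double exact  = entropy + (pool_size - entropy) *
                                    (1.0 - exp(-add / pool_size));
                    double approx = entropy + (pool_size - entropy) *
                                    (0.75 * add / pool_size);

                    /* approx always comes out <= exact */
                    printf("add=%4.0f  exact=%7.1f  approx=%7.1f\n",
                           add, exact, approx);
            }
            return 0;
    }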

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: DJ Johnston <dj.johnston@intel.com>
2013-10-10 14:32:15 -04:00
H. Peter Anvin a283b5c459 random: allow fractional bits to be tracked
Allow fractional bits of entropy to be tracked by scaling the entropy
counter (fixed point).  This will be used in a subsequent patch that
accounts for entropy lost due to overwrites.

[ Modified by tytso to fix up a few missing places where the
  entropy_count wasn't properly converted from fractional bits to
  bits. ]
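
A tiny standalone illustration of the fixed-point counter (the value of
ENTROPY_SHIFT matches the 3 fractional bits described above; the rest is
a simplified stand-in, not the driver code):

    #include <stdio.h>

    #define ENTROPY_SHIFT 3   /* counter is kept in 1/8ths of a bit */

    int main(void)
    {
            int entropy_count = 0;

            entropy_count += 5 << ENTROPY_SHIFT;   /* credit 5 whole bits */
            entropy_count += 6;                    /* credit 6/8 of a bit */

            printf("raw counter %d = %d whole bits\n",
                   entropy_count, entropy_count >> ENTROPY_SHIFT);  /* 46, 5 */
            return 0;
    }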

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2013-10-10 14:32:14 -04:00
H. Peter Anvin 9ed17b70b4 random: statically compute poolbitshift, poolbytes, poolbits
Use a macro to statically compute poolbitshift (will be used in a
subsequent patch), poolbytes, and poolbits.  On virtually all
architectures, the cost of a memory load with an offset is the same as
that of a plain memory load.

It is still possible for this to generate worse code since the C
compiler doesn't know the fixed relationship between these fields, but
that is somewhat unlikely.
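
A minimal sketch of the compile-time derivation (the names and the GCC
__builtin_ctz() trick are illustrative, not the driver's exact macros):

    #include <stdio.h>

    #define POOLWORDS     128                        /* 32-bit words */
    #define POOLBYTES     (POOLWORDS * 4)            /* 512 bytes    */
    #define POOLBITS      (POOLBYTES * 8)            /* 4096 bits    */
    #define POOLBITSHIFT  (__builtin_ctz(POOLBITS))  /* log2 of a power of two */

    int main(void)
    {
            printf("bytes=%d bits=%d shift=%d\n",
                   POOLBYTES, POOLBITS, POOLBITSHIFT);
            return 0;
    }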

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2013-10-10 14:32:13 -04:00
Theodore Ts'o 85a1f77716 random: mix in architectural randomness earlier in extract_buf()
Previously, if the CPU chip had a built-in random number generator (e.g.,
RDRAND on newer x86 chips), we mixed it in at the very end of
extract_buf() using an XOR operation.

We now mix it in right after we calculate a hash across the entire
pool.  This has the advantage that any contribution of entropy from
the CPU's HWRNG will get mixed back into the pool.  In addition, it
means that if the HWRNG has any defects (either accidentally or
maliciously introduced), this will be mitigated via the non-linear
transform of the SHA-1 hash function before we hand out generated
output.

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-10-10 14:32:13 -04:00
Theodore Ts'o 61875f30da random: allow architectures to optionally define random_get_entropy()
Allow architectures which have a disabled get_cycles() function to
provide a random_get_entropy() function which provides a fine-grained,
rapidly changing counter that can be used by the /dev/random driver.

For example, an architecture might have a rapidly changing register
used to control random TLB cache eviction, or DRAM refresh that
doesn't meet the requirements of get_cycles(), but which is good
enough for the needs of the random driver.
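
The override pattern looks roughly like the following (a standalone
sketch with a stub get_cycles(); in the kernel the fallback lives in a
shared header that an architecture can pre-empt with its own definition):

    #include <stdio.h>

    typedef unsigned long cycles_t;

    static cycles_t get_cycles(void)
    {
            return 0;   /* stands in for an arch with no usable cycle counter */
    }

    /* Fallback: used only when the architecture did not define its own. */
    #ifndef random_get_entropy
    #define random_get_entropy()    get_cycles()
    #endif

    int main(void)
    {
            printf("entropy sample: %lu\n",
                   (unsigned long)random_get_entropy());
            return 0;
    }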

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
2013-10-10 14:30:53 -04:00
Theodore Ts'o 47d06e532e random: run random_int_secret_init() run after all late_initcalls
Some platforms (e.g., ARM) initialize their clocks as late_initcalls
for some unknown reason.  So make sure random_int_secret_init() is run
after all of the late_initcalls have run.

Cc: stable@vger.kernel.org
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
2013-09-23 06:35:06 -04:00
Martin Schwidefsky 0244ad004a Remove GENERIC_HARDIRQ config option
After the last architecture switched to generic hard irqs the config
options HAVE_GENERIC_HARDIRQS & GENERIC_HARDIRQS and the related code
for !CONFIG_GENERIC_HARDIRQS can be removed.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-09-13 15:09:52 +02:00
Joe Perches a151427ed0 char: Convert use of typedef ctl_table to struct ctl_table
This typedef is unnecessary and should just be removed.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-06-17 16:43:08 -07:00
Jiri Kosina 10b3a32d29 random: fix accounting race condition with lockless irq entropy_count update
Commit 902c098a36 ("random: use lockless techniques in the interrupt
path") turned IRQ path from being spinlock protected into lockless
cmpxchg-retry update.

That commit removed r->lock serialization between crediting entropy bits
from IRQ context and accounting when extracting entropy on userspace
read path, but didn't turn the r->entropy_count reads/updates in
account() to use cmpxchg as well.

It has been observed, that under certain circumstances this leads to
read() on /dev/urandom to return 0 (EOF), as r->entropy_count gets
corrupted and becomes negative, which in turn results in propagating 0
all the way from account() to the actual read() call.

Convert the accounting code to be the proper lockless counterpart of
what has been partially done by 902c098a36.
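
A standalone sketch of the cmpxchg-retry pattern (C11 atomics in place
of the kernel's cmpxchg(); the numbers are arbitrary): reread the
counter and retry until the compare-and-swap succeeds, so a concurrent
update cannot be lost or drive the count negative.

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic int entropy_count = 4096;

    /* Consume up to nbits of entropy without ever going negative. */
    static int consume_entropy(int nbits)
    {
            int old, new;

            old = atomic_load(&entropy_count);
            do {
                    new = old - nbits;
                    if (new < 0)
                            new = 0;   /* grant only what is actually there */
            } while (!atomic_compare_exchange_weak(&entropy_count, &old, new));

            return old - new;          /* bits actually granted */
    }

    int main(void)
    {
            printf("granted %d bits, %d left\n",
                   consume_entropy(1000), atomic_load(&entropy_count));
            return 0;
    }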

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Greg KH <greg@kroah.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-24 16:22:52 -07:00
Jarod Wilson 1e7e2e05c1 drivers/char/random.c: fix priming of last_data
Commit ec8f02da9e ("random: prime last_data value per fips
requirements") added priming of last_data per fips requirements.

Unfortunately, it did so in a way that can lead to multiple threads all
incrementing nbytes, but only one actually doing anything with the extra
data, which leads to some fun random corruption and panics.

The fix is to simply do everything needed to prime last_data in a single
shot, so there's no window for multiple cpus to increment nbytes -- in
fact, we won't even increment or decrement nbytes anymore, we'll just
extract the needed EXTRACT_SIZE one time per pool and then carry on with
the normal routine.

All these changes have been tested across multiple hosts and
architectures where panics were previously encountered.  The code changes
are strictly limited to areas only touched when booted in fips
mode.

This change should also go into 3.8-stable, to make the myriads of fips
users on 3.8.x happy.

Signed-off-by: Jarod Wilson <jarod@redhat.com>
Tested-by: Jan Stancek <jstancek@redhat.com>
Tested-by: Jan Stodola <jstodola@redhat.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Matt Mackall <mpm@selenic.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-05-24 16:22:52 -07:00
Andy Shevchenko 16c7fa0582 lib/string_helpers: introduce generic string_unescape
There are several places in the kernel where modules unescape input to
convert C-style escape sequences into byte codes.

The patch provides a generic implementation of this approach.  Test cases
are also included in the patch.
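
A tiny userspace illustration of the idea (this is not the kernel helper
introduced here, just a demonstration of collapsing a couple of escape
sequences in place):

    #include <stdio.h>

    static void unescape_demo(char *s)
    {
            char *out = s;

            while (*s) {
                    if (s[0] == '\\' && s[1] == 'n') {
                            *out++ = '\n';
                            s += 2;
                    } else if (s[0] == '\\' && s[1] == 't') {
                            *out++ = '\t';
                            s += 2;
                    } else if (s[0] == '\\' && s[1] == '\\') {
                            *out++ = '\\';
                            s += 2;
                    } else {
                            *out++ = *s++;
                    }
            }
            *out = '\0';
    }

    int main(void)
    {
            char buf[] = "line one\\nline two\\tend";

            unescape_demo(buf);
            printf("%s\n", buf);
            return 0;
    }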

[akpm@linux-foundation.org: clarify comment]
[akpm@linux-foundation.org: export get_random_int() to modules]
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: William Hubbs <w.d.hubbs@gmail.com>
Cc: Chris Brannon <chris@the-brannons.com>
Cc: Kirk Reiser <kirk@braille.uwo.ca>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-04-30 17:04:03 -07:00