Is this assumption correct, that adding a magnetometer, accelerometer, simple GPS receiver, etc., to a motherboard would improve its entropy gathering? Or is there a mathematical/cryptographic rule that makes the addition of such sensors useless?
Do smartphones have better entropy gathering abilities? It seems like phones would be able to seed an RNG based on input from a variety of sensors that would all be very different, even between phones in the same room. Looking at a GPS Android app like Satstat (2), it feels like there's a huge amount of variability to draw from.
If such sensors would add better entropy, would it really cost that much to add them to PC motherboards?
----
(1) https://news.ycombinator.com/item?id=30848973
(2) https://mvglasow.gitlab.io/satstat/ & https://f-droid.org/en/packages/com.vonglasow.michael.satsta...
Linux is aware of RDSEED and uses it to provide additional randomness when available. You do need to trust the implementation to be free from backdoors and bugs - some CPUs are known to be buggy. [2]
Randomness seeding issues largely do not concern desktop PCs or smartphones (although you can easily catch early-boot programs like systemd reading randomness before the pool has been fully seeded) [3].
It is a much bigger issue on either small embedded devices or VMs, both of which may have very few peripherals to gather entropy from. They can be provided randomness through dedicated hardware support, or from the host, and they probably should be, but that still leaves many real-world systems currently running Linux out in the cold. This is not just a theoretical problem, as has been shown by looking at indicators like RSA keys with colliding primes, which should never happen when generated with good RNG. [4]
[1] https://en.wikipedia.org/wiki/RDRAND
[2] https://github.com/systemd/systemd/issues/18184
[3] https://github.com/systemd/systemd/issues/4167
[4] https://freedom-to-tinker.com/2012/02/15/new-research-theres...
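As a practical note on catching the "not yet seeded" state mentioned above, a userland program can probe the kernel pool with getrandom(2) in non-blocking mode. A minimal Python sketch (Linux-only; `os.getrandom` requires Python 3.6+):

```python
import os

def urandom_ready() -> bool:
    """Probe whether the kernel CSPRNG has been fully seeded.

    getrandom(2) with GRND_NONBLOCK fails with EAGAIN while the
    entropy pool is still uninitialized (early boot), and returns
    bytes immediately once seeding has completed.
    """
    try:
        os.getrandom(1, os.GRND_NONBLOCK)  # Linux-only syscall wrapper
        return True
    except BlockingIOError:
        return False

if __name__ == "__main__":
    print("kernel CSPRNG seeded:", urandom_ready())
```

On any system that has been up for more than a few seconds this returns True; the interesting case is early boot, which is exactly when the systemd issues above were observed.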
While there are platforms with better and worse hardware sources of unpredictable bits, the problem with Linux /dev/random isn't so much a hardware issue, but rather a software one. The fundamental problem Linux tries to solve isn't that hard, as you can see from the fact that so many other popular platforms running on similar hardware have neatly solved it.
The problem with the LRNG is that it has been architecturally incoherent for a very long time (entropy estimation, urandom vs. random, lack of clarity about seeding and initialization status, behavior during bootup). As a result, an ecosystem of software has grown roots around the design (and bugs) of the current LRNG. Major changes to the behavior of the LRNG break "bug compatibility", and, because the LRNG is one of the core cryptographic facilities in the kernel, this is an instance where you really, really don't want to break userland.
The basic fact of kernel random number generation is this: once you've properly seeded an RNG, your acute "entropy gathering" problem is over. Continuous access to high volumes of high-entropy bits is nice to have, but the kernel gains its ability to satisfy gigabytes of requests for random numbers from the same source that modern cryptography gains its ability to satisfy gigabytes of requests for ciphertext with a 128-bit key.
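The point about seeding can be illustrated with a toy expander: stretch one 32-byte seed into arbitrarily many output bytes with a hash in counter mode. This is a hypothetical sketch of the principle only; the actual Linux CSPRNG uses ChaCha20 with periodic rekeying and backtrack resistance, which this sketch omits:

```python
import hashlib

def expand_seed(seed: bytes, n: int) -> bytes:
    """Deterministically expand one 32-byte seed into n output bytes.

    Illustrative only: hash the seed plus an incrementing counter
    to produce successive 32-byte blocks. Knowing the seed reveals
    the whole stream, which is exactly why seeding is the hard part.
    """
    out = bytearray()
    counter = 0
    while len(out) < n:
        out.extend(hashlib.sha256(seed + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:n])

# One well-seeded 256-bit secret yields a megabyte (or gigabytes)
# of output; no further "entropy gathering" is required for volume.
stream = expand_seed(b"\x00" * 32, 1_000_000)
```

The security rests entirely on the unpredictability of the seed, not on a continuous supply of fresh hardware entropy.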
People looking to platform hardware (or who fixate on the intricacies of threading the LRNG into guest VMs) are mostly looking in the wrong place for problems to solve. The big issue today is that the LRNG is still pretty incoherent, but nobody really knows what would break if it were designed more carefully.
[1] https://en.wikipedia.org/wiki/RDRAND
[2] https://www.electronicdesign.com/resources/article/21796238/...
At boot time, on a server sitting in a rack beside thousands of others ... how are these going to help at all? They aren't moving, and the RF/energy environment around them should be steady-state or well within characterizable bounds of noise.
"Random enough" is a metaphysical question when you get into it. If an RTLSDR stick and a site customized munger script can't provide enough entropy for the entire data center you've fallen into a Purity spiral and will never be happy, anyway.
For hyper important entropy, humans must invest in a macroscopic and very slow spectacle - a publicly prepared experiment broadcast live to a large audience. [3]
0 - https://analog.intgckts.com/noise/thermal-noise-of-a-resisto...
1 - https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat
I don't know if it's still maintained or not, but the developer proposed a public entropy pool [3], which looks interesting. Full disclosure: I haven't looked at it closely enough to understand how the trust model works.
[1]: https://shop.fsf.org/storage-devices/neug-usb-true-random-nu...
[2]: https://www.gniibe.org/memo/development/gnuk/rng/neug.html
[3]: https://www.gniibe.org/memo/development/kun9be4/idea.html
The problem is knowing when you've collected enough to (re)seed your random bit generator.
Entropy sources usually have failure modes that result in predictable data in the output. The entropy source has to work even when someone is using an arc welder nearby or puts popcorn in the microwave.
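One common defense against that failure mode is an online health test on the raw samples. A simplified sketch of the repetition count test described in NIST SP 800-90B, which catches a source that has got stuck on one value (the cutoff here is illustrative; a real cutoff is derived from the source's claimed min-entropy and a target false-positive rate):

```python
def repetition_count_failed(samples, cutoff: int = 32) -> bool:
    """Flag an entropy source that is emitting the same raw value.

    Simplified repetition count health test: if any sample repeats
    `cutoff` times in a row, assume the source has failed (e.g. a
    sensor locked onto a constant reading) and stop crediting it.
    """
    run, prev = 0, object()  # sentinel never equals a real sample
    for s in samples:
        run = run + 1 if s == prev else 1
        prev = s
        if run >= cutoff:
            return True
    return False
```

A stuck temperature sensor producing the same reading forever trips this immediately, while even a mediocre but live source passes; it is a sanity check on source health, not a proof of randomness.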
Assuming kernel memory stays secret, collecting entropy should only be a problem during boot.
But distro maintainers aggressively optimize boot times, so there's little time to spend collecting entropy at boot. Systems usually save a bit of entropy on the hard drive across boots, but that has its own issues. Unfortunately, first boot is when the SSH keys are generated, so that's kind of the worst-case scenario.
(See https://news.ycombinator.com/item?id=2947936 from 2011 but the original article link seems to have gone)
Generally speaking, a lot of proposed sources of "randomness" may not be as random as people think. And off-the-shelf hardware may be compromised/influenced by governments (see the NSA-NIST scandal).
For safe communication (one-time pad), you need plenty of truly random numbers, not pseudo-random ones.
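For concreteness, a one-time pad is just an XOR with a pad that is truly random, at least as long as the message, and never reused. A minimal sketch (`os.urandom` stands in here for the true-RNG pad the comment calls for):

```python
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR data with a one-time pad; encryption and decryption
    are the same operation. Security requires the pad to be truly
    random, at least as long as the message, and used only once."""
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the message")
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"attack at dawn"
pad = os.urandom(len(msg))       # in practice: bits from a true RNG
ct = otp_xor(msg, pad)
assert otp_xor(ct, pad) == msg   # same pad decrypts
```

If the pad bits are merely pseudo-random, the scheme degrades to a stream cipher whose security rests on the generator, which is exactly the dependency the comment is trying to avoid.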
It would therefore be good to have an open-source entropy source as a USB device, e.g. based on the radioactive decay of some harmless but unstable isotope. There are companies offering such devices, but again I would not trust any of them and would prefer that people build their own from open hardware specs (it is likely that these vendors are infiltrated by intelligence agencies the same way the Swiss-based Crypto AG was - https://en.wikipedia.org/wiki/Crypto_AG).
But just because there are good entropy sources on physical hardware doesn't mean the overwhelming majority of system instances are inheriting those sources properly (and that's not just a virtual machine/cloud problem).
Six years ago I wrote about haveged, truerand, and twuewand[0] (specifically in the context of improving the entropy pool for a heavy Java app).
There are also dedicated USB dongles ... but again: they don't help much (if at all) for hosted instances (containers, VMs, etc)
--------------
[0]https://antipaucity.com/2016/04/01/improve-your-entropy-pool...
> Nehemiah stepping 3 and higher offers an electrical noise-based random number generator (RNG) that produces good random numbers for different purposes.
[1] https://en.m.wikipedia.org/wiki/VIA_PadLock [2] http://www.logix.cz/michal/doc/article.xp/padlock-en
Related: does software detect if a sensor is broken or a poor source of entropy? For example, if it broke and locked itself onto the same constant temperature reading?
Extra sensors cost extra money, and people are not willing to pay for something with no visible benefit to them.
Excuse me, but when did pinging multiple satellites thousands of kilometres away (in orbit) become simple?