I started this week attempting to write a script to download
random bytes from randomserver.dyndns.org. The issue I was having was that a
web page presented a GUI for downloading a given number of bytes but the
website, which streams true random numbers from a hardware RNG connected to the
server, did not have additional resources for automating the streaming. Adam
was able to help me out by pointing me to a script he wrote for a website with
a similar problem. He used the twill extension for Python which automates HTML
form filling and submission, which is exactly what I needed. I modified his
script for the website I was working on, but when I tried to test it, I found
that the server was down. Since completing the script, the website has only
been active intermittently, so it has been difficult to debug it properly. I
will keep checking the site daily to see whether it is back up and stable so
that I can finish the script.
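The form-automation idea can be sketched with just the standard library, without twill. The endpoint path and form field names below are pure assumptions (the real form is unreachable while the server is down), but the shape of the script is the same: encode the fields the GUI would submit, POST them, and read back the streamed bytes.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Hypothetical endpoint and field names -- the actual form may differ.
RANDOM_URL = "http://randomserver.dyndns.org/cgi-bin/random"

def build_request(num_bytes: int) -> Request:
    # Encode the form fields the page's GUI would submit as a POST body.
    data = urlencode({"numbytes": num_bytes, "format": "bin"}).encode("ascii")
    return Request(RANDOM_URL, data=data)

def fetch_random_bytes(num_bytes: int) -> bytes:
    # Submit the form and read the streamed random bytes back.
    # (This will raise URLError while the server is down.)
    with urlopen(build_request(num_bytes), timeout=10) as resp:
        return resp.read()
```

Keeping the request construction separate from the network call makes the script testable even while the server is offline.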
Meanwhile, Amir sent me two more papers on sources of
entropy that have been recently published. The paper on boot-time entropy in
particular is extremely relevant to our project and provided a lot of insight
into providing sufficient randomness at all times through the Linux kernel.
Welcome to the Entropics: Boot-time Entropy in Embedded Devices
This paper starts by making the claim that a device cannot
be successfully booted until it is able to provide high-entropy randomness to
applications. It then introduces three methods for providing this high entropy
before the traditional blocking pool for /dev/random is filled. In the first
method, the number of cycles required to run each function in start_kernel is
recorded. The advantage of this method is that it provides high entropy with
little overhead (about 0.00016 seconds added to boot time) and can run with a
single kernel thread and interrupts disabled. While not every source of the
non-determinism in execution time could be identified, the paper noted that
clock domain crossing and variation in DRAM access latencies among devices
were major factors. The second method creates entropy by measuring the
decay of DRAM storage in a live system when refresh is disabled. The decay rate
of a given bit is affected by manufacturing variations, temperature, the value
of the bit, and other factors. This method is tricky to deploy since it relies
on memory controller operations that differ from one device under test (DUT)
to another. The last
technique measures the amount of time required for a PLL to lock onto the new
output frequency. This time varies due to power supply stability, accuracy and
jitter of the source oscillator, and manufacturing process differences. Even
though it provides the highest bitrate of the three methods, it relies on a
strong understanding of the given SoC.
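The paper's first technique can be illustrated in miniature: time a sequence of function calls and condition the jittery low-order bits of the measurements into a seed. This Python sketch only mirrors the idea; the actual implementation records cycle counts around each function called from start_kernel, not perf_counter_ns deltas in user space.

```python
import hashlib
import time

def timing_entropy(funcs, rounds=32):
    # Record the execution time of each function; the low-order bits of the
    # timings vary non-deterministically (the paper attributes this largely to
    # clock domain crossings and DRAM access latency variation).
    samples = bytearray()
    for _ in range(rounds):
        for f in funcs:
            t0 = time.perf_counter_ns()
            f()
            dt = time.perf_counter_ns() - t0
            samples += (dt & 0xFFFFFFFF).to_bytes(4, "little")
    # Condition the raw, biased samples into a 32-byte seed.
    return hashlib.sha256(samples).digest()

seed = timing_entropy([lambda: sum(range(100)),
                       lambda: sorted(range(50, 0, -1))])
```

As in the paper, the raw timings are not uniformly random on their own; the hash step stands in for a proper conditioning function.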
Recommendation for the Entropy Sources Used for Random Bit Generation
This document specifies the design and requirements for a
NIST-approved entropy source. The entropy originates from a noise source, which
then may need to be digitized to produce random binary output. Optional
conditioning functions may then be applied to reduce bias that can arise from
a number of different factors; when present, they provide the overall output
of the entropy source. Health testing functionality is required
to detect malfunctions in the noise source or bias beyond what the
conditioning functions can remedy. Health tests include startup tests on
all components, continuous tests on the noise source that run as long as the
system is live, and on-demand tests that require more time and resources than
continuous tests. On the simpler end, the Repetition Count Test and Adaptive
Proportion Test perform basic checks for abnormal numbers of occurrences of a
given sample value within a fixed-length window of the bitstring.
Extensive statistical analysis is performed to verify
that the estimated min-entropy provided by the source is sufficient.
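The Repetition Count Test is simple enough to sketch directly. Per SP 800-90B, a run of C = 1 + ceil(20/H) identical consecutive samples, where H is the assessed min-entropy per sample, should occur with probability at most 2^-20 for a healthy source, so reaching that cutoff signals a likely malfunction:

```python
import math

def repetition_count_test(samples, h_min):
    # Cutoff per SP 800-90B section 4.4.1: C = 1 + ceil(20 / H), chosen so a
    # healthy source hits it with probability at most 2**-20.
    cutoff = 1 + math.ceil(20 / h_min)
    run, prev = 0, None
    for s in samples:
        run = run + 1 if s == prev else 1
        prev = s
        if run >= cutoff:
            return False  # failure: noise source is likely stuck
    return True
```

In a real entropy source this runs continuously over the raw noise samples, alongside the Adaptive Proportion Test.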