Yes, I understand the catastrophic reseeding; I've read that paper and
a few of his other papers on PRNGs.
> I tried to take a bit more of a moderate position between relying
> solely on cryptographic randomness and a pure absolute randomness model.
> So we use large pools for mixing, and a catastrophic reseeding policy.
>
> From a pure theory point of view, I can see where this might be quite
> bothersome. On the other hand, practically, I think what we're doing
> is justifiable, and not really a security problem.
That's certainly a reasonable approach (to the extent that all the
attacks we know of are purely theoretical), but a) it's not the
impression one gets from the docs and b) I think we can easily do better.
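
For reference, the catastrophic-reseed idea we're both pointing at
boils down to something like the following rough userspace sketch. The
pool size, batch threshold, and the extract/mix stubs are all made up
for illustration; this is nothing like the driver's actual code.

/*
 * Rough sketch of the catastrophic-reseed policy, made-up sizes.
 * The point: never dribble small amounts of fresh entropy into the
 * output pool.  Let it accumulate in the input pool and move it in
 * one large batch, so an attacker who already knows the output pool's
 * state can't verify guesses about each small input separately.
 */
#include <string.h>

#define POOL_BYTES       512   /* made-up pool size */
#define RESEED_THRESHOLD 128   /* bits; made-up batch size */

struct pool {
    unsigned char data[POOL_BYTES];
    int entropy_count;         /* estimated entropy, in bits */
};

/* Stand-in primitives: the real driver extracts via a hash and mixes
 * with a twisted-GFSR function; that detail doesn't matter here. */
static void extract(struct pool *p, unsigned char *out, int nbytes)
{
    memcpy(out, p->data, nbytes);
}

static void mix_in(struct pool *p, const unsigned char *in, int nbytes)
{
    int i;

    for (i = 0; i < nbytes; i++)
        p->data[i % POOL_BYTES] ^= in[i];
}

static void maybe_reseed(struct pool *input, struct pool *output)
{
    unsigned char buf[RESEED_THRESHOLD / 8];

    /* Do nothing until a full batch has accumulated. */
    if (input->entropy_count < RESEED_THRESHOLD)
        return;

    /* Catastrophic reseed: move the whole batch at once. */
    extract(input, buf, sizeof(buf));
    mix_in(output, buf, sizeof(buf));

    input->entropy_count -= RESEED_THRESHOLD;
    output->entropy_count += RESEED_THRESHOLD;
    memset(buf, 0, sizeof(buf));
}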
> That being said, if you really want to use your patch, please do it
> differently. In order to avoid the iterative guessing attack
> described by Bruce Schneier, it is imperative that you extract
> r->poolinfo.poolwords - r->entropy_count/32 words of randomness from
> the primary pool, and mix it into the secondary.
Ah, I thought we were relying on the extract_count clause of the xfer
function to handle this; the first clause just seemed buggy. I'm not
quite convinced that it isn't sufficient. Do you see a theoretical
problem with that?
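
For concreteness, here's how I read the sizing above, taking r to be
the secondary (destination) pool. Sketch only, with the field names
flattened out:

/* Sketch of the transfer sizing, not the actual xfer code.  'sec' is
 * the secondary (destination) pool; entropy_count is in bits and
 * poolwords in 32-bit words, so entropy_count/32 is the number of
 * words already covered by credited entropy.  Extracting the
 * difference from the primary refills the uncovered part in one go. */

struct pool {
    int poolwords;      /* pool size in 32-bit words */
    int entropy_count;  /* credited entropy, in bits */
};

static int words_to_transfer(const struct pool *sec)
{
    int words = sec->poolwords - sec->entropy_count / 32;

    if (words < 0)                  /* shouldn't happen, but clamp */
        words = 0;
    if (words > sec->poolwords)
        words = sec->poolwords;
    return words;
}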
It certainly doesn't hurt to transfer a larger chunk of data from the
primary pool (and I'll update my patch to reflect this), but if the
primary's entropy count says there are only 8 bits of entropy there, we
should only credit 8 bits to the secondary pool. With the current
behavior, you
can do 'cat /dev/random | hexdump', wait for it to block, hit a key or
two, and have it spew out another K of data. I think this goes against
everyone's expectations of how this should work.
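
In other words, the credit side should be capped by what the primary
could honestly give up. Something like this (illustrative only, not
the patch itself):

/* However many bytes we physically mix from the primary into the
 * secondary, the entropy credit given to the secondary must not
 * exceed what the primary actually had. */
static int credit_for_transfer(int bytes_moved, int primary_entropy_bits)
{
    int credit = bytes_moved * 8;

    if (credit > primary_entropy_bits)
        credit = primary_entropy_bits;  /* e.g. 8 bits there -> 8 bits credited */
    return credit;
}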
> P.S. /dev/urandom should probably also be changed to use an entirely
> separate pool, which then periodically pulls a small amount of entropy
> from the primary pool as necessary. That would make /dev/urandom
> slightly more dependent on the strength of SHA, while causing it to
> not draw down as heavily on the entropy stored in /dev/random, which
> would be a good thing.
Indeed - I intend to take advantage of the multiple pool flexibility
in the current code. I'll have a third pool that's allowed to draw
from the primary whenever the primary is more than half full. Assuming
the input entropy rate exceeds the output entropy rate, this will make
/dev/urandom exactly as strong as /dev/random, while not starving
/dev/random when there's a shortage.
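
The refill policy I have in mind looks roughly like this. Sketch only:
the helper names and the 64-bit quantum are placeholders, not anything
in the current driver.

/* Refill policy for a separate urandom pool: only pull from the
 * primary while the primary is more than half full, so /dev/urandom
 * never starves /dev/random during a shortage. */

struct pool {
    int poolwords;      /* pool size in 32-bit words */
    int entropy_count;  /* credited entropy, in bits */
};

static int more_than_half_full(const struct pool *p)
{
    /* 32 bits of entropy per word when completely full. */
    return p->entropy_count > (p->poolwords * 32) / 2;
}

static void refill_urandom_pool(struct pool *primary, struct pool *urandom)
{
    int bits = 64;      /* small, made-up reseed quantum */

    if (!more_than_half_full(primary) || primary->entropy_count < bits)
        return;         /* leave what's there for /dev/random */

    /* The real transfer would go through the usual extract/mix path;
     * only the accounting is shown here. */
    primary->entropy_count -= bits;
    urandom->entropy_count += bits;
}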
-- "Love the dolphins," she advised him. "Write by W.A.S.T.E.." - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/