Indeed. However, we have to be careful about how we do this. See
Schneier et al.'s paper on catastrophic reseeding. It's better to
batch up entropy transfers than to trickle them in.
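
Roughly what I mean by batching, as a minimal sketch rather than anything
resembling the actual random.c code; the names (staged_bits,
RESEED_BATCH_BITS, xfer_to_secondary) and the threshold are made up for
illustration:

/*
 * Entropy is staged until a full batch has accumulated, then transferred
 * all at once. An attacker who has compromised the receiving pool can't
 * track it across many tiny, guessable top-ups, which is the point of
 * catastrophic reseeding.
 */
#include <stdio.h>

#define RESEED_BATCH_BITS 256		/* assumed batch threshold */

static unsigned int staged_bits;	/* entropy waiting in the staging pool */

/* Stand-in for the real pool-to-pool transfer. */
static void xfer_to_secondary(unsigned int bits)
{
	printf("reseeding secondary pool with %u bits\n", bits);
}

/* Called as entropy arrives from the samplers (keyboard, disk, net, ...). */
static void credit_entropy(unsigned int bits)
{
	staged_bits += bits;

	/* Trickling would transfer on every call; batching waits for a full reseed. */
	if (staged_bits >= RESEED_BATCH_BITS) {
		xfer_to_secondary(staged_bits);
		staged_bits = 0;
	}
}

int main(void)
{
	int i;

	for (i = 0; i < 40; i++)
		credit_entropy(12);	/* small trickles accumulate quietly */
	return 0;
}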
> Are you thinking of (semi) decoupling the two pools?
I'm thinking of changing the urandom read path to avoid pulling entropy
from the primary pool (via xfer) when that pool falls below a given low
watermark. The important part is to prevent starvation of /dev/random;
it doesn't have to be fair.
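
Something along these lines, as a sketch only and not a patch against
random.c; the helper name, struct layout and watermark value are all
placeholders:

#define URANDOM_LOW_WATERMARK 128	/* bits reserved for /dev/random */

struct pool {
	unsigned int entropy_count;	/* bits currently credited */
};

/* How many bits may a urandom read pull from the primary pool? */
static unsigned int urandom_xfer_budget(const struct pool *primary,
					unsigned int wanted)
{
	unsigned int spare;

	if (primary->entropy_count <= URANDOM_LOW_WATERMARK)
		return 0;		/* leave the reserve for /dev/random */

	/* Take at most what sits above the watermark; fairness is not the goal. */
	spare = primary->entropy_count - URANDOM_LOW_WATERMARK;
	return wanted < spare ? wanted : spare;
}

urandom stays non-blocking either way: when the budget comes back as zero
it keeps reading from the secondary pool without a fresh xfer, it just
can't drain the primary pool out from under blocking /dev/random readers.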
> I sort of agree with you that we should make random "stricter" and push
> some of the sillier uses onto urandom... but only to a degree or else
> /dev/random grows worthless. Right now, on my workstation with plenty
> of user input and my netdev-random patches, I have plenty of entropy and
> can safely use random for everything... and I like that.
My patches should provide sufficient entropy for any workstation use,
with or without network sampling. It's only the headless case that's
problematic; see my compromise patch with trust_pct.
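
I won't re-post that patch here, but very roughly the knob could look
like the sketch below. Illustrative only; the function name and the
default value are mine, not what's in the actual patch:

static unsigned int trust_pct = 50;	/* assumed default: credit 50% */

/* Credit only a fraction of the bits sampled from less-trusted sources. */
static unsigned int credited_bits(unsigned int sampled_bits, int trusted)
{
	if (trusted)
		return sampled_bits;	/* keyboard/mouse/disk: full credit */
	return sampled_bits * trust_pct / 100;	/* network etc.: discounted */
}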
-- "Love the dolphins," she advised him. "Write by W.A.S.T.E.." - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/