Okay, the standards are a bit more stringent for hash functions: you
basically want to minimize the total number of collisions. That said,
standard statistical tests are useful for first-pass discrimination.
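
For concreteness, a minimal user-space sketch of that kind of first-pass
check might look like the below. This is not kernel code; the candidate
hash and the key stream are placeholders for whatever is actually under
test. It reports total collisions (keys minus occupied buckets) and a
chi-squared statistic of bucket occupancy against a uniform expectation.

	#include <stdio.h>
	#include <stdint.h>

	#define NBUCKETS 1024
	#define NKEYS    (NBUCKETS * 8)

	/* Placeholder candidate hash; substitute the function under test. */
	static unsigned int candidate_hash(uint64_t key)
	{
		key ^= key >> 33;
		key *= 0x9e3779b97f4a7c15ULL;
		return (unsigned int)(key >> 32) & (NBUCKETS - 1);
	}

	int main(void)
	{
		static unsigned long counts[NBUCKETS];
		unsigned long occupied = 0, collisions;
		double expected = (double)NKEYS / NBUCKETS, chisq = 0.0;
		int i;

		/* Page-aligned pointer-like keys; a classic adversarial input. */
		for (i = 0; i < NKEYS; i++)
			counts[candidate_hash((uint64_t)i * 4096)]++;

		for (i = 0; i < NBUCKETS; i++) {
			double d = counts[i] - expected;

			if (counts[i])
				occupied++;
			chisq += d * d / expected;
		}
		collisions = NKEYS - occupied;

		printf("total collisions: %lu\n", collisions);
		printf("chi-squared vs. uniform (%d dof): %f\n",
		       NBUCKETS - 1, chisq);
		return 0;
	}

A badly skewed distribution shows up immediately in either number; only
candidates that pass this kind of filter are worth measuring in situ.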
This looks basically okay, but if it really is only break-even with
respect to cache pressure, the need for a different data structure in
the eventual future is clear. I'd need a baseline (pre-hlist) dcache
snapshot to compare against to get a better idea.
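
For reference, the relevant list heads look roughly like this (as in
include/linux/list.h); the point of the hlist conversion is that each
hash bucket head shrinks from two pointers to one, so the table itself
takes half the cache footprint, while the back-pointer cost moves to the
per-node side:

	struct list_head {
		struct list_head *next, *prev;
	};

	struct hlist_head {
		struct hlist_node *first;	/* one pointer per bucket */
	};

	struct hlist_node {
		struct hlist_node *next;
		struct hlist_node **pprev;	/* paid per node, not per bucket */
	};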
-- wli