> 1. potentially long hash chains are walked with the page cache
> lock held for the entire duration of the operation
This is not the case.
> 2. multiple cache lines are touched while holding the page cache
> lock
This is not the case either: the problem with the pagecache_lock was the
lock's cache line bouncing between CPUs, not processes waiting on it.
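To make the contrast concrete, here is a minimal userspace sketch
(pthreads standing in for kernel spinlocks; names and sizes are made up,
this is not the patch's code): with one global lock every CPU dirties
the same lock word on every lookup, so that single cache line bounces,
while per-bucket locks spread those writes over many lines.

#include <pthread.h>
#include <stdio.h>

#define NR_BUCKETS 1024

struct bucket {
        pthread_spinlock_t lock;        /* per-bucket lock: contention spreads out */
        void *chain;                    /* hash chain head (placeholder) */
};

static pthread_spinlock_t global_lock;  /* single lock: one hot cache line */
static struct bucket buckets[NR_BUCKETS];

/* every CPU writes the same lock word -> its cache line bounces */
static void *lookup_global(unsigned long hash)
{
        void *page;

        pthread_spin_lock(&global_lock);
        page = buckets[hash % NR_BUCKETS].chain;
        pthread_spin_unlock(&global_lock);
        return page;
}

/* only the line holding this bucket's lock moves between CPUs */
static void *lookup_per_bucket(unsigned long hash)
{
        struct bucket *b = &buckets[hash % NR_BUCKETS];
        void *page;

        pthread_spin_lock(&b->lock);
        page = b->chain;
        pthread_spin_unlock(&b->lock);
        return page;
}

int main(void)
{
        unsigned long i;

        pthread_spin_init(&global_lock, PTHREAD_PROCESS_PRIVATE);
        for (i = 0; i < NR_BUCKETS; i++)
                pthread_spin_init(&buckets[i].lock, PTHREAD_PROCESS_PRIVATE);

        printf("%p %p\n", lookup_global(123), lookup_per_bucket(123));
        return 0;
}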
> 3. sequential lookups involve reacquiring the page cache lock
The granularity of the pagecache directly determines the number of
accesses to the pagecache locking mechanism. Neither patch solves this:
the number of lock operations does not change - the lock data
structures are just spread out more.
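A quick sketch of that point (userspace, toy hash function, numbers
purely illustrative): a sequential read of n pages does n lock
round-trips under either scheme; only the number of distinct lock
words touched changes.

#include <stdio.h>
#include <string.h>

#define NR_BUCKETS 1024UL

/* toy hash, standing in for the real page hash function */
static unsigned long hashfn(unsigned long index)
{
        return (index * 2654435761UL) % NR_BUCKETS;
}

int main(void)
{
        unsigned char touched[NR_BUCKETS];
        unsigned long nr_pages = 256;   /* a 1MB sequential read, 4K pages */
        unsigned long i, lock_ops = 0, distinct = 0;

        memset(touched, 0, sizeof(touched));
        for (i = 0; i < nr_pages; i++) {
                unsigned long h = hashfn(i);

                lock_ops++;             /* one lock/unlock per lookup, either scheme */
                if (!touched[h]) {
                        touched[h] = 1;
                        distinct++;     /* a different lock word under per-bucket locking */
                }
        }

        printf("lock operations (global or per-bucket): %lu\n", lock_ops);
        printf("distinct lock words under per-bucket:   %lu (vs 1 global)\n", distinct);
        return 0;
}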
I think it's a separate (and just as interesting) issue to decrease the
granularity of the pagecache, i.e. to cache in units larger than a
single page: this not only decreases locking (and other iteration)
costs, it also decreases the size of the hash (or whatever other data
structure is used).
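Back-of-the-envelope illustration (the chunk size and I/O size are
arbitrary assumptions, just to show the scaling):

#include <stdio.h>

#define PAGE_SIZE       4096UL
#define CHUNK_PAGES     16UL            /* assumed 64K caching unit */

int main(void)
{
        unsigned long io_bytes = 1UL << 20;             /* a 1MB sequential read */
        unsigned long pages    = io_bytes / PAGE_SIZE;
        unsigned long chunks   = pages / CHUNK_PAGES;

        printf("per-page  pagecache: %lu lookups (and lock round-trips)\n", pages);
        printf("per-chunk pagecache: %lu lookups (and lock round-trips)\n", chunks);
        printf("hash entries needed shrink by the same factor: %lux\n",
               pages / chunks);
        return 0;
}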
> 4. the page cache hash is too large, virtually assuring that
> lookups will cause a cache miss
This does not appear to be the case (see my other replies). Even if the
hash table is big, and assuming the worst case (we miss on every hash
table access), mem_map[] is *way* bigger in the cache because it has a
much less compact format. The size ratio between the hash table and
mem_map[] is 1:8 in the stock kernel, 1:4 with the page buckets patch.
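For reference, the arithmetic behind those ratios, with assumed sizes
(roughly right for 32-bit kernels, one hash bucket per physical page;
none of these numbers are taken from the patch itself):

#include <stdio.h>

#define NR_PAGES                (256UL * 1024)  /* e.g. 1GB of RAM, 4K pages */
#define SIZEOF_STRUCT_PAGE      32UL            /* assumed */
#define STOCK_BUCKET_SIZE       4UL             /* one pointer per bucket, assumed */
#define PATCHED_BUCKET_SIZE     8UL             /* bucket with embedded lock, assumed */

int main(void)
{
        unsigned long mem_map = NR_PAGES * SIZEOF_STRUCT_PAGE;

        printf("mem_map[]:         %8lu KB\n", mem_map >> 10);
        printf("stock hash (1:%lu):  %8lu KB\n",
               SIZEOF_STRUCT_PAGE / STOCK_BUCKET_SIZE,
               (NR_PAGES * STOCK_BUCKET_SIZE) >> 10);
        printf("bucket hash (1:%lu): %8lu KB\n",
               SIZEOF_STRUCT_PAGE / PATCHED_BUCKET_SIZE,
               (NR_PAGES * PATCHED_BUCKET_SIZE) >> 10);
        return 0;
}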
Ingo