inode_lock hold times are a problem for other reasons. Leaving this
unfixed makes the preemptible kernel rather pointless... An ideal
fix would be to release inodes based on VM pressure against their backing
pages. But I don't think anyone's started looking at inode_lock yet.
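
To sketch what pressure-driven release might look like: unused inodes
sit on an LRU, and a callback the VM invokes under memory pressure frees
the cold end. What follows is a toy userspace model only -- every name
in it is made up, and it is not the actual icache code or any proposed
patch:

/* Toy model of pressure-driven inode reclaim -- illustration only.
 * "Inodes" sit on an LRU list; shrink_unused_inodes() frees the
 * coldest entries when the (simulated) VM signals pressure. */
#include <stdio.h>
#include <stdlib.h>

struct toy_inode {
	int ino;
	struct toy_inode *prev, *next;	/* LRU list linkage */
};

static struct toy_inode *lru_head, *lru_tail;	/* head = hottest */
static int nr_unused;

/* Put a newly-unused inode at the hot end of the LRU. */
static void lru_add(struct toy_inode *inode)
{
	inode->prev = NULL;
	inode->next = lru_head;
	if (lru_head)
		lru_head->prev = inode;
	else
		lru_tail = inode;
	lru_head = inode;
	nr_unused++;
}

/* The pressure callback: free up to nr_to_free of the coldest inodes. */
static int shrink_unused_inodes(int nr_to_free)
{
	int freed = 0;

	while (lru_tail && freed < nr_to_free) {
		struct toy_inode *victim = lru_tail;

		lru_tail = victim->prev;
		if (lru_tail)
			lru_tail->next = NULL;
		else
			lru_head = NULL;
		nr_unused--;
		free(victim);
		freed++;
	}
	return freed;
}

int main(void)
{
	for (int i = 0; i < 8; i++) {
		struct toy_inode *inode = malloc(sizeof(*inode));

		inode->ino = i;
		lru_add(inode);
	}
	printf("unused inodes: %d\n", nr_unused);
	printf("freed under pressure: %d\n", shrink_unused_inodes(3));
	printf("unused inodes now: %d\n", nr_unused);
	return 0;
}

The real work, of course, is in deciding how hard the VM should push,
and in doing the walk without reintroducing the lock contention we're
trying to get rid of.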
The big one is lru_list_lock, of course. I'll be releasing code in
the next couple of days which should take that off the map. Testing
would be appreciated.
I have a concern about the lockmeter results. Lockmeter appears
to be measuring lock acquisition frequency, hold times and contention. But
is it measuring the cost of the cacheline transfers? For example,
I expect that with delayed allocation and radix-tree pagecache, one
of the major remaining bottlenecks will be ownership of the superblock
semaphore's cacheline. Is this measurable? (Actually, we may
then be at the point where copy_from_user costs dominate).
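
A rough way to see the effect lockmeter isn't capturing: time atomic
increments on a line shared between two CPUs versus lines private to
each. This is just a userspace probe of cacheline ping-pong, not a
lockmeter feature or kernel code (build with -pthread):

/* Two threads do atomic increments; "shared" mode bounces one
 * cacheline between them, "private" mode keeps each counter on
 * its own line.  The wall-clock difference is the transfer cost. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 10000000

/* Pad to a cacheline so "private" counters never share a line. */
struct padded { volatile long val; char pad[64 - sizeof(long)]; };

static struct padded counters[2] __attribute__((aligned(64)));
static volatile long shared_counter __attribute__((aligned(64)));
static int use_shared;

static void *worker(void *arg)
{
	long id = (long)arg;

	for (long i = 0; i < ITERS; i++) {
		if (use_shared)
			__sync_fetch_and_add(&shared_counter, 1);
		else
			__sync_fetch_and_add(&counters[id].val, 1);
	}
	return NULL;
}

static double run(int shared)
{
	pthread_t t[2];
	struct timespec a, b;

	use_shared = shared;
	clock_gettime(CLOCK_MONOTONIC, &a);
	for (long i = 0; i < 2; i++)
		pthread_create(&t[i], NULL, worker, (void *)i);
	for (int i = 0; i < 2; i++)
		pthread_join(t[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &b);
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
	printf("private lines: %.3fs\n", run(0));
	printf("shared line:   %.3fs\n", run(1));
	return 0;
}

On a multiprocessor the shared-line run is typically several times
slower, and none of that shows up as hold time or contention -- it's
pure cacheline ownership traffic.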