Ugh. 100ms max hold time really looks fairly disgusting.
Since that code only does list manipulation, that implies that the
inode_unused list is d*mn long.
>looks a little distressing - the hold times on inode_lock by prune_icache
>look bad in terms of latency (contention is still low, but people are still
>waiting on it for a very long time). Is this a transient thing, or do people
>think this is going to be a problem?
That already _is_ a problem: 0.1 seconds of hiccups is nasty (and
obviously while it wasn't called that many times, several other users
actually hit it for almost that long).
It _sounds_ like we may put inodes on the unused list when they really
aren't unused. I don't think you can get the mean time that high
otherwise (63ms _mean_ lock hold time? Do I really read that thing
correctly?).
Actually, looking at "prune_icache()", I think we have two problems:
 - inodes may be too often on the unused list when they cannot be unused
 - these kinds of inodes tend to cluster at the _head_ of the list, and
we never correct for that. So once we get into that situation, we
stay there.
Al, comments? I note:
- in __iget(), if the inode count goes from 0 to 1, we'll leave it on
the unused list if it is locked (if it is dirty it should already
have been on the dirty list, so that case should be ok)
But I don't really see where there would be any bigger bug..
- is this the time to get rid of the "inode_unused" thing entirely? If
we drop the thing from the dcache, we might as well drop the inode
too.
Ehh..
Linus