I've got an idea about how to handle this situation generally (without
sending 'tips' to the kernel via madvise() or anything similar).
Instead of sorting cached pages (I mean blocks of files) by last touch
time and dropping the oldest page(s) if we're short on memory, I would
propose this nicer algorithm (I think this is relevant only to the read cache):
Suppose that files f1,f2,...,fN are cached, their sizes are s1,s2,...,sN, and
they were last touched t1,t2,...,tN seconds ago (t1<t2<...<tN).
Now we shouldn't automatically choose pages of fN to drop; instead a
probability (chance) of being dropped could be assigned to each file, for
example sI*tI/SUM, where I is one of 1,2,...,N and SUM is the sum of sJ*tJ
over all cached files.
With this, mostly newer files would stay in cache, but older files would
still have a chance.
This could also be tuned; for example, to take 't' into account more,
sI*tI*tI could be used... and so on, we have infinite possibilities.
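Just to make the idea concrete, here is a minimal user-space sketch of the
weighted pick (the array names, the example numbers and the use of rand()
are my own assumptions, not anything from the kernel):

	/* Weighted-random eviction sketch: the chance of picking file I
	 * is proportional to size[I] * age[I] (i.e. sI * tI). */
	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>

	#define N 4

	int main(void)
	{
		/* sizes in pages and ages in seconds for N cached files */
		unsigned long size[N] = { 100, 2000, 50, 800 };
		unsigned long age[N]  = {   5,   60,  2, 600 };
		unsigned long weight[N], sum = 0;
		double r;
		int i;

		for (i = 0; i < N; i++) {
			weight[i] = size[i] * age[i];	/* sI * tI */
			sum += weight[i];
		}

		srand(time(NULL));

		/* pick a victim: a random point in [0, sum) lands in file
		 * I's slot with probability weight[I] / sum */
		r = ((double)rand() / ((double)RAND_MAX + 1.0)) * sum;
		for (i = 0; i < N; i++) {
			if (r < (double)weight[i]) {
				printf("drop pages of file %d (weight %lu of %lu)\n",
				       i, weight[i], sum);
				break;
			}
			r -= (double)weight[i];
		}
		return 0;
	}

Using sI*tI*tI instead of sI*tI in the weight[] line would be the "take 't'
into account more" variant mentioned above.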
have a nice day,
Balazs Pozsar.
PS: If 'my' idea is the one which is already used in the kernel, then tell me
:) and give me some pointers on where to read more before I say stupid things.
> I personally don't feel that the cache should be allowed to grow over
> 50% of the system's memory at all, we've got so much in the cache at
> that point, that we're probably not hitting it all that much.
>
> This is why the discussion on the other cache scanning algorithm
> (2Q+?) was so interesting, since it looked to handle both the LRU
> vs. FIFO tradeoffs very nicely.