The file _could_ be a temporary file which gets removed
before we get around to writing it to disk. Sure, the
chances of this happening with any single file are close to
zero, but 100MB spread over 200 different temp files on a
shell server isn't unreasonable to expect.
> > This is due to this smarter handling of the flushing of
> > dirty pages and due to a more subtle bug where the system
> > ended up doing synchronous IO on too many pages, whereas
> > now it only does synchronous IO on _1_ page per scan ;)
>
> And this is definitely a noticeable fix, thanks for your continued
> work. I know it's hard to get everything balanced out right, and I
> only wrote this email to describe some behavior I was seeing and make
> sure it was expected in the current VM. You've let me know that it
> is, and it's really minor compared to problems some of the earlier
> kernels had.
I'll be sure to keep this problem in mind. I really want
to fix it; I just haven't figured out how yet ;)
Maybe we should just check whether anything in the first few MB
of inactive pages is freeable, limiting that first scan to
something like 1 or maybe even 5 MB maximum (freepages.min?
freepages.high?) and starting a flush as soon as we find more
unfreeable pages than that?
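To make that a bit more concrete, here is a rough standalone sketch
(not kernel code; the names, list layout and thresholds are all made
up for illustration) of the kind of limited scan I mean: walk only the
first few MB worth of inactive pages and kick off a flush as soon as
the number of unfreeable (dirty) pages crosses a small limit.

/* Standalone illustration only -- all names and constants invented. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE      4096
#define SCAN_LIMIT_MB  5                              /* "1 or maybe even 5 MB" */
#define SCAN_LIMIT     ((SCAN_LIMIT_MB << 20) / PAGE_SIZE)
#define DIRTY_LIMIT    (SCAN_LIMIT / 4)               /* arbitrary threshold */

struct page {
	bool dirty;           /* unfreeable until written back */
	struct page *next;    /* inactive list, oldest page first */
};

/*
 * Return true if a flush should be started: too many of the first
 * SCAN_LIMIT inactive pages turned out to be unfreeable.
 */
static bool should_flush(struct page *inactive_head)
{
	size_t scanned = 0, unfreeable = 0;
	struct page *p;

	for (p = inactive_head; p && scanned < SCAN_LIMIT; p = p->next) {
		scanned++;
		if (p->dirty && ++unfreeable > DIRTY_LIMIT)
			return true;   /* flush as soon as the limit is hit */
	}
	return false;
}

int main(void)
{
	/* Tiny fake inactive list: two clean pages, one dirty page. */
	struct page p3 = { .dirty = true,  .next = NULL };
	struct page p2 = { .dirty = false, .next = &p3 };
	struct page p1 = { .dirty = false, .next = &p2 };

	printf("flush needed: %s\n", should_flush(&p1) ? "yes" : "no");
	return 0;
}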
Maybe another solution would be some bdflush tuning?
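For the bdflush angle, the knobs live in /proc/sys/vm/bdflush as a
single line of whitespace-separated integers; the exact field layout
differs between kernel versions, so this snippet only shows the
mechanism of reading (and, in principle, writing back) that line,
with no particular values recommended.

#include <stdio.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/sys/vm/bdflush", "r");

	if (!f) {
		perror("open /proc/sys/vm/bdflush");
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("current bdflush parameters: %s", line);
	fclose(f);

	/*
	 * Writing the line back (as root) updates the parameters in the
	 * same whitespace-separated format; the right values depend on
	 * the kernel version and workload, so none are suggested here.
	 */
	return 0;
}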
I'll send a patch as soon as I figure this one out...
regards,
Rik
--
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/		http://distro.conectiva.com/
Send all your spam to aardvark@nl.linux.org (spam digging piggy)