Re: [RFC] Defragmentation proposal: preventative maintenance and cleanup [LONG]
Daniel Phillips (phillips@bonn-fries.net)
Fri, 7 Sep 2001 23:56:33 +0200
On September 7, 2001 10:58 am, Alex Bligh - linux-kernel wrote:
> Some comments in line - if you are modelling this, vital you
> understand the first!
>
> >> The buddy allocator will attempt (by looking at lowest order lists first)
> >> to allocate pages from fragmented areas first. Assuming pages are freed
> >> at random, this would act as a defragmentation process. However, if a
> >> system is taken to high utilization and back again to idle, the
> >> dispersion of persistent pages (for instance InactiveDirty pages)
> >> becomes great, and the buddy allocator performs poorly at coalescing
> >> blocks.
> >
> > It becomes effectively useless. The probability of all 8 pages of a given
> > 8 page unit being free when only 1% of memory is free is (1/100)**8 =
> > 1/(10**16).
>
> I thought that, then I tested & measured, and it simply isn't true.
> Your mathematical model is wrong.
Yes, a simple thought experiment shows this. Suppose we start with an initial
state in which every second order-0 page is allocated. Now the next order-0
free must coalesce into an order-1 unit, but the next allocation will be taken
from a half-allocated unit, since the allocator searches the lowest order
lists first. If we continue randomly in this way, allocating one page and
freeing one, we will eventually arrive at a state where half the pages are in
free order-1 units and the other half are fully allocated.
So, the fragmentation is far from uniformly random. This is going to require
deeper analysis. IMO, it's worth putting in the effort to get a handle on
this.
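As a rough starting point, here is a small userspace sketch of the thought
experiment above. It is emphatically not the kernel allocator: the names
alloc_one/free_one, the page count, the first-fit scan and the checkerboard
starting state are all just illustrative assumptions. It only mimics the
"lowest order list first" policy for two orders, then lets the reader count
how the pairs end up after a long run of balanced allocations and frees.

/*
 * Toy model: NPAGES order-0 pages, the buddy of page i is page (i ^ 1).
 * alloc_one() mimics serving the lowest-order free list first: prefer a
 * free page whose buddy is allocated, and only break into a fully free
 * pair when no such page exists.  free_one() frees a random allocated
 * page.  We start from the "every second page allocated" checkerboard.
 */
#include <stdio.h>
#include <stdlib.h>

#define NPAGES 4096             /* must be even */
#define STEPS  1000000

static int allocated[NPAGES];

static void alloc_one(void)
{
        int i, pick = -1;

        /* first pass: a free page whose buddy is allocated (order-0 list) */
        for (i = 0; i < NPAGES; i++)
                if (!allocated[i] && allocated[i ^ 1]) {
                        pick = i;
                        break;
                }
        /* second pass: split a fully free pair (order-1 list) */
        if (pick < 0)
                for (i = 0; i < NPAGES; i++)
                        if (!allocated[i]) {
                                pick = i;
                                break;
                        }
        if (pick >= 0)
                allocated[pick] = 1;
}

static void free_one(void)
{
        int i;

        do {
                i = rand() % NPAGES;
        } while (!allocated[i]);
        allocated[i] = 0;
}

int main(void)
{
        int i, full = 0, half = 0, free_pairs = 0;

        srand(42);                      /* fixed seed, reproducible run */
        for (i = 0; i < NPAGES; i++)    /* checkerboard starting state */
                allocated[i] = i & 1;

        for (i = 0; i < STEPS; i++) {
                alloc_one();
                free_one();
        }

        for (i = 0; i < NPAGES; i += 2) {
                if (allocated[i] && allocated[i + 1])
                        full++;
                else if (!allocated[i] && !allocated[i + 1])
                        free_pairs++;
                else
                        half++;
        }
        printf("pairs: %d fully allocated, %d fully free, %d half allocated\n",
               full, free_pairs, half);
        return 0;
}

Nothing here is meant to be quantitative; the point is only that the
distribution it drifts to is visibly not the uniform-random one assumed by
the (1/100)**8 estimate.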
--
Daniel