Yes, now we're on the same page, so to speak. Personally, I don't have much
interest in working on improving the allocator per se. I'd love to see
somebody else take a run at that, and I will occupy myself with the gritty
details of how to move pages without making the system crater.
> The point which I am getting across, quite
> badly, is that by having order-0 pages in slab, you are guaranteed to be
> able to quickly find a cluster of pages to move which will free up a
> contiguous block of 2^MAX_ORDER pages, or at least 2^MAX_GFP_ORDER with
> the current slab implementation.
I don't see that it's guaranteed, but I do see that organizing pages into
slab-like chunks is a good thing to do - a close reading of my original
response to you shows I was already thinking along those lines.
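To make "slab-like chunks" concrete, here is a throwaway user-space sketch
(not kernel code; the MAX_ORDER value and the pfn are made up for the demo):
if movable order-0 pages only ever come from naturally aligned groups of
2^MAX_ORDER pages, then deciding which pages share a chunk is pure pfn
arithmetic, and emptying one chunk frees exactly one aligned block.

#include <stdio.h>

#define MAX_ORDER   10                     /* made-up value, just for the demo */
#define CHUNK_PAGES (1UL << MAX_ORDER)     /* pages per naturally aligned chunk */

/* Pages with the same chunk index lie in one aligned 2^MAX_ORDER block,
 * so moving the live pages out of a single chunk frees a whole block. */
static unsigned long chunk_index(unsigned long pfn)
{
	return pfn >> MAX_ORDER;
}

static unsigned long chunk_base(unsigned long pfn)
{
	return pfn & ~(CHUNK_PAGES - 1);
}

int main(void)
{
	unsigned long pfn = 123456;        /* arbitrary example pfn */

	printf("pfn %lu: chunk %lu, block [%lu, %lu)\n",
	       pfn, chunk_index(pfn), chunk_base(pfn),
	       chunk_base(pfn) + CHUNK_PAGES);
	return 0;
}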
I also don't see that the slab cache in its current incarnation is the right
tool for the job. It handles things that we just don't need to handle, such
as objects of arbitrary size and alignment. I'm sure you could make it work,
but why not just tweak alloc_pages to know about chunks instead?
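Roughly what I mean by alloc_pages knowing about chunks, again as a
user-space toy with made-up names (struct chunk, chunk_alloc_page,
chunk_free_page), not anything that exists in the allocator today: hand out
order-0 pages from one aligned chunk at a time and keep a per-chunk use
count, so a chunk whose count drops to zero is, by construction, a free
2^CHUNK_ORDER block - no object sizes, no alignment games.

#include <stdio.h>

#define CHUNK_ORDER 4                      /* made-up order for the toy model */
#define CHUNK_PAGES (1UL << CHUNK_ORDER)

/* Toy per-chunk state: which order-0 pages of one aligned chunk are in use. */
struct chunk {
	unsigned long base_pfn;            /* first pfn of the aligned chunk */
	unsigned long used;                /* number of pages handed out */
	unsigned char in_use[CHUNK_PAGES]; /* 1 if that page is allocated */
};

/* Hand out one order-0 page from the chunk, or -1 if it is full. */
static long chunk_alloc_page(struct chunk *c)
{
	for (unsigned long i = 0; i < CHUNK_PAGES; i++) {
		if (!c->in_use[i]) {
			c->in_use[i] = 1;
			c->used++;
			return (long)(c->base_pfn + i);
		}
	}
	return -1;
}

/* Return a page; when 'used' hits zero the whole aligned chunk is free. */
static void chunk_free_page(struct chunk *c, unsigned long pfn)
{
	unsigned long i = pfn - c->base_pfn;

	if (c->in_use[i]) {
		c->in_use[i] = 0;
		c->used--;
	}
}

int main(void)
{
	struct chunk c = { .base_pfn = 0x1000 };  /* arbitrary aligned base */
	long a = chunk_alloc_page(&c);
	long b = chunk_alloc_page(&c);

	chunk_free_page(&c, (unsigned long)a);
	chunk_free_page(&c, (unsigned long)b);
	printf("chunk used count: %lu (0 means the 2^%d block is whole again)\n",
	       c.used, CHUNK_ORDER);
	return 0;
}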
Regards,
Daniel