On Thu, Jun 06, 2002 at 02:19:02PM -0700, David Mosberger wrote:
> We don't do anything special. I'm not sure what the fragmentation
> statistics look like on machines with 1+GB memory; it's something I
> have been wondering about and hoping to look into at some point (if
> someone has done that already, I'd love to see the results). In
> practice, every ia64 linux distro as of today ships with 16KB page
> size, so you only get order-1 allocations for stacks.
I've been collecting information on this as well, since I've been
maintaining a patch that supports deferred coalescing in the
page-level allocator. Martin Bligh contributed the code to collect
some fragmentation statistics for that patch, which was originally
written against mainline 2.4; what it gathers are essentially
population counts of the free blocks at each order, along the lines
of the sketch below.
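(Illustrative only, not Martin's actual code: a walk over one zone's
buddy free lists, assuming the stock 2.4 zone_t/free_area_t layout
and locking.)

#include <linux/mm.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/*
 * Count the free blocks at each order in one zone by walking the
 * buddy free lists. 2.4 keeps no per-order free counters, so the
 * lists have to be walked under the zone lock.
 */
static void count_free_blocks(zone_t *zone,
			      unsigned long counts[MAX_ORDER])
{
	struct list_head *entry;
	unsigned long flags;
	int order;

	spin_lock_irqsave(&zone->lock, flags);
	for (order = 0; order < MAX_ORDER; order++) {
		counts[order] = 0;
		list_for_each(entry, &zone->free_area[order].free_list)
			counts[order]++;
	}
	spin_unlock_irqrestore(&zone->lock, flags);
}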
Collecting those counts is slightly cheaper under lazy buddy, since
the deferred-coalescing algorithm has to do some of that accounting
anyway, though statistics other than population counts for the
various block sizes might also be useful to track here.
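(For concreteness, the general shape of the deferred scheme; the
names, the threshold, and coalesce_deferred() are all hypothetical,
not lifted from the patch.)

#define LAZY_BATCH 32			/* hypothetical threshold */

struct lazy_area {
	struct list_head free_list;	/* coalesced, as in stock buddy */
	struct list_head deferred;	/* freed, not yet coalesced */
	unsigned long nr_deferred;	/* the accounting mentioned above */
};

/*
 * Free path: park the block on the deferred list instead of merging
 * immediately; buddies are only merged in a batch once enough frees
 * have piled up.
 */
static void lazy_free_block(struct lazy_area *area,
			    struct list_head *block)
{
	list_add(block, &area->deferred);
	if (++area->nr_deferred >= LAZY_BATCH)
		coalesce_deferred(area);	/* hypothetical batch merge */
}

The allocation path checks the deferred list first, so a block that
is freed and soon reallocated never pays for a merge-and-resplit
cycle; in a scheme like this the per-order counts come almost for
free, which is the "accounting anyway" above.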
I'm holding off on the next release until I've read up on seq_file()
and converted the /proc/ reporting over to it (the interface boils
down to something like the sketch below), though I do have more
current diffs than I've announced. I wouldn't mind at all hearing
from those who have more stringent fragmentation requirements.
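(A minimal seq_file user, reporting one line per order; frag_count[]
is a hypothetical array filled by something like count_free_blocks()
above.)

#include <linux/fs.h>
#include <linux/mmzone.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static unsigned long frag_count[MAX_ORDER];

/* The iterator is just the order number, carried in *pos. */
static void *frag_start(struct seq_file *m, loff_t *pos)
{
	return *pos < MAX_ORDER ? pos : NULL;
}

static void *frag_next(struct seq_file *m, void *v, loff_t *pos)
{
	(*pos)++;
	return *pos < MAX_ORDER ? pos : NULL;
}

static void frag_stop(struct seq_file *m, void *v)
{
}

/* One line of output per order. */
static int frag_show(struct seq_file *m, void *v)
{
	int order = *(loff_t *)v;

	seq_printf(m, "order %2d: %lu free blocks\n",
		   order, frag_count[order]);
	return 0;
}

static struct seq_operations frag_seq_ops = {
	.start	= frag_start,
	.next	= frag_next,
	.stop	= frag_stop,
	.show	= frag_show,
};

static int frag_open(struct inode *inode, struct file *file)
{
	return seq_open(file, &frag_seq_ops);
}

static struct file_operations frag_fops = {
	.open		= frag_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= seq_release,
};

Hooking that up is one create_proc_entry() whose ->proc_fops points
at frag_fops; seq_file then takes care of the buffering and offset
handling that hand-rolled /proc read routines usually get wrong.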
Cheers,
Bill