There are many test scenarios. The one I use is:
- 64 megs of memory.
- Process A loops across N 10-megabyte files, reading 4k from each one
  per pass, and terminates when all N files are fully read.
- Process B loops, repeatedly reading a one gig file off another disk.
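For concreteness, process A's round-robin read pattern can be sketched
like this (a hypothetical, scaled-down harness, not the actual test
script; file count and sizes are shrunk so it runs anywhere):

```python
import os
import tempfile
import time

def make_files(dirpath, n, size):
    # Create n files of `size` bytes each (stand-ins for the 10 MB files).
    paths = []
    for i in range(n):
        p = os.path.join(dirpath, "file%d" % i)
        with open(p, "wb") as f:
            f.write(b"\0" * size)
        paths.append(p)
    return paths

def process_a(paths, chunk=4096):
    # Loop across the files, reading `chunk` bytes from each per pass,
    # until every file has been fully read.
    handles = [open(p, "rb") for p in paths]
    total = 0
    while handles:
        still_open = []
        for f in handles:
            data = f.read(chunk)
            if data:
                total += len(data)
                still_open.append(f)
            else:
                f.close()
        handles = still_open
    return total

with tempfile.TemporaryDirectory() as d:
    paths = make_files(d, n=8, size=64 * 1024)
    t0 = time.time()
    total = process_a(paths)
    print(total, time.time() - t0)
```

In the real scenario the wallclock time of this loop is what gets
measured while process B saturates another disk with the 1 GB
sequential reads.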
The total wallclock time for process A exhibits *massive* step jumps
as you vary N. In stock 2.5.6 the runtime jumps from 40 seconds to
ten minutes when N is increased from 40 to 60.
With my changes, the runtime grows more slowly with N, and the jump
occurs at a higher N. But it's still very sudden and very bad.
Yes, it's a known-and-nasty corner case. Worth fixing if the
fix is clean. But IMO the problem is not common enough to
justify significantly compromising the common case.
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/