So the problem here is starvation, if I understand correctly.
This one isn't related to the VM, so it's normal that you don't see
much difference among the different VMs; it's more likely related to
netatalk, TCP, or the I/O elevator.
Anyway, you can pretty well rule out the elevator by running
elvtune -r 1 -w 1 /dev/hd[abcd] and seeing whether the starvation goes
away.
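Something like this, run as root (just a sketch: it assumes the disks
really are hda..hdd and that you have the elvtune binary from
util-linux; invoking elvtune with only the device should dump the
current settings so you can restore them afterwards):

    #!/bin/sh
    # Drop the elevator read/write latencies to 1 so requests are barely
    # reordered; if the per-client starvation goes away, the elevator is
    # the culprit.
    for dev in /dev/hda /dev/hdb /dev/hdc /dev/hdd; do
        elvtune "$dev"             # show the current settings first
        elvtune -r 1 -w 1 "$dev"   # minimal read/write latency
    done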
> poor throughput that is seen in this test I associate with poor elevator
> performance. If the elevator doesn't group requests enough you get disk
> behaviour like "small read, seek, small read, seek" instead of grouping
> things into large reads or multiple reads between seeks.
If you hear the seeks, that's very good for fairness; making the
elevator even more aggressive could only increase the starvation of
some clients.
> The problem where one client gets all the bandwidth has to be some kind
> of livelock.
netatalk may not be processing the I/O requests in a fair manner, and
if the unfairness is introduced by netatalk then no matter what TCP
and the I/O subsystem do, we can do nothing to fix it from the kernel
side. OTOH you said that in the "cached" test netatalk was providing
fair fileserving, but I'd still prefer it if you could reproduce this
without netatalk; you can just use an rsh pipe to do the reads and
writes of the files over the network, for example (see the sketch
below), which should stress TCP and the I/O subsystem the same way.
If you can't reproduce it with rsh, please file a report with the
netatalk people.
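Something along these lines should be enough (only a sketch: the
hostname and paths are placeholders, and dd with bs=8k is there to
mimic the 8k reads and writes your fileserver does). Run one copy on
each client at the same time and watch whether one client again hogs
all the bandwidth:

    #!/bin/sh
    # Placeholder names -- adjust to your setup.
    SERVER=fileserver            # the machine currently running netatalk
    SRC=/data/bigfile            # a large file living on the server's disks
    DST=/data/upload.$$          # per-client destination file on the server

    # Read test: stream the file from the server over an rsh pipe, 8k at a time.
    rsh "$SERVER" "dd if=$SRC bs=8k" | dd of=/dev/null bs=8k

    # Write test: stream 512MB to the server over an rsh pipe, 8k at a time.
    dd if=/dev/zero bs=8k count=65536 | rsh "$SERVER" "dd of=$DST bs=8k"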
I doubt it's the TCP congestion control; of course that's unfair too
across multiple streams, but I wouldn't expect it to produce fairness
results that bad.
> that they are just doing 8k reads and writes. The files are not opened
> O_SYNC and the file server processes aren't doing any fsync calls. This is
ok.
> supported by the fact that the performance is fine with 256 Megs of
> memory.
yes.
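(If you want to double-check that, attaching strace to one of the
fileserver processes while a client is transferring should show it
directly; just a sketch, the pid is whatever ps reports for that
process:)

    # Should show plain read()/write() calls of 8192 bytes, open() without
    # O_SYNC in the flags, and no fsync() at all.
    strace -f -p <fileserver-pid> -e trace=open,read,write,fsync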
Andrea