The variance is presumably due to the naive read/write
implementation: it sucks in 16 megs and writes it out again.
With a 100 megabyte file you'll get aliasing effects between
the sampling interval and the client's activity.
You will get more repeatable results using smaller files. I'm
just sending /usr/local/bin/* ten times, with
./zcc -s otherhost -c /usr/local/bin/* -n10 -N2 -S
Maybe that 16 meg buffer should be smaller... Yes, making it
smaller smooths things out.
Heh, look at this. It's a simple read-some, send-some loop
(sketched below, after the table). Plot CPU utilisation against
the transfer size:
Size (bytes)   %CPU
         256   31
         512   25
        1024   22
        2048   18
        4096   17
        8192   16
       16384   18
       32768   19
       65536   21
      131072   22
      262144   22.5
8192 bytes is best.
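Roughly, the loop is just the classic pattern below. This is only a
simplified sketch of what I mean by read-some, send-some, not the
exact zcc code; send_file() and bufsize are illustrative names:

/*
 * Sketch of the read-some, send-some pattern, with the transfer
 * size passed in as `bufsize'.  Illustrative only, not zcc's
 * actual code.
 */
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>

static ssize_t send_file(int in_fd, int sock_fd, size_t bufsize)
{
	char *buf = malloc(bufsize);
	ssize_t n, total = 0;

	if (!buf)
		return -1;

	/* Read up to `bufsize' bytes, then push them all down the socket. */
	while ((n = read(in_fd, buf, bufsize)) > 0) {
		ssize_t done = 0;

		while (done < n) {
			ssize_t w = write(sock_fd, buf + done, n - done);

			if (w < 0) {
				free(buf);
				return -1;
			}
			done += w;
		}
		total += done;
	}
	free(buf);
	return n < 0 ? -1 : total;
}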
I've added the `-b' option to zcc to set the transfer size. Same
URL.
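So to pin the transfer size at the sweet spot, the run above becomes
something along the lines of

./zcc -s otherhost -c /usr/local/bin/* -n10 -N2 -S -b 8192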