It's Linux's way of saving you money :)
> > - Disable CONFIG_HIGHMEM
>
> That seems to have no effect on performance. Btw., the 64GB support in
> 2.4.10 seemed to be buggy ("0-order allocation failed", then the DB
> crashed), so we were using the 4GB setting.
There have been many reports of this happening, but they
are tapering off now.
> > - Try linux-2.4.13
>
> This helped - now performance is up to par with 2.2.19 (~ 85MB/s) - thanks!
It's surprising that 2.4.10->2.4.13 actually doubled throughput.
> > - Profile the kernel. [...] With something like:
> > ~/kern-prof.sh cp some_huge_file /dev/null
>
> I tried this, but with
> dd if=/dev/sda of=/dev/null bs=1024k count=3000
>
> (the fs is too slow - so "cp" peaks out at 17MB/s!)
Really? Are you saying that on a controller which can
do 85 megs/second, you can't read files through the filesystem
at better than 17MB/s? Which filesystem?
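
A quick way to separate the raw device path from the filesystem
path is to time the same amount of data both ways. A sketch,
assuming a file big enough that the page cache can't satisfy it
(the file path is hypothetical):

	# raw device read: exercises only the controller and driver
	time dd if=/dev/sda of=/dev/null bs=1024k count=3000

	# the same volume of data, read through the filesystem
	time dd if=/fs/3gb_file of=/dev/null bs=1024k

If the raw read does ~85MB/s and the file read does ~17MB/s, the
problem is in the filesystem/readahead layer, not the hardware.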
> The result (last 4 lines):
> c01388fc try_to_free_buffers 55 0.1511
> c0128b10 file_read_actor 1179 14.0357
> c01053b0 default_idle 6784 130.4615
> 00000000 total 8695 0.0065
OK, that's normal and proper. Once you discount default_idle,
almost all the kernel time is spent in file_read_actor, copying
page-cache data out to userspace.
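
For anyone wanting to reproduce this: a minimal version of such a
kern-prof.sh wrapper, assuming a kernel booted with profile=2 and
readprofile(1) from util-linux installed, might look like the
following. This is a guess at what the script does, not the script
itself:

	#!/bin/sh
	# kern-prof.sh: reset the profile, run the workload,
	# then show the hottest kernel symbols.
	readprofile -r			# zero the kernel profiling buffers
	"$@"				# run the command under test
	# -v prints address/name/ticks/load; sort by tick count
	readprofile -v -m /boot/System.map | sort -n -k3 | tail -4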
> Does this suggest that the kernel isn't the bottleneck?
Well... We seem to have three issues here:
1: Why isn't the controller achieving the manufacturer's
claimed throughput?
Don't know. Maybe it's the software copy. Maybe it's the
device driver. Maybe they lied :). It'd be interesting
to test it on the same machine with the vendor's drivers
and win2k.
2: 0-order allocation failures.
3: Poor `cp' throughput. This one is strange. Perhaps
   `cp' is using a small transfer size and the kernel's
   readahead isn't working properly. Could you experiment
with this some more? For example, what happens with
dd if=large_file of=/dev/null bs=4096k
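
   To sweep the transfer size directly, a loop like this might be
   telling (a sketch; large_file stands in for something bigger
   than RAM so each run starts cold):

	for bs in 4k 64k 512k 4096k; do
		echo "bs=$bs"
		time dd if=large_file of=/dev/null bs=$bs
	done

   If readahead is doing its job, throughput should be roughly flat
   across block sizes; if it climbs with bs, the small reads aren't
   being read ahead.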