welcome to the world of PC hardware, where real-world performance and
theoretical numbers rarely agree.
in theory, a 32-bit/33MHz PCI bus can reach 132 Mbyte/sec.
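(the arithmetic, assuming one 4-byte data phase per clock: 33 million
clocks/sec x 4 bytes = 132 Mbyte/sec.)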
in reality, the more cards you have on a bus, the more arbitration
overhead you incur, and the lower the overall efficiency.
in theory, with neither the initiator nor the target inserting
wait-states, and with continual bursting, you can achieve that maximum
throughput.
in reality, continual bursting doesn't happen very often: many hardware
devices either can't perform i/o without inserting wait-states under some
conditions, or can't sustain long bursts at all.
in short: you're working from theoretical numbers. reality is typically
far, far different!
something you may want to try:
if your motherboard supports it, change the "PCI Burst" settings and see
what effect this has.
you can probably extract another 20-25% performance by changing the PCI
Burst from 32 to 64.
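if the BIOS doesn't expose a burst setting, a related knob you can poke
at from linux is the PCI latency timer, which bounds how many clocks a
bus-master may hold the bus per burst. a rough sketch, assuming you have
pciutils installed -- the 00:0d.0 address is just a placeholder, take the
real one from lspci:

  # read the current latency timer (in PCI clocks) for the device
  setpci -s 00:0d.0 latency_timer
  # allow longer bursts: raise it to 0x40 (64 clocks; values are hex)
  setpci -s 00:0d.0 latency_timer=40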
>Second, why are the md devices so slow? I would have expected it to reach
>130+ MB/s on both md0 and md1. It even has spare CPU time to do it with.
you don't mention what your motherboard or chipset actually is, or where
the 32/33 and 64/66 PCI busses connect in.
you also don't mention your FSB and memory clock speeds, or how these are
connected to the PCI busses.
it is likely you have a motherboard where PCI-to-memory throughput also
has to contend for the FSB.
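if you're not sure how your busses hang together, lspci (from pciutils)
can show you. a quick sketch:

  # dump the PCI bus tree -- shows which bridge each controller sits behind
  lspci -tv
  # identify the host bridge / chipset
  lspci | grep -i bridge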
given you're using "time dd if=/dev/hdo1 of=/dev/null bs=1M count=1" as
your test, you're effectively issuing read() and write() system calls
from user space to the kernel.
this implies a memory-copy.
count the number of times you're moving data across the front-side-bus
(each memory-copy is two such trips: a read and a write), and you should
be able to see another reason for the bottleneck you're seeing.
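for your dd test, the path is roughly (details vary by driver): the
controller DMAs each block into the page-cache (one trip across the FSB),
then read() copies it from the page-cache into dd's buffer (a read from
memory plus a write back -- two more trips); the write() to /dev/null
discards the data without touching it. so each block crosses the FSB
about three times.

one way to measure what that copy costs is O_DIRECT, which DMAs straight
into the user buffer and skips the page-cache copy. a sketch, assuming a
GNU dd new enough to know iflag=direct, with a larger count so the run
isn't dominated by startup:

  # baseline: reads go through the page-cache
  time dd if=/dev/hdo1 of=/dev/null bs=1M count=1000
  # O_DIRECT: one less copy per block across the FSB
  time dd if=/dev/hdo1 of=/dev/null bs=1M count=1000 iflag=direct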
cheers,
lincoln.