This is an awfully PC-centric way of putting things. You assume that the
only ones who use Linux are those with a 1 GHz CPU and those 66 MHz PCI
boards and whatever. You simply cannot make that assumption anymore; the
diversity of Linux hardware these days is so broad that the sweet spot between
CPU cycles, memory bandwidth, etc. that determines how the code should be
optimized varies wildly.
A simple kernel profile of one of our embedded Linux systems, for example,
shows csum_partial_copy limiting the performance. Now for us zero-copy
cannot be implemented anyway, because we don't have a checksumming Ethernet
controller, but if we had one, we could perhaps improve performance by 50%
by skipping the copy. And there definitely are no 1 GHz embedded CPUs in the
same price range to choose instead, or Rambus memory, etc. Raw power
simply is not an option sometimes.
It's still true, of course, that it's not obvious in all cases that the
cycles spent on copying could be put to better use.
However, the beauty of open source is that there is no need to debate
whether something should be done or not. If someone feels the need, it
will get coded, and if it's good, people will use it. In this case, anyone
who gets a 200% boost in performance probably won't listen to the
argument that "it's academic" afterwards :) And others might go
twiddle their hardware and skip the zero-copy mechanism altogether.
-BW