I have another question related to this. I have a system with 7
filesystems on LVM. One filesystem contains only a single file, a 10 GB
disk image which is exported via FTP. Sometimes 15 clients are
downloading the file at the same time. What would, in theory, be the
best way to tune these downloads? It would make sense for Linux to
read the file in large blocks to minimize seeking on the disk. The file
is usually downloaded in one piece, so when an ftpd process accesses
the file, I can be fairly sure it will read the whole file. Ideally,
whenever data is requested from the file, Linux would read a 256 KB
block from it before serving the next request from another process.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/