At least in my patches, I also made this change:
-#define ELV_LINUS_SEEK_COST 16
+#define ELV_LINUS_SEEK_COST 1
#define ELEVATOR_NOOP \
((elevator_t) { \
@@ -93,8 +93,8 @@ static inline int elevator_request_laten
#define ELEVATOR_LINUS \
((elevator_t) { \
- 2048, /* read passovers */ \
- 8192, /* write passovers */ \
+ 128, /* read passovers */ \
+ 512, /* write passovers */ \
\
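
To make those numbers concrete, here is a toy model of how I read them (an
illustration only, not the actual elevator_linus code): each queued request
starts with a passover budget taken from the read/write value above, every
later request that gets sorted in ahead of it is charged against that budget
at the seek cost, and once the budget is exhausted nothing may pass it any
more. The lower values bound much more tightly how many times a waiting
request can be bypassed.

/* Toy model of the passover accounting (my reading of the numbers above,
 * not the real elevator_linus code): a queued request can only be passed
 * while its budget lasts, and every passover is charged the seek cost. */
#include <stdio.h>

#define TOY_SEEK_COST	1	/* the patched ELV_LINUS_SEEK_COST */

struct toy_request {
	unsigned long sector;
	int budget;		/* 128 for reads, 512 for writes above */
};

/* May a new request still be sorted in ahead of 'rq'? */
static int toy_can_pass(const struct toy_request *rq)
{
	return rq->budget >= TOY_SEEK_COST;
}

/* Charge 'rq' for one request that just jumped ahead of it. */
static void toy_charge_passover(struct toy_request *rq)
{
	rq->budget -= TOY_SEEK_COST;
}

int main(void)
{
	struct toy_request read_rq = { .sector = 0, .budget = 128 };
	int passes = 0;

	while (toy_can_pass(&read_rq)) {
		toy_charge_passover(&read_rq);
		passes++;
	}
	/* with a budget of 128 and seek cost 1: at most 128 passovers */
	printf("a queued read can be passed at most %d times\n", passes);
	return 0;
}
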
You didn't change the I/O scheduler at all compared to mainline, so there
can be quite a large difference in the average per-process bandwidth
between my patches on one side and mainline and your patches on the other
(unless you run elvtune, or unless you backed out the above change).
Anyway, the "130671 < 100, 0 < 200, 0 < 300, 0 < 400, 0 < 500" from your
patch looks perfectly fair, and that is unrelated to the I/O scheduler and
to the size of the runqueue. I believe the most interesting difference is
the blocking of tasks until the waitqueue is empty (i.e. clearing the
waitqueue-full bit only when nobody is waiting). That is of course the
right thing to do; the behaviour in my patch is a bug I merged by mistake
from Nick's original patch, and I'm of course going to fix it immediately.
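
For reference, here is a minimal toy sketch of that rule (an illustration of
the idea only, not the actual block-layer code): the queue-full flag is set
when the free requests run out and is cleared only once the waitqueue has
drained, so tasks that went to sleep earlier get requests before newly
arriving ones.

/* Toy sketch, not the real code: requests circulate among tasks, and the
 * "full" flag is cleared only when nobody is waiting any more. */
#include <stdio.h>

struct toy_queue {
	int free_requests;
	int waiters;	/* tasks sleeping until a request is freed */
	int full;	/* set when we run dry; cleared only when waiters == 0 */
};

/* A newly arriving task must queue up behind existing waiters while the
 * full flag is set, even if a request happens to be free right now. */
static int toy_get_request(struct toy_queue *q)
{
	if (q->full || q->free_requests == 0) {
		q->full = 1;
		q->waiters++;	/* stands in for sleeping on the waitqueue */
		return 0;	/* no request yet */
	}
	q->free_requests--;
	return 1;
}

/* Freeing a request: hand it straight to a sleeper if there is one, and
 * clear the full flag only once the waitqueue has drained. */
static void toy_put_request(struct toy_queue *q)
{
	if (q->waiters) {
		q->waiters--;	/* stands in for wake_up(): a sleeper takes it */
	} else {
		q->free_requests++;
		q->full = 0;	/* waitqueue empty: newcomers may allocate */
	}
}

int main(void)
{
	struct toy_queue q = { .free_requests = 1 };

	toy_get_request(&q);	/* task A takes the last free request */
	toy_get_request(&q);	/* queue now full: task B sleeps */
	toy_get_request(&q);	/* task C sleeps behind B */
	toy_put_request(&q);	/* A frees it: handed to B, full stays set */
	toy_put_request(&q);	/* B frees it: handed to C, full stays set */
	toy_put_request(&q);	/* C frees it: waitqueue empty, full cleared */
	printf("full=%d waiters=%d free=%d\n",
	       q.full, q.waiters, q.free_requests);
	return 0;
}
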
Andrea