I would have said just the opposite: if you have a large number of handles
you're waiting on and have to go back through and set the bits every time
you time out, you incur a larger overhead. From the
perspective of my application, it would have been more efficient not to zero
them (I was waiting on a number of serial channels, and the timeout was used
to periodically pump more data to the serial channels. When I received data,
I buffered it, and another thread took care of processing it).
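For reference, here's a minimal sketch of the pattern I'm describing, assuming
select() over a set of serial fds on Linux. The names (pump_serial, serial_fds,
nfds) and the 100 ms pump interval are purely illustrative:

#include <sys/time.h>
#include <sys/select.h>
#include <stdio.h>

void pump_serial(const int *serial_fds, int nfds)
{
	for (;;) {
		fd_set rfds;
		struct timeval tv;
		int maxfd = -1;

		/* select() modifies the fd_set, and on Linux also the
		 * timeout, so both have to be rebuilt before every call. */
		FD_ZERO(&rfds);
		for (int i = 0; i < nfds; i++) {
			FD_SET(serial_fds[i], &rfds);
			if (serial_fds[i] > maxfd)
				maxfd = serial_fds[i];
		}
		tv.tv_sec = 0;
		tv.tv_usec = 100 * 1000;	/* illustrative 100 ms pump interval */

		int n = select(maxfd + 1, &rfds, NULL, NULL, &tv);
		if (n == 0) {
			/* timeout: pump more data out to the serial channels */
		} else if (n > 0) {
			/* readable fds: read and buffer; another thread processes */
		} else {
			perror("select");
			break;
		}
	}
}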
It all really depends on the coding style of your program and what you need
to do on a timeout. Certain types of applications would benefit from
not zeroing, others from zeroing.
What is *most* important is that the behavior is clearly understood and
well documented. A Google search made it pretty clear that this is a source
of confusion.
--John