> We don't have sensors that are accurate to 1/10 of a K and certainly not
> to 1/100 of a K. Knowing the CPU temperature "precisely" to .01 K when
> the accuracy of the best sensor we are likely to see is no better than
> +- 1 K is just about as relevant as negative absolute temperatures.
...
> Even if we had, or could anticipate, sensors with +- .01 K accuracy,
> the relevance of knowing the CPU temperature to that precision is
> lost on me. I see no sense in stuffing a field with meaningless
> bits just because the field will hold them. In fact, this "false precision"
> quickly leads to the false impression of accuracy. Based on several
> messages I have seen on this thread and in private E-Mail, there are a
> number of people who don't seem to grasp the fundamental difference
> between precision and accuracy and truly don't understand that adding
> meaningless precision like this adds nothing to the accuracy.
>
> I can see maybe making it precise to .1 K. But stuffing the bits
> in there to be precise to .01 K just because we have the bits and not
> because we have any realistic information to fill the bits in with, is
> just silly to me. Just as silly as allowing for negative numbers in an
> absolute temperature field. We have the bits to support it, but why?
The bits are free; the API is hard to change.
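To be concrete, here is a throwaway sketch (my own illustration, not any
existing interface) of why the extra digits cost nothing: a plain 32-bit
field counting centikelvin covers more range than any physical sensor
will ever need.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical encoding, for illustration only: temperature as an
 * unsigned 32-bit count of centikelvin (0.01 K). The field spans
 * 0 K to ~42949672.95 K, so carrying two decimal places costs no
 * storage that was not already there. */
typedef uint32_t centikelvin_t;

int main(void)
{
	centikelvin_t cpu_temp = 32815; /* 328.15 K == 55.00 degrees C */

	printf("%" PRIu32 ".%02" PRIu32 " K\n",
	       cpu_temp / 100, cpu_temp % 100);
	return 0;
}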
Sensors might get better, at least on high-end systems.
Rounding to whole kelvins gives a constant 0.15 degree error: a sensor
that reports integer degrees Celsius always lands on x.15 K after the
273.15 K offset, so a coarser field can never hold the reading exactly.
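A quick demonstration, assuming a sensor that reports whole degrees
Celsius:

#include <stdio.h>

/* Illustration only: integer-Celsius readings always land on x.15 K
 * after the 273.15 K offset. A whole-kelvin field therefore carries a
 * constant 0.15 K representation error, while a centikelvin field
 * holds the same readings exactly. */
int main(void)
{
	for (int celsius = 25; celsius <= 75; celsius += 25) {
		double kelvin = celsius + 273.15;
		long whole_k = (long)(kelvin + 0.5);         /* round to 1 K    */
		long centi_k = (long)(kelvin * 100.0 + 0.5); /* round to 0.01 K */

		printf("%d C = %.2f K -> whole-K %ld (err %.2f), centi-K %ld.%02ld\n",
		       celsius, kelvin, whole_k, kelvin - whole_k,
		       centi_k / 100, centi_k % 100);
	}
	return 0;
}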
Only the truly stupid would assume accuracy from decimal places.
Again, the bits are free; the API is hard to change.
One might provide other numbers to specify accuracy and precision.
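Something like the following, say (the struct and field names are
invented for this sketch, not a proposal tied to any existing driver):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical self-describing reading. Everything is in centikelvin
 * (0.01 K) so the encoding stays uniform across fields. */
struct temp_reading {
	uint32_t value;      /* measured temperature                      */
	uint32_t resolution; /* smallest step the sensor actually reports */
	uint32_t accuracy;   /* +- bound on the true value                */
};

int main(void)
{
	/* A sensor reading 328.15 K in 0.1 K steps, accurate to +- 1 K.
	 * Nobody has to guess which trailing digits are meaningful. */
	struct temp_reading r = {
		.value      = 32815,
		.resolution = 10,
		.accuracy   = 100,
	};

	printf("%u.%02u K (resolution %u.%02u K, accuracy +-%u.%02u K)\n",
	       (unsigned)(r.value / 100), (unsigned)(r.value % 100),
	       (unsigned)(r.resolution / 100), (unsigned)(r.resolution % 100),
	       (unsigned)(r.accuracy / 100), (unsigned)(r.accuracy % 100));
	return 0;
}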