A fellow metric supporter who admits to being a little weak on mathematics owned up to not understanding the difference between accuracy and precision when it comes to measurement.
He is probably right in saying that he is not alone, and that many people fail to see the advantages of metric in this respect.
I offer here two examples in an attempt to clarify the issue: one purely numerical, the other a practical, everyday example of measurement.
I will take it that the reader will readily understand that to square a number you multiply it by itself. It follows that there is a number which, when squared, yields 2 (referred to as the square root of 2). The value of this number lies somewhere between 1.4 and 1.5. Actual arithmetic shows that:
1.4 x 1.4 = 1.96
1.5 x 1.5 = 2.25
Two things to notice here:
(a) 1.4 is nearer to the square root of 2 than 1.5 is, so 1.4 is the more accurate of the two.
(b) 1.4 and 1.5 are both given to one decimal place and two significant figures. That is, they have the same numerical precision.
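The point can be checked with a few lines of Python, comparing the two equally precise candidates against the true square root of 2:

```python
import math

# Two approximations to the square root of 2, both quoted to one
# decimal place -- i.e. with the same numerical precision.
candidates = [1.4, 1.5]
true_root = math.sqrt(2)  # ≈ 1.41421...

for x in candidates:
    print(f"{x} squared = {x * x:.2f}, "
          f"error versus sqrt(2) = {abs(x - true_root):.5f}")
# 1.4 squared = 1.96, error versus sqrt(2) = 0.01421
# 1.5 squared = 2.25, error versus sqrt(2) = 0.08579
```

Both values carry the same precision, but the error column shows that 1.4 is the more accurate.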
My second example involves actual personal experience with digital bathroom scales. Some years ago I bought an instrument that gave readings to the nearest 0.5 kg. The manufacturer stated that the accuracy was to within 1% of the applied weight. This means that the scales should give a reading within the range from 0.5% below the correct weight to 0.5% above it.
Now suppose a child were to use it. If her true weight were, say, 21.3 kg, the instrument would (in theory) measure a value somewhere between 21.2 kg and 21.4 kg. But because it cannot display either of those figures, it would show either 21.0 kg or 21.5 kg, which is only accurate to within about 1.4%. For heavier weights the instrument can achieve the stated accuracy, but this example is a case of the instrument not being precise enough for the given accuracy.
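This mismatch can be sketched directly, using the figures above (a true weight of 21.3 kg, ±0.5% measurement accuracy, and a 0.5 kg display step):

```python
true_weight = 21.3  # kg, the child's actual weight

# Extremes of what the mechanism can measure at ±0.5% accuracy.
measured_low = true_weight * 0.995   # ≈ 21.19 kg
measured_high = true_weight * 1.005  # ≈ 21.41 kg

def display(measured, step=0.5):
    """Round a measured value to the nearest display step."""
    return round(measured / step) * step

for m in (measured_low, measured_high):
    shown = display(m)
    error_pct = abs(shown - true_weight) / true_weight * 100
    print(f"measured {m:.2f} kg -> displays {shown} kg ({error_pct:.1f}% off)")
# measured 21.19 kg -> displays 21.0 kg (1.4% off)
# measured 21.41 kg -> displays 21.5 kg (0.9% off)
```

The coarse display turns a ±0.5% measurement into a reading up to about 1.4% from the true weight.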
At a much later date I invested in an instrument that displays to the nearest 0.1 kg. The manufacturer declined to state the accuracy in the product description, but I'm sceptical that when I get on them I am being weighed to an accuracy of 100 g. If I'm right then we have the converse mismatch, namely that the display is over-precise for the accuracy of the measurement.
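To illustrate the converse mismatch, here is a small sketch. The ±0.5 kg accuracy figure is purely an assumption for the sake of the example (the manufacturer stated none):

```python
# Assumption for illustration only: suppose the newer scale is really
# accurate to ±0.5 kg despite displaying to 0.1 kg.
assumed_accuracy = 0.5  # kg -- an assumed figure, not a manufacturer spec
display_step = 0.1      # kg, the instrument's display resolution

# How many distinct displayed values are consistent with one true weight?
consistent_readings = int(2 * assumed_accuracy / display_step) + 1
print(consistent_readings)  # 11
```

Under that assumption, eleven different displayed values could all correspond to the same person: the last digit looks informative but isn't.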
Also, as a more graphic demonstration of the difference between accuracy and precision (one that we were taught when I first started studying chemistry at uni), one can imagine playing darts and aiming for the bullseye.
A precise and accurate turn would result in the three darts being clustered closely around the bull.
A precise but inaccurate turn would result in the three darts being clustered closely together, but away from the bull.
An accurate but imprecise turn would for example have the three darts spread around the edge of the board so that their average position was the bull.
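The three cases above can be sketched numerically. This is a simplified one-dimensional model (horizontal position only), using many throws per "turn" so the statistics are stable; the mean position stands for accuracy and the scatter for precision:

```python
import random
import statistics

random.seed(1)  # reproducible throws

def turn(offset, spread, throws):
    """Simulate throws centred `offset` to the right of the bull,
    with random scatter `spread` (arbitrary board units)."""
    return [random.gauss(offset, spread) for _ in range(throws)]

def describe(darts):
    """Mean position measures accuracy; scatter measures precision."""
    return statistics.mean(darts), statistics.stdev(darts)

for label, offset, spread in [("precise and accurate", 0, 0.5),
                              ("precise but inaccurate", 5, 0.5),
                              ("accurate but imprecise", 0, 5)]:
    centre, scatter = describe(turn(offset, spread, throws=200))
    print(f"{label}: average {centre:+.1f} from the bull, "
          f"scatter {scatter:.1f}")
```

The imprecise turn shows a large scatter but an average near the bull; the inaccurate one shows a small scatter centred well away from it.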
I always found this an easy way of remembering the difference. In summary, in scientific terms, precision refers to the 'reproducibility' of the result but makes no reference to how correct it is. Accuracy refers to the 'correctness' of the result, even though it may have been achieved as an average of imprecise measurements.
This seemingly technical issue is quite important in persuading people to use the metric system. A lot of resistance comes from people who think that imperial measurements need to be exact, so they translate them literally to metric and then wonder why they get numbers they can't remember. For example, if a recipe calls for 8 oz of flour and you literally translate that to 227 g and then try to weigh out that quantity, you're going to find it really tricky. In fact you could easily get away with anywhere between 200 g and 250 g, because cooking doesn't require that level of precision. It's the same in other areas – you don't need 25.4 mm; normally 20, 25 or 30 mm is precise enough to replace an inch, and 250 or 300 ml is often precise enough to replace half a pint.
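A quick sketch of this rounding-to-convenient-values idea, using the recipe figure from the comment (the 25 g rounding step is an illustrative choice):

```python
OZ_TO_G = 28.349523125  # grams per avoirdupois ounce (exact by definition)

def convenient_metric(ounces, step=25):
    """Convert to grams, then round to the nearest `step` grams --
    coarse enough for cooking, and easy to remember."""
    exact = ounces * OZ_TO_G
    return exact, round(exact / step) * step

exact, rounded = convenient_metric(8)
print(f"8 oz = {exact:.0f} g exactly; call it {rounded} g")
# 8 oz = 227 g exactly; call it 225 g
```

The rounded value sits comfortably inside the 200–250 g band the comment says is good enough for flour.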