National Physical Laboratory

Is a measurement uncertainty of, say, 0.95 % meaningfully better than 1.0 %? (FAQ - Force)

No, the difference is not very meaningful at all.

Even in a well-ordered and detailed uncertainty budget, where the statistical distributions of the component parts are well known, the uncertainty in the calculated uncertainty will probably be at least 50 %. It gets larger still as the component parts become less statistically sound - as is often the case where little or no statistical data exists and best guesses have to be used instead. This is a very large percentage, and it essentially means that little significance should be attached to a calculated uncertainty halving or doubling.
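
As a rough illustration of why, the short Python sketch below (not an NPL calculation - just a minimal Monte Carlo simulation assuming independent, normally distributed repeat readings) estimates how much a standard uncertainty derived from only a few readings can itself vary. With three readings the relative spread is around 50 %, of the same order as the figure quoted above.

```python
# Minimal sketch: how uncertain is a standard uncertainty estimated from
# only a few repeat readings? For n independent normal readings, the
# relative spread of the sample standard deviation is roughly
# 1 / sqrt(2 * (n - 1)).

import math
import random

random.seed(1)

TRUE_SIGMA = 1.0   # the "real" standard uncertainty being estimated
TRIALS = 10_000    # Monte Carlo repetitions

for n in (3, 5, 10, 30):
    estimates = []
    for _ in range(TRIALS):
        readings = [random.gauss(0.0, TRUE_SIGMA) for _ in range(n)]
        mean = sum(readings) / n
        s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
        estimates.append(s)
    mean_s = sum(estimates) / TRIALS
    spread = math.sqrt(sum((s - mean_s) ** 2 for s in estimates) / (TRIALS - 1))
    approx = 1.0 / math.sqrt(2 * (n - 1))
    print(f"n={n:2d}: relative spread of s = {spread / mean_s:.0%} "
          f"(rule of thumb: {approx:.0%})")
```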

This point is not well known, however, and those seeking calibration services mostly assume otherwise, perceiving that a figure of, say, 0.95 % is meaningfully better than 1.0 %. Metrologically the difference is insignificant, but the lower figure attracts customers, and some calibration service providers go to great lengths to show tiny reductions in their advertised best measurement capability figures. There is thus a commercially driven tendency for uncertainty estimates to get smaller, one which has little to do with practical needs or robust metrology.

It is also worth pointing out that measurement uncertainties are essentially calculated by summing all the effects known to the author of a particular budget that can influence an instrument's performance. Forgetting to account for an effect (and additional research invariably discovers more effects) therefore leads to a lower (falsely 'better') uncertainty claim. It is wise to be cautious of impressively low values of uncertainty - they can be caused by lack of knowledge!
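
As a rough sketch of that arithmetic, the example below assumes uncorrelated components combined in quadrature (root sum of squares), as is typical in an uncertainty budget; the component names and values are purely illustrative, not a real force-calibration budget, and are chosen so that the full budget comes to about 1.0 %. Leaving one known effect out is enough to turn the claimed 1.0 % into a claimed 0.95 %.

```python
# Minimal sketch: combine illustrative uncertainty components in quadrature
# and show how forgetting one effect lowers the claimed figure.

import math

def combined_uncertainty(components):
    """Combine standard uncertainty components by root sum of squares."""
    return math.sqrt(sum(u ** 2 for u in components.values()))

# Illustrative values only (all in % of reading), not a real budget.
budget = {
    "reference standard": 0.80,
    "repeatability":      0.45,
    "temperature":        0.25,
    "alignment":          0.31,
}

full = combined_uncertainty(budget)

# Omit one known effect and the claimed uncertainty conveniently shrinks.
without_alignment = combined_uncertainty(
    {k: v for k, v in budget.items() if k != "alignment"}
)

print(f"all components included: {full:.2f} %")
print(f"'alignment' forgotten:   {without_alignment:.2f} %  (looks better, isn't)")
```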

Purchasers of force measuring equipment and the related calibration services are advised to base their judgements on factors beyond small differences between quoted uncertainty figures.

Last Updated: 25 Mar 2010
Created: 8 Oct 2007