National Physical Laboratory

Computing and Measurement


Introduction

Measurement science is changing and becoming more complex, and mathematics and software now play an increasingly important role in it. Software is used in almost all measuring instruments, from domestic devices such as bathroom scales and gas meters to pressure sensors in nuclear reactors and roadside breathalysers. While the hardware is becoming smaller and cheaper, the software is getting more complex. Sensors are becoming more commonplace, with larger quantities of data being collected, and this data must be processed and presented in a meaningful way. It is no longer acceptable to produce results as tables of numbers; instead, graphical representations of the results are required, sometimes involving virtual reality.

In some cases (e.g. because of cost, time, or safety risks), it is impracticable to take measurements, so as an alternative, the physics can be simulated in software using mathematical modelling, and the measurements can be taken from the simulations.
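
To give a flavour of this, the short sketch below (written in Python, with every number invented for the example rather than taken from any NPL model) 'measures' the period of a swinging pendulum from a simulated experiment instead of a physical one:

    import math

    # Simulated "experiment": a simple pendulum integrated numerically, standing
    # in for a physical measurement. All values are invented for illustration.
    g = 9.81                   # gravitational acceleration, m/s^2
    length = 0.50              # pendulum length, m
    theta, omega = 0.05, 0.0   # initial angle (rad) and angular velocity (rad/s)
    dt = 1e-4                  # integration time step, s

    crossings = []             # times at which the pendulum swings down through zero
    t = 0.0
    while len(crossings) < 3:
        # Semi-implicit Euler step of d2(theta)/dt2 = -(g/length)*sin(theta)
        omega += -(g / length) * math.sin(theta) * dt
        new_theta = theta + omega * dt
        if theta > 0.0 >= new_theta:   # downward zero crossing detected
            crossings.append(t)
        theta, t = new_theta, t + dt

    # The "measured" period is the spacing between successive downward crossings.
    period = crossings[-1] - crossings[-2]
    print(f"simulated period: {period:.4f} s")
    print(f"small-angle prediction: {2 * math.pi * math.sqrt(length / g):.4f} s")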

[Image: The need for validation]

Validation of Software in Instruments

Software can behave unexpectedly, so how can we be confident that the software embedded in measurement instruments, and the instruments themselves, are working properly? Validation is the term used for determining whether an instrument does what it is supposed to do, i.e. whether it is fit for purpose.

The obvious approach is to test the instrument: get several people of different (known) weights to stand on the bathroom scales and see if the scales give the right answer. This is the usual way to test instruments, and it may be sufficient for instruments that use software, if the software is simple enough. However, if the software is complicated or if the instrument is used in important applications (e.g. nuclear reactors, or law enforcement), then it is necessary to 'take the lid off' the software and see how it works. How much validation of the software should be done is a balance between the cost of doing the validation, the likelihood that further validation will find further errors, and the risks if errors are left undetected. There may also be legal requirements that certain validation is done, arising, for instance, from safety regulations or weights and measures legislation.
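
In its simplest form such a test can be written down directly; the sketch below is a minimal illustration in Python (the reference weights, readings and tolerance are invented, not taken from any regulation):

    # Minimal black-box test of a weighing instrument: compare readings taken
    # with reference weights against the known values, within a tolerance.
    # Reference values, readings and tolerance are invented for illustration.
    reference_kg = [50.0, 65.0, 80.0, 95.0]   # certified test weights
    readings_kg = [50.1, 64.9, 80.2, 95.0]    # what the scales displayed
    tolerance_kg = 0.3                        # maximum permissible error

    for ref, reading in zip(reference_kg, readings_kg):
        error = reading - ref
        verdict = "PASS" if abs(error) <= tolerance_kg else "FAIL"
        print(f"reference {ref:5.1f} kg  reading {reading:5.1f} kg  "
              f"error {error:+.1f} kg  {verdict}")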

[Image: The importance of validation]

There are many techniques for software validation: getting independent software experts to read the code and verify that it does what was intended, or testing the software in isolation from the instrument, perhaps looking for mistakes that have been made in similar code before. We can also validate the algorithms implemented in the code by performing a mathematical analysis, to ensure that the algorithms are stable and don't 'blow up' when given strange data.
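
A classic illustration of why such analysis matters (a standard numerical example, not taken from any particular instrument) is to compare two mathematically equivalent formulas for the sample variance; on awkward but perfectly legal data, the 'textbook' one-pass formula loses its accuracy while the two-pass formula does not:

    def variance_naive(xs):
        # One-pass "textbook" formula: algebraically correct, but it subtracts two
        # nearly equal large sums, so rounding error can swamp the true answer
        # (the result can even come out negative).
        n = len(xs)
        s = sum(xs)
        s2 = sum(x * x for x in xs)
        return (s2 - s * s / n) / (n - 1)

    def variance_stable(xs):
        # Two-pass formula: subtract the mean first, so the summed terms stay small.
        n = len(xs)
        mean = sum(xs) / n
        return sum((x - mean) ** 2 for x in xs) / (n - 1)

    # "Strange" but legitimate data: a small spread sitting on a huge offset.
    data = [1e8 + x for x in (0.1, 0.2, 0.3, 0.4, 0.5)]
    print("naive  variance:", variance_naive(data))    # badly wrong, may even be negative
    print("stable variance:", variance_stable(data))   # close to 0.025, the correct answer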

There is no magic solution to the problem of validating software; we just apply a number of sensible techniques to reduce the risk of instrument malfunction to an acceptable level.

[Image: Software testing process]

Software Testing

We need to test software to understand the extent to which the results returned are numerically correct. One approach is to use reference data sets, and associated reference results, that have been prepared beforehand.

Where possible, the reference data sets are determined using a data generator: a special piece of software which, when given a set of results, produces corresponding data sets.

The software under test is run on the reference data sets, and the results are compared with the expected reference results.
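
A minimal sketch of this process for straight-line fitting software, assuming Python with NumPy, is given below. The reference results (a slope and an intercept) are chosen first, and the data generator builds a data set whose correct fit is exactly those values, because the added residuals are constructed so that they do not move the fitted line:

    import numpy as np

    # Reference results chosen in advance: the "right answers" for a straight-line fit.
    ref_slope, ref_intercept = 2.5, -1.0

    # Data generator: build a data set whose least-squares straight-line fit is
    # exactly the reference results. The residuals r sum to zero and are
    # orthogonal to the x values, so they leave the fitted line unchanged.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    r = np.array([1.0, -2.0, 0.0, 2.0, -1.0])   # sum(r) = 0 and sum(r*x) = 0
    y = ref_slope * x + ref_intercept + r       # the reference data set

    # "Software under test": here NumPy's own polynomial fit stands in for it.
    fit_slope, fit_intercept = np.polyfit(x, y, 1)

    # Compare the test results with the reference results within a tolerance.
    tol = 1e-10
    ok = abs(fit_slope - ref_slope) < tol and abs(fit_intercept - ref_intercept) < tol
    print(f"fitted slope {fit_slope:.12f}, intercept {fit_intercept:.12f}: "
          f"{'PASS' if ok else 'FAIL'}")

In practice the routine being validated would replace the stand-in fit, and the size of the discrepancy, not just a pass/fail verdict, would be reported.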

[Image: Virtual modelling]

Modelling: The Key to Extracting Information from Data

To extract knowledge from measurements, we need a mathematical model of the system. For an (apparently) simple measurement task, such as using a steel rule to measure the length of a wooden rod, there may be a straightforward relationship between the measurements and the quantities of interest, e.g. the reading from the ruler scale is an estimate of the length of the rod. In more complicated measurement experiments, the relationship between the measurements and the parameters of interest is generally less straightforward.

To get an accurate estimate of the rod length, we must take into account the effect of temperature and bending on the rule and rod, the squareness of the ends of the rod, the effect of humidity on the wood, etc.
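
As a hedged illustration (the expansion coefficients, temperature and reading below are nominal values invented for the example, not reference data), such a correction might look like this:

    # Correct a length reading for thermal expansion of the rule and of the rod.
    # Coefficients and readings are nominal values used purely for illustration.
    ALPHA_STEEL = 11.5e-6   # /°C, linear expansion coefficient of the steel rule
    ALPHA_WOOD = 5.0e-6     # /°C, along-grain expansion coefficient of the wooden rod
    T_REF = 20.0            # °C, reference temperature at which results are stated

    reading_mm = 500.12     # length read from the rule scale
    temp_c = 26.0           # temperature at the time of measurement

    # The rule's graduations have stretched, so a reading R corresponds to a
    # true length of R*(1 + ALPHA_STEEL*dT) at the measurement temperature...
    true_at_temp = reading_mm * (1 + ALPHA_STEEL * (temp_c - T_REF))
    # ...and the rod itself must then be referred back to the reference temperature.
    length_at_ref = true_at_temp / (1 + ALPHA_WOOD * (temp_c - T_REF))

    print(f"raw reading: {reading_mm:.3f} mm")
    print(f"length at {T_REF:.0f} °C: {length_at_ref:.3f} mm")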

In general terms, the mathematical model predicts the response of the system (e.g. the scale reading) as a function of the variables (e.g., temperature, humidity) and unknown model parameters. These unknown parameters can be estimated by conducting a measurement experiment. The response of the system is recorded for different values of the variables, and a set of equations is solved to determine estimates of the unknown parameters.
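
Continuing the rod example with invented numbers, the sketch below writes the model response (the scale reading) as a function of temperature with two unknown parameters, and solves the pair of equations obtained from readings at two temperatures:

    import numpy as np

    # Model: reading(T) = L20 * (1 + k*(T - 20)), with unknown parameters L20
    # (length at 20 °C) and k (an effective expansion coefficient). Writing it as
    # reading(T) = p0 + p1*(T - 20), where p0 = L20 and p1 = L20*k, readings at
    # two temperatures give two linear equations in the two unknowns p0 and p1.
    # All numbers are invented for illustration.
    temps = np.array([18.0, 30.0])          # °C
    readings = np.array([499.95, 500.25])   # mm

    A = np.column_stack([np.ones_like(temps), temps - 20.0])
    p = np.linalg.solve(A, readings)        # solve the pair of model equations

    L20 = p[0]
    k = p[1] / L20
    print(f"estimated length at 20 °C: {L20:.3f} mm")
    print(f"estimated expansion coefficient: {k:.2e} /°C")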

Uncertainties in the measurements further complicate matters, and lead to uncertainties in the parameter estimates. The measurement uncertainties must be taken into account when solving the equations, in order to determine estimates that are most consistent with, and make best use of, the data.
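
The sketch below extends the previous one, again with invented numbers and assuming NumPy: with more readings than unknowns, each carrying a stated standard uncertainty, the parameters are estimated by weighted least squares, and their own uncertainties are read off from the resulting covariance matrix:

    import numpy as np

    # Overdetermined version of the rod model: more readings than unknowns, each
    # reading carrying a stated standard uncertainty. Values are invented.
    temps = np.array([16.0, 20.0, 24.0, 28.0, 32.0])                 # °C
    readings = np.array([499.93, 500.01, 500.09, 500.22, 500.29])    # mm
    u_readings = np.array([0.02, 0.02, 0.03, 0.03, 0.05])            # mm

    # Weighted least squares: divide each equation by its uncertainty so that
    # the most precise readings carry the most weight.
    A = np.column_stack([np.ones_like(temps), temps - 20.0])
    Aw = A / u_readings[:, None]
    yw = readings / u_readings

    p, *_ = np.linalg.lstsq(Aw, yw, rcond=None)     # parameter estimates

    # Covariance of the estimates: inverse of the weighted normal-equations matrix.
    cov = np.linalg.inv(Aw.T @ Aw)
    u_p = np.sqrt(np.diag(cov))

    print(f"length at 20 °C: {p[0]:.3f} mm, standard uncertainty {u_p[0]:.3f} mm")
    print(f"slope: {p[1]:.4f} mm/°C, standard uncertainty {u_p[1]:.4f} mm/°C")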

[Image: Robot]

Soft Computing

Soft computing differs from normal ('hard') computing by exploiting uncertainty and partial truths. Its most important principle is that it takes account of the inherent imprecision of models to give solutions that are robust, easy to work with, and obtainable at low cost.

Soft computing techniques are modelled on the human mind, and as such, systems that employ them are inherently adaptive and intelligent in nature.

There are three main branches of soft computing:

  • neural networks, which are collections of mathematical models that emulate some of the observed biological learning properties of the human brain and nervous system
  • fuzzy logic, which resembles human reasoning in its use of approximate information and uncertainty to make decisions
  • methods that employ probabilistic techniques (e.g. genetic algorithms) to evolve populations of possible solutions

Typical uses of soft computing are in pattern recognition, robotics, decision making, and various control problems.
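
As a small taste of the third branch, the sketch below (a generic illustration, not any particular NPL application) evolves a population of candidate solutions towards the minimum of a simple function by repeated selection and mutation:

    import random

    def fitness(x):
        # Toy objective: we want the value of x that minimises (x - 3)^2.
        return (x - 3.0) ** 2

    # Start from a random population of candidate solutions.
    population = [random.uniform(-10.0, 10.0) for _ in range(20)]

    for generation in range(50):
        # Selection: keep the better half of the population.
        population.sort(key=fitness)
        survivors = population[:10]
        # Reproduction with mutation: each survivor spawns a perturbed child.
        children = [x + random.gauss(0.0, 0.5) for x in survivors]
        population = survivors + children

    best = min(population, key=fitness)
    print(f"best candidate after 50 generations: {best:.3f} (true optimum is 3.0)")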

[Image: Internet calibration]

Internet Calibration

Measurement instruments need to be calibrated to check that they are measuring correctly.

Typically, a laboratory will send an artefact to NPL, where its characteristics are measured against the national standard and a calibration certificate is issued. The laboratory may then use this artefact to calibrate its own devices, comparing the results with those from NPL.

The internet allows NPL to perform, and to request, some calibrations remotely, with minimal physical transportation of artefacts or equipment. Using no more than a standard internet connection (the same as that used by any web browser), software at NPL controls the measuring equipment located at the laboratory, analyses the measured data, and can issue a certificate.

Computing History

 

  • c. 5000 BC: Tally stick, the first recorded use of a counting aid: marks carved into a wolf's shinbone.
  • c. 1500 AD: Leonardo da Vinci designed a mechanical calculator. Working models have been created.
  • 1822 AD: Charles Babbage designed a mechanically operated Difference Engine. This was the first 'computer'. A modern construction has been made: it contains 4000 components, weighs 3 tons, and calculates to 31 digits of accuracy.
  • 1946 AD: John W Mauchly and J Presper Eckert Jr built ENIAC at the University of Pennsylvania. This weighed 30 tons and contained 18 000 vacuum tubes. It could do 100 000 calculations per second.
  • 1978 AD: Cray-1, the first commercially produced supercomputer. It contained 200 000 integrated circuits and ran at 150 million floating point operations per second.
  • 1981 AD: IBM PC model 5150 released, with 64 kilobytes of RAM and a single-sided 5 1/4" floppy disc drive. It employed a 4.77 MHz Intel 8088 processor.


