When describing sensors and instrumentation systems, we use a range of terms to quantify their characteristics and performance. It is important to have a clear understanding of this terminology, so we will look briefly at some of the more important terms.
• Range:
This defines the maximum and minimum values of the quantity that the sensor or instrument is designed to measure.
• Resolution or discrimination:
This is the smallest discernible change in the measured quantity that the sensor is able to detect. This is usually expressed as a percentage of the range of the device; for example, the resolution might be given as 0.1 percent of the full-scale value (that is, one-thousandth of the range).
• Error:
This is the difference between a measured value and the true value of the quantity being measured. Errors may be divided into random errors and systematic errors. Random errors produce scatter within repeated readings; their effects may be quantified by comparing multiple readings and noting the amount of scatter present, and may be reduced by taking the average of these repeated readings. Systematic errors affect all readings in a similar manner and are caused by factors such as calibration errors. Since all readings are affected in the same way, taking multiple readings does not allow quantification or reduction of such errors (the short simulation below illustrates the distinction between the two types).
• Accuracy, inaccuracy and uncertainty:
The term accuracy describes the maximum expected error associated with a measurement (or a sensor) and may be expressed as an absolute value or as a percentage of the range of the system. For example, the accuracy of a vehicle speed sensor might be given as ±1 mph or as ±0.5 percent of the full-scale reading (the sketch below converts such percentage figures into absolute values). Strictly speaking, this is a measure of its inaccuracy, and for this reason the term uncertainty is sometimes used instead.
• Precision:
This is a measure of the lack of random errors (scatter) produced by a sensor or instrument.
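To make the percentage figures above concrete, the short sketch below converts a resolution quoted as 0.1 percent of full scale and an accuracy quoted as ±0.5 percent of full scale into absolute values. The 200 mph full-scale figure, the variable names and the function percent_of_full_scale are illustrative assumptions, not part of any particular sensor's specification.

def percent_of_full_scale(percent, full_scale):
    """Convert a figure quoted as a percentage of full scale into an
    absolute value in the same units as the full-scale reading."""
    return (percent / 100.0) * full_scale

# Hypothetical vehicle speed sensor with a full-scale reading of 200 mph.
FULL_SCALE_MPH = 200.0

# Resolution of 0.1 percent of full scale -> 0.2 mph
# (one-thousandth of the range).
resolution_mph = percent_of_full_scale(0.1, FULL_SCALE_MPH)

# Accuracy of +/-0.5 percent of full scale -> +/-1 mph, matching the
# two equivalent forms given in the text.
accuracy_mph = percent_of_full_scale(0.5, FULL_SCALE_MPH)

print(f"Resolution: {resolution_mph:.1f} mph")    # 0.2 mph
print(f"Accuracy:  +/-{accuracy_mph:.1f} mph")    # +/-1.0 mph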
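The distinction between random and systematic errors, and the use of the scatter of repeated readings as a measure of precision, can also be illustrated with a small simulation. The true value, systematic offset and amount of random scatter used below are arbitrary assumptions chosen only to make the behaviour visible.

import random
import statistics

TRUE_VALUE = 50.0         # true value of the measured quantity
SYSTEMATIC_OFFSET = 0.8   # e.g. a calibration error: shifts every reading equally
RANDOM_SCATTER = 0.5      # standard deviation of the random error

def take_reading():
    """One simulated reading: true value + systematic error + random error."""
    return TRUE_VALUE + SYSTEMATIC_OFFSET + random.gauss(0.0, RANDOM_SCATTER)

readings = [take_reading() for _ in range(100)]

mean_reading = statistics.mean(readings)
scatter = statistics.stdev(readings)   # quantifies the random errors (precision)

# Averaging reduces the effect of the random errors, so the mean lies close
# to TRUE_VALUE + SYSTEMATIC_OFFSET; the systematic error remains, since it
# cannot be detected or removed simply by repeating the readings.
print(f"Mean of readings:  {mean_reading:.2f}")
print(f"Scatter (std dev): {scatter:.2f}")
print(f"Residual error:    {mean_reading - TRUE_VALUE:.2f}")

Running the sketch repeatedly shows the scatter figure staying close to the assumed random spread, while the residual error in the averaged reading stays close to the systematic offset.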