Ele312 Mod 1b


1.5.3 Factors influencing measurement errors

Errors arise in measurement systems due to several causes, such as human error or the use of an instrument in an application for which it has not been designed. The definitions that follow describe the factors that influence measurement errors.

1.5.3.1 Accuracy

Accuracy refers to how closely the measured value agrees with the true value of the parameter being
measured. For electrical instruments the accuracy is usually defined as a percentage of full-scale
deflection.
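To make the full-scale-deflection specification concrete, the following sketch (the range and readings are assumed for illustration, not taken from the text) shows how a fixed ±1 per cent of full scale translates into a growing percentage of the actual reading as the reading moves down the scale:

    # Sketch: worst-case error of a meter specified as +/-1% of full-scale
    # deflection (FSD); the range and readings below are assumed values.
    full_scale = 100.0       # assumed 100 V range
    accuracy_fsd = 0.01      # +/-1% of full scale
    abs_error = accuracy_fsd * full_scale   # +/-1 V anywhere on the scale

    for reading in (100.0, 50.0, 10.0):
        pct_of_reading = 100.0 * abs_error / reading
        print(f"{reading:6.1f} V reading: +/-{abs_error:.1f} V "
              f"= +/-{pct_of_reading:.1f}% of reading")
    # prints +/-1% at full scale, +/-2% at half scale, +/-10% at 10 V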

1.5.3.2 Precision

Precision means how exactly or sharply an instrument can be read. It is also defined as how closely identically performed measurements agree with each other. As an example, suppose that a resistor, which has a true resistance of 26 863 Ω, is measured by two different meters. The first meter has a scale which is graduated in kΩ, so that the closest one can get to a reading of the resistance is 27 kΩ. The instrument is fairly accurate but very imprecise. The second instrument has a digital readout which gives values of resistance to the nearest ohm. On this instrument the same resistor measures 26 105 Ω. Clearly this instrument has high precision but low accuracy.

1.5.3.3 Resolution

The resolution of an instrument is the smallest change in the measured value to which the instrument
will respond. For a moving pointer instrument the resolution depends on the deflection per unit input.
For a digital instrument the resolution depends on the number of digits on the display.
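As a rough sketch (the digit counts and ranges below are assumed conventions, not figures from the text), the resolution of a digital display can be estimated by dividing the selected range by the number of display counts:

    # Sketch: resolution of a digital meter from its display count.
    # A 3 1/2-digit display shows up to 1999 counts (assumed convention).
    def resolution(full_scale, counts):
        # smallest input change represented by one display count
        return full_scale / counts

    print(resolution(2.0, 1999))    # 3 1/2 digits, 2 V range -> about 1 mV
    print(resolution(2.0, 19999))   # 4 1/2 digits, 2 V range -> about 0.1 mV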

1.5.3.4 Range and bandwidth

The range of an instrument refers to the minimum and maximum values of the input variable for which
it has been designed. The range chosen should be such that the reading is large enough to achieve close to the required precision. For example, with a linear scale, an instrument which has 1 per cent precision at full scale will have only 4 per cent precision at quarter scale. The bandwidth of an instrument is the
difference between the minimum and maximum frequencies for which it has been designed. If the signal
is outside the bandwidth of the instrument, it will not be able to follow changes in the quantity being
measured. A wider bandwidth usually improves the response time of an instrument, but it also makes
the system more prone to noise interference.
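The quarter-scale figure follows from the absolute error staying fixed while the reading shrinks; a minimal sketch, with an assumed 100-unit full scale:

    # Sketch: 1% precision at full scale becomes 4% at quarter scale,
    # because the absolute error is fixed on a linear scale (values assumed).
    full_scale = 100.0
    abs_error = 0.01 * full_scale          # +/-1 unit everywhere on the scale

    quarter_scale = full_scale / 4.0
    print(100.0 * abs_error / quarter_scale)   # 4.0 -> 4% of the reading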

1.5.3.5 Sensitivity

Sensitivity is the degree of response of a measuring device to a change in the input quantity. The sensitivity of an instrument is defined as the ratio of the output signal or response of the instrument to the input signal or measured variable.
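As a hypothetical numerical example (the thermocouple figures below are assumed, not from the text), sensitivity is simply the output change divided by the input change that produced it:

    # Sketch: sensitivity as output response over input change.
    delta_output_mV = 4.1    # assumed output change (mV)
    delta_input_C = 10.0     # assumed input change (degrees C)

    sensitivity = delta_output_mV / delta_input_C
    print(f"sensitivity = {sensitivity} mV per degree C")   # 0.41 mV/C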

1.5.3.6 Uncertainty

Uncertainty is an estimate of the possible error in a measurement. More precisely, it is an estimate of the range of values which contains the true value of a measured quantity. Uncertainty is usually reported in terms of the probability that the true value lies within a stated range of values. Measurement uncertainty has traditionally been defined as a range of values, usually centred on the measured value, that contains the true value with stated probability. A measurement result and its uncertainty were traditionally reported as

quantity = value ± U (1.12)

So, the number usually reported and called ‘uncertainty’ was actually half the range defined here. The
ISO Guide [10] redefines uncertainty to be the equivalent of a standard deviation, and thus avoids this
problem.
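Under the ISO Guide's reading of uncertainty as the equivalent of a standard deviation, a minimal sketch (the readings are invented for illustration) is to estimate it from repeated measurements:

    # Sketch: standard uncertainty from repeated readings (values assumed).
    import statistics

    readings = [26.861, 26.865, 26.859, 26.867, 26.863]  # assumed, in kilo-ohms

    mean = statistics.mean(readings)
    s = statistics.stdev(readings)       # sample standard deviation
    u = s / len(readings) ** 0.5         # standard uncertainty of the mean

    print(f"result = {mean:.4f} +/- {u:.4f} (standard uncertainty)")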

1.5.3.7 Confidence interval and confidence level

1.5.3.7.1 Confidence interval

When uncertainty is defined as above, the confidence interval is the range of values that corresponds to
the stated uncertainty.

1.5.3.7.2 Confidence level

Confidence level is the probability associated with a confidence interval. For example, one could indicate
that the true value can be expected to lie within ±x units of the measured value with 99 per cent
confidence.
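A minimal sketch of the 99 per cent example (the value and uncertainty are assumed; the coverage factor 2.576 applies when the errors are normally distributed):

    # Sketch: confidence interval = measured value +/- k * standard uncertainty.
    value = 26.863    # assumed measured value
    u = 0.0014        # assumed standard uncertainty
    k = 2.576         # coverage factor for 99% confidence (normal distribution)

    lo, hi = value - k * u, value + k * u
    print(f"99% confidence interval: {lo:.4f} to {hi:.4f}")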

1.5.3.8 Repeatability

Repeatability is defined as the degree of agreement among independent measurements of a quantity under the same conditions.

1.5.3.9 Reproducibility

Measurement reproducibility is the closeness of agreement between the results of measurements of the same measurand at different locations by different personnel using the same measurement method in similar environments.

1.5.4 Types of error

Measurement errors can be divided into four types: human, systematic, random, and applicational.

Human errors (gross errors) are generally the fault of the person using the instruments and are caused by such things as incorrect reading of instruments, incorrect recording of experimental data or incorrect use of instruments. Human errors can be minimised by adopting proper practices, taking several readings, etc.

Systematic errors result from problems with instruments, environmental effects or observational errors. Instrument errors may be due to faults in the instruments, such as worn bearings or irregular spring tension on analogue meters; improper calibration also falls into this category. Environmental errors are due to the conditions in which instruments are used: subjecting instruments to harsh environments such as high temperature, pressure or humidity, or to strong electrostatic or electromagnetic fields, may have detrimental effects, thereby causing error. Observational errors are those introduced by the observer; probably the most common are the parallax error introduced in reading a meter scale and the error of estimation when obtaining a reading from a meter scale.

Random errors are unpredictable and occur even when all the known systematic errors have been accounted for. These errors are usually caused by noise and environmental factors, and they tend to follow the laws of chance. They can be minimised by taking many readings and using statistical techniques (see the sketch at the end of this section).

Applicational errors are caused by using the instrument for measurements for which it has not been designed. For example, the instrument may be used to measure a signal that is outside its bandwidth. Another common example is using an instrument with an internal resistance that is comparable in value to that of the circuit being measured. Applicational errors can be avoided by being fully aware of the characteristics of the instrument being used.
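The benefit of taking many readings can be seen in a small simulation (a sketch with an assumed noise level, not a model of any particular instrument): with independent random noise, the scatter of the averaged result falls roughly as one over the square root of the number of readings.

    # Sketch: averaging N noisy readings shrinks the random scatter ~1/sqrt(N).
    import random
    import statistics

    TRUE_VALUE = 10.0

    def noisy_reading():
        return TRUE_VALUE + random.gauss(0.0, 0.1)   # assumed 0.1-unit noise

    for n in (1, 10, 100):
        means = [statistics.mean(noisy_reading() for _ in range(n))
                 for _ in range(2000)]
        print(n, round(statistics.stdev(means), 4))  # ~0.1, ~0.032, ~0.01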

1.5.5.1 Controlling measurement uncertainties

Figure 1.3 illustrates the three parameters of interest to a metrologist.

• Error: the difference between the measured value and the true value.

• Uncertainty: the range of values that will contain the true value.

• Offset: the difference between a target (nominal) value and the actual value.

Offset is calculated by means of an experiment to determine the absolute value of a parameter, the
simplest such experiment being a calibration against a superior standard.

Uncertainty is determined by an uncertainty analysis that takes into consideration the effects of
systematic and random errors in all the processes that lead to the assignment of a value to a
measurement result.
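A minimal sketch of how the three parameters relate, based on a hypothetical calibration against a superior standard (all numbers assumed):

    # Sketch: error, offset and uncertainty for one calibration point.
    true_value = 10.000     # value assigned by the superior standard
    target_value = 10.000   # nominal (target) value of the unit under test
    measured = 10.012       # assumed reading obtained during calibration

    error = measured - true_value       # measured value minus true value
    offset = measured - target_value    # actual value relative to the target
    uncertainty = 0.005                 # assumed result of an uncertainty analysis

    print(f"error = {error:+.3f}, offset = {offset:+.3f}, "
          f"uncertainty = +/-{uncertainty}")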
