Lecture 6 - Uncertainty and Error in Measurement


UNCERTAINTY IN MEASUREMENTS

INTRODUCTION
 Data may become the foundation of a new theory or the
undoing of an existing one.
 They may form a critical test of a structural member in an
aircraft wing that must never fail during operation.
 Therefore, before a data set can be used in an engineering
or scientific application, its quality must be established.
 “How good are the Data?”

 Are the data good if they agree well with theoretically derived results?
 Theory, however, is simply a model intended to mimic the
behavior of the real system being studied; there is no guarantee
it actually does represent the physical system well.
 The accuracy of most fundamental theories is limited both by the
accuracy of the data from which the theory was developed and
by the accuracy of the data and assumptions used when
calculating with it.
CONT…

 Thus measurements should not be compared to a theory in
order to assess their quality.
 What we are really after is the actual value of the physical
quantity being measured, and that is the standard against
which data should be tested.
 The error of a measurement is thus defined as “the
difference between the measured value and the true physical
value of the quantity”.
 The definition of error is helpful, but it suffers from one
major flaw: The error cannot be calculated exactly unless
we know the true value of the quantity being measured!
CONT…
 But we can usually estimate the likelihood that the error
exceeds some specific value. E.g.: 95% of the readings from a
particular flow meter will have an error less than 1 L/s.
 That is, we can say with 95% confidence that a reading taken from
that meter has an error of 1 L/s or less,
 or, equivalently, the reading has an uncertainty of 1 L/s at a
confidence level of 95%.
 Error or uncertainty may be estimated with statistical tools
when a large number of measurements are taken
 However, the experimentalist must also bring to bear his or her
own knowledge of how the instruments perform and of how well
they are calibrated in order to establish the possible errors and
their probable magnitudes.
 This chapter describes how to estimate the uncertainty in a
measurement and how to present the corresponding
experimental data in an easily interpreted way.
TYPES OF ERRORS
 No measurement can be made with perfect accuracy, but it is
important to find out what the accuracy actually is and how
different errors have entered into the measurement.
 A study of errors is a first step in finding ways to reduce them.
Such a study also allows us to determine the accuracy of the
final test result.
 Errors may come from different sources and are usually
classified under three main headings:
 Gross Errors: Largely human errors
 misreading of instruments, incorrect adjustment and improper
application of instruments, and computational mistakes.
 Systematic Errors: Shortcomings of the instruments
 defective or worn parts, and effects of the environment on the
equipment or the user.
 Random Errors: Those due to causes that cannot be directly
established because of random variations in the parameter or
the system of measurement.
GROSS ERRORS

 This class of errors mainly covers human mistakes in
reading or using instruments and in recording and
calculating measurement results.
 As long as human beings are involved, some gross errors
will inevitably be committed. Although complete
elimination of gross errors is probably impossible, one
should try to anticipate and correct them. Some gross
errors are easily detected; others may be very elusive.
 A large number of gross errors can be attributed to
carelessness or bad habits, such as improper reading of an
instrument, recording the result differently from the
actual reading taken, or adjusting the instrument
incorrectly.
CONT…

 A gross error may also occur when the instrument is not
set to zero before the measurement is taken; then all the
readings are off.
Remedy
 Errors like these cannot be treated mathematically. They
can be avoided only by taking care in reading and
recording the measurement data.
 Good practice requires making more than one reading of
the same quantity, preferably by a different observer.
 Never place complete dependence on one reading but take
at least three separate readings, preferably under
conditions in which the instrument is switched off and on
between readings.
SYSTEMATIC ERRORS
 This type of error is usually divided into two
different categories:
 Instrumental errors, defined as shortcomings of the
instrument;
 Environmental errors, due to external conditions affecting
the measurement.
 Instrumental errors are errors inherent in measuring
instruments because of their mechanical structure.
 Example:
 Friction in bearings of various moving components may
cause incorrect readings.
 Irregular spring tension, stretching of the spring, or
reduction in tension due to improper handling or
overloading of the instrument will result in errors.
 Calibration errors, causing the instrument to read high or
low along its entire scale. (Failure to set the instrument to
zero before making a measurement has a similar effect.)
CONT…
 There are many kinds of instrumental errors, depending
on the type of instrument used. The experimenter should
always take precautions to ensure that the instrument
being used is operating properly and does not contribute
excessive errors for the purpose at hand.
 Faults in instruments may be detected by checking for
erratic (irregular) behavior and for the stability and
reproducibility of results.
 A quick and easy way to check an instrument is to
compare it to another with the same characteristics or to
one that is known to be more accurate.
 Instrumental errors may be avoided by:
 Selecting a suitable instrument for a particular measurement
application;
 Applying correction factors after determining the amount of
instrumental error;
 Calibrating the instrument against a standard.
CONT…
 Environmental errors are due to conditions external to the
measuring device, including conditions in the area surrounding
the instrument, such as the effects of changes in temperature,
humidity, barometric pressure, or of magnetic or electrostatic
fields.
 E.g.: a change in ambient temperature at which the instrument is
used causes a change in the elastic properties of the spring in a
moving-coil mechanism and so affects the reading of the
instrument.
 Corrective measures to reduce these effects include air
conditioning, hermetically sealing certain components in the
instrument, use of magnetic shields, and the like.
 Systematic errors can also be subdivided into static or dynamic
errors.
 Static errors are caused by limitations of the measuring device
or the physical laws governing its behavior.
 E.g.: a static error is introduced in a micrometer when excessive
pressure is applied in torqueing the shaft.
 Dynamic errors are caused by the instrument’s not responding
fast enough to follow the changes in a measured variable.
RANDOM ERRORS
 Are due to unknown causes and occur even when all
systematic errors have been accounted for.
 Is the portion of the measurement error that varies
randomly in repeated measurements throughout the
conduct of a test.
 May arise from uncontrolled test conditions and
non-repeatabilities in the measurement system, measurement
methods, environmental conditions, data reduction
techniques, etc.
 In well-designed experiments, few random errors usually
occur, but they become important in high-accuracy work.
 The only way to offset these errors is by increasing the
number of readings and using statistical means to obtain
the best approximation of the true value of the quantity
under measurement.
STATISTICAL ANALYSIS OF MEASUREMENT DATA
SUBJECT TO RANDOM ERRORS
Mean & Median
 The average value of a set of measurements of a constant
quantity can be expressed as either the mean value or the
median value.
 As the number of measurements increases, the difference
between the mean value and median values becomes very
small.
 However, for any set of n measurements $x_1, x_2, \ldots, x_n$ of a
constant quantity, the most likely true value is the mean
given by:

  $\bar{x} = \dfrac{x_1 + x_2 + \cdots + x_n}{n}$

 This is valid for all data sets where the measurement
errors are distributed equally about the zero error value,
i.e. where the positive errors are balanced in quantity and
magnitude by the negative errors.
CONT…
 Median: an approximation to the mean that can be
written down without having to sum the measurements.
 The median is the middle value when the measurements
in the data set are written down in ascending order of
magnitude. For a set of n measurements $x_1, x_2, \ldots, x_n$ of a
constant quantity, written down in ascending order of
magnitude, the median value is given by:

  $x_{\mathrm{median}} = x_{(n+1)/2}$

 Thus, for a set of 9 measurements $x_1, x_2, \ldots, x_9$ arranged in
order of magnitude, the median value is $x_5$.
 For an even number of measurements, the median value is
midway between the two center values, i.e. for 10
measurements $x_1, \ldots, x_{10}$, the median value is given by:
$(x_5 + x_6)/2$.
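
These two averaging rules are easy to check numerically. The sketch below is a minimal Python illustration (the function names are my own); the data used are measurement set A from the worked example that follows:

```python
# Minimal sketch: mean and median of a set of repeated measurements
# (data shown are measurement set A from the worked example below).

def mean(x):
    """Arithmetic mean: (x1 + x2 + ... + xn) / n."""
    return sum(x) / len(x)

def median(x):
    """Middle value of the sorted data; midway between the two
    central values when n is even."""
    s = sorted(x)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:                    # odd n: x_(n+1)/2
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2  # even n: midway between the two centre values

readings = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]
print(mean(readings))    # 409.0
print(median(readings))  # 408
```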
CONT…
Standard deviation and variance
 Expressing the spread of measurements simply as the range
between the largest and smallest value is not in fact a very good
way of examining how the measurement values are distributed
about the mean value.
 A much better way of expressing the distribution is to calculate
the variance or standard deviation of the measurements.
 The starting point for calculating these parameters is to
calculate the deviation (error) $d_i$ of each measurement from
the mean value $\bar{x}$:

  $d_i = x_i - \bar{x}$

• The variance (V) is then given by:

  $V = \dfrac{d_1^2 + d_2^2 + \cdots + d_n^2}{n - 1}$

• The standard deviation (σ) is simply the square root of the
variance. Thus:

  $\sigma = \sqrt{V}$
CONT…
 Example: Calculate σ and V for measurement sets A, B
and C.
 398 420 394 416 404 408 400 420 396 413 430 Measurement set A
 409 406 402 407 405 404 407 404 407 407 408 Measurement set B
 409 406 402 407 405 404 407 404 407 407 408 406 410 406 405 408
406 409 406 405 409 406 407 Measurement set C
 Solution: From the above equ. the mean and median are
 For Measurement set A (11 data's)
 = 409.0
 = 408
 For Measurement set B (11 data's)
 = 406.0
 = 407
 For Measurement set C (23 data's)
 = 406.5
 = 406
CONT…

 To calculate the deviation and variance, let us draw up a table of the
deviations $d_i = x_i - \bar{x}$ and their squares $d_i^2$ for each measurement set.

 For measurement set A ($\bar{x}$ = 409.0 as calculated earlier): [deviation table not reproduced]

 For measurement set B ($\bar{x}$ = 406.0 as calculated earlier): [deviation table not reproduced]
CONT…
 For measurement set C ($\bar{x}$ = 406.5 as calculated earlier): [deviation table not reproduced]
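
In place of the deviation tables, V and σ for the three sets can be computed directly. The sketch below assumes the sample (n − 1) divisor used in the variance formula above; if the population (n) divisor is intended, change the marked line accordingly:

```python
from math import sqrt

set_a = [398, 420, 394, 416, 404, 408, 400, 420, 396, 413, 430]
set_b = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408]
set_c = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408,
         406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

def variance_and_sigma(x):
    """Deviations from the mean, then V and sigma.
    Uses the sample (n - 1) divisor -- an assumption; change to
    len(x) if the population convention is intended."""
    x_bar = sum(x) / len(x)
    deviations = [xi - x_bar for xi in x]              # d_i = x_i - x_bar
    v = sum(d * d for d in deviations) / (len(x) - 1)  # sample divisor
    return v, sqrt(v)

for name, data in (("A", set_a), ("B", set_b), ("C", set_c)):
    v, sigma = variance_and_sigma(data)
    print(f"Set {name}: V = {v:.2f}, sigma = {sigma:.2f}")
```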
CONT…
 We have observed so far that random errors can be reduced by
taking the average (mean or median) of a number of
measurements.
 However, although the mean or median value is close to the
true value, it would only become exactly equal to the true value
if we could average an infinite number of measurements. As we
can only make a finite number of measurements in a practical
situation, the average value will still have some error.
 It is now widely recognized that, when all of the known or
suspected components of error have been evaluated and the
appropriate corrections have been applied, there still remains
an uncertainty about the correctness of the stated result, i.e., a
doubt about the quality of the result of the measurement.
 The word uncertainty means doubt, and thus in its broadest
sense uncertainty of measurement means doubt about the
validity of the result of a measurement.
Uncertainty is defined as:
 non-negative parameter characterizing the dispersion of the
quantity values being attributed to a measurand, based on the
information used.
GRAPHICAL DATA ANALYSIS TECHNIQUES -
FREQUENCY DISTRIBUTIONS

 Graphical techniques are a very useful way of analyzing
the way in which random measurement errors are
distributed.
 The simplest way of doing this is to draw a histogram, in
which bands of equal width across the range of
measurement values are defined and the number of
measurements within each band is counted.
 E.g.: the figure below shows a histogram for set C of the length
measurement data given in the previous example, in which the
bands chosen are 2 mm wide.
 For instance, there are 11 measurements in the range
between 405.5 and 407.5 and so the height of the
histogram for this range is 11 units.
 Also, there are 5 measurements in the range from 407.5 to
409.5 and so the height of the histogram over this range is
5 units.
CONT…
 The rest of the histogram is completed in a similar
fashion. (N.B. The scaling of the bands was deliberately
chosen so that no measurements fell on the boundary
between different bands and caused ambiguity about
which band to put them in.).
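
The band counts quoted above can be verified with a short Python sketch. The 2 mm width and the 405.5/407.5/409.5 edges come from the text; the outer edges below are an assumption chosen so that the bands cover all of set C:

```python
set_c = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408,
         406, 410, 406, 405, 408, 406, 409, 406, 405, 409, 406, 407]

# 2 mm wide bands whose edges fall between possible readings, so that no
# measurement lands on a boundary (as noted in the text).
edges = [401.5, 403.5, 405.5, 407.5, 409.5, 411.5]

for lo, hi in zip(edges[:-1], edges[1:]):
    count = sum(lo < x <= hi for x in set_c)   # measurements in this band
    print(f"{lo} to {hi} mm: {count}")
# The 405.5-407.5 band contains 11 measurements and the 407.5-409.5
# band 5, matching the histogram heights quoted above.
```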

 Such a histogram has the characteristic shape shown by
truly random data, with symmetry about the mean value
(406.5) of the measurements.
CONT…
 As it is the actual value of measurement error that is
usually of most concern, it is often more useful to draw a
histogram of the deviations of the measurements from the
mean value rather than to draw a histogram of the
measurements themselves.
How to draw this?
 The starting point for this is to calculate the deviation of
each measurement away from the calculated mean value.
 Then a histogram of deviations can be drawn by defining
deviation bands of equal width and counting the number
of deviation values in each band.
 This histogram has exactly the same shape as the
histogram of the raw measurements except that the
scaling of the horizontal axis has to be redefined in terms
of the deviation values.
CONT…

 As the number of measurements increases, smaller
bands can be defined for the histogram, which retains its
basic shape but then consists of a larger number of
smaller steps on each side of the peak.
 In the limit, as the number of
measurements approaches
infinity, the histogram becomes
a smooth curve known as a
frequency distribution curve as
shown in Figure below.
 The sharper and narrower the
curve, the more definitely an
observer may state that the
most probable value of the true
reading is the central value or
mean reading.
CONT…

 The ordinate of this curve is the frequency of occurrence of
each deviation value, F(D), and the abscissa is the
magnitude of deviation, D.
 The symmetry of the figure about the zero deviation value is
very useful for showing graphically that the measurement
data contain only random errors.
 If the height of the frequency distribution curve is
normalized such that the area under it is unity, then the
curve in this form is known as a probability curve, and the
height F(D) at any particular deviation magnitude D is
known as the probability density function (p.d.f.).
CONT…

 The condition that the area under the curve is unity can be
expressed mathematically as:

  $\int_{-\infty}^{\infty} F(D)\,dD = 1$

 The probability that the error in any one particular
measurement lies between two levels D1 and D2 can be
calculated by measuring the area under the curve contained
between two vertical lines drawn through D1 and D2, as
shown by the right-hand hatched area in the figure below.
This can be expressed mathematically as:

  $P(D_1 \le D \le D_2) = \int_{D_1}^{D_2} F(D)\,dD$
CONT…
 Of particular importance for assessing the maximum error likely in
any one measurement is the cumulative distribution function (c.d.f.).
This is defined as the probability of observing a value less than or
equal to D0, and is expressed mathematically as:

  $P(D \le D_0) = \int_{-\infty}^{D_0} F(D)\,dD$

 Thus, the c.d.f. is the area under the curve to the left of a vertical
line drawn through D0, as shown by the left-hand hatched area
in the figure above.
 The deviation magnitude Dp corresponding with the peak of the
frequency distribution curve (Figure above) is the value of
deviation that has the greatest probability.
 If the errors are entirely random in nature, then the value of Dp
will equal zero.
 Any non-zero value of Dp indicates systematic errors in the data,
in the form of a bias that is often removable by recalibration.
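
As a numerical illustration of these area calculations, the sketch below integrates an assumed zero-mean Gaussian form for F(D) (the lecture does not specify the shape of the frequency distribution curve) using a simple trapezoidal rule; the function names and the choice of σ are illustrative only:

```python
import math

def f_gaussian(d, sigma):
    """Assumed probability density function F(D): zero-mean Gaussian
    with standard deviation sigma (an illustrative choice of shape)."""
    return math.exp(-d * d / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def area(pdf, lo, hi, steps=10_000):
    """Trapezoidal approximation of the area under pdf between lo and hi."""
    h = (hi - lo) / steps
    total = 0.5 * (pdf(lo) + pdf(hi))
    for i in range(1, steps):
        total += pdf(lo + i * h)
    return total * h

sigma = 1.88                        # e.g. the sigma found for measurement set C
F = lambda d: f_gaussian(d, sigma)

# P(D1 <= D <= D2): area between vertical lines through D1 and D2.
print(area(F, -1.0, 1.0))
# c.d.f. at D0 = 2.0: area to the left of D0 (lower limit truncated at -10*sigma).
print(area(F, -10 * sigma, 2.0))
```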
CONT…
AGGREGATION OF MEASUREMENT SYSTEM ERRORS
 Errors in measurement systems often arise from two or more
different sources, and these must be aggregated in the correct
way in order to obtain a prediction of the total likely error in
output readings from the measurement system.
 Two different forms of aggregation are required.
 Firstly, a single measurement component may have both
systematic and random errors and,
 Secondly, a measurement system may consist of several
measurement components that each have separate errors.
Combined effect of systematic and random errors
 If a measurement is affected by both systematic and random
errors that are quantified as ±x (systematic errors) and ±y
(random errors), the total possible error e is expressed as follows:

  $e = \sqrt{x^2 + y^2}$
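
A minimal sketch of this combination rule (the root-sum-square form shown above; the function name and the numerical values are illustrative):

```python
from math import sqrt

def total_error(x, y):
    """Total possible error when a measurement carries a systematic
    error of +/-x and a random error of +/-y: e = sqrt(x**2 + y**2)."""
    return sqrt(x ** 2 + y ** 2)

# Illustrative values only: +/-0.3 units systematic, +/-0.4 units random.
print(total_error(0.3, 0.4))  # 0.5
```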
AGGREGATION OF ERRORS FROM
SEPARATE MEASUREMENT SYSTEM COMPONENTS

 A measurement system often consists of several separate
components, each of which is subject to errors.
 Therefore, what remains to be investigated is how the errors
associated with each measurement system component
combine together, so that a total error calculation can be
made for the complete measurement system.
 All four mathematical operations of addition, subtraction,
multiplication and division may be performed on
measurements derived from different
instruments/transducers in a measurement system.
 Appropriate techniques for the various situations that arise
are covered below.
CONT…
Error in a sum
 If the two outputs y and z of separate measurement system
components are to be added together, we can write the sum as
S = y + z. If the maximum errors in y and z are ±ay and ±bz
respectively (a and b being the fractional errors in y and z), we can
express the maximum and minimum possible values of S as:

  $S = (y + z) \pm e$

 Where e is given by:

  $e = \sqrt{(ay)^2 + (bz)^2}$


CONT…

 Example: A circuit requirement for a resistance of 550 Ω is
satisfied by connecting together two resistors of nominal
values 220 Ω and 330 Ω in series. If each resistor has a
tolerance of ±2%, the error in the sum calculated according
to the above equations is given by:

  $e = \sqrt{(0.02 \times 220)^2 + (0.02 \times 330)^2} = 7.93\ \Omega$

 Thus the series combination can be expressed as 550 Ω ± 7.93 Ω, i.e. 550 Ω ± 1.4%.
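
The same resistor calculation in Python (a small sketch; the helper name is my own):

```python
from math import sqrt

def sum_error(y, z, frac_y, frac_z):
    """Likely maximum error in S = y + z when y and z carry
    fractional tolerances frac_y and frac_z."""
    return sqrt((frac_y * y) ** 2 + (frac_z * z) ** 2)

e = sum_error(220, 330, 0.02, 0.02)
print(e)        # ~7.93 ohm
print(e / 550)  # ~0.0144, i.e. about +/-1.4% of the 550 ohm total
```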
CONT…
Error in a difference
 If the two outputs y and z of separate measurement systems are
to be subtracted from one another, and the possible errors are
±ay and ±bz, then the difference S can be expressed as:

  $S = (y - z) \pm e$,  where  $e = \sqrt{(ay)^2 + (bz)^2}$

Example:
 A fluid flow rate is calculated from the difference in
pressure measured on both sides of an orifice plate. If the
pressure measurements are 10.0 bar and 9.5 bar and the
error in the pressure measuring instruments is specified
as ±0.1%, then values for e and f (the error expressed as a
fraction of the difference) can be calculated as:

  $e = \sqrt{(0.001 \times 10.0)^2 + (0.001 \times 9.5)^2} = 0.0138$ bar
  $f = 0.0138 / 0.5 = 0.028$

 i.e. the 0.5 bar difference carries an error of about ±2.8%, which is much
larger in relative terms than the ±0.1% error in either individual pressure reading.
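
And the orifice-plate example as a sketch (the helper name is illustrative):

```python
from math import sqrt

def difference_error(y, z, frac):
    """Likely maximum error in S = y - z when both readings carry
    the same fractional error 'frac'."""
    return sqrt((frac * y) ** 2 + (frac * z) ** 2)

e = difference_error(10.0, 9.5, 0.001)
print(e)                 # ~0.0138 bar
print(e / (10.0 - 9.5))  # ~0.028, i.e. about +/-2.8% of the 0.5 bar difference
```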
CONT…
Error in a product
 If the outputs y and z of two measurement system components are
multiplied together, the product can be written as P = yz. If the
possible error in y is ±ay and in z is ±bz, then the maximum and minimum
values possible in P can be written as:

  $P_{max} = (y + ay)(z + bz) \approx yz(1 + a + b)$ ;  $P_{min} = (y - ay)(z - bz) \approx yz(1 - a - b)$

 (for small a and b the terms in ab are negligible), so the maximum
possible fractional error in P is ±(a + b).
 Whilst this expresses the maximum possible error in P, it tends to
overestimate the likely maximum error, since it is very unlikely that
the errors in y and z will both be at their maximum or minimum value
at the same time. A statistically better estimate of the likely
maximum error e in the product P, expressed as a fraction of P, is:

  $e = \sqrt{a^2 + b^2}$

 Example: If the power in a circuit is calculated from measurements of
voltage and current in which the calculated maximum errors are
respectively ±1% and ±2%, then the maximum likely error in the
calculated power value is:

  $e = \sqrt{0.01^2 + 0.02^2} = 0.022$, i.e. about ±2.2%.
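
The power example as a one-line check (the function name is illustrative):

```python
from math import sqrt

def product_error(a, b):
    """Likely maximum fractional error in P = y * z when y and z
    carry fractional errors a and b."""
    return sqrt(a ** 2 + b ** 2)

print(product_error(0.01, 0.02))  # ~0.022, i.e. about +/-2.2% in the power
```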
CONT…
Error in a quotient
 If the output measurement y of one system component with
possible error ±ay is divided by the output measurement z of
another system component with possible error ±bz, then the
maximum and minimum possible values for the quotient can be
written as:

  $Q_{max} = \dfrac{y + ay}{z - bz}$ ;  $Q_{min} = \dfrac{y - ay}{z + bz}$ ;  i.e.  $Q \approx \dfrac{y}{z}\,(1 \pm (a + b))$ for small a and b

 However, using the same argument as made above for the
product of measurements, a statistically better estimate of the
likely maximum fractional error in the quotient Q is:

  $e = \sqrt{a^2 + b^2}$
CONT…

Example
 If the density of a substance is calculated from
measurements of its mass and volume where the
respective errors are ±2% and ±3%, then the maximum
likely error in the density value is:

  $e = \sqrt{0.02^2 + 0.03^2} = 0.036$, i.e. about ±3.6%.
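
The density example works the same way; the sketch below also contrasts the worst-case bound (a + b) with the statistically better root-sum-square estimate (values taken from the example above):

```python
from math import sqrt

# Density from mass and volume: mass error +/-2%, volume error +/-3%.
a, b = 0.02, 0.03

worst_case = a + b              # bound from the Qmax / Qmin expressions
likely_max = sqrt(a**2 + b**2)  # statistically better estimate of the error

print(worst_case)   # 0.05  -> +/-5% absolute worst case
print(likely_max)   # ~0.036 -> about +/-3.6% likely error in the density
```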
~END~
