Basic Engineering Measurement AGE 2310


Basic Engineering Measurement

AGE 2310
Lecture 2

King Saud University
Al Muzahimiyah Branch
College of Engineering

Dr. Rihem FARKH

1
Topic 2: Measurement Calibration & Uncertainty Analysis

2
Course Objectives

 To understand error and uncertainty analysis.

3
Basic Terminology of Measurement

 Measurement
The International Vocabulary of Basic and General Terms in
Metrology (VIM), using International Organization for
Standardization (ISO) norms, has defined measurement as "a
set of operations having the object of determining the value of
a quantity". In other words, a measurement is the evaluation
of a quantity made after comparing it to a quantity of the same
type which we use as a "unit".

 Metrology
the science and "grammar" of measurement is defined as "the
field of knowledge concerned with measurement".
Standardized measurement units mean that scientific and
economic figures can be understood, reproduced, and
converted with a high degree of certitude.

4
Basic Terminology of Measurement

 Instrumentation
refers to a group of permanent systems which help us measure
objects. In this sense, instruments and systems of
measurement constitute the "tools" of measurement and
metrology.

 Load Effects
 Measurement operations may require a physical connection to the
object or may be made without contact.
 This linking of an instrument to an object or site of
investigation means that a transfer of energy and/or
information termed "a load effect" takes place.
 An example of this is the insertion of a measuring probe into a
cup of tea: the probe takes some heat from the tea, leading to a
difference between the "true" value and the measured value.

5
Calibration

 The relationship between the value of the input to the measurement
system and the system's indicated output value is established
during calibration of the measurement system.

 The known value used for the calibration is called the standard.

 The quantity to be measured is called the measurand, denoted m;
the sensor converts m into an electrical variable called s. The
relationship s = F(m) is established by calibration: using a
standard or unit of measurement, we record, for a set of values of
m (m1, m2, ..., mi), the electrical signals produced by the sensor
(s1, s2, ..., si) and trace the curve s(m), called the sensor
calibration curve.
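As an illustration, the short Python sketch below fits a calibration curve s = F(m) through a set of standards. All numerical values are invented, and a linear sensor response is assumed purely for illustration:

```python
# A minimal sketch of building a sensor calibration curve s = F(m).
# The standard inputs m_i and sensor outputs s_i are invented, and a
# linear fit is assumed; any suitable model could be used instead.
import numpy as np

m = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # known standard inputs (measurand)
s = np.array([0.02, 1.03, 1.98, 3.05, 3.99, 5.01])  # corresponding sensor outputs, volts

# Fit s = F(m) as a first-order polynomial: s = a*m + b
a, b = np.polyfit(m, s, 1)
print(f"Calibration curve: s = {a:.4f}*m + {b:.4f}")

# The fitted curve can be inverted to convert a new sensor reading
# back into an estimate of the measurand.
s_new = 2.5
m_estimate = (s_new - b) / a
print(f"Sensor output {s_new} V corresponds to m of about {m_estimate:.2f}")
```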

6
Accuracy & Precision

 The accuracy of a system can be estimated during calibration. If
the input value used for calibration is known exactly, it can be
called the true value. The accuracy of a measurement system refers
to its ability to indicate the true value exactly.
 Accuracy = the ability of the instrument to tell the truth.
 Accuracy is related to the absolute error, ε:
ε = true value - indicated value
from which the percent accuracy is found by:
% accuracy = (1 - |ε| / true value) x 100
 Precision, or repeatability, of a measuring system refers to the
ability of the system to indicate a particular value upon repeated
but independent applications of a specific value of input. The
precision of a measurement also reflects the fineness of the scale
divisions used to make it.
 Precision = the ability of the instrument to give the same output
for the same input under the same conditions.
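A minimal Python sketch of these definitions, using invented values and the percent-accuracy convention given above:

```python
# A small sketch of the error and percent-accuracy definitions above.
# The true and indicated values are hypothetical.
true_value = 100.0       # known standard input
indicated_value = 98.7   # value reported by the instrument

error = true_value - indicated_value            # absolute error, epsilon
percent_accuracy = (1 - abs(error) / true_value) * 100

print(f"absolute error = {error:.2f}")
print(f"percent accuracy = {percent_accuracy:.1f} %")
```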
7
Precision Example: How long is the pencil?

 It is impossible to make a perfectly precise measurement.
 Accuracy can be improved up to, but not beyond, the precision of
the instrument by calibration.

8
Precision & Bias Errors

 Precision Error - Random Error
 is a measure of the random variation found during repeated
measurements.
 random error = reading - average of readings
 Random error causes a random variation in measured values found
during repeated measurements of a variable.

 Bias (Systematic) Error
 is the difference between the average value in a series of repeated
calibration measurements and the true value.
 Systematic error causes an offset between the mean value of the
data set and its true value.
 systematic error = average value - true value

 Both random and systematic errors affect a system’s accuracy.
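The following Python sketch illustrates these two definitions with invented repeated readings of a quantity whose true value is assumed known:

```python
# A sketch of the random- and systematic-error definitions above, using
# hypothetical repeated calibration readings of a known true value.
import numpy as np

true_value = 10.0
readings = np.array([10.3, 10.1, 10.4, 10.2, 10.3])  # invented readings

average = readings.mean()
random_errors = readings - average        # random error = reading - average of readings
systematic_error = average - true_value   # systematic error = average value - true value

print("random errors:", np.round(random_errors, 3))
print("systematic (bias) error:", round(systematic_error, 3))
```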

9
Precision & Bias Errors

10
Precision & Bias Errors

Effects of precision and bias errors on calibration readings

11
Term Used in Instrument Rating

 Resolution: The smallest increment of change in the measured value
that can be determined from the instrument's readout scale. The
resolution is often on the same order as the precision; sometimes
it is smaller.

 Sensitivity: The change of an instrument's output per unit change
in the measured quantity. Typically, an instrument with higher
sensitivity will also have finer resolution, better precision, and
higher accuracy.

 Range: The proper procedure for calibration is to apply known
inputs ranging from the minimum to the maximum values for which
the measurement system is to be used. These limits define the
operating range of the system.

12
Error Classifications

 Systematic, Fixed, or Bias Errors:
 Insidious in nature; they exist unnoticed unless deliberately
searched for.
 Repeated readings are in error by the same amount.
 Not susceptible to statistical analysis.
 Calibration errors
 Certain consistently recurring human errors
 Technique errors
 Uncorrected loading errors
 Limitations of system resolution

 Precision or Random Errors:
 Distinguished by their lack of consistency; they usually (but not
always) follow a certain statistical distribution.
 In many instances they are very difficult to distinguish from bias
errors.
 Errors stemming from environmental variations
 Certain types of human error
 Errors resulting from variations in definition.

13
Error Classifications

 Illegitimate Errors
are simply mistakes on the part of the experimenter.
 Can be eliminated through the exercise of care and repetition of
the measurement.
 Blunders and mistakes
 Computational errors
 Chaotic errors

 Uncertainty
 The uncertainty is a numerical estimate of the possible range of
the error in a measurement.
 In any measurement, the error is not known exactly, since the true
value is rarely known exactly.
 We can only state that the error lies within certain bounds, a plus
or minus range about the indicated reading.

14
Uncertainty

[Figure: the total error is the combination of the bias error and the
precision error about the true value X_true]

15
Uncertainty

 When we measure some physical quantity with an instrument and
obtain a numerical value, we want to know how close this value is
to the true value.
 The difference between the true value and the measured value is
the error.
 Unfortunately, the true value is unknown and unknowable. If we
knew it, we wouldn't need the experiment.
 Since this is the case, the exact error is never known. We can
only estimate it.

16
Uncertainty

 The estimate of the error is called the uncertainty.
 It includes both bias and precision errors.
 We need to identify all the potential significant errors for the
instrument(s).
 All measurements should be given in three parts:
 Mean value
 Uncertainty
 Confidence interval on which that uncertainty is based
(typically a 95% C.I.)
 Uncertainty can be expressed either in absolute terms
(e.g., 5 Volts ±0.5 Volts) or in percentage terms (e.g., 5 Volts ±10%)
(relative uncertainty = ΔV / V x 100)

 We will use a 95% confidence interval throughout this course.

17
Uncertainty: How to Estimate Bias Error

 Manufacturers’ Specifications
 If you can’t do better, you may take it from the
manufacturer’s specs.
 Accuracy - %reading, offset, or some combination
(e.g., 0.1% reading + 0.15 counts)
 Unless you can identify otherwise, assume that
these are at a 95% confidence interval
 Independent Calibration
 May be deduced from the calibration process
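One possible way to turn the example spec above (0.1% reading + 0.15 counts) into a numerical bias estimate is sketched below in Python; the reading, the size of one count, and the interpretation of a "count" as the least significant digit of the display are all assumptions made for illustration:

```python
# A sketch of converting a manufacturer's accuracy spec into a bias estimate.
# Spec assumed: 0.1% of reading + 0.15 counts, with one "count" taken as the
# least significant digit of the display. Both numbers below are invented.
reading = 5.60   # instrument reading, volts
count = 0.01     # assumed size of one display count, volts

bias = 0.001 * reading + 0.15 * count   # 0.1% reading + 0.15 counts
print(f"estimated bias error of about ±{bias:.4f} V (taken as a 95% confidence value)")
```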

18
Random Uncertainty

 Use statistics to estimate random uncertainty.
 Mean: the sum of measurement values divided by the number of
measurements:
$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$

 Deviation: the difference between a single result and the mean of
many results:
$d_i = x_i - \bar{x}$
 Standard Deviation: used to quantify the amount of variation or
dispersion of a set of data values.
 A low standard deviation indicates that the data points tend to be
close to the mean.
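A short Python sketch of these ideas, comparing two invented data sets with the same mean but different scatter:

```python
# Mean, deviation, and standard deviation for two invented data sets:
# "tight" has low scatter, "loose" has high scatter about the same mean.
import numpy as np

tight = np.array([5.0, 5.1, 4.9, 5.0, 5.0])
loose = np.array([4.0, 6.2, 3.8, 6.0, 5.0])

for name, x in [("tight", tight), ("loose", loose)]:
    d = x - x.mean()   # deviations d_i from the mean
    print(f"{name}: mean = {x.mean():.2f}, std dev = {x.std(ddof=1):.2f}")
```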

19
Uncertainty

 Large sample size:
$\sigma = \left[\frac{1}{n}\sum_{i}\left(x_i - \bar{x}\right)^2\right]^{1/2}$

 Small sample size (n < 30) gives a slightly larger value:
$s = \left[\frac{1}{n-1}\sum_{i}\left(x_i - \bar{x}\right)^2\right]^{1/2}$
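In Python/numpy the two formulas differ only in the ddof argument; the sketch below uses the first five voltage readings from the worked example later in this lecture:

```python
# The two standard-deviation formulas above: numpy's ddof argument selects
# the 1/n (large-sample / population) or 1/(n-1) (small-sample) form.
import numpy as np

x = np.array([5.30, 5.73, 6.77, 5.26, 4.33])   # small sample, n = 5

sigma = x.std(ddof=0)   # divides by n      (large-sample form)
s = x.std(ddof=1)       # divides by n - 1  (small-sample form, slightly larger)

print(f"sigma (1/n)     = {sigma:.3f}")
print(f"s     (1/(n-1)) = {s:.3f}")
```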

20
Uncertainty

 Population: The collection of all items (measurements) of the
group, represented by a large number of measurements.
 Gaussian distribution*:
$|x_i - \bar{x}| \le 1\sigma$  68.3% of the time
$|x_i - \bar{x}| \le 2\sigma$  95.4% of the time
$|x_i - \bar{x}| \le 3\sigma$  99.7% of the time

 Sample: A portion of (or limited number of items in) a population.
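The 68.3/95.4/99.7% figures can be checked numerically; the Python sketch below simulates a large Gaussian "population" (the sample size and seed are arbitrary):

```python
# Numerical check of the 1-, 2-, and 3-sigma coverage fractions quoted above
# for a Gaussian distribution, using simulated data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)  # simulated normal population

for k, expected in [(1, 68.3), (2, 95.4), (3, 99.7)]:
    frac = np.mean(np.abs(x - x.mean()) <= k * x.std()) * 100
    print(f"within {k} sigma: {frac:.1f}% (expected about {expected}%)")
```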

21
Student t-distribution (small sample sizes)

 The t-distribution was formulated by W.S. Gosset, a scientist at
the Guinness brewery in Ireland, who published his formulation in
1908 under the pen name (pseudonym) "Student."
 The t-distribution looks very much like the Gaussian distribution:
bell shaped, symmetric, and centered about the mean. The primary
difference is that it has heavier tails, indicating a lower
probability of being within a given interval. The variability
depends on the sample size, n.
Student t-distribution

 With a confidence interval of c%:
$\bar{x} - t_{\alpha/2,\nu}\,\frac{s}{\sqrt{n}} \le X \le \bar{x} + t_{\alpha/2,\nu}\,\frac{s}{\sqrt{n}}$
 where $\alpha = 1 - c$ and $\nu = n - 1$ (degrees of freedom).
 Don't apply this blindly - you may have better information about
the population than you think.
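A Python sketch of this confidence interval using scipy's Student t-distribution; the data array, sample size, and confidence level are invented for illustration:

```python
# Confidence interval on the mean using the Student t-distribution,
# following the formula above. The data are invented.
import numpy as np
from scipy import stats

x = np.array([9.8, 10.1, 10.3, 9.9, 10.2, 10.0])
n = len(x)
c = 0.95                                   # confidence level
alpha = 1 - c
t = stats.t.ppf(1 - alpha / 2, df=n - 1)   # t_{alpha/2, nu}

half_width = t * x.std(ddof=1) / np.sqrt(n)
print(f"{x.mean():.3f} ± {half_width:.3f} at {c:.0%} confidence")
```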
Student t-distribution

 Example: t-distribution
 Sample data: n = 21
 Degrees of freedom: $\nu = n - 1 = 20$
 Desired: 95% confidence interval
 $\alpha = 1 - c = 0.05$, so $\alpha/2 = 0.025$
 From the Student t-distribution chart: t = 2.086

Reading Number   Volts, mV
 1               5.30
 2               5.73
 3               6.77
 4               5.26
 5               4.33
 6               5.45
 7               6.09
 8               5.64
 9               5.81
10               5.75
11               5.42
12               5.31
13               5.86
14               5.70
15               4.91
16               6.02
17               6.25
18               4.99
19               5.61
20               5.81
21               5.60

Mean             5.60
Standard dev.    0.51
Variance         0.26
Student t-distribution

 The precision error is ±0.23 Volts:
$\bar{x} \pm t_{\alpha/2,\nu}\,\frac{s}{\sqrt{n}} = 5.60 \pm 2.086 \times \frac{0.51}{\sqrt{21}} = 5.60 \pm 0.23$
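The same result can be reproduced in Python from the 21 readings in the table, with scipy supplying the t value instead of a chart:

```python
# Reproducing the worked example above with the 21 voltage readings.
import numpy as np
from scipy import stats

volts = np.array([5.30, 5.73, 6.77, 5.26, 4.33, 5.45, 6.09, 5.64, 5.81, 5.75,
                  5.42, 5.31, 5.86, 5.70, 4.91, 6.02, 6.25, 4.99, 5.61, 5.81, 5.60])

n = len(volts)                              # 21 readings
t = stats.t.ppf(1 - 0.05 / 2, df=n - 1)     # about 2.086 for 95% C.I., nu = 20
half_width = t * volts.std(ddof=1) / np.sqrt(n)

print(f"mean = {volts.mean():.2f}, s = {volts.std(ddof=1):.2f}")
print(f"precision error = ±{half_width:.2f}")   # about ±0.23
```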
How to combine bias and precision error?

 Rules for combining independent uncertainties for measurements:
 Both uncertainties MUST be at the same confidence interval (95%):
$U_x = \sqrt{B_x^2 + P_x^2}$
 Precision error $P_x$ obtained using the Student's t method.
 Bias error $B_x$ determined from calibration, manufacturers'
specifications, or the smallest division.
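A minimal Python sketch of this combination; the precision error reuses the ±0.23 V value from the worked example, while the bias error is an assumed value for illustration:

```python
# Combining bias and precision errors in quadrature, as in the expression
# above. P_x comes from the Student's t example; B_x is assumed.
import math

B_x = 0.10   # bias error estimate (assumed, 95% C.I.)
P_x = 0.23   # precision error from the Student's t example (95% C.I.)

U_x = math.sqrt(B_x**2 + P_x**2)
print(f"total uncertainty U_x = ±{U_x:.2f} V at 95% confidence")
```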
Propagation of Error

 For a function y = f(x1, x2, ..., xN), the RSS uncertainty is:
$u_{RSS} = \sqrt{\left(\frac{\partial f}{\partial x_1}\,\Delta x_1\right)^2 + \left(\frac{\partial f}{\partial x_2}\,\Delta x_2\right)^2 + \dots + \left(\frac{\partial f}{\partial x_N}\,\Delta x_N\right)^2}$
 First determine the uncertainty of each variable in the form
$(x_N \pm \Delta x_N)$.
 Use previously established methods, including bias and precision
error.
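A small Python sketch of the RSS formula for a generic two-variable function; the function, values, and uncertainties below are invented:

```python
# Root-sum-square (RSS) propagation of uncertainty, as in the formula above.
import math

def rss(*terms):
    # Each term is (partial derivative df/dxi, uncertainty dxi).
    return math.sqrt(sum((df_dxi * dxi) ** 2 for df_dxi, dxi in terms))

# Example: y = x1 * x2 with x1 = 3 ± 0.1 and x2 = 4 ± 0.2 (invented values)
x1, dx1 = 3.0, 0.1
x2, dx2 = 4.0, 0.2
u_rss = rss((x2, dx1), (x1, dx2))   # df/dx1 = x2, df/dx2 = x1
print(f"y = {x1 * x2:.1f} ± {u_rss:.2f}")
```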
Propagation of Error

29
Propagation of Error

 Consider the calculation of electrical power, P = EI:
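A sketch of this power example in Python; the voltage, current, and their uncertainties below are assumed for illustration, and the partial derivatives are dP/dE = I and dP/dI = E:

```python
# RSS propagation for electrical power P = EI, with assumed values and
# uncertainties for E and I.
import math

E, dE = 100.0, 2.0   # volts (assumed value and uncertainty)
I, dI = 10.0, 0.3    # amperes (assumed value and uncertainty)

P = E * I
u_P = math.sqrt((I * dE) ** 2 + (E * dI) ** 2)   # (dP/dE * dE)^2 + (dP/dI * dI)^2
print(f"P = {P:.0f} W ± {u_P:.0f} W")
```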



30
