UNIT I Introduction


1153EE113 - TRANSDUCERS AND SENSORS

COURSE OBJECTIVES:
The course imparts knowledge to the students:
To understand the structural and functional principles
of sensors and transducers used for various physical and
non-electric quantities.
To explain the principles of operation of sensor
parameters.
To understand the implementation of sensors and
transducers in a control-system structure.
UNIT I
INTRODUCTION
CONTENTS
Basic method of measurement
Generalized scheme for measurement systems
Units and standards
Errors, classification of errors, error analysis
Statistical methods
Sensor
Transducer
Classification of transducers
Basic requirement of transducers.
MEASUREMENT:

 Measurement of a given quantity is essentially an
act or result of comparison between the quantity
(whose magnitude is unknown) and a predetermined
or predefined standard.
 When the two quantities are compared, the result is
expressed in numerical values.
BASIC REQUIREMENTS FOR A MEANINGFUL
MEASUREMENT:

The standard used for comparison purposes must be
accurately defined and commonly accepted.
The apparatus used and the method adopted must be
provable (verifiable).
SIGNIFICANCE OF MEASUREMENT:

The importance of measurement is simply and eloquently
expressed in the following statement of the famous
physicist Lord Kelvin:

"I often say that when you can measure what you are
speaking about and can express it in numbers, you know
something about it; when you cannot express it in
numbers, your knowledge is of a meagre and
unsatisfactory kind."
METHODS OF MEASUREMENT:
1. Direct Methods
2. Indirect Methods

DIRECT METHODS: In these methods, the unknown
quantity (called the measurand) is directly compared
against a standard.

INDIRECT METHODS: Measurements by direct
methods are not always possible, feasible or
practicable. In engineering applications, measurement
systems are used which require indirect methods for
measurement purposes.
INSTRUMENTS AS MEASUREMENT
SYSTEMS:

Measurement involves the use of instruments as a
physical means of determining quantities or variables.
Because of the modular nature of the elements within it,
it is common to refer to a measuring instrument as a
Measurement System.
EVOLUTION OF INSTRUMENTS:
1. Mechanical
2. Electrical
3. Electronic Instruments
MECHANICAL: These instruments are very reliable
for static and stable conditions. Their disadvantage is
that they are unable to respond rapidly to measurements
of dynamic and transient conditions.
ELECTRICAL: These are faster than mechanical
instruments, indicating the output more rapidly. But they
still depend on the mechanical movement of the meters.
The response time is 0.5 to 24 seconds.
ELECTRONIC: These are more reliable than the other
systems. They use semiconductor devices, and even weak
signals can be detected.
CLASSIFICATION OF INSTRUMENTS:
1.Absolute Instruments.
2.Secondary Instruments.

ABSOLUTE: These instruments give the magnitude of
the quantity under measurement in terms of the physical
constants of the instrument.

SECONDARY: These instruments are calibrated by
comparison with absolute instruments which have
already been calibrated.
FURTHER, THEY ARE CLASSIFIED AS

1. Deflection Type Instruments
(Example: The pressure gauge is a good example of a
deflection-type instrument, where the value of the
quantity being measured is displayed in terms of the
amount of movement of a pointer.)

2. Null Type Instruments
(Example: The DC potentiometer. An instrument in
which a zero or null indication determines the magnitude
of the measured quantity is called a null-type
instrument.)
Functions of instrument and measuring system can
be classified into three. They are:
1. Indicating function.
2. Recording function.
3. Controlling function.

The applications of measurement systems are:
1. Monitoring of process and operation.
2. Control of processes and operation.
3. Experimental engineering analysis.
TYPES OF INSTRUMENTATION SYSTEM:

Intelligent Instrumentation
(where the instrument itself processes the measured data)
Dumb Instrumentation
(where the instrument measures the variable and it is up to
the observer to process the data)
ELEMENTS OF GENERALIZED MEASUREMENT
SYSTEM:
To understand a measuring instrument/system, it is
important to have a systematic organization and analysis
of measurement systems. The operation of a measuring
instrument or a system could be described in a
generalized manner in terms of functional elements.
Each functional element is made up of a component or
groups of components which perform required and
definite steps in the measurement.
1. Primary sensing element

 The quantity or variable being measured makes its
first contact with the primary sensing element of a
measurement system.
 The measurand is thus first detected by a primary
sensor or detector.
 The measurand is then immediately converted into
an analogous electrical signal. This is done by a
transducer.
 The first stage of a measurement system is known as
the detector-transducer stage.
2. Variable conversion element

The output signal of the primary sensing element may
be of any kind. It could be a mechanical or an electrical
signal. It may be the deflection of an elastic member or
some electrical parameter such as voltage, frequency, etc.
Sometimes the output from the sensor is not suited to the
measurement system.
For the instrument to perform the desired function, it
may be necessary to convert this output signal from the
sensor to some other suitable form while preserving the
information content of the original signal.
3. Variable manipulation element

Variable manipulation means a change in the numerical
value of the signal.
The function of a variable manipulation element is to
manipulate the signal presented to it while preserving
the original nature of the signal.
For example, a voltage amplifier acts as a variable
manipulation element. The amplifier accepts a small
voltage signal as input and produces an output signal
which is also a voltage, but of greater magnitude.
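The amplifier example above can be sketched numerically. This is a minimal illustration; the gain value of 100 is our own assumption, not taken from the text:

```python
def voltage_amplifier(v_in, gain=100.0):
    """Variable manipulation: the output is still a voltage, only larger."""
    return gain * v_in

# a hypothetical 2 mV sensor signal amplified to roughly 200 mV
v_out = voltage_amplifier(0.002)
```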
4. Signal conditioning element

If the signal after being sensed contains unwanted
contamination or distortion, the interfering noise sources
need to be removed before transmission to the next
stage. Otherwise we may get highly distorted results
which are far from the true value.
The solution to these problems is to prevent or remove
the signal contamination or distortion.
The operations performed on the signal to remove
contamination or distortion are called Signal
Conditioning.
5. Data transmission element

There are several situations where the elements of an
instrument are physically separated. In such situations
it becomes necessary to transmit data from one element
to another.
The element that performs this function is called a Data
Transmission Element.
For example, satellites and aeroplanes are physically
separated from the control stations on earth.
6. Data presentation element

The function of the data presentation element is to convey
the information about the quantity under measurement to
the personnel handling the instrument. The information
conveyed must be in a convenient form.
In case the data is to be monitored, visual display devices
are needed. These devices may be ammeters, voltmeters,
etc. In case the data is to be recorded, recorders like
magnetic tapes, high-speed cameras and television are
used. For control and analysis purposes, computers and
control elements are used.
The final stage in a measurement system is known as the
terminating stage.
CHARACTERISTICS OF INSTRUMENTS AND
MEASUREMENT SYSTEMS:
The performance characteristics of an instrument are
mainly divided into two categories:
1. Static characteristics
2. Dynamic characteristics

1. Static characteristics: The set of criteria defined for
instruments which are used to measure quantities that
vary slowly with time, or are mostly constant (i.e., do
not vary with time), is called 'static characteristics'.
The various static characteristics are:
a) Accuracy
b) Precision
c) Sensitivity
d) Linearity
e) Reproducibility
f) Repeatability
g) Resolution
h) Threshold
i) Drift
j) Stability
k) Tolerance
l) Range or span
Accuracy: It is the degree of closeness with which the
reading approaches the true value of the quantity to be
measured.
The accuracy can be expressed in following ways:
a) Point accuracy: Such accuracy is specified at only
one particular point of scale.
b) Accuracy as percentage of scale span: When an
instrument has a uniform scale, its accuracy may be
expressed in terms of the scale range.
c) Accuracy as percentage of true value: The best way
to conceive the idea of accuracy is to specify it in terms
of the true value of the quantity being measured.
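Accuracy as a percentage of the true value can be computed directly. The voltmeter figures below are hypothetical, chosen only to illustrate the idea:

```python
def error_percent_of_true(measured, true_value):
    """Error expressed as a percentage of the true value of the quantity."""
    return abs(true_value - measured) / true_value * 100

# a voltmeter reads 98.5 V when the true value is 100 V
pct = error_percent_of_true(98.5, 100.0)  # approximately 1.5 % of true value
```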
Precision: It is the measure of reproducibility i.e., given
a fixed value of a quantity, precision is a measure of the
degree of agreement within a group of measurements.
The precision is composed of two characteristics:
a) Conformity: Consider a resistor having a true value of
2,385,692 Ω, which is being measured by an ohmmeter.
The reader can only read it consistently as 2.4 MΩ
due to the non-availability of a finer scale. The error
created by this limitation of the scale reading is a
precision error.
b) Number of significant figures: The precision of the
measurement is obtained from the number of significant
figures, in which the reading is expressed. The
significant figures convey the actual information about
the magnitude & the measurement precision of the
quantity.
The precision can be mathematically expressed as:

P = 1 - |Xn - X̄n| / X̄n

Where, P = precision
Xn = value of the nth measurement
X̄n = average value of the set of measurement values
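The precision of one reading relative to the mean of a set can be evaluated as below; the ten readings are a hypothetical data set for illustration only:

```python
readings = [101, 102, 99, 98, 103, 97, 100, 101, 100, 99]  # hypothetical
xbar = sum(readings) / len(readings)        # average of the set (= 100.0)
x6 = readings[5]                            # the 6th reading, 97
precision_6 = 1 - abs(x6 - xbar) / xbar     # P = 1 - |Xn - Xbar| / Xbar
```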
Sensitivity: The sensitivity denotes the smallest change
in the measured variable to which the instrument
responds. It is defined as the ratio of the change in the
output of an instrument to the change in the value of the
quantity to be measured. Mathematically it is expressed
as,

Sensitivity = change in output signal / change in input signal
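The sensitivity ratio can be sketched as a one-line computation. The thermocouple figures are illustrative assumptions, not values from the text:

```python
def sensitivity(delta_output, delta_input):
    """Ratio of the change in instrument output to the change in input."""
    return delta_output / delta_input

# a thermocouple output rises by 5 mV for a 10 degree C temperature change
s = sensitivity(5.0, 10.0)  # 0.5 mV per degree C
```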
Reproducibility: It is the degree of closeness with
which a given value may be repeatedly measured. It is
specified in terms of scale readings over a given period
of time.

Repeatability: It is defined as the variation of scale
readings when the same input is applied repeatedly; it
is random in nature.

Drift: Drift may be classified into three categories:

a) Zero drift: If the whole calibration gradually shifts
due to slippage, permanent set, or undue warming up
of electronic tube circuits, zero drift sets in.
b) Span drift or sensitivity drift: If there is a proportional
change in the indication all along the upward scale, the
drift is called span drift or sensitivity drift.
c) Zonal drift: In case the drift occurs over only a portion
of the span of an instrument, it is called zonal drift.
Resolution: If the input is slowly increased from some
arbitrary input value, it will be found that the output
does not change at all until a certain increment is
exceeded. This increment is called resolution.
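Resolution can be illustrated with a simple quantizer model: the indicated output stays put until the input has moved far enough to cross a full step. The step size of 0.5 is our own assumed value:

```python
def indicated_value(x, resolution=0.5):
    """Output changes only in whole multiples of the instrument resolution."""
    return resolution * round(x / resolution)

# small input increments from 10.0 do not move the indicated output...
a = indicated_value(10.0)   # 10.0
b = indicated_value(10.2)   # still 10.0
# ...until the increment is large enough
c = indicated_value(10.3)   # 10.5
```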

Threshold: If the instrument input is increased very
gradually from zero, there will be some minimum value
below which no output change can be detected. This
minimum value defines the threshold of the instrument.

Stability: It is the ability of an instrument to retain its
performance throughout its specified operating life.
Tolerance: The maximum allowable error in the
measurement is specified in terms of some value which
is called tolerance.

Range or span: The minimum and maximum values of a
quantity which an instrument is designed to measure
define its range or span.
Dynamic characteristics: The set of criteria defined for
instruments which measure quantities that change
rapidly with time is called 'dynamic characteristics'.

The various dynamic characteristics are:
i) Speed of response
ii) Measuring lag
iii) Fidelity
iv) Dynamic error
Speed of response: It is defined as the rapidity with
which a measurement system responds to changes in the
measured quantity.
Measuring lag: It is the retardation or delay in the
response of a measurement system to changes in the
measured quantity. The measuring lags are of two types:
a) Retardation type: In this case the response of the
measurement system begins immediately after the
change in measured quantity has occurred.
b) Time delay lag: In this case the response of the
measurement system begins after a dead time after the
application of the input.
Fidelity: It is defined as the degree to which a
measurement system indicates changes in the measurand
quantity without dynamic error.

Dynamic error: It is the difference between the true
value of the quantity changing with time and the value
indicated by the measurement system, if no static error
is assumed. It is also called measurement error.
Unit of measurement
A unit of measurement is a definite magnitude of
a physical quantity, defined and adopted by convention
and/or by law, that is used as a standard for
measurement of the same physical quantity. Any other
value of the physical quantity can be expressed as a
simple multiple of the unit of measurement.

For example, length is a physical quantity. The metre is
a unit of length that represents a definite predetermined
length. When we say 10 metres, we actually mean 10
times the definite predetermined length called "metre".
STANDARD:
A standard is a physical representation of a unit of
measurement. A known accurate measure of physical
quantity is termed as standard. These standards are
used to determine the values of other physical
quantities by the comparison methods.
In fact, a unit is realized by reference to a material
standard or to natural phenomena, including physical
and atomic constants. For example, the fundamental
unit of length in the international system (SI) is the
metre, historically defined as the distance between two
fine lines on a standard bar.
Based on the functions and applications, standards are
classified into four categories as
1. International standards
2. Primary standards
3. Secondary standards
4. Working standards
International standards:
International standards are defined by international
agreement. They are periodically evaluated and checked
by absolute measurement in terms of fundamental units
of physics. They represent certain units of measurement
to the closest possible accuracy attainable by the science
and technology of measurement. These international
standards are not available to ordinary users for
measurements and calibrations.
International ohm:
It is defined as the resistance offered to the flow of a
constant current, at the melting point of ice, by a column
of mercury having a mass of 14.4521 g, a uniform
cross-sectional area and a length of 106.300 cm.

International ampere:
It is an unvarying current which, when passed through a
solution of silver nitrate in water, deposits silver at the
rate of 0.001118 g/s.
Primary standards:
The Principle function of primary standards is the
calibration and verification of secondary standards.
Primary standards are maintained at the National
standards Laboratories in different countries. They are
not available for use outside the National Laboratory.
These Primary standards are absolute standards of high
accuracy that can be used as ultimate reference standard.
Secondary standards:
Secondary standards are basic reference standards used
by measurement and calibration laboratories in
industries. These secondary standards are maintained by
the particular industry to which they belong. Each
industry sends its secondary standards to the National
Standards Laboratory for calibration; the National
Standards Laboratory returns the secondary standards
to the particular industrial laboratory with a
certification of measuring accuracy in terms of the
primary standards.
Working standards:
Working standards are the principal tools of a
measurement laboratory. These standards are used to
check and calibrate laboratory instruments for accuracy
and performance. For example, manufacturers of
electronic components such as capacitors, resistors, etc.
use a working standard for checking the values of the
components being manufactured, e.g. a standard
resistor for checking the resistance values manufactured.
1.3 ERRORS IN MEASUREMENT
Difference between the actual value of a quantity and the value
obtained by a measurement is called an error.

Error = actual value – measured value
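The error definition above, as a one-line sketch with hypothetical values:

```python
actual = 50.0     # true value of the quantity (assumed for illustration)
measured = 49.8   # instrument reading (assumed)
error = actual - measured   # approximately 0.2
```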

The types of errors are follows


i) Gross errors
ii) Systematic errors
iii) Random errors

1. Gross Errors: Gross errors mainly occur due to carelessness
or lack of experience of a human being. These errors also occur
due to incorrect adjustment of instruments.
These errors cannot be treated mathematically.
These errors are also called "personal errors".
Ways to minimize gross errors:
The complete elimination of gross errors is not possible but
one can minimize them by the following ways:
1. Taking great care while taking the reading, recording the
reading and calculating the result.
2. Without depending on only one reading, at least three or
more readings must be taken, preferably by different persons.

2. Systematic errors:
A constant uniform deviation of the operation of an
instrument is known as a systematic error.
Systematic errors are mainly due to the shortcomings of
the instrument and the characteristics of the material used in the
instrument, such as defective or worn parts, ageing effects,
environmental effects, etc.
Types of Systematic errors:
There are three types of Systematic errors as:
i) Instrumental errors
ii) Environmental errors
iii) Observational errors

i) Instrumental errors:
These errors can be mainly due to the following three
reasons:
a) Shortcomings of instruments:
These are because of the mechanical structure of the
instruments, for example friction in the bearings of various moving
parts, irregular spring tensions, reduction in spring tension due to
improper handling, hysteresis, gear backlash, stretching of springs
and variations in air gap.
Ways to minimize this error:
These errors can be avoided by the following methods:
i) Selecting a proper instrument and planning the proper
procedure for the measurement.
ii) Recognizing the effect of such errors and applying the
proper correction factors.
iii) Calibrating the instrument carefully against a standard.

b) Misuse of instruments:
A good instrument used in an abnormal way gives misleading
results: poor initial adjustment, improper zero setting, using leads of
high resistance, etc.
c) Loading effects:
Loading effects due to an improper way of using the instrument
cause serious errors. The best example of such a loading-effect error
is connecting a well-calibrated voltmeter across two points of a
high-resistance circuit. The same voltmeter connected in a
low-resistance circuit gives an accurate reading.
Ways to minimize this error:
Thus the errors due to the loading effect can be avoided by using an
instrument intelligently and correctly.
ii) Environmental errors:
These errors are due to conditions external to the measuring
instrument. The various factors causing these environmental errors are
temperature changes, pressure changes, thermal emf, ageing of equipment
and frequency sensitivity of the instrument.
Ways to minimize this error:
The various methods which can be used to reduce these errors are:
i) Using the proper correction factors and the information supplied by
the manufacturer of the instrument
ii) Using an arrangement which will keep the surrounding conditions
constant
iii) Reducing the effect of dust and humidity on the components by
hermetically sealing the components in the instruments
iv) Minimizing the effects of external fields by using magnetic or
electrostatic shields or screens
iii) Observational errors:
These are the errors introduced by the observer.
There are many sources of observational errors, such as parallax
error while reading a meter, wrong scale selection, etc.

Ways to minimize this error:

To eliminate such errors one should use instruments with
mirrors, knife-edged pointers, etc.

Note:
The systematic errors can be subdivided as static and dynamic
errors.
The static errors are caused by the limitations of the
measuring device while the dynamic errors are caused by the
instrument not responding fast enough to follow the changes in the
variable to be measured.
3. Random errors:
Some errors still remain even after the systematic and
instrumental errors are reduced or at least accounted for. The causes
of such errors are unknown, and hence the errors are called random
errors.

Ways to minimize this error:

The only way to reduce these errors is by increasing the
number of observations and using statistical methods to obtain the
best approximation of the reading.
1.4 STATISTICAL EVALUATION OF MEASUREMENT
DATA
Out of the various possible errors, random errors cannot be
determined in the ordinary process of measurement. Such errors are
treated mathematically.
The mathematical analysis of the various measurements is
called the "statistical analysis of the data".

For such statistical analysis, the same reading is taken a number
of times, generally using different observers, different instruments and
different ways of measurement. The statistical analysis helps to
determine analytically the uncertainty of the final test results.
Arithmetic mean & median:
When a number of readings of the same measurement are
taken, the most likely value from the set of measured values is the
arithmetic mean of the readings taken.
The arithmetic mean can be mathematically obtained as,

X̄ = (x1 + x2 + x3 + ... + xn) / n

This mean is very close to the true value if the number of
readings is very large.

But when the number of readings is large, calculation of the
mean value is complicated. In such a case, a median value is
obtained, which is a close approximation to the arithmetic mean
value. For a set of measurements X1, X2, X3, ..., Xn written down
in ascending order of magnitude, the median value is given by,

Xmedian = X(n+1)/2
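The mean and median formulas above can be checked on a small data set. The five readings are hypothetical, chosen for illustration:

```python
readings = [101, 98, 100, 102, 99]        # hypothetical repeated readings
mean = sum(readings) / len(readings)      # arithmetic mean
ordered = sorted(readings)                # ascending order of magnitude
n = len(ordered)
# X_median = X_(n+1)/2 for odd n; average the middle pair for even n
if n % 2:
    median = ordered[n // 2]
else:
    median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2
```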
Average deviation:
The deviation tells us about the departure of a given reading
from the arithmetic mean of the data set:
di = xi - X̄
Where
di = deviation of the ith reading
xi = value of the ith reading
X̄ = arithmetic mean

The average deviation is defined as the sum of the absolute
values of the deviations divided by the number of readings. This is
also called the mean deviation:

D̄ = (|d1| + |d2| + ... + |dn|) / n
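The average deviation can be computed directly from the definitions above; the readings are the same hypothetical data set used for the mean:

```python
readings = [101, 98, 100, 102, 99]           # hypothetical data set
xbar = sum(readings) / len(readings)         # arithmetic mean, X
deviations = [x - xbar for x in readings]    # di = xi - X
# average deviation: sum of |di| divided by the number of readings
avg_dev = sum(abs(d) for d in deviations) / len(readings)
```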
Range:
It is the simplest measure of dispersion. It is the difference
between the greatest and least values of the data.

Standard Deviation (S.D.):

It is one of the most important terms in the analysis of random
errors. It is also called the root mean square deviation. It is defined
as the square root of the sum of the individual deviations squared,
divided by the number of readings:

σ = √(Σ di² / n)

When the number of observations is greater than 20, the S.D. is
denoted as "σ"; when the number of observations is less than 20,
the S.D. is denoted as "s" and the divisor (n - 1) is used:

s = √(Σ di² / (n - 1))

Variance:
The variance is the mean square deviation.
V = (S.D.)²
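A sketch of the standard deviation and variance for a small data set. The readings are hypothetical, and the (n - 1) divisor for fewer than 20 observations is the common textbook convention assumed here:

```python
import math

readings = [101, 98, 100, 102, 99]   # hypothetical data set, n < 20
n = len(readings)
xbar = sum(readings) / n             # arithmetic mean
# n < 20, so the (n - 1) divisor is used and the S.D. is written "s"
s = math.sqrt(sum((x - xbar) ** 2 for x in readings) / (n - 1))
variance = s ** 2                    # V = (S.D.)^2
```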
