Real Time Systems Notes
1 https://vtupro.com
PART – A
UNIT – 1
Introduction to Real-Time Systems:
Historical Background, RTS Definition, Classification of Real-Time Systems, Time constraints,
Classification of programs.
6 Hours
UNIT - 2
Concepts of Computers Control:
Introduction, Sequence Control, Loop Control, Supervisory Control, Centralized Computer Control,
Distributed System, Human-Computer interface, Benefits of Computer Control Systems.
6 Hours
UNIT- 3
Computer Hardware Requirements for RTS:
Introduction, General Purpose Computer, Single Chip Microcontroller, Specialized Processors,
Process –Related Interfaces, Data Transfer Techniques, Communications, Standard Interface.
6 Hours
UNIT- 4
Languages for Real-Time Applications:
Introduction, Syntax Layout and Readability, Declaration and Initialization of Variables and
Constants, Modularity and Variables, Compilation, Data Type, Control Structure, Exception
Handling, Low-Level Facilities, Coroutines, Interrupts and Device Handling, Concurrency,
Real-Time Support, Overview of Real-Time Languages.
8 Hours
PART –B
UNIT -5&6
Operating Systems:
Introduction, Real-Time Multi-Tasking OS, Scheduling Strategies, Priority Structures, Task
Management, Scheduler and Real-Time Clock Interrupt Handlers, Memory Management, Code
Sharing, Resource Control, Task Co-operation and Communication, Mutual Exclusion, Data
Transfer, Liveness, Minimum OS Kernel, Examples.
12 Hours
UNIT-7
Design of RTSs - General Introduction:
Introduction, Specification Documentation, Preliminary Design, Single-Program Approach,
Foreground/Background, Multi-Tasking Approach, Mutual Exclusion, Monitors.
8 Hours
UNIT -8
RTS Development Methodologies:
Introduction, Yourdon Methodology, Requirements Definition for Drying Oven, Ward and Mellor
Method, Hatley and Pirbhai Method.
6 Hours
Text Books:
1. Real-Time Computer Control: An Introduction, Stuart Bennett, 2nd Edn., Pearson
Education, 2005.
Reference Books:
1. Real-Time Systems Design and Analysis, Phillip A. Laplante, Second Edition, PHI, 2005.
2. Real-Time Systems Development, Rob Williams, Elsevier, 2006.
3. Embedded Systems, Raj Kamal, Tata McGraw-Hill, India, 2005.
UNIT – 1
Introduction to Real-Time Systems
Historical Background, RTS Definition, Classification of Real-Time Systems, Time constraints,
Classification of programs.
The origin of the term real-time computing is unclear. It was probably first used either
with Project Whirlwind, a flight simulator developed by IBM for the U.S. Navy in 1947, or with
SAGE, the Semi-Automatic Ground Environment air defense system developed for the U.S. Air
Force in the early 1950s. Modern real-time systems, such as those that control nuclear power
stations, military aircraft weapons systems, or medical monitoring equipment, are complex, yet
they still exhibit characteristics of systems developed from the 1940s through the 1960s. Moreover,
today's real-time systems exist because the computer industry and systems requirements grew.
The earliest proposal to use a computer operating in real time as part of a control system
was made in a paper by Brown and Campbell in 1950. Their scheme shows a computer in both the
feedback and feedforward loops.
In the late 1960s real-time operating systems were developed and various process
FORTRAN compilers made their appearance. The problems and the costs involved in attempting
to do everything in one computer led users to retreat to smaller systems, for which the newly
developing minicomputers (DEC PDP-8, PDP-11, Data General Nova, Honeywell 316, etc.) proved
ideally suited.
1.5.1 SEQUENTIAL:
In a classical sequential program the actions are strictly ordered as a time sequence: the
behaviour of the program depends only on the effects of the individual actions and their order;
the time taken to perform an action does not affect the correctness of the program.
1.5.2 MULTI-TASKING:
A multi-task program differs from the classical sequential program in that the actions it is
required to perform are not necessarily disjoint in time; it may be necessary for several actions to be
performed in parallel. Note that the sequential relationships between the actions may still be
important. Such a program may be built from a number of parts (processes or tasks are the names
used for the parts) which are themselves partly sequential, but which are executed concurrently and
which communicate through shared variables and synchronization signals.
Verification requires the application of the arguments used for sequential programs, with some
additions. The tasks (processes) can be verified separately only if the constituent variables of each
task (process) are distinct. If the variables are shared, the potential concurrency makes the effect of
the program unpredictable (and hence not capable of verification) unless there is some further rule
that governs the sequencing of the several actions of the tasks (processes). The tasks can then proceed
at any speed: the correctness depends on the actions of the synchronizing procedure.
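The point above - that shared variables make a multi-task program unpredictable unless some rule governs the sequencing - can be demonstrated with a small sketch. Python is used purely as illustration; the lock plays the role of the synchronizing procedure, and the variable names are of course invented for the example:

```python
import threading

counter = 0                # shared variable between the tasks
lock = threading.Lock()    # the synchronizing procedure

def task(n_increments: int) -> None:
    # Each task repeatedly performs a read-modify-write on the shared
    # variable; the lock serializes the three steps so that no
    # interleaving of the tasks can corrupt the result.
    global counter
    for _ in range(n_increments):
        with lock:
            counter += 1

threads = [threading.Thread(target=task, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; without it the result may vary
```

Removing the `with lock:` line leaves the program free to interleave the reads and writes arbitrarily, which is exactly the unverifiable behaviour the text describes.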
1.5.3 REAL-TIME:
A real-time program differs from the previous types in that, in addition to its actions not
necessarily being disjoint in time, the sequence of some of its actions is not determined by the
designer but by the environment - that is, by events occurring in the outside world which occur in real
time and without reference to the internal operations of the computer. Such events cannot be made to
conform to the intertask synchronization rules.
A real-time program can still be divided into a number of tasks but communication between
the tasks cannot necessarily wait for a synchronization signal: the environment task cannot be
delayed. (Note that in process control applications the main environment task is usually that of
keeping real time, that is a real-time clock task. It is this task which provides the timing for the
SJBIT, Dept of ECE
[Real time system/ 06EC762]
scanning tasks which gather information from the outside world about the process.) In real-time
programs, in contrast to the two previous types of program, the actual time taken by an action is an
essential factor in the process of verification. We shall assume that we are concerned with real-time
software and references to sequential and multi-tasking programs should be taken to imply that the
program is real time. Non-real-time programs will be referred to as standard programs.
Consideration of the types of reasoning necessary for the verification of programs is
important, not because we, as engineers, are seeking a method of formal proof, but because we are
seeking to understand the factors which need to be considered when designing real-time software.
Experience shows that the design of real-time software is significantly more difficult than the design
of sequential software. The problems of real-time software design have not been helped by the fact
that the early high-level languages were sequential in nature and they did not allow direct access to
many of the detailed features of the computer hardware.
As a consequence, real-time features had to be built into the operating system which was
written in the assembly language of the machine by teams of specialist programmers. The cost of
producing such operating systems was high and they had therefore to be general purpose so that they
could be used in a wide range of applications in order to reduce the unit cost of producing them.
These operating systems could be tailored, that is they could be reassembled to exclude or include
certain features, for example to change the number of tasks which could be handled, or to change the
number of input/output devices and types of device. Such changes could usually only be made by the
supplier.
Expected Questions:
1. Explain the difference between a real-time program and a non-real-time program.
Why are real-time programs more difficult to verify than non-real-time programs?
2. You are asked to design a computer-based system to control all the operations of a retail petrol
(gasoline) station (control of pumps, cash receipts, sales figures, deliveries, etc.).
What type of real-time system would you expect to use?
3. Which of the following systems would you classify as real-time?
In each case give reasons for your answer and classify the real-time systems as
hard or soft.
(a) A simulation program run by an engineer on a personal computer.
4. An automatic bank teller system works by polling each teller in turn. Some tellers are located
outside buildings and others inside. How could the polling system be organized to ensure that
the waiting time at the outside locations is less than at the inside locations?
5. Explain the precision required for the analog-to-digital and digital-to-analog converters, taking
the hot-air blower as an example.
BATCH:
The term batch describes processes in which a sequence of operations is carried out to produce
a quantity of a product, and the whole sequence is then repeated to produce further batches. The
specification of the product, or the quantity to be made, may be changed between batches.
CONTINUOUS:
The term continuous is used for systems in which production is maintained for long periods of
time without interruption, typically over several months or even years. An example of a continuous
system is the catalytic cracking of oil in which the crude oil enters at one end and the various
products - fractionates – are removed as the process continues. The ratio of the different fractions can
be changed but this is done without halting the process.
Continuous systems may produce batches, in that the product composition may be changed
from time to time, but they are still classified as continuous since the change in composition is made
without halting the production process.
A problem which occurs in continuous processes is that during change-over from one
specification to the next, the output of the plant is often not within the product tolerance and must be
scrapped. Hence it is financially important that the change be made as quickly and smoothly as
possible. There is a trend to convert processes to continuous operation - or, if the whole process
cannot be converted, part of the process.
LABORATORY SYSTEMS:
Laboratory-based systems are frequently of the operator-initiated type in that the computer is
used to control some complex experimental test or some complex equipment used for routine testing.
A typical example is the control and analysis of data from a vapour phase chromatograph.
Another example is the testing of an audiometer, a device used to test hearing. The audiometer
has to produce sound levels at different frequencies; it is complex in that the actual level produced is
a function of frequency since the sensitivity of the human ear varies with frequency. Each audiometer
has to be tested against a sound-level meter and a test certificate produced. This is done by using a
sound-level meter connected to a computer and using the output from the computer to drive the
audiometer through its frequency range. The results printed out from the test computer provide the
test certificate.
As with attempts to classify systems as batch or continuous, it can be difficult at times to
classify systems solely as laboratory systems. The production of steel using the electric arc furnace involves
complex calculations to determine the appropriate mix of scrap, raw materials and alloying additives.
As the melt progresses samples of the steel are taken and analyzed using a spectrometer. Typically
this instrument is connected to a computer which analyses the results and calculates the necessary
adjustment to the additives. The computer used may well be the computer which is controlling the arc
furnace itself.
In whatever way the application is classified the activities being carried out will include:
• Data acquisition;
• Sequence control;
• Loop control (DDC);
• Supervisory control;
• Data analysis;
• Data storage; and
• Human-computer interfacing (HCI).
The objectives of using a computer to control the process will include:
• Efficiency of operation;
• Ease of operation;
• Safety;
• Improved products;
• Reduction in waste;
• Reduced environmental impact; and
• A reduction in direct labour.
GENERAL EMBEDDED SYSTEMS:
In the general range of systems which use embedded computers – from domestic appliances,
through hi-fi systems, automobile management systems, intelligent instruments, active control of
structures, to large flexible manufacturing systems and aircraft control systems - we will find that the
activities that are carried out in the computer and the objectives of using a computer are similar to
those listed above. The major differences will lie in the balance between the different activities, the
time-scales involved, and the emphasis given to the various objectives.
A similar mixture of sequence, loop and supervisory control can be found in continuous
systems. Consider the float glass process shown in Figure 2.3. The raw material - sand, powdered
glass and fluxes (the frit) - is mixed in batches and fed into the furnace. It melts rapidly to form a
molten mass.
PID CONTROL:
The PID control algorithm has the general form

m(t) = Kp [ e(t) + (1/Ti) ∫0t e(t) dt + Td de(t)/dt ]

where e(t) = r(t) - c(t); c(t) is the measured variable, r(t) is the reference value or set point, and e(t)
is the error; Kp is the overall controller gain; Ti is the integral action time; and Td is the derivative
action time.
For a wide range of industrial processes it is difficult to improve on the control performance
that can be obtained by using either PI or PID control (except at considerable expense), and it is for
this reason that these algorithms are widely used. For the majority of systems PI control is all that is
necessary. Using a control signal that is made proportional to the error between the desired value of
an output and the actual value of the output is an obvious and (hopefully) a reasonable strategy.
Choosing the value of Kp involves a compromise: a high value of Kp gives a small steady-state error
and a fast response, but the response will be oscillatory and may be unacceptable in many
applications; a low value gives a slow response and a large steady-state error. By adding the integral
action term the steady-state error can be reduced to zero since the integral term, as its name implies,
integrates the error signal with respect to time. For a given error value the rate at which the integral
term increases is determined by the integral action time Ti. The major advantage of incorporating an
integral term arises from the fact that it compensates for changes that occur in the process being
controlled.
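The algorithm above can be written as a discrete-time sketch. The sample interval `dt`, the rectangular approximation of the integral, and the gain values in the example are assumptions for illustration, not values from the text:

```python
class PID:
    """Discrete-time sketch of the textbook PID law
    m(t) = Kp*(e(t) + (1/Ti)*integral(e) + Td*de/dt),
    sampled every dt seconds."""

    def __init__(self, kp: float, ti: float, td: float, dt: float):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        e = setpoint - measurement            # e(t) = r(t) - c(t)
        self.integral += e * self.dt          # rectangular integration
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * (e + self.integral / self.ti + self.td * derivative)

# With Td = 0 this is the PI controller the text says suffices for most
# systems; the integral term is what removes the steady-state error.
pid = PID(kp=2.0, ti=5.0, td=0.0, dt=0.1)
m = pid.update(setpoint=1.0, measurement=0.0)
print(m)
```

On the first sample with unit error this produces Kp*(1 + dt/Ti) = 2*(1 + 0.02) = 2.04; as the error persists, the integral term keeps growing and drives the steady-state error to zero, exactly as the paragraph above argues.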
A purely proportional controller operates correctly only under one particular set of process
conditions: changes in the load on the process or some environmental condition will result in a
steady-state error; the integral term compensates for these changes and reduces the error to zero. For a
few processes which are subjected to sudden disturbances the addition of the derivative term can give
improved performance. Because derivative action produces a control signal that is related to the rate
of change of the error signal, it anticipates the error and hence acts to reduce the error that would
otherwise arise from the disturbance.
In fact, because the PID controller copes perfectly adequately with 90% of all control
problems, it provides a strong deterrent to the adoption of new control system design techniques.
DDC APPLICATIONS:
DDC may be applied either to a single-loop system implemented on a small microprocessor or
to a large system involving several hundred loops. The loops may be cascaded, that is with the output
or actuation signal of one loop acting as the set point for another loop; signals may be added together
(ratio loops); and conditional switches may be used to alter signal connections.
A typical industrial system is shown in Figure 2.5. This is a steam boiler control system. The
steam pressure is controlled by regulating the supply of fuel oil to the burner, but in order to comply
with the pollution regulations a particular mix of air and fuel is required. We are not concerned with
how this is achieved but with the elements which are required to implement the chosen control
system.
The steam pressure control system generates an actuation signal which is fed to an
auto/manual bias station. If the station is switched to auto then the actuation signal is transmitted; if it
is in manual mode a signal which has been entered manually (say, from a keyboard) is transmitted.
The signal from the bias station is connected to two units, a high signal selector and a low signal
selector each of which has two inputs and one output. The signal from the low selector provides the
set point for the DDC loop controlling the oil flow, the signal from the high selector provides the set
point for the air flow controller (two cascade loops). A ratio unit is installed in the air flow
measurement line.
DDC is not necessarily limited to simple feedback control as shown in Figure 2.6. It is
possible to use techniques such as inferential, feedforward and adaptive or self-tuning control.
Inferential control, illustrated in Figure 2.7, is the term applied to control where the variables on
which the feedback control is to be based cannot be measured directly, but have to be 'inferred' from
measurements of some other quantity.
ADAPTIVE CONTROL:
Adaptive control can take several forms. Three of the most common are:
• Preprogrammed adaptive control (gain-scheduled control);
• Self-tuning; and
• Model-reference adaptive control.
Programmed adaptive control is illustrated in Figure 2.10a. The adaptive, or adjustment,
mechanism makes preset changes on the basis of changes in auxiliary process measurements. For
example, in a reaction vessel a measurement of the level of liquid in the vessel (an indicator of the
volume of liquid in the vessel) might be used to change the gain of the temperature controller; in
many aircraft controls the measured air speed is used to select controller parameters according to a
preset schedule.
An alternative form is shown in Figure 2.10b in which measurements of changes in the
external environment are used to select the gain or other controller parameters. For example, in an
aircraft auto stabilizer, control parameters may be changed according to the external air pressure.
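The preprogrammed form described above amounts to a lookup from an auxiliary measurement to a preset controller gain. A minimal sketch follows; the breakpoints, gains, and the use of vessel level as the auxiliary measurement are illustrative assumptions, not values from the text:

```python
def scheduled_gain(level_percent: float) -> float:
    """Preprogrammed (gain-scheduled) adaptation, as in Figure 2.10a:
    the controller gain Kp is selected from a preset schedule on the
    basis of an auxiliary process measurement - here, the liquid level
    in a reaction vessel, in percent. All numbers are illustrative."""
    schedule = [(25.0, 4.0),   # small volume: high gain
                (50.0, 2.5),
                (75.0, 1.5)]
    for threshold, kp in schedule:
        if level_percent <= threshold:
            return kp
    return 1.0                 # large volume: lowest gain

print(scheduled_gain(10.0))   # 4.0
print(scheduled_gain(60.0))   # 1.5
print(scheduled_gain(90.0))   # 1.0
```

The adjustment mechanism here makes only preset changes; nothing is estimated on-line, which is what distinguishes this form from self-tuning or model-reference adaptive control.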
The adoption of computers for process control has increased the range of activities that can be
performed, for not only can the computer system directly control the operation of the plant, but also it
can provide managers and engineers with a comprehensive picture of the status of the plant
operations. It is in this supervisory role, and in the presentation of information to the plant operator -
large rooms full of dials and switches have been replaced by VDUs and keyboards - that the major
changes have been seen.
Recommended Questions:
1. List the advantages and disadvantages of using DDC.
2. In the section on human-computer interfacing we made the statement 'the design
of user interfaces is a specialist area'. Can you think of reasons to support this statement and
suggest what sort of background and training a specialist in user interfaces might require?
3. What are the advantages/disadvantages of using a continuous oven? How will the
control of the process change from using a standard oven on a batch basis to
using an oven in which the batch passes through on a conveyor belt? Which will
be the easier to control?
4. List the advantages of using several small computers instead of one large
computer in control applications. Are there any disadvantages that arise from
using several computers?
5. List the characteristics of Batch process and continuous process.
UNIT- 3
Computer Hardware Requirements for RTS
Introduction, General Purpose Computer, Single Chip Microcontroller, Specialized Processors,
Process –Related Interfaces, Data Transfer Techniques, Communications, Standard Interface.
STORAGE:
The storage used on computer control systems divides into two main categories: fast access
storage and auxiliary storage. The fast access memory is that part of the system which contains data,
programs and results which are currently being operated on. The major restriction with current
computers is commonly the addressing limit of the processor. In addition to RAM (random access
memory - read/write) it is now common to have ROM (read-only memory), PROM (programmable
read-only memory) or EPROM (erasable programmable read-only memory) for the storage of
critical code or predefined functions. The use of ROM has eased the problem of memory protection to
prevent loss of programs through power failure or corruption by the malfunctioning of the software
(this can be a particular problem during testing).
An alternative to using ROM is the use of memory mapping techniques that trap instructions
which attempt to store in a protected area. This technique is usually only used on the larger systems
which use a memory management system to map program addresses onto the physical address space.
An extension of the system allows particular parts of the physical memory to be set as read only, or
even locked out altogether: write access can be gained only by the use of 'privileged' instructions. The
auxiliary storage medium is typically disk or magnetic tape. These devices provide bulk storage for
programs or data which are required infrequently at a much lower cost than fast access memory. The
penalty is a much longer access time and the need for interface boards and software to connect them
to the CPU. In a real-time system use of the CPU to carry out the transfer is not desirable as it is slow
and no other computation can take place during transfer. For efficiency of transfer it is sensible to
transfer large blocks of data rather than a single word or byte and this can result in the CPU not being
available for up to several seconds in some cases. The approach frequently used is direct memory
access (DMA). For this the interface controller for the backing memory must be able to take control
of the address and data buses of the computer.
BUS STRUCTURE:
Buses are characterized in three ways:
• Mechanical (physical) structure;
• Electrical; and
• Functional.
In mechanical or physical terms a bus is a collection of conductors which carry electrical
signals, for example tracks on a printed circuit board or the wires in a ribbon cable. The physical form
of the bus represents the mechanical characteristic of the bus system. The electrical characteristics
of the bus are the signal levels, loading (that is, how many loads the line can support), and type of
output gates (open-collector, tri-state). The functional characteristics describe the type of information
which the electrical signals flowing along the bus conductors represent. The bus lines can be divided
into three functional groups:
• Address lines;
• Data lines; and
• Control and status lines.
THE TRANSPUTER:
An individual chip can be used as a stand-alone computing device; however, the power of the
transputer is obtained when several transputers are interconnected to form a parallel processing
network. INMOS developed a special programming language, occam, for use with the transputer.
Occam is based on the assumption that the application to be implemented on the transputer can be
modelled as a set of processes (actions) that communicate with each other via channels. A channel is
a unidirectional link between two processes which provides synchronized communication. A process
can be a primitive process, or a collection of processes; hence the system supports a hierarchical
structure. Processes are dynamic in that they can be created, can die and can create other processes.
3.6 DIGITAL SIGNAL PROCESSORS:
In applications such as speech processing, telecommunications, radar and hi-fi systems, analog
techniques have been used for modifying the signal characteristics. There are advantages to be gained
if such processing can be done using digital techniques, in that digital devices are inherently more
stable and less subject to drift than their analog counterparts.
PULSE INTERFACES:
In its simplest form a pulse input interface consists of a counter connected to a line from the
plant. The counter is reset under program control and after a fixed length of time the contents are read
by the computer. A typical arrangement is shown in Figure 3.7, which also shows a simple pulse
output interface. The transfer of data from the counter to the computer uses techniques similar to
those for the digital input described above. The measurement of the length of time for which the
count proceeds can be carried out either by a logic circuit in the counter interface or by the computer.
If the timing is done by the computer then the 'enable' signal must inhibit the further counting of
pulses. If the computing system is not heavily loaded, the external interface hardware required can be
reduced by connecting the pulse input to an interrupt and counting the pulses under program control.
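The count-over-a-fixed-window scheme described above reduces to a small calculation once the counter has been read: the flow rate is the count divided by the product of the calibration constant and the gate time. A sketch follows; the calibration constant, gate time, and function name are illustrative assumptions:

```python
def flow_rate_from_count(count: int, gate_time_s: float,
                         pulses_per_litre: float) -> float:
    """Turbine flow meter read through a pulse input interface:
    the counter is reset, pulses are accumulated for a fixed gate
    time, and the contents are read by the computer. The flow rate
    is then count / (calibration * gate time). The calibration
    value used below is illustrative, not from the text."""
    return count / (pulses_per_litre * gate_time_s)

# 500 pulses counted in a 2 s window, with a meter that produces
# 50 pulses per litre:
rate = flow_rate_from_count(500, 2.0, 50.0)
print(rate, "litres/second")  # 5.0 litres/second
```

Whether the gate time is measured by interface logic or by the computer only changes where the 'enable' signal is handled; the arithmetic on the count is the same.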
3.13 COMMUNICATIONS:
The use of distributed computer systems implies the need for communication: between
instruments on the plant and the low-level computers (see Figure 3.20); between the Level 1 and Level
2 computers; and between the Level 2 and the higher level computers. At the plant level
communications systems typically involve parallel analog and digital signal transmission techniques
since the distances over which communication is required are small and high-speed communication is
usually required. At the higher levels it is more usual to use serial communication methods since, as
communication distances extend beyond a few hundred yards, the use of parallel cabling rapidly
becomes cumbersome and costly. As the distance between the source and receiver increases it
becomes more difficult, when using analog techniques, to obtain a high signal-to-noise ratio; this is
particularly so in an industrial environment where there may be numerous sources of electrical
interference.
STANDARD INTERFACES:
Most of the companies which supply computers for real-time control have developed their
own 'standard' interfaces, such as the Digital Equipment Corporation's Q-bus for the PDP-11 series,
and, typically, they, and independent suppliers, will be able to offer a large range of interface cards
for such systems. The difficulty with the standards supported by particular manufacturers is that they
are not compatible with each other; hence a change of computer necessitates a redesign of the
interface. An early attempt to produce an independent standard was made by the British Standards
Institution (BS 4421, 1969). Unfortunately the standard is limited to the concept of how the devices
should interconnect and the standard does not define the hardware. It is not widely used and has been
overtaken by more recent developments.
An interface which was originally designed for use in atomic energy research laboratories -
the computer automated measurement and control (CAMAC) system - has been widely adopted in
laboratories, the nuclear industry and some other industries. There are also FORTRAN libraries
which provide software to support a wide range of the interface modules. One of the attractions of the
system is that the CAMAC data highway connects to the computer by a special card; to change to a
different computer requires only that the one card be changed. A general purpose interface bus
(GPIB) was developed by the Hewlett Packard Company in the early 1970s for connecting laboratory
instruments to a computer. The system was adopted by the IEEE and standardized as the IEEE 488
bus system.
Recommended questions:
1. There are a number of different types of analog-to-digital converters. List them and discuss
typical applications for each type (see, for example, Woolvet (1977) or Barney (1985)).
2. The clock on a computer system generates an interrupt every 20 ms. Draw a flowchart for
the interrupt service routine. The routine has to keep a 24 hour clock in hours, minutes and
seconds.
3. Twenty analog signals from a plant have to be processed (sampled and digitized) every 1 s.
The analog-to-digital converter and multiplexer which are available can operate in two modes:
automatic scan and computer-controlled scan. In the automatic scan mode, on receipt of a
'start' signal the converter cycles through each channel in turn.
4. A turbine flow meter generates pulses proportional to the flow rate of a liquid. What
methods can be used to interface the device to a computer?
5. Why is memory protection important in real-time systems?
6. What methods can be used to provide memory protection?
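The logic behind question 2 above - a 20 ms clock interrupt maintaining a 24-hour clock - can be sketched as follows. Python stands in for the interrupt service routine, and the class and method names are invented for the example:

```python
class Clock24:
    """24-hour software clock driven by a periodic tick interrupt.
    With a 20 ms tick, 50 ticks make one second."""
    TICKS_PER_SECOND = 50  # 1 s / 20 ms

    def __init__(self):
        self.ticks = 0
        self.seconds = 0
        self.minutes = 0
        self.hours = 0

    def tick(self) -> None:
        # Body of the interrupt service routine: a short cascade of
        # counters, rolling over at 50 ticks, 60 s, 60 min and 24 h.
        self.ticks += 1
        if self.ticks == self.TICKS_PER_SECOND:
            self.ticks = 0
            self.seconds += 1
            if self.seconds == 60:
                self.seconds = 0
                self.minutes += 1
                if self.minutes == 60:
                    self.minutes = 0
                    self.hours = (self.hours + 1) % 24

clk = Clock24()
for _ in range(50 * 61):          # simulate 61 seconds of ticks
    clk.tick()
print(clk.hours, clk.minutes, clk.seconds)  # 0 1 1
```

In a real system the routine must be short, since it runs on every interrupt; everything beyond the counter cascade (display update, alarm checks) belongs in a lower-priority task.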
4.1.1 SECURITY:
Security of a language is measured in terms of how effective the compiler and the run-time support
system is in detecting programming errors automatically. Obviously there are some errors which
cannot be detected by the compiler regardless of any features provided by the language: for example,
errors in the logical design of the program. The chance of such errors occurring is reduced if the
language encourages the programmer to write clear, well-structured, code. Language features that
assist in the detection of errors by the compiler include:
• good modularity support;
• enforced declaration of variables;
• good range of data types, including sub-range types;
• typing of variables; and
• unambiguous syntax.
It is not possible to test software exhaustively and yet a fundamental requirement of real-time systems
is that they operate reliably. The intrinsic security of a language is therefore of major importance for
the production of reliable programs. In real-time system development the compilation is often
performed on a different computer than the one used in the actual system, whereas run-time testing
has to be done on the actual hardware and, in the later stages, on the hardware connected to plant.
Run-time testing is therefore expensive and can interfere with the hardware development program.
Economically it is important to detect errors at the compilation stage rather than at run time, since the
cost of finding and correcting an error rises sharply the later in the development cycle it is detected.
4.1.2 READABILITY:
Readability is a measure of the ease with which the operation of a program can be understood without
resort to supplementary documentation such as flowcharts or natural language descriptions. The
emphasis is on ease of reading because a particular segment of code will be written only once but will
be read many times. The benefits of good readability are:
• Reduction in documentation costs: the code itself provides the bulk of the
documentation. This is particularly valuable in projects with a long life
expectancy in which inevitably there will be a series of modifications. Obtaining
up-to-date documentation and keeping documentation up to date can be very
difficult and costly.
• Easy error detection: clear readable code makes errors, for example logical
errors, easier to detect and hence increases reliability.
• Easy maintenance: it is frequently the case that when modifications to a program
are required the person responsible for making the modifications was not
involved in the original design - changes can only be made quickly and safely if
the operation of the program is clear.
4.1.3 FLEXIBILITY:
A language must provide all the features necessary for the expression of all the operations
required by the application without requiring the use of complicated constructions and tricks, or resort
to assembly level code inserts. The flexibility of a language is a measure of this facility. It is
particularly important in real-time systems since frequently non-standard I/O devices will have to be
controlled. The achievement of high flexibility can conflict with achieving high security. The
compromise that is reached in modern languages is to provide high flexibility and, through the
module or package concept, a means by which the low-level (that is, insecure) operations can be
hidden in a limited number of self-contained sections of the program.
4.1.4 SIMPLICITY:
4.1.5 PORTABILITY:
Portability, while desirable as a means of speeding up development, reducing costs and
increasing security, is difficult to achieve in practice. Surface portability has improved with the
standardization agreements on many languages. It is often possible to transfer a program in source
code form from one computer to another and find that it will compile and run on the computer to
which it has been transferred. There are, however, still problems when the word lengths of the two
machines differ and there may also be problems with the precision with which numbers are
represented even on computers with the same word length.
Portability is more difficult for real-time systems as they often make use of specific features
of the computer hardware and the operating system. A practical solution is to accept that a real-time
system will not be directly portable, and to confine the areas of non-portability to a small number of
clearly identified modules by restricting the use of low-level features to those modules. Portability
can be further enhanced by writing the application software to run on a virtual machine, rather than
for a specific operating system.
4.1.6 EFFICIENCY:
In real-time systems, which must provide a guaranteed performance and meet specific time
constraints, efficiency is obviously important. In the early computer control systems great emphasis
was placed on the efficiency of the coding - both in terms of the size of the object code and in the
speed of operation - as computers were both expensive and, by today's standards, very slow. As a
consequence programming was carried out using assembly languages and frequently 'tricks' were
used to keep the code small and fast. The requirement for generating efficient object code was carried
over into the designs of the early real-time languages and in these languages the emphasis was on
efficiency rather than security and readability. The falling costs of hardware and the increase in the
computational speed of computers have changed the emphasis. Also in a large number of real-time
applications the concept of an efficient language has changed to include considerations of the security
and the costs of writing and maintaining the program; speed and compactness of the object code have
become, for the majority of applications, of secondary importance.

[Real time system/06EC762] - SJBIT, Dept of ECE
4.8 COROUTINES:
In Modula-2 the basic form of concurrency is provided by coroutines. The two procedures NEWPROCESS
and TRANSFER exported by SYSTEM are defined as follows:

PROCEDURE NEWPROCESS (ParameterlessProcedure : PROC;
                      workspaceAddress : ADDRESS;
                      workspaceSize : CARDINAL;
                      VAR coroutine : ADDRESS (* PROCESS *));

PROCEDURE TRANSFER (VAR source, destination : ADDRESS (* PROCESS *));
Any parameterless procedure can be declared as a PROCESS. The procedure NEWPROCESS associates
with the procedure storage for the process parameters. The amount to be allocated depends on the
number and size of the variables local to the procedure forming the coroutine, and to the procedures
which it calls. Failure to allocate sufficient space will usually result in a stack overflow error at
run-time. The variable coroutine is initialized to the address which identifies the newly created
coroutine and is used as a parameter in calls to TRANSFER. The transfer of control between coroutines
is made using the standard procedure TRANSFER, which has two arguments of type ADDRESS (PROCESS).
The first is the calling coroutine and the second is the coroutine to which control is to be
transferred. The mechanism is illustrated in Example 5.13. In this example the two parameterless
procedures form the two coroutines which pass control to each other so that the messages
'Coroutine one' and 'Coroutine two' are printed out 25 times. At the end of the loop, Coroutine 2
passes control back to Main Program.
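The transfer of control between coroutines can be sketched in Python, with generators standing in for Modula-2 PROCESSes and an explicit alternation loop playing the role of TRANSFER; the names and the logging list are illustrative, not part of the Modula-2 interface.

```python
def coroutine(name, log, n):
    """A parameterless-procedure analogue: records its message, then
    yields, which plays the role of TRANSFER to the partner."""
    for _ in range(n):
        log.append(name)
        yield

def run(n=25):
    log = []
    one = coroutine("Coroutine one", log, n)
    two = coroutine("Coroutine two", log, n)
    for _ in range(n):      # the main program alternates control explicitly
        next(one)
        next(two)
    return log
```

Calling run() yields a log in which the two messages alternate, each appearing 25 times, mirroring the behaviour of Example 5.13.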
CONCURRENCY:
Wirth (1982) defined a standard module Processes which provides a higher-level mechanism than
coroutines for concurrent programming. The module makes no assumption as to how the processes
are implemented.
Operating Systems:
Introduction, Real-Time Multi-Tasking OS, Scheduling Strategies, Priority Structures, Task
Management, Scheduler and Real-Time Clock Interrupt Handlers, Memory Management, Code
Sharing, Resource Control, Task Co-operation and Communication, Mutual Exclusion, Data Transfer,
Liveness, Minimum OS Kernel, Examples.
The operating system is constructed, in these cases, as a monolithic monitor. In single-job operating
systems access through the operating system is not usually enforced; however, it is good
programming practice and it facilitates portability since the operating system entry points remain
constant across different implementations. In addition to supporting and controlling the basic
activities, operating systems provide various utility programs, for example loaders, linkers,
assemblers and debuggers, as well as run-time support for high-level languages.
A general purpose operating system will provide some facilities that are not required in a
particular application, and to be forced to include them adds unnecessarily to the system overheads.
Usually during the installation of an operating system certain features can be selected or omitted. A
general purpose operating system can thus be 'tailored' to meet a specific application requirement.
Recently operating systems which provide only a minimum kernel or nucleus have become popular;
additional features can be added by the applications programmer writing in a high-level language.
This structure is shown in Figure 6.2. In this type of operating system the distinction between the
operating system and the application software becomes blurred. The approach has many advantages
for applications that involve small, embedded systems.
A real-time multi-tasking operating system has to support the resource sharing and the timing
requirements of the tasks and the functions can be divided as follows:
• Task management: the allocation of memory and processor time (scheduling) to tasks.
• Memory management: control of memory allocation.
• Resource control: control of all shared resources other than memory and CPU time.
• Intertask communication and synchronization: provision of support mechanisms to provide safe
communication between tasks and to enable tasks to synchronize their activities.
Interrupt level:
As we have already seen, an interrupt forces a rescheduling of the work of the CPU, and the system
has no control over the timing of the rescheduling. Because an interrupt-generated rescheduling is
outside the control of the system, it is necessary to keep the amount of processing to be done by the
interrupt-handling routine to a minimum. Usually the interrupt-handling routine does just enough
processing to preserve the necessary information and to pass this information to a further handling
routine which operates at a lower priority level, either clock level or base level. Interrupt-handling
routines also have to provide a mechanism for task swapping, that is, they have to save the volatile
environment.
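The split between a minimal interrupt-level routine and a slower lower-priority handler can be sketched as follows (a Python sketch; the device identifiers and the scaling of raw values are invented for illustration):

```python
from collections import deque

pending = deque()   # records handed from interrupt level to a lower level

def interrupt_handler(device_id, raw_value):
    """Interrupt level: do the minimum - capture the volatile data and
    queue it for the lower-priority handling routine."""
    pending.append((device_id, raw_value))

def deferred_handler():
    """Lower-priority level: the slower processing runs outside the
    interrupt routine, so the interrupt itself stays short."""
    results = []
    while pending:
        device_id, raw = pending.popleft()
        results.append((device_id, raw / 100))   # e.g. scale raw ADC counts
    return results
```

The interrupt routine only appends a record; all conversion work is deferred, which keeps the time for which further interrupts are locked out to a minimum.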
Clock level:
One interrupt-level task will be the real-time clock handling routine, which will be entered at
some interval, usually determined by the required activation rate for the most frequently required
task. Typical values are 1 to 200 ms. Each clock interrupt is known as a tick and represents the
smallest time interval known to the system. The function of the clock interrupt handling routine is to
update the time-of-day clock in the system and to transfer control to the dispatcher. The scheduler
selects which task is to run at a particular clock tick. Clock-level tasks divide into two categories:
1. CYCLIC: these are tasks which require accurate synchronization with the outside world.
2. DELAY: these tasks simply wish to have a fixed delay between successive repetitions or to delay
their activities for a given period of time.
Cyclic tasks:
The cyclic tasks are ordered in a priority which reflects the accuracy of timing required for the
task, those which require high accuracy being given the highest priority. Tasks of lower priority
within the clock level will have some jitter since they will have to await completion of the higher-
level tasks.
Delay tasks:
The tasks which wish to delay their activities for a fixed period of time, either to allow some
external event to complete (for example, a relay may take 20 ms to close) or because they only need
to run at certain intervals (for example, to update the operator display), usually run at the base level.
When a task requests a delay its status is changed from runnable to suspended and remains suspended
until the delay period has elapsed.
One method of implementing the delay function is to use a queue of task descriptors, say identified
by the name DELAYED. This queue is an ordered list of task descriptors, the task at the front of the
queue being that whose next running time is nearest to the current time.
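The DELAYED queue described above can be sketched in Python with a heap that keeps the descriptor with the nearest wake-up time at the front (task names and tick values here are illustrative):

```python
import heapq

class DelayQueue:
    """Ordered list of delayed task descriptors: the entry at the front
    is the task whose next running time is nearest the current time."""
    def __init__(self):
        self._heap = []          # (wake_time, task_id) pairs

    def delay(self, task_id, now, ticks):
        """Suspend a task until 'ticks' clock ticks after 'now'."""
        heapq.heappush(self._heap, (now + ticks, task_id))

    def tick(self, now):
        """Called from the clock-level routine: release every task whose
        delay period has elapsed, returning it to the runnable state."""
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[1])
        return ready
```

On each clock tick only the front of the queue needs to be inspected, so the cost at clock level stays small even with many suspended tasks.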
Base level:
The tasks at the base level are initiated on demand rather than at some predetermined time interval.
The demand may be user input from a terminal, some process event or some particular requirement of
the data being processed. The way in which the tasks at the base level are scheduled can vary; one
simple way is to use time slicing on a round-robin basis. In this method each task in the runnable
queue is selected in turn and allowed to run until either it suspends or the base level scheduler is again
entered. For real-time work in which there is usually some element of priority this is not a particularly
satisfactory solution. It would not be sensible to hold up a task, which had been delayed waiting for a
relay to close but was now ready to run, in order to let the logging task run.
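Time-sliced round-robin selection can be sketched as below (Python; the task names and amounts of work are invented for illustration):

```python
from collections import deque

def round_robin(tasks):
    """Each runnable task runs for one time slice in turn until its
    remaining work is exhausted. 'tasks' maps task name -> number of
    slices needed; returns the order in which slices were granted."""
    queue = deque(tasks.items())     # the runnable queue
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                        # task runs for one slice
        if remaining > 1:
            queue.append((name, remaining - 1))   # rejoin the back of the queue
    return order
```

The sketch makes the weakness visible: every task waits its full turn regardless of urgency, which is why priority schemes are preferred for real-time work.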
Most real-time systems use a priority strategy even for the base level tasks. This may be either
a fixed level of priority or a variable level. The difficulty with a fixed level of priority is in
determining the correct priorities for satisfactory operation; the ability to change priorities
dynamically allows the system to adapt to particular circumstances. Dynamic allocation of priorities
can be carried out using a high-level scheduler or can be done on an ad hoc basis from within
specific tasks. The high level scheduler is an operating system task which is able to examine the use
of the system resources; it may for example check how long tasks have been waiting and increase the
priority of the tasks which have been waiting a long time. The difficulty with the high-level scheduler
is that the algorithms used can become complicated and hence the overhead in running can become
significant.
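A high-level scheduler that ages waiting tasks might look like this sketch (Python; the boost amount, the threshold, and the task records are arbitrary illustrations of the idea):

```python
def age_priorities(tasks, boost=1, threshold=3):
    """Raise the priority of any task that has been waiting longer than
    'threshold' ticks, then order the tasks by priority, highest first."""
    for task in tasks:
        if task["waiting"] > threshold:
            task["priority"] += boost      # dynamic priority adjustment
    return sorted(tasks, key=lambda t: t["priority"], reverse=True)
```

Even this simple aging rule shows the trade-off mentioned above: the per-invocation cost grows with the number of tasks examined, so an elaborate algorithm can add significant overhead.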
• Active (running): this is the task which has control of the CPU. It will normally be the task with the
highest priority among the tasks which are ready to run.
• Ready (runnable, on): there may be several tasks in this state. The attributes of the task and the
resources required to run the task must be available for the task to be placed in the Ready state.
• Suspended (waiting, locked out, delayed): the execution of tasks placed in this state has been
suspended because the task requires some resource which is not available, or because the task is
waiting for some signal from the plant, for example input from the analog-to-digital converter, or
because the task is waiting for the elapse of time.
• Existent (dormant, off): the operating system is aware of the existence of this task, but the task has
not been allocated a priority and has not been made runnable.
• Non-existent (terminated): the operating system has not as yet been made aware of the existence of
this task, although it may be resident in the memory of the computer.
Task descriptor:
Information about the status of each task is held in a block of memory by the RTOS. This
block is referred to by various names: task descriptor (TD), process descriptor (PD), task control
block (TCB) or task data block (TDB). The information held in the TD will vary from system to
system, but will typically consist of the following:
• Task identification (ID);
• Task priority (P);
• Current state of task;
• Area to store the volatile environment (or a pointer to an area for storing the volatile
environment); and
• Pointer to the next task in a list.
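The fields above map naturally onto a record type; a Python sketch of a task descriptor (the state names follow the list in the previous section, and the field layout is illustrative rather than that of any particular RTOS):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskDescriptor:
    """Sketch of a task descriptor (TD / TCB)."""
    task_id: int
    priority: int
    state: str = "existent"     # active, ready, suspended, existent, ...
    volatile_env: dict = field(default_factory=dict)   # saved registers etc.
    next_task: Optional["TaskDescriptor"] = None       # link for OS lists
```

The next_task link is what allows descriptors to be chained into the ready and delayed queues discussed earlier.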
Mutual exclusion:
A multi-tasking operating system allows the sharing of resources between several
concurrently active tasks. This does not imply that the resources can be used simultaneously. The use
of some resources is restricted to one task at a time; others, for example a re-entrant code
module, can be used by several tasks at the same time. The restriction to one task at a time has to
be made for resources such as input and output devices; otherwise there is a danger that input intended
for one task could be corrupted by input for another task. Similar problems can arise if two tasks
share a data area and both tasks can write to it.
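The danger of two tasks writing a shared data area, and the lock that prevents it, can be sketched in Python (the counter and thread count are illustrative):

```python
import threading

counter = 0                        # the shared data area
counter_lock = threading.Lock()    # enforces one-task-at-a-time access

def writer(updates):
    """Each 'task' updates the shared area only while holding the lock,
    so the read-modify-write sequence can never be interleaved."""
    global counter
    for _ in range(updates):
        with counter_lock:
            counter += 1

tasks = [threading.Thread(target=writer, args=(10000,)) for _ in range(2)]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
```

Without the lock, interleaved read-modify-write sequences could lose updates; the principle is the same whether the shared resource is a counter, a data table or an output device.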
Data transfer:
RTOSs typically support two mechanisms for the transfer or sharing of data between tasks:
the pool and the channel.
A pool is used to hold data common to several tasks, for example tables of values or parameters
which tasks periodically consult or update. The write operation on a pool is destructive and the read
operation is non-destructive.
A channel supports communication between producers and consumers of data. It can contain
one or more items of information. Writing to a channel adds an item without changing the items already
in it. The read operation is destructive in that it removes an item from the channel. A channel can
become empty and also, because in practice its capacity is finite, it can become full.
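The two mechanisms can be sketched in Python: a pool with destructive write and non-destructive read, and a channel as a bounded FIFO queue (the keys, values and capacity are illustrative):

```python
import queue

class Pool:
    """Shared table: write overwrites the stored value (destructive),
    read leaves the value in place (non-destructive)."""
    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value       # destructive: old value is lost

    def read(self, key):
        return self._data[key]        # non-destructive: value remains

# A channel: items accumulate, reading removes them, capacity is finite.
channel = queue.Queue(maxsize=4)
```

Repeated reads of a pool return the same value, whereas each read of the channel consumes one message; a full channel blocks (or rejects) further writes, which is the buffering behaviour described below.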
It is normal to create a large number of pools so as to limit the use of global common data
areas. To avoid the problem of two or more tasks accessing a pool simultaneously mutual exclusion
on pools is required. The most reliable form of mutual exclusion for a pool is to embed the pool
inside a monitor. Given that the read operation does not change the data in a pool there is no need to
restrict read access to a pool to one task at a time. Channels provide a direct communication link
between tasks, normally on a one-to-one basis. The communication is like a pipe down which
successive collections of items of data - messages - can pass. Normally they are implemented so that
they can contain several messages, and so they act as a buffer between the tasks. One task is seen as
the producer of information and the other as the consumer. Because of the buffer function of the
channel, the producer and the consumer are only loosely synchronized.
Recommended Questions:
1. Draw up a list of functions that you would expect to find in a real-time operating system.
Identify the functions which are essential for a real-time system.
2. Discuss the advantages and disadvantages of using
(a) fixed table
(b) linked list
methods for holding task descriptors in a multi-tasking real-time operating system.
Software design:
Examining the specification shows that the software has to perform several different
functions:
• DDC for temperature control;
• Operator display;
• Operator input;
• Provision of management information;
• System start-up and shut-down; and
• Clock/calendar function.
The various functions and type of time constraint are shown in Figure. The control module
has a hard constraint in that it must run every 40 ms. In practice this constraint may be relaxed a little
to, say, 40 ms ± 1 ms with an average value over 1 minute of, say, 40 ms ± 0.5 ms. In general the
sampling time can be specified as Ts ± es with an average value, over time T, of Ts ± ea. The
requirement may also be relaxed to allow, for example, one sample in 100 to be missed. These
constraints will form part of the test specification. The clock/calendar module must run every 20 ms
in order not to miss a clock pulse. The operator display, as specified, has a hard constraint in that an
update interval of 5 seconds is given. Common sense suggests that this is unnecessary and an average
time of 5 seconds should be adequate; however, a maximum time would also have to be specified, say
10 seconds. These values would have to be decided upon and agreed with the customer, and they
should form part of the specification in the requirements document. The start-up module does not
have to operate in real time and hence can be considered as a standard interactive module. The
sub-problems will have to share a certain amount of information, and how this is done and how the
next stages of the design proceed will depend upon the general approach to the implementation. There
are three possibilities:
• Single program;
• Foreground/background system; and
• Multi-tasking.
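The relaxed timing constraint above (each sample interval within Ts ± es, the average within Ts ± ea) can be expressed as a simple acceptance check that could form part of the test specification; this Python sketch uses the 40 ms figures from the text, and the function name is illustrative:

```python
def meets_timing_spec(intervals_ms, ts=40.0, es=1.0, ea=0.5):
    """True if every measured sample interval lies within ts +/- es
    and the average interval lies within ts +/- ea."""
    each_ok = all(abs(t - ts) <= es for t in intervals_ms)
    average = sum(intervals_ms) / len(intervals_ms)
    return each_ok and abs(average - ts) <= ea
```

A further refinement, per the specification, would be to tolerate one out-of-range sample in every hundred.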
7.7 MONITORS:
The basic idea of a monitor is to provide controlled access to a shared resource; an
implementation of a monitor in Modula-2 to protect access to a buffer area is shown. Monitors
themselves do not provide a mechanism for synchronizing tasks and hence for this purpose the monitor
construct has to be supplemented by allowing, for example, signals to be used within it.
The standard monitor construction outlined above, like the semaphore, does not reflect the priority of
the task trying to use a resource; the first task to gain entry can lock out other tasks until it completes.
Hence a lower-priority task could hold up a higher-priority task. This lack of priority causes
difficulties for real-time systems. Traditional operating systems built as monolithic monitors avoided
the problem by ensuring that once an operating system call was made (in other words, when a monitor
function was invoked) the call would be completed without interruption from other tasks. The monitor
function is treated as a critical section. This does not mean that the whole operation requested was
necessarily completed without interruption. For example, a request for access to a printer for output
would be accepted and the request queued; once this had been done another task could enter the
monitor.
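A buffer protected by a monitor can be sketched in Python, using one lock for the mutual exclusion and condition variables as the signals mentioned above (this is a sketch of the idea, not the Modula-2 version the text refers to):

```python
import threading

class BufferMonitor:
    """All entry points share one lock, so only one task is inside the
    monitor at a time; conditions provide the synchronizing signals."""
    def __init__(self, capacity):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)
        self._items = []
        self._capacity = capacity

    def put(self, item):
        with self._not_full:                  # enter the monitor
            while len(self._items) >= self._capacity:
                self._not_full.wait()         # wait on the 'not full' signal
            self._items.append(item)
            self._not_empty.notify()

    def get(self):
        with self._not_empty:                 # enter the monitor
            while not self._items:
                self._not_empty.wait()        # wait on the 'not empty' signal
            item = self._items.pop(0)
            self._not_full.notify()
            return item
```

Note that, exactly as discussed above, entry to put and get is first-come first-served: the lock does not consider task priority, so a low-priority task inside the monitor can delay a higher-priority one.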
Recommended Questions:
4. With a neat flow chart, describe the single-program approach, with reference to RTS design.
6. Considering a system comprising several hot-air blowers, prepare specification documents
for the same.
The production of robust, reliable software of high quality for real-time computer control
applications is a difficult task which requires the application of engineering methods. During the last
ten years increasing emphasis has been placed on formalizing the specification, design and
construction of such software, and several methodologies are now extant. All of the methodologies
address the problem in three distinct phases: the production of a logical or abstract model (the
process of specification); the development of an implementation model for a virtual machine from the
logical model (the process of design); and the construction of software for the virtual machine
together with the implementation of the virtual machine on a physical system (the process of
implementation). These phases, although differently named, correspond to the phases of development
generally recognized in software engineering texts.
Abstract model: the equivalent of a requirements specification; it is the result of the requirements
capture and analysis phase.
Implementation model: the equivalent of the system design; it is the product of the design stages -
architectural design and detail design.
The Yourdon methodology has been developed over many years. It is a structured methodology based
on data-flow modeling techniques and functional decomposition. It supports development from
the initial analysis stage through to implementation. Both Ward and Mellor (1986) and Hatley and
Pirbhai (1988) have introduced extensions to support the use of the Yourdon approach for the
development of real-time systems, and the key ideas of their methodologies are:
• Subdivision of system into activities;
• Hierarchical structure;
• Separation of data and control flows;
• No early commitment to a particular technology; and
• Traceability between specification, design and implementation.
8.3 REQUIREMENT DEFINITION FOR DRYING OVEN:
Components are dried by being passed through an oven. The components are placed on a
conveyor belt which conveys them slowly through the drying oven. The oven is heated by three gas-
fired burners placed at intervals along the oven. The temperature in each of the areas heated by the
burners is monitored and controlled. An operator console unit enables the operator to monitor and
control the operation of the unit. The system is presently controlled by a hard wired control system.
The requirement is to replace this hard wired control system with a computer-based system. The new
computer-based system is also to provide links with the management computer over a communication
link.
The inputs come from a plant interface cubicle and from the operator. There will need to be inputs
obtained from the communication interface.
Plant Inputs
A thermocouple is provided in each heater area - the heater areas are pre-heat, drying, and cooling.
The inputs are available as voltages in the range 0 to 10 volts at pins 1 to 9 on socket j22 in the
interface cubicle.
The conveyor speed is measured by a pulse-counting system and is available on pin 3 at socket j23 in
the interface cubicle. It is referred to as con-speed.
There are three interlocked safety guards on the conveyor system: in-guard, out-guard,
and drop-guard. Signals from these guards are provided on pins 4, 5 and 6 of socket j23. These signals
are set at logic HIGH to indicate that the guards are locked in place.
A conveyor-halted signal is provided on pin 1 of socket j23. This signal is logic HIGH when the
conveyor is running.
Plant Outputs
Heater Control: each of the three heaters has a control unit. The input to the control unit is a voltage
in the range 0 to 10 volts which corresponds to no heat output to maximum heat output.
Conveyor Start-up: a signal convey-start is output to the conveyor motor control unit.
Guard Locks: asserting the guard-lock line, pin 8 on j10, causes the guards to be locked in position
and turns on the red indicator light on the outside of the unit.
Operator Inputs
The operator sends the following command inputs: Start, Stop, Reset, Re-start, and Pause. The
operator can also adjust the desired set point for each area of the dryer.
Operator Outputs
The operator VDU displays the temperature in each area, the conveyor belt speed, and the alarm
status. It should also display the current date and time and the last operator command issued.
8.4 WARD AND MELLOR METHOD:
The outline of the Ward and Mellor method is shown in Figure. The starting point is to build,
from the analysis of the requirements, a software model representing the requirements.
Recommended Questions:
1. With a general arrangement for a drying oven, explain its requirements.
2. Write about the environmental model, with a context diagram for the drying oven.
3. Show the outline of the abstract modeling approach of Ward and Mellor and explain it.
4. Differentiate between the Ward and Mellor and the Hatley and Pirbhai methodologies.
5. Explain the CFD of the drying oven controller using Hatley and Pirbhai notation.
6. What do you mean by enhancing the model? Explain with a neat diagram the
relationship between the real environment and the virtual environment.
7. Write short notes on: i) PSPECs and CSPECs ii) Software modeling iii) Yourdon methodology