Co Set-2
The hardware components of a computer are classified into five groups: the input unit, central processing unit, output unit, control unit, and arithmetic & logic unit.
Input unit
Input units are used by the computer to read information. The most frequently used input devices are keyboards, mice, joysticks, trackballs, microphones, and so on. Whenever a key is pressed, the corresponding letter or digit is translated into its equivalent binary code and transmitted over a cable to either the memory or the processor.
Output unit
The output unit is the reverse of the input unit. The processor sends results to the output unit, which converts the data produced by the computer system from binary form into human-readable form. The data is then presented to the external environment through devices such as a monitor or speakers.
Central processing unit
The central processing unit is the circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logical, control, and input/output (I/O) operations specified by those instructions.
Memory unit
The memory unit can be defined as the storage location in which running programs are kept, along with the data those programs require.
Primary Memory − Primary memory consists of a large number of semiconductor storage cells, each capable of storing one bit of data. The cells are processed in words; the word length of a computer is typically 16 to 64 bits. Primary memory is a volatile form of memory: when the system is shut down, anything held in RAM is lost.
Secondary Memory − Secondary memory is used when a large amount of information and programs must be saved on a permanent basis. It is a non-volatile form of memory: the information is retained even when the system is shut down.
Control unit
The control unit is the component of a computer's central processing unit that directs the operation of the processor. It tells the computer's memory, arithmetic/logic unit, and input and output devices how to respond to a program's instructions. The control unit is also referred to as the nerve center of a computer system.
Arithmetic and logic unit
The various arithmetic and logical operations of a computer are implemented in the ALU (Arithmetic and Logic Unit) of the processor. It executes arithmetic operations such as addition, subtraction, multiplication, and division, and logical operations such as AND, OR, and NOT.
Worked example: subtract (614)₈ from (342)₈ using 8's complement arithmetic.

7's complement of the subtrahend:   777 − 614 = 163
8's complement (add 1):             163 + 1 = 164
Add to the minuend:                 342 + 164 = 526

Here there is no end carry, so the answer is negative: −(8's complement of the sum obtained, 526).
Note: the 8's complement of a number is 1 added to its 7's complement.
7's complement of 526:              777 − 526 = 251
8's complement of 526:              251 + 1 = 252
Therefore (342)₈ − (614)₈ = −(252)₈.
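The same procedure can be illustrated programmatically. The following is a minimal Python sketch (not part of the original answer) of 7's/8's complement subtraction in base 8; the function names are mine.

# Minimal sketch: subtracting two octal numbers with the 8's complement
# method, mirroring the worked example above.

def eights_complement(octal_str, width):
    """7's complement (digit-wise 7 - d) plus 1, kept to 'width' octal digits."""
    sevens = ''.join(str(7 - int(d)) for d in octal_str.zfill(width))
    return format((int(sevens, 8) + 1) % (8 ** width), 'o').zfill(width)

def octal_subtract(minuend, subtrahend, width=3):
    total = int(minuend, 8) + int(eights_complement(subtrahend, width), 8)
    if total >= 8 ** width:                       # end carry -> positive result
        return format(total - 8 ** width, 'o').zfill(width)
    return '-' + eights_complement(format(total, 'o'), width)   # no carry -> negative

print(octal_subtract('342', '614'))   # -> -252, matching the example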
2 a) What is a Floating point number? Explain the standard floating point representation with an
example
A floating-point number is represented in two parts. The first part represents a signed, fixed-point number called the mantissa. The second part designates the position of the decimal (or binary) point and is called the exponent.
Example:
The decimal number +6132.789 is represented in floating-point with a fraction and an exponent as follows:
Fraction: +0.6132789        Exponent: +04
This representation is interpreted as +0.6132789 × 10^(+4).
A floating-point binary number is represented in a similar manner except that it uses base 2 for the
exponent.
m × r^e = +(.1001110) × 2^(+4)
Normalization: A floating-point number is said to be normalized if the most significant digit of the
mantissa is nonzero. For example, the decimal number 350 is normalized but 00035 is not.
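As an aside, the same normalized mantissa-and-exponent decomposition can be demonstrated with Python's math.frexp, which returns m and e such that the value equals m × 2^e with 0.5 ≤ |m| < 1. The value 9.75 chosen below corresponds to the binary example above, since (.1001110)₂ × 2^4 = 1001.11₂ = 9.75.

import math

# Decompose a value into mantissa m and exponent e with value = m * 2**e,
# where 0.5 <= |m| < 1 (i.e., the most significant mantissa bit is nonzero).
value = 9.75                      # binary 1001.11
m, e = math.frexp(value)
print(m, e)                       # 0.609375 4  ->  +(.100111)_2 x 2^4
assert math.ldexp(m, e) == value  # ldexp reverses frexp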
2 b)With a neat flow chart, explain the division of two fixed point numbers with an example
Division of two fixed-point binary numbers in signed-magnitude representation is done with paper and
pencil by a process of successive compare, shift, and subtract operations.
The division process is illustrated by a numerical example in the below figure (q).
The divisor B consists of five bits and the dividend A consists of ten bits. The five most significant
bits of the dividend are compared with the divisor. Since the 5-bit number is smaller than B, we try
again by taking the six most significant bits of A and comparing this number with B. The 6-bit
number is greater than B, so we place a 1 for the quotient bit. The divisor is then shifted once to the
right and subtracted from the dividend.
The difference is called a partial remainder because the division could have stopped here to obtain
a quotient of 1 and a remainder equal to the partial remainder. The process is continued by
comparing a partial remainder with the divisor.
• If the partial remainder is greater than or equal to the divisor, the quotient bit is equal to 1. The
divisor is then shifted right and subtracted from the partial remainder.
• If the partial remainder is smaller than the divisor, the quotient bit is 0 and no subtraction is needed.
The divisor is shifted once to the right in any case. Note that the result gives both a quotient and a
remainder.
Hardware Implementation for Signed-Magnitude Data:
The hardware for implementing the division operation is identical to that required for multiplication.
The divisor is stored in the B register and the double-length dividend is stored in registers A and Q.
The dividend is shifted to the left and the divisor is subtracted by adding its 2's complement value. The
information about the relative magnitude is available in E.
If E = 1, it signifies that A≥B. A quotient bit 1 is inserted into Q, and the partial remainder is shifted
to the left to repeat the process.
If E = 0, it signifies that A < B so the quotient in Qn remains a 0. The value of B is then added to
restore the partial remainder in A to its previous value. The partial remainder is shifted to the left and the
process is repeated again until all five quotient bits are formed.
Note that while the partial remainder is shifted left, the quotient bits are shifted also and after five
shifts, the quotient is in Q and the final remainder is in A.
The sign of the quotient is determined from the signs of the dividend and the divisor. If the two signs are alike, the sign of the quotient is plus. If they are unlike, the sign is minus. The sign of the remainder is the same as the sign of the dividend.
Divide Overflow
A divide-overflow condition occurs when the high-order half of the dividend constitutes a number greater than or equal to the divisor. This condition must be avoided in normal computer operations, because otherwise the quotient would be too long to fit into a memory word of standard length, that is, the same length as the registers.
The detection of this condition must be included in either the hardware or the software of the computer, or in a combination of the two.
A division by zero must also be avoided; this is because any dividend would be greater than or equal to a divisor that is equal to zero. The overflow condition is usually detected by setting a special flip-flop, which we call the divide-overflow flip-flop and label DVF.
Hardware Algorithm:
1. The dividend is in A and Q and the divisor in B. The sign of the result is transferred into Qs to be part of the quotient. A constant is set into the sequence counter SC to specify the number of bits in the quotient.
2. A divide-overflow condition is tested by subtracting the divisor in B from the half of the bits of the dividend stored in A. If A ≥ B, the divide-overflow flip-flop DVF is set and the operation is terminated prematurely. If A < B, no divide overflow occurs, so the value of the dividend is restored by adding B to A.
3. The division of the magnitudes starts by shifting the dividend in AQ to the left, with the high-order bit shifted into E. If the bit shifted into E is 1, we know that EA > B because EA consists of a 1 followed by n − 1 bits while B consists of only n − 1 bits. In this case, B must be subtracted from EA and 1 inserted into Qn for the quotient bit.
4. If the shift-left operation inserts a 0 into E, the divisor is subtracted by adding its 2's complement value and the carry is transferred into E. If E = 1, it signifies that A ≥ B; therefore, Qn is set to 1. If E = 0, it signifies that A < B and the original number is restored by adding B to A. In the latter case, a 0 is left in Qn.
This process is repeated with registers EAQ. After n times, the quotient is formed in register Q and the remainder is found in register A.
Figure (r ): Flowchart for Divide operation
Figure (s): Example of Binary Division
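The compare/shift/subtract steps above can be mirrored with a small Python sketch of restoring division on unsigned integers. Register-level details (E, A, Q and the restore-by-adding-B step) are folded into plain integer arithmetic, and the function name and test values are illustrative.

def restoring_divide(dividend, divisor, n):
    """Restoring division of an unsigned 2n-bit dividend by an n-bit divisor.
    Returns (quotient, remainder); mirrors the shift/subtract/restore steps."""
    if divisor == 0 or (dividend >> n) >= divisor:
        raise OverflowError("divide overflow (DVF would be set)")
    a, q = dividend >> n, dividend & ((1 << n) - 1)   # A = high half, Q = low half
    for _ in range(n):
        a = (a << 1) | (q >> (n - 1))                 # shift AQ left, high bit of Q into A
        q = (q << 1) & ((1 << n) - 1)
        if a >= divisor:                              # E would be 1: subtract, set Qn = 1
            a -= divisor
            q |= 1
        # else: E = 0, the subtraction is not applied ("restored"), Qn stays 0
    return q, a

print(restoring_divide(0b0111000000, 0b10001, 5))     # 448 / 17 -> (26, 6)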
Register transfer microoperations transfer binary information from one register to another.
Addition: The addition microoperation adds the contents of one register to the contents of another register and transfers the result to a third register. For example,
R3 ← R1 + R2
In the above statement, the contents of R1 are added to the contents of R2 and the result is stored in R3.
3 b) What is an Interrupt and its types? Explain the flowchart of Interrupt cycle
Interrupt: An interrupt is an event that breaks the normal flow of program execution. When an interrupt occurs, the computer deviates momentarily from what it is doing to take care of the input or output transfer. It then returns to the current program to continue what it was doing before the interrupt. The interrupt enable flip-flop IEN can be set and cleared with two instructions.
When IEN is cleared to 0 (with the Interrupt Enable Off instruction), the flags cannot interrupt the computer.
When IEN is set to 1 (with the Interrupt Enable On instruction), the computer can be interrupted. An interrupt flip-flop R is also included, and the computer checks whether R = 0 or 1 at the end of each instruction cycle.
When R = 0, the computer goes through an instruction cycle; when R = 1, it goes through an interrupt cycle.
Interrupt Cycle : The interrupt cycle is a hardware implementation of a branch and save return address operation.
The return address available in PC is stored in a specific location where it can be found later when the program
returns to the instruction at which it was interrupted. This location may be a processor register, a memory stack, or
a specific memory location. The way that the interrupt is handled by the computer can be explained by means of
the flowchart
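The decision made at the end of each instruction cycle, as described by the flowchart, can be sketched as follows. The fixed save location and branch address follow the common basic-computer convention of saving the return address at location 0 and branching to location 1; the function and variable names are mine.

# Illustrative sketch: the check made at the end of each instruction cycle,
# as described by the interrupt-cycle flowchart.

def end_of_instruction(pc, ien, r, memory):
    if ien and r:                 # interrupts enabled and an interrupt is pending
        memory[0] = pc            # save return address in a fixed location (e.g. address 0)
        pc = 1                    # branch to the interrupt service routine at address 1
        ien, r = 0, 0             # disable further interrupts and clear the request
    return pc, ien, r

pc, ien, r = end_of_instruction(pc=0x2AC, ien=1, r=1, memory={})
print(hex(pc), ien, r)            # 0x1 0 0 -> service routine runs next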
4 a) Describe the data transmission between CPU registers during the execution of instructions
through Register transfer notation
Register Transfer:
The transfer of information from one register to another, represented in symbolic form by means of a replacement operator, is called a register transfer.
Replacement Operator:
In the statement R2 ← R1, the symbol ← acts as a replacement operator. This statement specifies the transfer of the contents of register R1 into register R2.
Registers can be shown in block diagram form in several ways –
The general way of representing a register is by the name of the register enclosed in a rectangular box, as shown in (a).
The numbering of the bits in a register can be marked on top of the box, as shown in (c).
A 16-bit register PC can be divided into two parts: bits 0 to 7 are assigned to the lower byte of the 16-bit address and bits 8 to 15 to the higher byte, as shown in (d).
Symbol                 Description                                        Example
( )                    Denotes a part of a register                       R1(8-bit), R1(0-7)
,  (comma)             Separates two microoperations of a register        R1 ← R2, R2 ← R1
                       transfer performed simultaneously
:  (colon)             Denotes a conditional (control) operation          P: R2 ← R1  (executed if P = 1)
:= (naming operator)   Denotes another name (alias) for an already        Ra := R1
                       existing register
The operations performed on the data stored in registers are referred to as register transfer operations.
1. Unconditional Transfer –
The contents of R1 are copied into R2 without affecting the contents of R1. It is an unconditional type of transfer operation, written R2 ← R1.
2. Conditional Transfer –
It indicates that if P=1, then the content of R1 is transferred to R2. It is a unidirectional operation.
A control function is a Boolean variable that is equal to 1 or 0. If the signal is 1, the action takes place. This is represented as:
P: R2 ← R1
where P is a control signal generated in the control section. The control condition is terminated with a colon. It symbolizes the requirement that the transfer operation be executed by the hardware only if P = 1.
If two or more operations are to occur simultaneously, they are separated with commas; these are called simultaneous transfers. For example,
P: R3 ← R5, MAR ← IR
Here, if the control function P = 1, the contents of R5 are loaded into R3 and, at the same time (same clock), the contents of register IR are loaded into register MAR, as in the sketch below.
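A minimal Python sketch of the simultaneous conditional transfer P: R3 ← R5, MAR ← IR described above; the dictionary-of-registers model and the sample values are illustrative.

# Sketch of "P: R3 <- R5, MAR <- IR": both transfers occur in the same clock
# cycle only when the control function P is 1.

regs = {'R3': 0, 'R5': 42, 'MAR': 0, 'IR': 0x1A5}
P = 1

if P:                                          # control function evaluated each clock
    new_r3, new_mar = regs['R5'], regs['IR']   # read all sources first (simultaneous)
    regs['R3'], regs['MAR'] = new_r3, new_mar  # then load the destinations together

print(regs)    # {'R3': 42, 'R5': 42, 'MAR': 421, 'IR': 421}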
4 b) Write about Control functions and Micro operations of a basic computer
Register transfer microoperations transfer binary information from one register to another.
Logic microoperations perform bit manipulation operations on nonnumeric data stored in registers.
Addition: The addition microoperation adds the contents of one register to the contents of another register and transfers the result to a third register. For example,
R3 ← R1 + R2
In the above statement, the contents of R1 are added to the contents of R2 and the result is stored in R3.
Logic microoperations specify binary operations for strings of bits stored in registers. They are bit-wise operations, i.e., they operate on the individual bits of the data. There are 16 different logic microoperations; a few of them are illustrated below.
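A few of these logic microoperations can be sketched as bitwise operations on 16-bit register contents; the register width and sample operands are illustrative choices.

# A few of the 16 logic microoperations, applied bitwise to 16-bit registers.
WIDTH_MASK = 0xFFFF

def logic_micro_ops(a, b):
    return {
        'AND (A and B)': a & b,
        'OR  (A or B)': a | b,
        'XOR (A xor B)': a ^ b,
        'NOT A (complement)': ~a & WIDTH_MASK,
    }

for name, result in logic_micro_ops(0b1010101010101010, 0b0000111100001111).items():
    print(f'{name}: {result:016b}')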
5. a) Explain about micro programmed control organization with neat sketch
• The control memory is assumed to be a ROM, within which all control information is
permanently stored.
• The control memory address register specifies the address of the microinstruction, and the
control data register holds the microinstruction read from memory.
• The microinstruction contains a control word that specifies one or more microoperations for the
data processor. Once these operations are executed, the control must determine the next
address.
• The location of the next microinstruction may be the one next in sequence, or it may be located
somewhere else in the control memory.
• While the microoperations are being executed, the next address is computed in the next
address generator circuit and then transferred into the control address register to read the next
microinstruction.
• Thus a microinstruction contains bits for initiating microoperations in the data processor part
and bits that determine the address sequence for the control memory.
• The next address generator is sometimes called a micro-program sequencer, as it determines
the address sequence that is read from control memory.
• Typical functions of a micro-program sequencer are incrementing the control address register
by one, loading into the control address register an address from control memory, transferring
an external address, or loading an initial address to start the control operations.
• The control data register holds the present microinstruction while the next address is computed
and read from memory.
• The data register is sometimes called a pipeline register.
• It allows the execution of the microoperations specified by the control word simultaneously
with the generation of the next microinstruction.
• This configuration requires a two-phase clock, with one clock applied to the address register and
the other to the data register.
• The main advantage of microprogrammed control is the fact that once the hardware configuration is established, there should be no need for further hardware or wiring changes.
• If we want to establish a different control sequence for the system, all we need to do is specify a
different set of microinstructions for control memory.
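The fetch/execute loop implied by this organization can be sketched as follows; the contents of the control ROM and the field names are placeholders, not an actual microprogram.

# Illustrative sketch: each control-memory word holds a control word plus
# next-address information, as in the organization described above.

control_rom = [
    {'control_word': 'T0 ops', 'next': 1},
    {'control_word': 'T1 ops', 'next': 2},
    {'control_word': 'T2 ops', 'next': 0},   # wrap back to the start
]

car = 0                                      # control address register
for _ in range(6):                           # run a few microinstruction cycles
    cdr = control_rom[car]                   # control data (pipeline) register
    print('executing:', cdr['control_word']) # microoperations go to the data processor
    car = cdr['next']                        # next-address generator picks the next address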
5 b) What is a stack? What is the need for stack memory in computers? Discuss the organization of stack
STACK Organization:
• A useful feature that is included in the CPU of most computers is a stack or last-in, first-out
(LIFO) list. A stack is a storage device that stores information in such a manner that the item
stored last is the first item retrieved.
• The operation of a stack can be compared to a stack of trays. The last tray placed on top of the stack
is the first to be taken off.
• The register that holds the address for the stack is called a stack pointer (SP) because its value
always points at the top item in the stack.
• The two operations of a stack are the insertion and deletion of items:
(i) Push or push-down (insertion operation)
(ii) Pop or pop-up (deletion operation)
Push:
Initially, SP is cleared to 0, EMPTY is set to 1, and FULL is cleared to 0, so that SP points to the word at address 0 and the stack is marked empty and not full. If the stack is not full (if FULL = 0), a new item is inserted with a push operation. The push operation is implemented with the following sequence of microoperations:
SP ← SP + 1                      Increment stack pointer
M[SP] ← DR                       Write item on top of the stack
If (SP = 0) then (FULL ← 1)      Check if stack is full
EMPTY ← 0                        Mark the stack not empty
Pop:
A new item is deleted from the stack if the stack is not empty (if EMPTY = 0). The pop operation consists of the following sequence of microoperations:
DR ← M[SP]                       Read item from the top of the stack
SP ← SP − 1                      Decrement stack pointer
If (SP = 0) then (EMPTY ← 1)     Check if stack is empty
FULL ← 0                         Mark the stack not full
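A Python sketch of this 64-word register stack, with SP, FULL, and EMPTY managed as described above; the wrap-around arithmetic models a 6-bit stack pointer.

# Sketch of a 64-word register stack with SP, FULL and EMPTY flags.
stack = [0] * 64
SP, FULL, EMPTY = 0, 0, 1        # initial state: SP = 0, stack empty, not full

def push(dr):
    global SP, FULL, EMPTY
    if FULL:
        raise OverflowError('stack full')
    SP = (SP + 1) % 64           # SP <- SP + 1
    stack[SP] = dr               # M[SP] <- DR
    if SP == 0: FULL = 1         # SP wrapped back to 0: the stack is full
    EMPTY = 0

def pop():
    global SP, FULL, EMPTY
    if EMPTY:
        raise IndexError('stack empty')
    dr = stack[SP]               # DR <- M[SP]  (read item from top of stack)
    SP = (SP - 1) % 64           # SP <- SP - 1
    if SP == 0: EMPTY = 1        # back at the bottom: stack is empty again
    FULL = 0
    return dr

push(10); push(20)
print(pop(), pop())              # 20 10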
• Memory Stack
• A stack can exist as a stand-alone unit as in Figure (3) or can be implemented in a random-access
memory attached to a CPU.
• The implementation of a stack in the CPU is done by assigning a portion of memory to a stack
operation and using a processor register as a stack pointer.
• Figure (4) shows a portion of computer memory partitioned into three segments: program, data, and
stack.
Figure (5): Computer memory with program, data and stack segments
A stack organization is very effective for evaluating arithmetic expressions. The common mathematical
method of writing arithmetic expressions imposes difficulties when evaluated by a computer.
6. a) Define the terms Microinstruction, Micro program, Control word and Control memory
Micro instruction:
A microinstruction is a word in control memory whose bits specify one or more microoperations for the data processor, together with information used to determine the next address.
A symbolic microprogram can be translated into its binary equivalent by means of an assembler. Each line of the assembly-language microprogram defines a symbolic microinstruction. Each symbolic microinstruction is divided into five fields: label, microoperations, CD, BR, and AD.
Micro program:
A sequence of microinstructions constitutes a microprogram.
Since alterations of the microprogram are not needed once the control unit is in
operation, the control memory can be a read-only memory (ROM).
ROM words are made permanent during the hardware production of the unit.
The use of a micro program involves placing all control variables in words of ROM
for use by the control unit through successive read operations.
The content of the word in ROM at a given address specifies a microinstruction.
Control Word:
The control variables at any given time can be represented by a string of 1's and 0's called a control word.
Control Memory:
Control Memory is the storage in the microprogrammed control unit to store the
microprogram.
6 b) What is the need for designing the micro-instruction sequencing technique? Briefly discuss various micro-
instruction sequencing techniques
• The microinstruction format for the control memory is shown in figure 4.5.
• The 20 bits of the microinstruction are divided into four functional parts as follows:
• The three fields F1, F2, and F3 specify microoperations for the computer.
• The microoperations are subdivided into three fields of three bits each. The three bits in each field are
encoded to specify seven distinct microoperations. This gives a total of 21 microoperations.
• The CD field selects status bit conditions.
• The BR field specifies the type of branch to be used.
• The AD field contains a branch address. The address field is seven bits wide, since the control memory has 128 = 2^7 words.
The BR (branch) field consists of two bits. It is used, in conjunction with the address field AD, to choose the address of the next microinstruction, as shown in the table.
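How the BR field and the selected condition choose the next address can be sketched as below. The bit codes used (00 jump, 01 call, 10 return, 11 map) follow the common textbook convention and are an assumption here, since the table itself is not reproduced in this answer.

# Hedged sketch: choosing the next control-memory address from the BR field.
def next_address(car, ad, br, condition, sbr, opcode):
    if br == 0b00:                                   # JMP: branch if condition, else CAR+1
        return (ad if condition else car + 1), sbr
    if br == 0b01:                                   # CALL: as JMP, but save return address
        return (ad if condition else car + 1), (car + 1 if condition else sbr)
    if br == 0b10:                                   # RET: return from subroutine
        return sbr, sbr
    return (opcode << 2), sbr                        # MAP: address derived from the opcode

car, sbr = next_address(car=5, ad=40, br=0b00, condition=True, sbr=0, opcode=0b1010)
print(car)                                           # 40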
Symbolic Microinstruction
7 a) Discuss in detail about Cache memory mapping techniques with neat diagram
Cache Memory
The data or contents of main memory that are used frequently by the CPU are stored in the cache memory so that the processor can access that data in a shorter time. Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found in the cache memory, the CPU then accesses the main memory.
Cache memory is placed between the CPU and the main memory. The block diagram for a cache
memory can be represented as:
o When the CPU needs to access memory, the cache is examined. If the word is found in the cache,
it is read from the fast memory.
o If the word addressed by the CPU is not found in the cache, the main memory is accessed to read
the word.
o A block of words containing the one just accessed is then transferred from main memory to cache memory. The block size may vary from one word (the one just accessed) to about 16 words adjacent to the one just accessed.
o The performance of the cache memory is frequently measured in terms of a quantity called hit
ratio.
o When the CPU refers to memory and finds the word in cache, it is said to produce a hit.
o If the word is not found in the cache, it is in main memory and it counts as a miss.
o The ratio of the number of hits divided by the total CPU references to memory (hits plus misses) is
the hit ratio.
During a read operation, when the CPU finds a word in the cache, main memory is not involved in the transfer. When the operation is a write, however, there are two ways the system can proceed.
1. Write-Through Method: The simplest method is to update main memory with every memory write operation, with the cache memory being updated in parallel whenever it contains the word at the specified address. This is known as the write-through method. A sketch of this policy is given below.
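A minimal sketch of the lookup and write-through behaviour described above; the memory size and the unlimited-capacity cache are simplifications of mine.

# Sketch: a cache checked before main memory, with hit-ratio counting
# and write-through on writes.

main_memory = {addr: 0 for addr in range(1024)}
cache = {}                                  # address -> word (capacity handling omitted)
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:                       # word found in cache: a hit
        hits += 1
        return cache[addr]
    misses += 1                             # not in cache: read main memory,
    cache[addr] = main_memory[addr]         # then bring the word into the cache
    return cache[addr]

def write(addr, word):
    main_memory[addr] = word                # write-through: always update main memory
    if addr in cache:                       # and the cache in parallel if it holds the word
        cache[addr] = word

write(100, 7); read(100); read(100); read(100)
print('hit ratio:', hits / (hits + misses))   # 2 hits out of 3 references -> about 0.67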
7 b) What is page replacement mechanism? Discuss about LRU algorithms with example.
When a page that resides only in virtual memory is requested by a process for its execution, the operating system must decide which existing page in main memory will be replaced by the requested page. This process is known as page replacement.
Page replacement depends on the number of frames available. Frames are filled with pages on demand, in the order in which the pages are first referenced; this is achieved with the help of demand paging. The real problem starts once all of the frames are full.
After the frames are filled, the next page in the reference sequence must be checked against the frames. If the page is already present in one of the allocated frames, no replacement is needed.
If the page being searched for is found among the frames, it is known as a page hit.
If the page being searched for is not found among the frames, it is known as a page fault.
When a page fault occurs and a frame must be freed, the Least Recently Used (LRU) page replacement algorithm comes into the picture.
The Least Recently Used (LRU) page replacement algorithm works on a simple principle: replace the page that has not been used for the longest period of time, i.e., the least recently used page.
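A worked example of LRU replacement follows; the reference string and the three-frame size are illustrative choices, not taken from the question.

# LRU page replacement on an illustrative reference string with 3 frames.
reference_string = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]
frames, capacity = [], 3
faults = 0

for page in reference_string:
    if page in frames:                      # page hit: mark it most recently used
        frames.remove(page)
        frames.append(page)
    else:                                   # page fault
        faults += 1
        if len(frames) == capacity:
            frames.pop(0)                   # evict the least recently used page
        frames.append(page)

print('page faults:', faults)               # 8 faults for this reference string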
8 a) What is a Virtual Memory? Explain the process of converting virtual addresses to physical addresses with a
neat diagram
Virtual Memory
A virtual memory system provides a mechanism for translating program-generated addresses
into correct main memory locations.
Address space
An address used by a programmer will be called a virtual address, and the set of such
addresses is known as address space.
Memory space
An address in main memory is called a location or physical address. The set of such
locations is called the memory space.
[Figure 9.9: Relation between address and memory space in a virtual memory system. The address space (1024K = 2^20 words) holds Program 1, Data 1,1, Data 1,2, Program 2, and Data 2,1; the memory space holds Program 1 and Data 1,1.]
Suppose a computer has a main-memory capacity of 64K words (K = 1024). Sixteen bits are required to specify a physical address in memory, because 64K = 2^16.
Assume that the computer has auxiliary memory large enough to store information equivalent to the capacity of 16 main memories.
In a multiprogramming computer system, data and programs are transferred to and from auxiliary memory and main memory as demanded by the CPU.
Suppose Program 1 is currently being executed in the CPU. Program 1 and a part of its associated data are moved from auxiliary memory into main memory, as displayed in the figure.
Portions of programs and data need not be in contiguous locations in memory, because information is being moved in and out and empty spaces may be available in scattered locations in memory.
In our illustration, the address field of an instruction code comprises 20 bits, whereas physical memory addresses are specified with only 16 bits.
Thus the CPU references instructions and data with a 20-bit address, but the information at this address must be taken from physical memory, because access to auxiliary storage for individual words would be prohibitively long.
A mapping table is therefore required, as displayed in the figure below, to map a virtual address of 20 bits to a physical address of 16 bits.
Mapping is a dynamic operation, which means that every address is translated immediately as the CPU references a word.
8 b) What is Asynchronous Data Transfer? Explain any one method to achieve the asynchronous way of data
transfer
In asynchronous data transfer, two techniques are used, based on control signals that accompany the data transfer:
i. Strobe Control
ii. Handshaking
Strobe Signal :
The strobe control method of Asynchronous data transfer employs a single control line to
time each transfer. The strobe may be activated by either the source or the destination
unit.
In the block diagram fig. (a), the data bus carries the binary information from source to
destination unit. Typically, the bus has multiple lines to transfer an entire byte or word. The
strobe is a single line that informs the destination unit when a valid data word is available.
As shown in the timing diagram of fig. (b), the source unit first places the data on the data bus and, after a brief delay, activates the strobe. The information on the data bus and the strobe signal remain in the active state long enough to allow the destination unit to receive the data.
In the destination-initiated transfer, the destination unit activates the strobe pulse to inform the source that it should provide the data. The source responds by placing the requested binary information on the data bus.
The data must be valid and must remain on the bus long enough for the destination unit to accept it. Once the data have been accepted, the destination unit disables the strobe and the source unit removes the data from the bus.
Disadvantage of Strobe Signal:
The disadvantage of the strobe method is that a source unit that initiates the transfer has no way of knowing whether the destination unit has actually received the data item that was placed on the bus. Similarly, a destination unit that initiates the transfer has no way of knowing whether the source unit has actually placed the data on the bus.
The handshaking method solves this problem.
Handshaking:
Principle of Handshaking:
The basic principle of the two-wire handshaking method of data transfer is as follows:
One control line runs in the same direction as the data flow in the bus, from the source to the destination. It is used by the source unit to inform the destination unit whether there is valid data on the bus. The other control line runs in the opposite direction, from the destination to the source. It is used by the destination unit to inform the source whether it can accept the data. The sequence of control during the transfer depends on the unit that initiates the transfer.
The sequence of events shows four possible states that the system can be in at any given time.
The source unit initiates the transfer by placing the data on the bus and enabling its data valid signal. The data accepted signal is activated by the destination unit after it accepts the data from the bus. The source unit then disables its data valid signal, the destination unit in turn disables its data accepted signal, and the system returns to its initial state.
In the destination-initiated transfer, the name of the signal generated by the destination unit is changed to ready for data to reflect its new meaning. The source unit in this case does not place data on the bus until after it receives the ready for data signal from the destination unit. From there on, the handshaking procedure follows the same pattern as in the source-initiated case.
The only difference between the source-initiated and the destination-initiated transfer is in their choice of initial state.
The handshaking scheme provides a degree of flexibility and reliability, because the successful completion of a data transfer relies on active participation by both units.
If one of the units is faulty, the data transfer will not be completed. Such an error can be detected by means of a timeout mechanism, which produces an alarm if the transfer is not completed within a preset time.
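The source-initiated handshake sequence can be sketched as a series of states of the two control lines; the function and signal names are mine.

# Sketch of the source-initiated two-wire handshake described above.
# Each step prints the state of the two control lines and the bus.

def source_initiated_transfer(word):
    data_bus, data_valid, data_accepted = None, 0, 0

    data_bus, data_valid = word, 1          # 1. source places data, enables data valid
    print('valid=1, accepted=0, bus =', data_bus)

    received = data_bus                     # 2. destination takes the data from the bus
    data_accepted = 1                       #    and activates data accepted
    print('valid=1, accepted=1')

    data_valid, data_bus = 0, None          # 3. source disables data valid, invalidates bus
    data_accepted = 0                       # 4. destination disables data accepted
    print('valid=0, accepted=0  (initial state)')
    return received

print('received:', source_initiated_transfer(0b1011))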
The memory-page table consists of eight words, one for each page.
The address in the page table denotes the page number, and the content of the word gives the block number where that page is stored in main memory.
The table shows that pages 1, 2, 5, and 6 are now available in main memory in blocks 3, 0, 1,
and 2, respectively.
A presence bit in each location indicates whether the page has been transferred from
auxiliary memory into main memory.
A 0 in the presence bit indicates that this page is not available in main memory
The CPU references a word in memory with a virtual address of 13 bits.
The three high-order bits of the virtual address specify a page number and also an
address for the memory-page table.
The content of the word in the memory page table at the page number address is read out
into the memory table buffer register.
If the presence bit is a 1, the block number thus read is transferred to the two high-order bits
of the main memory address register.
The line number from the virtual address is transferred into the 10 low-order bits of the
memory address register.
A read signal to main memory transfers the content of the word to the main memory
buffer register ready to be used by the CPU.
If the presence bit in the word read from the page table is 0, it signifies that the content of
the word referenced by the virtual address does not reside in main memory.
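The lookup just described can be sketched as follows. The page-table contents follow the example in the text (pages 1, 2, 5, 6 in blocks 3, 0, 1, 2), while the function name and the sample virtual address are illustrative.

# Sketch of the memory-page-table lookup described above: 13-bit virtual
# address = 3-bit page number + 10-bit line number.

page_table = {                      # page -> (block number, presence bit)
    0: (0, 0), 1: (3, 1), 2: (0, 1), 3: (0, 0),
    4: (0, 0), 5: (1, 1), 6: (2, 1), 7: (0, 0),
}

def translate(virtual_address):
    page = virtual_address >> 10                 # three high-order bits
    line = virtual_address & 0x3FF               # ten low-order bits
    block, presence = page_table[page]
    if presence == 0:                            # page not in main memory
        raise LookupError('page fault: page %d' % page)
    return (block << 10) | line                  # 12-bit physical address

print(bin(translate(0b101_0101010101)))          # page 5 -> block 1: 0b10101010101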
9 a) List out the benefits of Multiprocessing? Give the important characteristics of multiprocessor
systems
Multiprocessor:
Enhanced performance.
Multiple applications.
Multi-tasking inside an application.
High throughput and responsiveness.
Hardware sharing among CPUs.
Advantages:
Disadvantages:
Basic five-stage pipeline in a RISC machine (IF = Instruction Fetch, ID = Instruction Decode, EX = Execute,
MEM = Memory access, WB = Register write back).
Instruction fetch
The instructions reside in memory, which takes one cycle to read. This memory can be a dedicated SRAM or an instruction cache. The term "latency", used often in computer science, means the time from when an operation starts until it completes.
Instruction decode
Once fetched from the instruction cache, the instruction bits are shifted down the pipeline, where simple combinational logic in each pipeline stage produces control signals for the data path directly from the instruction bits.
Execute:-
The Execute stage is where the actual computation occurs. Typically this stage consists of an ALU and a bit shifter. It may also include a multi-cycle multiplier and divider.
Memory access:-
During this stage, single cycle latency instructions simply have their results forwarded to the
next stage.
Write back:-
During this stage, both single-cycle and two-cycle instructions write their results into the register file. Note that two different stages access the register file at the same time: the decode stage reads two source registers while the write-back stage writes a previous instruction's destination register.
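A small sketch of instructions flowing through these five stages, one stage per clock cycle; hazards and stalls are ignored and the instruction labels are placeholders.

# Sketch: four instructions moving through the five-stage pipeline, one stage
# per clock cycle.

stages = ['IF', 'ID', 'EX', 'MEM', 'WB']
instructions = ['i1', 'i2', 'i3', 'i4']

for cycle in range(len(instructions) + len(stages) - 1):
    active = []
    for i, instr in enumerate(instructions):
        stage_index = cycle - i                      # instruction i enters IF at cycle i
        if 0 <= stage_index < len(stages):
            active.append(f'{instr}:{stages[stage_index]}')
    print(f'cycle {cycle + 1}:', '  '.join(active))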
The following diagram shows one possible way of separating the execution unit
into eight functional units operating in parallel.
The operation performed in each functional unit is indicated in each block of the diagram:
The adder and the integer multiplier perform arithmetic operations on integer numbers.
The floating-point operations are separated into three circuits operating in parallel.
The logic, shift, and increment operations can be performed concurrently on
different data. All units are independent of each other, so one number can be
shifted while another number is being incremented.
The following block diagram represents the combined as well as the sub-operations
performed in each segment of the pipeline.
Registers R1, R2, R3, and R4 hold the data and the combinational circuits operate
in a particular segment.
In general, the pipeline organization is applicable for two areas of computer design
which includes:
(i)Arithmetic Pipeline
(ii)Instruction Pipeline