Microprocessor & Computer Architecture: 14CS253 / UE14CS253


MICROPROCESSOR
&
COMPUTER ARCHITECTURE
14CS253 / UE14CS253
Session - 2

Processor design trade-offs (Ch 1.5 of T3, Page 31 onwards)
What does the processor do?
Misconception - computers spend their time computing, i.e., carrying out arithmetic operations on user data.
Although they do a fair amount of arithmetic, most of this is with addresses, in order to locate the relevant data items and program routines.
- Then, having found the user's data, most of the work is in moving it around rather than processing it in any transformational sense.
Consider a scenario: the programmer generally wants to express his or her program in as abstract a way as possible, using a high-level language which supports ways of handling concepts that are appropriate to the problem.
- Modern trends towards functional and object-oriented languages move the level of abstraction higher.
- The semantic gap between a high-level language and the machine's instruction set therefore widens, and it is left to the compiler to bridge it.

Processor design trade-offs

A typical set of statistics was gathered by running a print preview program on an ARM instruction emulator; the figures are broadly typical of what may be expected from other programs and instruction sets.

The instructions most worth optimizing are those concerned with data movement, either between the processor registers and memory or from register to register.
Secondly, the control flow instructions such as branches and procedure calls.
Arithmetic operations are down at 15%, as are comparisons at 13%.
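Such a breakdown comes from counting each emulated instruction by category and reporting percentages. The sketch below shows only that bookkeeping; the category counts in it are hypothetical placeholders (only the 15% and 13% figures above are from the slide).

    #include <stdio.h>

    /* Hypothetical category counts standing in for an emulator trace tally. */
    int main(void)
    {
        const char *category[] = { "data movement", "control flow",
                                   "arithmetic", "comparison", "other" };
        unsigned long count[]  = { 4000, 2500, 1500, 1300, 700 };  /* placeholders */
        unsigned long total = 0;
        int n = sizeof(count) / sizeof(count[0]);

        for (int i = 0; i < n; i++)
            total += count[i];
        for (int i = 0; i < n; i++)
            printf("%-14s %5.1f%%\n", category[i], 100.0 * count[i] / total);
        return 0;
    }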

Processor design trade-offs


Now that it is clear what processors spend their time doing, is there a way to make the processor work faster?
The most important technique is pipelining - a very effective way of exploiting concurrency in a general-purpose processor.
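As a rough illustration (not from the slides), the cycle-count arithmetic below shows why an ideal k-stage pipeline approaches one instruction per cycle once it is full; the instruction and stage counts are example values.

    #include <stdio.h>

    /* Ideal pipeline timing: fill the k stages once, then retire one
     * instruction per cycle. Example values only. */
    int main(void)
    {
        unsigned long n = 1000;   /* instructions executed */
        unsigned long k = 5;      /* pipeline stages       */

        unsigned long no_pipe = n * k;        /* one instruction at a time */
        unsigned long pipe    = k + (n - 1);  /* pipelined                 */

        printf("non-pipelined: %lu cycles\n", no_pipe);
        printf("pipelined    : %lu cycles (speed-up ~%.2fx)\n",
               pipe, (double)no_pipe / pipe);
        return 0;
    }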

Another important technique is the use of a cache memory.


A CPU cache is a cache used by the CPU to reduce the average time to access data from the main memory.
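A minimal sketch of a direct-mapped cache lookup follows; the line size, line count, and access pattern are illustrative assumptions, not details from the slides.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy direct-mapped cache: each address maps to exactly one line. */
    #define LINE_BYTES 32u
    #define NUM_LINES  256u

    struct line { bool valid; uint32_t tag; };
    static struct line cache[NUM_LINES];

    static bool access_cache(uint32_t addr)
    {
        uint32_t index = (addr / LINE_BYTES) % NUM_LINES;  /* which line      */
        uint32_t tag   = addr / (LINE_BYTES * NUM_LINES);  /* rest of address */

        if (cache[index].valid && cache[index].tag == tag)
            return true;                                   /* hit             */
        cache[index].valid = true;                         /* miss: fill line */
        cache[index].tag   = tag;
        return false;
    }

    int main(void)
    {
        unsigned hits = 0, total = 0;
        /* A sequential walk hits 7 times out of 8 on a 32-byte line. */
        for (uint32_t addr = 0; addr < 64 * 1024; addr += 4, total++)
            hits += access_cache(addr);
        printf("hit rate: %.1f%%\n", 100.0 * hits / total);
        return 0;
    }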

A third technique is super-scalar instruction execution.
A superscalar processor executes more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor.
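As a sketch of the idea (the instruction format here is invented, not an ARM encoding), a dual-issue processor can dispatch two adjacent instructions together only when the second does not depend on the first's result:

    #include <stdbool.h>
    #include <stdio.h>

    /* Invented three-register instruction: dest = src1 op src2. */
    struct insn { int dest, src1, src2; };

    /* Two adjacent instructions may issue in the same cycle only if the
     * second does not read the register written by the first. */
    static bool can_dual_issue(struct insn a, struct insn b)
    {
        return b.src1 != a.dest && b.src2 != a.dest;
    }

    int main(void)
    {
        struct insn i0 = { 1, 2, 3 };   /* r1 = r2 op r3                */
        struct insn i1 = { 4, 1, 5 };   /* r4 = r1 op r5: depends on i0 */
        struct insn i2 = { 6, 7, 8 };   /* r6 = r7 op r8: independent   */

        printf("i0,i1 dual-issue? %s\n", can_dual_issue(i0, i1) ? "yes" : "no");
        printf("i0,i2 dual-issue? %s\n", can_dual_issue(i0, i2) ? "yes" : "no");
        return 0;
    }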

RISC
Reduced Instruction Set Computer

RISC - REDUCED INSTRUCTION SET COMPUTER

John Cocke (May 30, 1925 - July 16, 2002) was an American computer scientist recognized for his large contribution to computer architecture and optimizing compiler design.
Originated the RISC concept in 1974.
Considered by many to be "the father of RISC architecture".
Proved that about 20% of the instructions in a computer did 80% of the work.

The term RISC is credited to David Patterson, a teacher at the University of California, Berkeley. The concept was used in Sun Microsystems' SPARC microprocessors and led to the

RISC - REDUCED INSTRUCTION SET COMPUTER (Ch 1.6 of T3, Page 36)

Architecture
A fixed (32-bit) instruction size with few formats.
A load-store architecture, where instructions that process data operate only on registers and are separate from instructions that access memory.
A large register bank (thirty-two 32-bit registers).
RISC Organization
Hard-wired instruction decode logic (design of the control unit); see the sketch after this slide.
Pipelined execution.
Single-cycle execution.
RISC Advantages
A smaller die size.
A shorter development time.
A higher performance.
RISC Drawbacks
Poor code density.
Do not execute x86 code.
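To see why a fixed 32-bit instruction size with few formats suits hard-wired decoding, consider the sketch below: every field sits at a known bit position, so decode reduces to shifts and masks. The field layout is a made-up example, not the actual ARM encoding.

    #include <stdint.h>
    #include <stdio.h>

    /* Made-up fixed 32-bit format: opcode | rd | rn | rm | unused. */
    int main(void)
    {
        uint32_t insn = 0x01234567u;            /* example instruction word */

        uint32_t opcode = (insn >> 26) & 0x3Fu; /* bits 31..26 */
        uint32_t rd     = (insn >> 21) & 0x1Fu; /* bits 25..21 */
        uint32_t rn     = (insn >> 16) & 0x1Fu; /* bits 20..16 */
        uint32_t rm     = (insn >> 11) & 0x1Fu; /* bits 15..11 */

        printf("opcode=%u rd=%u rn=%u rm=%u\n",
               (unsigned)opcode, (unsigned)rd, (unsigned)rn, (unsigned)rm);
        return 0;
    }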

CISC
Complex Instruction Set Computer

CISC - COMPLEX INSTRUCTION SET COMPUTER
Preamble
Earliest machines were programmed in assembly language, and memory was slow and expensive.

The CISC philosophy made sense, and was commonly implemented in such large computers as the PDP-11 and the DECsystem-10 and -20 machines.

Most common microprocessor designs, such as the Intel 80x86 and Motorola 68K series, followed the CISC philosophy.

But recent changes in software and hardware technology have forced a re-examination of CISC, and many modern CISC processors are hybrids, implementing many RISC principles, and vice versa.

CISC was developed to make compiler development simpler. It shifts most of the burden of generating machine instructions to the processor.

CISC - COMPLEX INSTRUCTION SET COMPUTER
Architecture (Ch 1.6 of T3)

CISC processors typically had variable-length instruction sets with many formats (see the decoding sketch after this slide).
Typically allowed values in memory to be used as operands in data processing instructions.
A 2-operand instruction format.
Register sets were getting larger, but none was as large as in RISC, and most processors had different registers for different purposes.
CISC Organization
Microprogrammed instruction decode logic (design of the control unit).
It was easier to implement and less expensive.

CISC processors allowed little, if any, overlap between consecutive instructions.
The ease of microcoding new instructions allowed designers to make CISC machines upwardly compatible.
It may take many clock cycles to complete a single instruction.
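By contrast with the fixed-size RISC case, the sketch below shows why variable-length instructions complicate decoding: the decoder must inspect the first byte before it even knows where the next instruction starts. The opcode/length table is invented for illustration and does not correspond to any real CISC encoding.

    #include <stdint.h>
    #include <stdio.h>

    /* Invented opcode -> instruction length mapping. */
    static unsigned insn_length(uint8_t opcode)
    {
        switch (opcode) {
        case 0x01: return 1;   /* register-to-register op    */
        case 0x02: return 2;   /* op with one immediate byte */
        case 0x03: return 5;   /* op with a 32-bit immediate */
        default:   return 1;
        }
    }

    int main(void)
    {
        uint8_t code[] = { 0x01, 0x02, 0xAA, 0x03, 0x11, 0x22, 0x33, 0x44 };
        unsigned pc = 0;

        while (pc < sizeof(code)) {               /* each instruction's length  */
            unsigned len = insn_length(code[pc]); /* depends on the opcode read */
            printf("instruction at byte %u, %u byte(s) long\n", pc, len);
            pc += len;
        }
        return 0;
    }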


CISC and RISC Convergence

State-of-the-art processor technology has changed significantly since RISC chips were first introduced in the early '80s.
A number of advancements are used by both RISC and CISC processors.
The lines between the two architectures have begun to blur.
In fact, the two architectures almost seem to have adopted the strategies of the other.
Since processor speeds have increased, CISC chips are now able to execute more than one instruction within a single clock.
This also allows CISC chips to make use of pipelining.


Q&A on CISC and RISC

