
Volume 3, Issue 9, September 2013
ISSN: 2277 128X
International Journal of Advanced Research in Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com

A Comparative Analysis of SIMD and MIMD Architectures


Mandeep Kaur*
CSE (M.Tech)
SBBSIET, Jalandhar (Punjab), India

Rajdeep Kaur
CSE (M.Tech)
SBBSIET, Jalandhar (Punjab), India

Abstract: Computer architectures are classified according to their implementation. This paper describes the SIMD and MIMD architectures and their implementations; both SIMD and MIMD systems are parallel computers. It elaborates the types of SIMD architecture (true SIMD and pipelined SIMD) and of MIMD architecture (shared-memory MIMD and distributed-memory MIMD).
Keywords: Flynn's taxonomy, SIMD architecture, MIMD architecture, types of SIMD and MIMD architectures.
I.
INTRODUCTION
Computer architecture is the structure of a computer that a machine language programmer must understand in order to write a correct program for the machine [1]. Computer architectures can be classified into four main categories, defined under Flynn's taxonomy, according to the number of instruction streams running in parallel and how data is managed. The four categories are:
1. SISD: Single Instruction, Single Data
2. SIMD: Single Instruction, Multiple Data
3. MISD: Multiple Instruction, Single Data
4. MIMD: Multiple Instruction, Multiple Data

Fig 1: Flynn's Taxonomy
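As an informal illustration of the stream counts behind Flynn's taxonomy (plain Python, not real hardware), the sketch below contrasts SISD, SIMD, and MIMD; MISD is rarely realized and is omitted:

```python
# Informal sketch of Flynn's stream counts.
data = [1, 2, 3, 4]

# SISD: one instruction stream, one data item processed per step.
sisd = []
for x in data:
    sisd.append(x * x)

# SIMD: one instruction ("square") conceptually applied to all data at once.
simd = [x * x for x in data]

# MIMD: several independent instruction streams, each on its own data.
mimd = [data[0] + 1, data[1] * 2, data[2] - 3, data[3] ** 2]

print(sisd)  # [1, 4, 9, 16]
print(simd)  # [1, 4, 9, 16]
print(mimd)  # [2, 4, 0, 16]
```

SISD and SIMD compute the same result here; the difference is that SIMD performs all the multiplications under a single instruction, while MIMD lets each lane run a different instruction.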


II.
SIMD ARCHITECTURE
In Single Instruction stream, Multiple Data stream (SIMD) processors, one instruction works on several data items simultaneously, using several processing elements, all of which carry out the same operation, as illustrated in Fig 2. [4]

Fig 2: Principle of a SIMD processor [4]
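The principle in Fig 2 can be sketched in a few lines of Python (a conceptual model only, not hardware): the control unit broadcasts one instruction, and every processing element applies it to its own data item.

```python
# Toy model of Fig 2: one instruction broadcast to all processing elements.
def broadcast(instruction, lanes):
    # Every PE executes the same instruction on its local operand.
    return [instruction(x) for x in lanes]

lanes = [1, 2, 3, 4]            # one data item per processing element
doubled = broadcast(lambda x: x * 2, lanes)
print(doubled)  # [2, 4, 6, 8]
```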


SIMD systems comprise one of the three most commercially successful classes of parallel computers (the others being vector supercomputers and MIMD systems). A number of factors have contributed to this success, including:
2013, IJARCSSE All Rights Reserved

Page | 1151

Kaur et al., International Journal of Advanced Research in Computer Science and Software Engineering 3(9),
September - 2013, pp. 1151-1156
Simplicity of concept and programming
Regularity of structure
Easy scalability of size and performance
Straightforward applicability in a number of fields which demand parallelism to achieve the necessary performance.

A. Basic Principles:
There is a two-dimensional array of processing elements, each connected to its four nearest neighbors.
All processors execute the same instruction simultaneously.
Each processor incorporates local memory.
The processors are programmable, that is, they can perform a variety of functions.
Data can propagate quickly through the array. [1]

Fig 3: The archetypal SIMD system [1]


B. Implementing SIMD Architecture:
Two types of SIMD architectures exist:
1. True SIMD
2. Pipelined SIMD

True SIMD architecture: True SIMD architectures are distinguished by their use of either distributed or shared memory. Both variants have similar implementations, as seen in Fig 4, but differ in the placement of the processor and memory modules. [2]

Fig 4: True SIMD architecture [2]


True SIMD architecture with distributed memory:
A true SIMD architecture with distributed memory has a control unit that interacts with every processing element in the architecture.
Each processing element possesses its own local memory, as shown in Fig 5.
The processing elements act as arithmetic units, with instructions supplied by the control unit.
For one processing element to communicate with the memory of another on the same architecture, for example to fetch information, it must go through the control unit, which handles the transfer of information from one processing element to another.
The main drawback is performance time, since the control unit has to handle every data transfer. [2]
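The points above can be sketched as a toy Python model (the class and method names are illustrative, not from the paper): each PE owns a local memory, and any PE-to-PE transfer is routed through the control unit.

```python
# Toy model of true SIMD with distributed memory.
class ControlUnit:
    def __init__(self, num_pes):
        self.local_mem = [{} for _ in range(num_pes)]  # one local memory per PE

    def broadcast(self, op, key):
        # Every PE applies the same instruction to its own local data.
        for mem in self.local_mem:
            mem[key] = op(mem.get(key, 0))

    def transfer(self, src, dst, key):
        # Inter-PE communication passes through the control unit,
        # which is the performance bottleneck noted above.
        self.local_mem[dst][key] = self.local_mem[src][key]

cu = ControlUnit(4)
for i, mem in enumerate(cu.local_mem):
    mem["x"] = i                      # local data: 0, 1, 2, 3
cu.broadcast(lambda v: v + 10, "x")   # lockstep add on every PE
cu.transfer(src=0, dst=3, key="x")    # PE 0 -> PE 3, via the control unit
print([mem["x"] for mem in cu.local_mem])  # [10, 11, 12, 10]
```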

Fig 5: True SIMD architecture: distributed memory [2]


True SIMD architecture with shared memory:
In this architecture, a processing element does not have a local memory; instead, it is connected to a network through which it can communicate with a memory component. Fig 6 shows all the processing elements connected to the same network, which allows them to share their memory contents with one another.
In this architecture, the control unit is bypassed when processing elements share information. It still, however, provides the processing elements with instructions for computation.
The disadvantage of this architecture is that expanding it requires each module (processing element or memory) to be added and configured separately.
However, this architecture is still beneficial, since it improves performance time and information can be transferred more freely without involving the control unit.

Fig 6: True SIMD architecture: shared memory [2]
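A minimal sketch of the shared-memory variant (illustrative Python, not hardware): all PEs operate on one shared memory, so a PE can read another PE's result directly, without the control unit mediating the transfer.

```python
# Toy model of true SIMD with shared memory: one memory, one slot per PE.
shared = [0, 1, 2, 3]

def broadcast(op):
    # The control unit still issues a single instruction to every PE.
    for i in range(len(shared)):
        shared[i] = op(shared[i])

broadcast(lambda v: v * 3)   # lockstep multiply: [0, 3, 6, 9]
shared[3] = shared[0]        # PE 3 reads PE 0's result straight from shared memory
print(shared)  # [0, 3, 6, 0]
```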

Pipelined SIMD architecture: This architecture applies the logic of pipelining an instruction, as shown in Fig 7. Each processing element receives an instruction from the control unit, using a shared memory, and performs its computation in multiple stages. The control unit provides the parallel processing elements with instructions, while the sequential processing element handles other instructions. [2]

Fig 7: Pipelined SIMD architecture [2]
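The staging idea can be illustrated with a toy clock-tick simulation (the three stage functions are invented for illustration): an operation is split into stages, and at each tick every stage works on a different data item, so several items are in flight at once.

```python
# Toy clock-tick simulation of a pipelined operation with three stages.
stages = [
    lambda x: x + 1,   # stage 0 (e.g. operand preparation; illustrative)
    lambda x: x * 2,   # stage 1 (compute)
    lambda x: x - 3,   # stage 2 (write back)
]

def run_pipeline(data):
    in_flight = [None] * len(stages)   # item currently held at each stage
    results = []
    pending = list(data)
    while pending or any(v is not None for v in in_flight):
        # One clock tick: retire the last stage, then shift items forward.
        if in_flight[-1] is not None:
            results.append(in_flight[-1])
        for s in range(len(stages) - 1, 0, -1):
            prev = in_flight[s - 1]
            in_flight[s] = stages[s](prev) if prev is not None else None
        in_flight[0] = stages[0](pending.pop(0)) if pending else None
    return results

# Each result equals ((x + 1) * 2) - 3, but items overlap in the pipeline.
print(run_pipeline([1, 2, 3]))  # [1, 3, 5]
```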


III.
MIMD ARCHITECTURE
MIMD stands for Multiple Instruction, Multiple Data. The MIMD class of parallel architecture is the most familiar and possibly the most basic form of parallel processor. A MIMD architecture consists of a collection of N independent, tightly coupled processors, each with a memory that may be common to all processors, and/or local and not directly accessible by the other processors.

Fig 8: MIMD processor [5]


There are two types of MIMD architecture:
1. Shared Memory MIMD architecture
2. Distributed Memory MIMD architecture

Shared Memory MIMD architecture:

It consists of a set of processors and memory modules.
Any processor can directly access any memory module via an interconnection network, as shown in Fig 9.
The set of memory modules defines a global address space which is shared among the processors. [1]

Fig 9: Shared Memory MIMD architecture [1]
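A minimal sketch of the shared-memory MIMD idea, using Python threads as stand-ins for processors (illustrative only): several independent instruction streams update one global address space, with a lock providing the explicit synchronization such systems need.

```python
import threading

# Toy model of shared-memory MIMD: independent instruction streams
# (threads) all write to one globally shared memory word.
counter = {"value": 0}     # stands in for a shared memory location
lock = threading.Lock()

def worker(n):
    # Each "processor" runs its own instruction stream asynchronously.
    for _ in range(n):
        with lock:         # serialize access to the shared word
            counter["value"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["value"])  # 4000
```

Without the lock, the four streams could interleave their read-modify-write sequences and lose updates, which is exactly the synchronization burden shared-memory MIMD places on the programmer.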

TABLE 1
Shared Memory MIMD classes

Features     | NUMA                      | COMA                     | CC-NUMA
Abbreviation | Non-Uniform Memory Access | Cache-Only Memory Access | Cache-Coherent Non-Uniform Memory Access
Memory use   | Shared memory is divided into blocks, and each block is attached to a processor | Every memory block works as a cache memory | Both cache memories and memory blocks are attached to the processors
Distributed Memory MIMD architecture:
It replicates processor/memory pairs and connects them via an interconnection network. Each processor/memory pair is called a processing element (PE).
The processing elements interact with one another by sending messages.

Fig 10: Distributed Memory MIMD architecture [1]
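Message passing between PEs can be sketched with Python threads and a queue standing in for the interconnection network (all names here are illustrative): each PE keeps its data private and communicates only through explicit sends and receives.

```python
import queue
import threading

# Toy model of distributed-memory MIMD: two PEs with private memory,
# communicating only by message passing over a "network" (a queue).
def pe_producer(outbox):
    local = [1, 2, 3]            # private memory of this PE
    outbox.put(sum(local))       # explicit send

def pe_consumer(inbox, results):
    msg = inbox.get()            # explicit (blocking) receive
    results.append(msg * 10)     # compute on the received value

channel = queue.Queue()          # stands in for the interconnection network
results = []
t1 = threading.Thread(target=pe_producer, args=(channel,))
t2 = threading.Thread(target=pe_consumer, args=(channel, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [60]
```

Note that neither PE ever touches the other's local data directly; all sharing goes through the channel, mirroring the explicit communication protocols that distributed-memory MIMD requires.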

IV.
COMPARATIVE ANALYSIS OF SIMD AND MIMD

Features | SIMD | MIMD
Abbreviation | Single Instruction, Multiple Data | Multiple Instruction, Multiple Data
Ease of programming and debugging | Single program; processing elements (PEs) operate synchronously | Multiple communicating programs; PEs operate asynchronously
Lower program memory requirements | One copy of the program is stored | Each PE stores its own program
Lower instruction cost | One decoder in the control unit | One decoder in each PE
Complexity of architectures | Simple | Complex
Cost | Low | Medium
Size and performance | Scalable in size and performance | Complex size and good performance
Conditional statements | Conditional statements depend on data local to the processors; all instructions of the "then" block must be broadcast, followed by all of the "else" block | The multiple instruction streams of MIMD allow more efficient execution of conditional statements (e.g. if-then-else), because each processor can independently follow either decision path
Low synchronization overheads | Implicit in the program | Explicit data structures and operations needed
Low PE-to-PE communication overheads | Automatic synchronization of all send and receive operations | Explicit synchronization and identification protocols needed
Efficient execution of variable-time instructions | Total execution time equals the sum of the maximal execution times over all processors | Total execution time equals the maximum execution time on a given processor
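The timing rules for variable-time instructions can be made concrete with a small worked example (the per-instruction times below are arbitrary, chosen only for illustration): in SIMD lockstep, every instruction costs the slowest PE's time, while in MIMD each PE finishes its own program independently.

```python
# Hypothetical execution times (arbitrary units) for a 4-instruction
# program on 3 PEs: times[i][p] is instruction i's cost on PE p.
times = [
    [2, 1, 3],
    [1, 4, 2],
    [3, 2, 2],
    [2, 2, 1],
]

# SIMD: lockstep execution, so each instruction takes the per-instruction
# maximum across PEs; the total is the sum of those maxima.
simd_total = sum(max(step) for step in times)

# MIMD: each PE runs asynchronously, so the total is the largest
# per-PE sum.
mimd_total = max(sum(step[pe] for step in times) for pe in range(3))

print(simd_total)  # 3 + 4 + 3 + 2 = 12
print(mimd_total)  # per-PE sums are 8, 9, 8 -> 9
```

On these numbers MIMD finishes in 9 units against SIMD's 12, showing why variable-time instructions favor asynchronous execution.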

V.
CONCLUSION
The purpose of this paper is to provide an overview of recent architectural approaches to parallel systems and a comparison between them. It describes two classes of Flynn's taxonomy: SIMD and MIMD. SIMD allows faster, multiple computations in fields where no sacrifice can be made on time delay. An example of a SIMD processing architecture is a graphics processor, where instructions for translation, rotation, and other operations are carried out on multiple data. Examples of MIMD processing architectures are supercomputers and distributed computing systems with distributed or single shared memory.
REFERENCES
[1] Dezso Sima, Terence Fountain, Peter Kacsuk, Advanced Computer Architectures: A Design Space Approach, Pearson Education.
[2] Houffaneh Osman, Usage of SIMD Processor Extensions, March 2010.
[3] Claudio Gennaro, Models for SIMD, MIMD and Hybrid Parallel Architectures, Ph.D. Thesis, 1995/1998.
[4] Bertil Svensson, SIMD Architecture, 1992.
[5] http://en.wikipedia.org/wiki/File:MIMD.svg
[6] Michael J. Flynn and Kevin W. Rudd, Parallel Architectures, ACM Computing Surveys, Vol. 28, No. 1, March 1996.
[7] Taxonomy of Supercomputers: http://www.cosc.brocku.ca/Offerings/3P93/notes/2-Taxonomy.pdf
