
Microprocessor

From Wikipedia, the free encyclopedia


This article is about microprocessors. For central processing units generally, see
CPU. (A microprocessor is a type of CPU.)
See also: Processor (disambiguation), System on a chip, Microcontroller, and
Digital signal processor

Texas Instruments TMS1000

Intel 4004

Motorola 6800
A microprocessor is a computer processor that incorporates the functions of a
central processing unit on a single integrated circuit (IC),[1] or at most a few
integrated circuits.[2] The microprocessor is a multipurpose, clock-driven,
register-based, digital integrated circuit that accepts binary data as input,
processes it according to instructions stored in its memory and provides results as
output. Microprocessors contain both combinational logic and sequential digital
logic. Microprocessors operate on numbers and symbols represented in the binary
number system.

The integration of a whole CPU onto a single or a few integrated circuits greatly
reduced the cost of processing power. Integrated circuit processors are produced in
large numbers by highly automated processes, resulting in a low unit price. Single-
chip processors increase reliability because there are many fewer electrical
connections that could fail. As microprocessor designs improve, the cost of
manufacturing a chip (with smaller components built on a semiconductor chip the
same size) generally stays the same according to Rock's law.

Before microprocessors, small computers had been built using racks of circuit
boards with many medium- and small-scale integrated circuits. Microprocessors
combined this into one or a few large-scale ICs. Continued increases in
microprocessor capacity have since rendered other forms of computers almost
completely obsolete (see history of computing hardware), with one or more
microprocessors used in everything from the smallest embedded systems and handheld
devices to the largest mainframes and supercomputers.

Contents
1 Structure
1.1 Special-purpose designs
1.2 Speed and power considerations
2 Embedded applications
3 History
3.1 First projects
3.1.1 CADC
3.1.2 Four-Phase Systems AL1 (1969)
3.1.3 Pico/General Instrument
3.1.4 Intel 4004 (1971)
3.1.5 TMX 1795
3.1.6 TMS 1802NC (1971)
3.1.7 Gilbert Hyatt
3.2 8-bit designs
3.3 12-bit designs
3.4 16-bit designs
3.5 32-bit designs
3.6 64-bit designs in personal computers
3.7 RISC
3.8 Multi-core designs
4 Market statistics
5 See also
6 Notes
7 References
8 External links
Structure

A block diagram of the architecture of the Z80 microprocessor, showing the
arithmetic and logic section, register file, control logic section, and buffers to
external address and data lines
The complexity of an integrated circuit is bounded by physical limitations on the
number of transistors that can be put onto one chip, the number of package
terminations that can connect the processor to other parts of the system, the
number of interconnections it is possible to make on the chip, and the heat that
the chip can dissipate. Advancing technology makes more complex and powerful chips
feasible to manufacture.

A minimal hypothetical microprocessor might include only an arithmetic logic unit
(ALU) and a control logic section. The ALU performs addition, subtraction, and
operations such as AND or OR. Each operation of the ALU sets one or more flags in a
status register, which indicate the results of the last operation (zero value,
negative number, overflow, or others). The control logic retrieves instruction
codes from memory and initiates the sequence of operations required for the ALU to
carry out the instruction. A single operation code might affect many individual
data paths, registers, and other elements of the processor.
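
As an illustration, the following C sketch models how an 8-bit ALU addition might
update such status flags. The flag set and its representation here are hypothetical
rather than those of any particular processor.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical status flags for an 8-bit ALU (not any real chip's layout). */
typedef struct {
    int zero;      /* result was 0                */
    int negative;  /* bit 7 of the result was set */
    int carry;     /* carry out of bit 7          */
    int overflow;  /* signed overflow occurred    */
} Flags;

/* Add two 8-bit values and set the flags, as an ALU's hardware would. */
static uint8_t alu_add(uint8_t a, uint8_t b, Flags *f) {
    uint16_t wide = (uint16_t)a + (uint16_t)b;   /* keep the carry bit */
    uint8_t result = (uint8_t)wide;

    f->zero     = (result == 0);
    f->negative = (result & 0x80) != 0;
    f->carry    = (wide & 0x100) != 0;
    /* Signed overflow: both operands share a sign that the result lacks. */
    f->overflow = (~(a ^ b) & (a ^ result) & 0x80) != 0;
    return result;
}

int main(void) {
    Flags f;
    uint8_t r = alu_add(0x7F, 0x01, &f);   /* 127 + 1 overflows signed range */
    printf("result=0x%02X zero=%d negative=%d carry=%d overflow=%d\n",
           r, f.zero, f.negative, f.carry, f.overflow);
    return 0;
}

The overflow test encodes the usual rule: signed overflow occurs when two operands
of the same sign produce a result of the opposite sign.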

As integrated circuit technology advanced, it was feasible to manufacture more and
more complex processors on a single chip. The size of data objects became larger;
more transistors on a chip allowed word sizes to increase from 4- and 8-
bit words up to today's 64-bit words. Additional features were added to the
processor architecture; more on-chip registers sped up programs, and complex
instructions could be used to make more compact programs. Floating-point
arithmetic, for example, was often not available on 8-bit microprocessors, but had
to be carried out in software. Integration of the floating point unit first as a
separate integrated circuit and then as part of the same microprocessor chip sped
up floating point calculations.

Occasionally, physical limitations of integrated circuits made such practices as a
bit slice approach necessary. Instead of processing all of a long word on one
integrated circuit, multiple circuits in parallel processed subsets of each data
word. While this required extra logic to handle, for example, carry and overflow
within each slice, the result was a system that could handle, for example, 32-bit
words using integrated circuits with a capacity for only four bits each.
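
The following C sketch models the bit-slice arrangement described above: a 32-bit
addition performed by eight cascaded 4-bit slices, each passing its carry out to
the next slice's carry in. It is a software illustration of the hardware scheme,
not a model of any specific slice chip.

#include <stdint.h>
#include <stdio.h>

/* Model of one 4-bit ALU slice: add two 4-bit operands plus a carry-in,
   returning the 4-bit sum; the carry-out ripples to the next slice. */
static unsigned slice_add(unsigned a4, unsigned b4, unsigned cin,
                          unsigned *cout) {
    unsigned sum = (a4 & 0xF) + (b4 & 0xF) + (cin & 1);
    *cout = (sum >> 4) & 1;
    return sum & 0xF;
}

/* Add two 32-bit words using eight cascaded 4-bit slices. */
static uint32_t add32_with_slices(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    unsigned carry = 0;
    for (int s = 0; s < 8; s++) {          /* slice 0 is least significant */
        unsigned nibble = slice_add((a >> (4 * s)) & 0xF,
                                    (b >> (4 * s)) & 0xF,
                                    carry, &carry);
        result |= (uint32_t)nibble << (4 * s);
    }
    return result;
}

int main(void) {
    printf("0x%08X\n", add32_with_slices(0x12345678u, 0x0FEDCBA8u));
    return 0;
}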

The ability to put large numbers of transistors on one chip makes it feasible to
integrate memory on the same die as the processor. This CPU cache has the advantage
of faster access than off-chip memory and increases the processing speed of the
system for many applications. Processor clock frequency has increased more rapidly
than external memory speed, so cache memory is necessary if the processor is not
delayed by slower external memory.
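
A toy direct-mapped cache in C illustrates the idea: each address maps to one cache
line, a tag comparison decides hit or miss, and only a miss has to touch the slow
external memory. The sizes and the one-word line are arbitrary simplifications, not
a description of any real cache.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINES 256                /* illustrative: 256 one-word cache lines */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t data;
} CacheLine;

static CacheLine cache[LINES];
static uint32_t main_memory[1 << 16];   /* stand-in for slow off-chip RAM */

/* Read one word through the cache. On a miss, fetch from "external"
   memory and fill the line; on a hit, the slow access is avoided. */
static uint32_t cached_read(uint32_t addr, bool *hit) {
    uint32_t index = addr % LINES;
    uint32_t tag   = addr / LINES;
    CacheLine *line = &cache[index];

    *hit = line->valid && line->tag == tag;
    if (!*hit) {                         /* miss: go to slow memory */
        line->valid = true;
        line->tag   = tag;
        line->data  = main_memory[addr];
    }
    return line->data;
}

int main(void) {
    bool hit;
    main_memory[42] = 0xDEADBEEF;
    cached_read(42, &hit);
    printf("first access:  %s\n", hit ? "hit" : "miss");
    cached_read(42, &hit);
    printf("second access: %s\n", hit ? "hit" : "miss");
    return 0;
}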

Special-purpose designs
A microprocessor is a general-purpose entity. Several specialized processing
devices have followed:

A digital signal processor (DSP) is specialized for signal processing.
Graphics processing units (GPUs) are processors designed primarily for realtime
rendering of images.
Other specialized units exist for video processing and machine vision. (See:
Hardware acceleration.)
Microcontrollers integrate a microprocessor with peripheral devices in embedded
systems.
Systems on chip (SoCs) often integrate one or more microprocessor or
microcontroller cores.
Speed and power considerations
Microprocessors can be selected for differing applications based on their word
size, which is a measure of their complexity. Longer word sizes allow each clock
cycle of a processor to carry out more computation, but correspond to physically
larger integrated circuit dies with higher standby and operating power
consumption.[3] 4-, 8- or 12-bit processors are widely integrated into
microcontrollers operating
embedded systems. Where a system is expected to handle larger volumes of data or
require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8-
or 16-bit processor may be selected over a 32-bit processor for system on a chip or
microcontroller applications that require extremely low-power electronics, or are
part of a mixed-signal integrated circuit with noise-sensitive on-chip analog
electronics such as high-resolution analog to digital converters, or both. Running
32-bit arithmetic on an 8-bit chip could end up using more power, as the chip must
execute software with multiple instructions.[4]
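
The cost of wide arithmetic on a narrow processor can be made concrete: on an 8-bit
CPU a single 32-bit addition must be composed from four byte-wide add-with-carry
steps. The C below stands in for what would be several machine instructions on such
a chip.

#include <stdint.h>
#include <stdio.h>

/* A 32-bit add built from four byte-wide add-with-carry steps, the way
   an 8-bit CPU must perform it in software. Bytes are little-endian. */
static void add32_on_8bit(const uint8_t a[4], const uint8_t b[4],
                          uint8_t sum[4]) {
    unsigned carry = 0;
    for (int i = 0; i < 4; i++) {          /* i = 0 is the low byte */
        unsigned s = a[i] + b[i] + carry;  /* one ADC-style step    */
        sum[i] = (uint8_t)s;
        carry  = s >> 8;
    }
}

int main(void) {
    /* 0x01FFFFFF + 0x00000001, stored byte by byte. */
    uint8_t a[4] = {0xFF, 0xFF, 0xFF, 0x01};
    uint8_t b[4] = {0x01, 0x00, 0x00, 0x00};
    uint8_t s[4];
    add32_on_8bit(a, b, s);
    printf("0x%02X%02X%02X%02X\n", s[3], s[2], s[1], s[0]);  /* 0x02000000 */
    return 0;
}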

Embedded applications
Thousands of items that were traditionally not computer-related include
microprocessors. These include large and small household appliances, cars (and
their accessory equipment units), car keys, tools and test instruments, toys, light
switches/dimmers and electrical circuit breakers, smoke alarms, battery packs, and
hi-fi audio/visual components (from DVD players to phonograph turntables). Such
products as cellular telephones, DVD video systems and HDTV broadcast systems
fundamentally require consumer devices with powerful, low-cost, microprocessors.
Increasingly stringent pollution control standards effectively require automobile
manufacturers to use microprocessor engine management systems to allow optimal
control of emissions over the widely varying operating conditions of an automobile.
Non-programmable controls would require complex, bulky, or costly implementation to
achieve the results possible with a microprocessor.

A microprocessor control program (embedded software) can be easily tailored to
different needs of a product line, allowing upgrades in performance with minimal
redesign of the product. Different features can be implemented in different models
of a product line at negligible production cost.

Microprocessor control of a system can provide control strategies that would be
impractical to implement using electromechanical controls or purpose-built
electronic controls. For example, an engine control system in an automobile can
adjust ignition timing based on engine speed, load on the engine, ambient
temperature, and any observed tendency for knocking, allowing an automobile to
operate on a range of fuel grades.
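
A minimal sketch of such a strategy in C, assuming a hypothetical ignition-advance
lookup table indexed by engine-speed and load bands; all table values and
thresholds are invented for illustration.

#include <stdio.h>

/* Hypothetical ignition-advance map: degrees before top dead center,
   indexed by engine speed band (rows) and load band (columns). */
static const int advance_map[4][4] = {
    { 10, 12, 14, 16 },   /* idle     */
    { 18, 20, 22, 24 },   /* low rpm  */
    { 26, 28, 30, 32 },   /* mid rpm  */
    { 30, 32, 34, 36 },   /* high rpm */
};

static int speed_band(int rpm) {
    return rpm < 1000 ? 0 : rpm < 2500 ? 1 : rpm < 4500 ? 2 : 3;
}

static int load_band(int load_pct) {
    return load_pct < 25 ? 0 : load_pct < 50 ? 1 : load_pct < 75 ? 2 : 3;
}

/* Look up the timing, then retard it if the knock sensor is active;
   this kind of adjustment is impractical with fixed mechanical controls. */
static int ignition_advance(int rpm, int load_pct, int knock_detected) {
    int adv = advance_map[speed_band(rpm)][load_band(load_pct)];
    if (knock_detected)
        adv -= 4;                /* back off timing to stop knocking */
    return adv;
}

int main(void) {
    printf("3000 rpm, 60%% load, no knock: %d deg\n",
           ignition_advance(3000, 60, 0));
    printf("3000 rpm, 60%% load, knock:    %d deg\n",
           ignition_advance(3000, 60, 1));
    return 0;
}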

History
The advent of low-cost computers on integrated circuits has transformed modern
society. General-purpose microprocessors in personal computers are used for
computation, text editing, multimedia display, and communication over the Internet.
Many more microprocessors are part of embedded systems, providing digital control
over myriad objects from appliances to automobiles to cellular phones and
industrial process control.

The first use of the term "microprocessor" is attributed to Viatron Computer
Systems[5] describing the custom integrated circuit used in their System 21 small
computer system announced in 1968.
By the late 1960s, designers were striving to integrate the central processing unit
(CPU) functions of a computer onto a handful of very-large-scale integration metal-
oxide semiconductor chips, called microprocessor unit (MPU) chipsets. Building on
an earlier Busicom design from 1969, Intel introduced the first commercial
microprocessor, the 4-bit Intel 4004, in 1971, followed by its 8-bit microprocessor
8008 in 1972. In 1969, Lee Boysel, based on the 8-bit arithmetic logic units
(3800/3804) he designed earlier at Fairchild, created the Four-Phase Systems Inc.
AL-1, an 8-bit CPU slice that was expandable to 32 bits. In 1970, Steve Geller and
Ray Holt of Garrett AiResearch designed the MP944 chipset to implement the F-14A
Central Air Data Computer on six metal-gate chips fabricated by AMI.

The first microprocessors emerged in the early 1970s and were used for electronic
calculators, using binary-coded decimal (BCD) arithmetic on 4-bit words. Other
embedded uses of 4-bit and 8-bit microprocessors, such as terminals, printers,
various kinds of automation etc., followed soon after. Affordable 8-bit
microprocessors with 16-bit addressing also led to the first general-purpose
microcomputers from the mid-1970s on.

Since the early 1970s, the increase in capacity of microprocessors has followed
Moore's law; this originally suggested that the number of components that can be
fitted onto a chip doubles every year. With present technology the doubling takes
about two years,[6] and Moore later changed the period accordingly.[7]
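
Stated as a formula, a design that starts with N0 components can hold roughly
N0 * 2^(t/T) components t years later, where T is the doubling period. A small
worked sketch in C, assuming the commonly cited figure of about 2,300 transistors
for the Intel 4004 as the starting point:

#include <math.h>
#include <stdio.h>

/* Moore's law as commonly stated: the component count doubles every
   `period` years, starting from `n0` components at year zero. */
static double moore(double n0, double years, double period) {
    return n0 * pow(2.0, years / period);
}

int main(void) {
    for (int y = 0; y <= 40; y += 10)
        printf("after %2d years: ~%.0f transistors\n", y, moore(2300, y, 2.0));
    return 0;
}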

First projects
Three projects delivered a microprocessor at about the same time: Garrett
AiResearch's Central Air Data Computer (CADC), Texas Instruments' TMS 1802NC
(September 1971) and Intel's 4004 (November 1971, based on an earlier 1969 Busicom
design). Arguably, Four-Phase Systems AL1 microprocessor was also delivered in
1969.

CADC
Further information: Central Air Data Computer
In 1968, Garrett AiResearch (who employed designers Ray Holt and Steve Geller) was
invited to produce a digital computer to compete with electromechanical systems
then under development for the main flight control computer in the US Navy's new F-
14 Tomcat fighter. The design was complete by 1970, and used a MOS-based chipset as
the core CPU. The design was significantly (approximately 20 times) smaller and
much more reliable than the mechanical systems it competed against, and was used in
all of the early Tomcat models. This system contained "a 20-bit, pipelined,
parallel multi-microprocessor". The Navy refused to allow publication of the design
until 1997. For this reason the CADC, and the MP944 chipset it used, are fairly
unknown. Ray Holt's autobiographical story of this design and development is
presented in the book The Accidental Engineer.[8][9]

Ray Holt graduated from California Polytechnic University in 1968, and began his
computer design career with the CADC. From its inception, it was shrouded in
secrecy until 1998 when at Holt's request, the US Navy allowed the documents into
the public domain. Since then people[who?] have debated whether this was the first
microprocessor. Holt has stated that no one has compared this microprocessor with
those that came later.[10] According to Parab et al. (2007),
The scientific papers and literature published around 1971 reveal that the MP944
digital processor used for the F-14 Tomcat aircraft of the US Navy qualifies as the
first microprocessor. Although interesting, it was not a single-chip processor, as
was not the Intel 4004 – they both were more like a set of parallel building blocks
you could use to make a general-purpose form. It contains a CPU, RAM, ROM, and two
other support chips like the Intel 4004. It was made from the same P-channel
technology, operated at military specifications and had larger chips – an excellent
computer engineering design by any standards. Its design indicates a major advance
over Intel, and two years earlier. It actually worked and was flying in the F-14
when the Intel 4004 was announced. It indicates that today's industry theme of
converging DSP-microcontroller architectures was started in 1971.[11]

This convergence of DSP and microcontroller architectures is known as a digital
signal controller.[12]

Four-Phase Systems AL1 (1969)
The Four-Phase Systems AL1 was an 8-bit bit slice chip containing eight registers
and an ALU.[13] It was designed by Lee Boysel in 1969.[14][15][16] At the time, it
formed part of a nine-chip, 24-bit CPU with three AL1s, but it was later called a
microprocessor when, in response to 1990s litigation by Texas Instruments, a
demonstration system was constructed where a single AL1 formed part of a courtroom
demonstration computer system, together with RAM, ROM, and an input-output
device.[17]

Pico/General Instrument

The PICO1/GI250 chip, introduced in 1971, was designed by Pico Electronics
(Glenrothes, Scotland) and manufactured by General Instrument of Hicksville, NY.
In 1971, Pico Electronics[18] and General Instrument (GI) introduced their first
collaboration in ICs, a complete single chip calculator IC for the Monroe/Litton
Royal Digital III calculator. This chip could also arguably lay claim to be one of
the first microprocessors or microcontrollers having ROM, RAM and a RISC
instruction set on-chip. The layout for the four layers of the PMOS process was
hand drawn at x500 scale on mylar film, a significant task at the time given the
complexity of the chip.

Pico was a spinout by five GI design engineers whose vision was to create single
chip calculator ICs. They had significant previous design experience on multiple
calculator chipsets with both GI and Marconi-Elliott.[19] The key team members had
originally been tasked by Elliott Automation to create an 8-bit computer in MOS and
had helped establish a MOS Research Laboratory in Glenrothes, Scotland in 1967.

Calculators were becoming the largest single market for semiconductors so Pico and
GI went on to have significant success in this burgeoning market. GI continued to
innovate in microprocessors and microcontrollers with products including the
CP1600, IOB1680 and PIC1650.[20] In 1987, the GI Microelectronics business was spun
out into the Microchip PIC microcontroller business.

Intel 4004 (1971)
Main article: Intel 4004

The 4004 with cover removed (left) and as actually used (right)
The Intel 4004 is generally regarded as the first commercially available
microprocessor,[21][22] and cost US$60 (equivalent to $371.19 in 2018).[23] The
first known advertisement for the 4004 is dated November 15, 1971 and appeared in
Electronic News. The microprocessor was designed by a team consisting of Italian
engineer Federico Faggin, American engineers Marcian Hoff and Stanley Mazor, and
Japanese engineer Masatoshi Shima.[24]

The project that produced the 4004 originated in 1969, when Busicom, a Japanese
calculator manufacturer, asked Intel to build a chipset for high-performance
desktop calculators. Busicom's original design called for a programmable chip set
consisting of seven different chips. Three of the chips were to make a special-
purpose CPU with its program stored in ROM and its data stored in shift register
read-write memory. Ted Hoff, the Intel engineer assigned to evaluate the project,
believed the Busicom design could be simplified by using dynamic RAM storage for
data, rather than shift register memory, and a more traditional general-purpose CPU
architecture. Hoff came up with a four-chip architectural proposal: a ROM chip for
storing the programs, a dynamic RAM chip for storing data, a simple I/O device and
a 4-bit central processing unit (CPU). Although not a chip designer, he felt the
CPU could be integrated into a single chip, but as he lacked the technical know-how
the idea remained just a wish for the time being.

First microprocessor by Intel, the 4004.

Silicon and germanium alloy for microprocessors

While the architecture and specifications of the MCS-4 came from the interaction of
Hoff with Stanley Mazor, a software engineer reporting to him, and with Busicom
engineer Masatoshi Shima, during 1969, Mazor and Hoff moved on to other projects.
In April 1970, Intel hired Italian engineer Federico Faggin as project leader, a
move that ultimately made the single-chip CPU final design a reality (Shima
meanwhile designed the Busicom calculator firmware and assisted Faggin during the
first six months of the implementation). Faggin, who originally developed the
silicon gate technology (SGT) in 1968 at Fairchild Semiconductor[25] and designed
the world's first commercial integrated circuit using SGT, the Fairchild 3708, had
the correct background to lead the project into what would become the first
commercial general purpose microprocessor. Since SGT was his very own invention,
Faggin also used it to create his new methodology for random logic design that made
it possible to implement a single-chip CPU with the proper speed, power dissipation
and cost. The manager of Intel's MOS Design Department was Leslie L. Vadász at the
time of the MCS-4 development, but Vadász's attention was completely focused on the
mainstream business of semiconductor memories so he left the leadership and the
management of the MCS-4 project to Faggin, who was ultimately responsible for
leading the 4004 project to its realization. Production units of the 4004 were
first delivered to Busicom in March 1971 and shipped to other customers in late
1971.[citation needed]

TMX 1795
Along with Intel (who developed the 8008), Texas Instruments developed in 1970–1971
a one-chip CPU replacement for the Datapoint 2200 terminal, the TMX 1795 (later TMC
1795). Like the 8008, it was rejected by customer Datapoint. According to Gary
Boone, the TMX 1795 never reached production. Since it was built to the same
specification, its instruction set was very similar to the Intel 8008.[26][27]

TMS 1802NC (1971)
The TMS1802NC was announced September 17, 1971 and implemented a four-function
calculator. The TMS1802NC, despite its designation, was not part of the TMS 1000
series; it was later redesignated as part of the TMS 0100 series, which was used in
the TI Datamath calculator. Although marketed as a calculator-on-a-chip, the
TMS1802NC was fully programmable, including on the chip a CPU with an 11-bit
instruction word, 3520 bits (320 instructions) of ROM and 182 bits of
RAM.[26][28][27][29]

Gilbert Hyatt
Gilbert Hyatt was awarded a patent claiming an invention pre-dating both TI and
Intel, describing a "microcontroller".[30] The patent was later invalidated, but
not before substantial royalties were paid out.[31][32]

8-bit designs

The Intel 4004 was followed in 1972 by the Intel 8008, the world's first 8-bit
microprocessor. The 8008 was not, however, an extension of the 4004 design, but
instead the culmination of a separate design project at Intel, arising from a
contract with Computer Terminals Corporation, of San Antonio TX, for a chip for a
terminal they were designing,[33] the Datapoint 2200 – fundamental aspects of the
design came not from Intel but from CTC. In 1968, CTC's Vic Poor and Harry Pyle
developed the original design for the instruction set and operation of the
processor. In 1969, CTC contracted two companies, Intel and Texas Instruments, to
make a single-chip implementation, known as the CTC 1201.[34] In late 1970 or early
1971, TI dropped out, being unable to make a reliable part. In 1970, with Intel yet
to deliver the part, CTC opted to use their own implementation in the Datapoint
2200, using traditional TTL logic instead (thus the first machine to run "8008
code" was not in fact a microprocessor at all and was delivered a year earlier).
Intel's version of the 1201 microprocessor arrived in late 1971, but was too late,
slow, and required a number of additional support chips. CTC had no interest in
using it. CTC had originally contracted Intel for the chip, and would have owed
them US$50,000 (equivalent to $309,326 in 2018) for their design work.[34] To avoid
paying for a chip they did not want (and could not use), CTC released Intel from
their contract and allowed them free use of the design.[34] Intel marketed it as
the 8008 in April 1972, as the world's first 8-bit microprocessor. It was the
basis for the famous "Mark-8" computer kit advertised in the magazine Radio-
Electronics in 1974. This processor had an 8-bit data bus and a 14-bit address
bus.[35]

The 8008 was the precursor to the successful Intel 8080 (1974), which offered
improved performance over the 8008 and required fewer support chips. Federico
Faggin conceived and designed it using high-voltage N-channel MOS. The Zilog Z80
(1976) was also a Faggin design, using low-voltage N-channel MOS with depletion
load; it and the derivative Intel 8-bit processors were all designed with the
methodology Faggin created for the 4004. Motorola released the competing 6800 in
August 1974, and the
similar MOS Technology 6502 in 1975 (both designed largely by the same people). The
6502 family rivaled the Z80 in popularity during the 1980s.

A low overall cost, small packaging, simple computer bus requirements, and
sometimes the integration of extra circuitry (e.g. the Z80's built-in memory
refresh circuitry) allowed the home computer "revolution" to accelerate sharply in
the early 1980s. This delivered such inexpensive machines as the Sinclair ZX81,
which sold for US$99 (equivalent to $272.83 in 2018). A variation of the 6502, the
MOS Technology 6510 was used in the Commodore 64 and yet another variant, the 8502,
powered the Commodore 128.

The Western Design Center, Inc. (WDC) introduced the CMOS WDC 65C02 in 1982 and
licensed the design to several firms. It was used as the CPU in the Apple IIe and
IIc personal computers as well as in medical implantable grade pacemakers and
defibrillators, automotive, industrial and consumer devices. WDC pioneered the
licensing of microprocessor designs, later followed by ARM (32-bit) and other
microprocessor intellectual property (IP) providers in the 1990s.

Motorola introduced the MC6809 in 1978. It was an ambitious and well thought-
through 8-bit design that was source compatible with the 6800, and implemented
using purely hard-wired logic (subsequent 16-bit microprocessors typically used
microcode to some extent, as CISC design requirements were becoming too complex for
pure hard-wired logic).

Another early 8-bit microprocessor was the Signetics 2650, which enjoyed a brief
surge of interest due to its innovative and powerful instruction set architecture.
A seminal microprocessor in the world of spaceflight was RCA's RCA 1802 (aka
CDP1802, RCA COSMAC) (introduced in 1976), which was used on board the Galileo
probe to Jupiter (launched 1989, arrived 1995). RCA COSMAC was the first to
implement CMOS technology. The CDP1802 was used because it could be run at very low
power, and because a variant was available fabricated using a special production
process, silicon on sapphire (SOS), which provided much better protection against
cosmic radiation and electrostatic discharge than that of any other processor of
the era. Thus, the SOS version of the 1802 was said to be the first radiation-
hardened microprocessor.

The RCA 1802 had a static design, meaning that the clock frequency could be made
arbitrarily low, or even stopped. This let the Galileo spacecraft use minimum
electric power for long uneventful stretches of a voyage. Timers or sensors would
awaken the processor in time for important tasks, such as navigation updates,
attitude control, data acquisition, and radio communication. Current versions of
the Western Design Center 65C02 and 65C816 have static cores, and thus retain data
even when the clock is completely halted.

12-bit designs
The Intersil 6100 family consisted of a 12-bit microprocessor (the 6100) and a
range of peripheral support and memory ICs. The microprocessor recognised the DEC
PDP-8 minicomputer instruction set. As such it was sometimes referred to as the
CMOS-PDP8. Since it was also produced by Harris Corporation, it was also known as
the Harris HM-6100. By virtue of its CMOS technology and associated benefits, the
6100 was being incorporated into some military designs until the early 1980s.

16-bit designs
The first multi-chip 16-bit microprocessor was the National Semiconductor IMP-16,
introduced in early 1973. An 8-bit version of the chipset was introduced in 1974 as
the IMP-8.

Other early multi-chip 16-bit microprocessors include one that Digital Equipment
Corporation (DEC) used in the LSI-11 OEM board set and the packaged PDP 11/03
minicomputer – and the Fairchild Semiconductor MicroFlame 9440, both introduced in
1975–76. In 1975, National introduced the first 16-bit single-chip microprocessor,
the National Semiconductor PACE, which was later followed by an NMOS version, the
INS8900.

Another early single-chip 16-bit microprocessor was TI's TMS 9900, which was also
compatible with their TI-990 line of minicomputers. The 9900 was used in the TI
990/4 minicomputer, the Texas Instruments TI-99/4A home computer, and the TM990
line of OEM microcomputer boards. The chip was packaged in a large ceramic 64-pin
DIP package, while most 8-bit microprocessors such as the Intel 8080 used the more
common, smaller, and less expensive plastic 40-pin DIP. A follow-on chip, the TMS
9980, was designed to compete with the Intel 8080, had the full TI 990 16-bit
instruction set, used a plastic 40-pin package, moved data 8 bits at a time, but
could only address 16 KB. A third chip, the TMS 9995, was a new design. The family
later expanded to include the 99105 and 99110.

The Western Design Center (WDC) introduced the CMOS 65816 16-bit upgrade of the WDC
CMOS 65C02 in 1984. The 65816 16-bit microprocessor was the core of the Apple IIgs
and later the Super Nintendo Entertainment System, making it one of the most
popular 16-bit designs of all time.

Intel "upsized" their 8080 design into the 16-bit Intel 8086, the first member of
the x86 family, which powers most modern PC type computers. Intel introduced the
8086 as a cost-effective way of porting software from the 8080 lines, and succeeded
in winning much business on that premise. The 8088, a version of the 8086 that used
an 8-bit external data bus, was the microprocessor in the first IBM PC. Intel then
released the 80186 and 80188, the 80286 and, in 1985, the 32-bit 80386, cementing
their PC market dominance with the processor family's backwards compatibility. The
80186 and 80188 were essentially versions of the 8086 and 8088, enhanced with some
onboard peripherals and a few new instructions. Although Intel's 80186 and 80188
were not used in IBM PC type designs,[dubious – discuss] second source versions
from NEC, the V20 and V30 frequently were. The 8086 and successors had an
innovative but limited method of memory segmentation, while the 80286 introduced a
full-featured segmented memory management unit (MMU). The 80386 introduced a flat
32-bit memory model with paged memory management.
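
The 8086 segmentation rule itself is simple: the 16-bit segment value is shifted
left four bits and added to the 16-bit offset, giving a 20-bit (1 MB) physical
address. A minimal illustration in C:

#include <stdint.h>
#include <stdio.h>

/* 8086 real-mode address translation: physical = segment * 16 + offset. */
static uint32_t phys_addr(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}

int main(void) {
    /* Different segment:offset pairs can name the same physical byte. */
    printf("0x%05X\n", phys_addr(0x1234, 0x0010));   /* 0x12350 */
    printf("0x%05X\n", phys_addr(0x1235, 0x0000));   /* 0x12350 */
    return 0;
}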

The 16-bit Intel x86 processors up to and including the 80386 do not include
floating-point units (FPUs). Intel introduced the 8087, 80187, 80287 and 80387 math
coprocessors to add hardware floating-point and transcendental function
capabilities to the 8086 through 80386 CPUs. The 8087 works with the 8086/8088 and
80186/80188,[36] the 80187 works with the 80186 but not the 80188,[37] the 80287
works with the 80286 and the 80387 works with the 80386. The combination of an x86
CPU and an x87 coprocessor forms a single multi-chip microprocessor; the two chips
are programmed as a unit using a single integrated instruction set.[38] The 8087
and 80187 coprocessors are connected in parallel with the data and address buses of
their parent processor and directly execute instructions intended for them. The
80287 and 80387 coprocessors are interfaced to the CPU through I/O ports in the
CPU's address space; this is transparent to the program, which does not need to
know about or access these I/O ports directly; the program accesses the coprocessor
and its registers through normal instruction opcodes.

32-bit designs

Upper interconnect layers on an Intel 80486DX2 die
16-bit designs had only been on the market briefly when 32-bit implementations
started to appear.

The most significant of the 32-bit designs is the Motorola MC68000, introduced in
1979.[dubious – discuss] The 68k, as it was widely known, had 32-bit registers in
its programming model but used 16-bit internal data paths, three 16-bit Arithmetic
Logic Units, and a 16-bit external data bus (to reduce pin count), and externally
supported only 24-bit addresses (internally it worked with full 32 bit addresses).
In PC-based IBM-compatible mainframes the MC68000 internal microcode was modified
to emulate the 32-bit System/370 IBM mainframe.[39] Motorola generally described it
as a 16-bit processor. The combination of high performance, large (16 megabytes, or
2^24 bytes) memory space and fairly low cost made it the most popular CPU design of
its class. The Apple Lisa and Macintosh designs made use of the 68000, as did a
host of other designs in the mid-1980s, including the Atari ST and Commodore Amiga.

The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths,
32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first
samples in 1980, and general production in 1982.[40][41] After the divestiture of
AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two
follow-on generations, the WE 32100 and WE 32200. These microprocessors were used
in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop super
microcomputer; in the "Companion", the world's first 32-bit laptop computer; and in
"Alexander", the world's first book-sized super microcomputer, featuring ROM-pack
memory cartridges similar to today's gaming consoles. All these systems ran the
UNIX System V operating system.

The first commercial, single chip, fully 32-bit microprocessor available on the
market was the HP FOCUS.

Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981,
but was not a commercial success. It had an advanced capability-based object-
oriented architecture, but poor performance compared to contemporary architectures
such as Intel's own 80286 (introduced 1982), which was almost four times as fast on
typical benchmark tests. However, the results for the iAPX 432 were partly due to a
rushed and therefore suboptimal Ada compiler.[citation needed]

Motorola's success with the 68000 led to the MC68010, which added virtual memory
support. The MC68020, introduced in 1984, added full 32-bit data and address buses.
The 68020 became hugely popular in the Unix supermicrocomputer market, and many
small companies (e.g., Altos, Charles River Data Systems, Cromemco) produced
desktop-size systems. The MC68030 was introduced next, improving upon the previous
design by integrating the MMU into the chip. The continued success led to the
MC68040, which included an FPU for better math performance. The 68050 failed to
achieve its performance goals and was not released, and the follow-up MC68060 was
released into a market saturated by much faster RISC designs. The 68k family faded
from use in the early 1990s.

Other large companies designed the 68020 and follow-ons into embedded equipment. At
one point, there were more 68020s in embedded equipment than there were Intel
Pentiums in PCs.[42] The ColdFire processor cores are derivatives of the 68020.

During this time (early to mid-1980s), National Semiconductor introduced a very
similar 16-bit pinout, 32-bit internal microprocessor called the NS 16032 (later
renamed 32016); the full 32-bit version was named the NS 32032. Later, National
Semiconductor produced the NS 32132, which allowed two CPUs to reside on the same
memory bus with built-in arbitration. The NS32016/32 outperformed the MC68000/10,
but the NS32332 – which arrived at approximately the same time as the MC68020 – did not
have enough performance. The third generation chip, the NS32532, was different. It
had about double the performance of the MC68030, which was released around the same
time. The appearance of RISC processors like the AM29000 and MC88000 (now both
dead) influenced the architecture of the final core, the NS32764. Technically
advanced – with a superscalar RISC core, 64-bit bus, and internally overclocked – it
could still execute Series 32000 instructions through real-time translation.

When National Semiconductor decided to leave the Unix market, the chip was
redesigned into the Swordfish Embedded processor with a set of on chip peripherals.
The chip turned out to be too expensive for the laser printer market and was
killed. The design team went to Intel and there designed the Pentium processor,
which is very similar to the NS32764 core internally. The big success of the Series
32000 was in the laser printer market, where the NS32CG16 with microcoded BitBlt
instructions had very good price/performance and was adopted by large companies
like Canon. By the mid-1980s, Sequent introduced the first SMP server-class
computer using the NS 32032. This was one of the design's few wins, and it
disappeared in the late 1980s.

The MIPS R2000 (1984) and R3000 (1989) were highly
successful 32-bit RISC microprocessors. They were used in high-end workstations and
servers by SGI, among others. Other designs included the Zilog Z80000, which
arrived too late to market to stand a chance and disappeared quickly.

The ARM first appeared in 1985.[43] This is a RISC processor design, which has
since come to dominate the 32-bit embedded systems processor space due in large
part to its power efficiency, its licensing model, and its wide selection of system
development tools. Semiconductor manufacturers generally license cores and
integrate them into their own system on a chip products; only a few such vendors
are licensed to modify the ARM cores. Most cell phones include an ARM processor, as
do a wide variety of other products. There are microcontroller-oriented ARM cores
without virtual memory support, as well as symmetric multiprocessor (SMP)
applications processors with virtual memory.

From 1993 to 2003, the 32-bit x86 architectures became increasingly dominant in
desktop, laptop, and server markets, and these microprocessors became faster and
more capable. Intel had licensed early versions of the architecture to other
companies, but declined to license the Pentium, so AMD and Cyrix built later
versions of the architecture based on their own designs. During this span, these
processors increased in complexity (transistor count) and capability
(instructions/second) by at least three orders of magnitude. Intel's Pentium line
is probably the most famous and recognizable 32-bit processor model, at least with
the general public.

64-bit designs in personal computers
While 64-bit microprocessor designs have been in use in several markets since the
early 1990s (including the Nintendo 64 gaming console in 1996), the early 2000s saw
the introduction of 64-bit microprocessors targeted at the PC market.

With AMD's introduction of a 64-bit architecture backwards-compatible with x86,
x86-64 (also called AMD64), in September 2003, followed by Intel's near fully
compatible 64-bit extensions (first called IA-32e or EM64T, later renamed Intel
64), the 64-bit desktop era began. Both versions can run 32-bit legacy applications
without any performance penalty as well as new 64-bit software. With operating
systems Windows XP x64, Windows Vista x64, Windows 7 x64, Linux, BSD, and macOS
that run 64-bit natively, the software is also geared to fully utilize the
capabilities of such processors. The move to 64 bits is more than just an increase
in register size from the IA-32 as it also doubles the number of general-purpose
registers.

The move to 64 bits by PowerPC had been intended since the architecture's design in
the early 1990s and was not a major cause of incompatibility. Existing integer
registers are extended as are all related data pathways, but, as was the case with
IA-32, both floating point and vector units had been operating at or above 64 bits
for several years. Unlike what happened when IA-32 was extended to x86-64, no new
general purpose registers were added in 64-bit PowerPC, so any performance gained
when using the 64-bit mode for applications making no use of the larger address
space is minimal.[citation needed]

In 2011, ARM introduced a new 64-bit ARM architecture.
