Network Notes
This module is an overview only. It will familiarize you with much of the vocabulary
you hear with regard to networking. Some of these concepts are covered in more
detail in later lessons.
The Agenda
- Networking History
- LAN Topologies
- LAN/WAN Devices
Networking History
Early networks
Less than 25 years later, Alexander Graham Bell invented the telephone, beating a
competitor to the patent office by only a couple of hours on Valentine's Day in
1876. This led to the development of the ultimate analog network: the telephone
system.
The first bit-oriented device was developed by Emile Baudot: the printing
telegraph. By bit-oriented we mean the device sent pulses of electricity which were
either positive or had no voltage at all. These machines did not use Morse code.
Baudot's five-level code sent five pulses down the wire for each character transmitted.
The machines did the encoding and decoding, eliminating the need for operators at
both ends of the wires. For the first time, electronic messages could be sent by
anyone.
Telephone Network
But it‘s really the telephone network that has had the greatest impact on how
businesses communicate and connect today. Until its breakup in 1984, the Bell
Telephone Company, now known as AT&T, owned the telephone network from end to end. It
represented a phenomenal network, the largest then and still the largest today.
Developments in Communication
Bell telephone had a problem with this and sued – and eventually lost.
This ruling eventually led to the breakup of American Telephone and Telegraph in
1984, thus creating seven regional Bell operating companies such as Pacific Bell,
Bell Atlantic, BellSouth, and Mountain Bell.
The breakup of AT&T in 1984 opened the door for other competitors in the
telecommunications market, such as Microwave Communications, Inc. (MCI) and
Sprint. Today, when you make a phone call across the country, it may go
through three or four different carrier networks in order to make the connection.
Now, let‘s take a look at what was happening in the computer industry about the
same time.
In the 1960‘s and 1970‘s, traditional computer communications centered around the
mainframe host. The mainframe contained all the applications needed by the users,
as well as file management, and even printing. This centralized computing
environment used low-speed access lines that tied terminals to the host.
These large mainframes used digital signals – pulses of electricity or zeros and ones,
what is called binary -- to pass information from the terminals to the host. The
information processing in the host was also all digital.
This brought about a problem. The telephone industry wanted to use computers to
switch calls faster and the computer industry wanted to connect remote users to the
mainframe using the telephone service. But the telephone networks speak analog and
computers speak digital. Let‘s take a closer look at this problem.
Digital signals are seen as ones and zeros: the signal is either on or off. Analog
signals, by contrast, are like audio tones, for example, the high-pitched squeal you hear
when you accidentally call a fax machine. So, in order for the computer world to use
the services of the telephone system, a conversion of the signal had to occur.
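This conversion is the job of the modem (modulator/demodulator): it modulates bits into tones for the phone line and demodulates tones back into bits. A minimal sketch, assuming a simple two-tone scheme (the frequencies are the Bell 103 originate-side values, used purely for illustration):

```python
MARK = 1270   # Hz tone representing a binary 1 (Bell 103 originate side)
SPACE = 1070  # Hz tone representing a binary 0

def modulate(bits):
    """MODulate: convert a string of 1s and 0s into a sequence of tones."""
    return [MARK if b == "1" else SPACE for b in bits]

def demodulate(tones):
    """DEModulate: convert tones heard on the line back into bits."""
    return "".join("1" if t == MARK else "0" for t in tones)

bits = "1011"
tones = modulate(bits)            # [1270, 1070, 1270, 1270]
assert demodulate(tones) == bits  # the far-end modem recovers the data
```

A real modem of course generates and detects actual audio waveforms; the point here is only the digital-to-analog-and-back round trip.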
The solution
Multiplexing or muxing
Given the addition of multiplexing and the use of the modem, let‘s see how we can
grow our network.
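Multiplexing can be pictured as round-robin interleaving: several slow terminal streams take turns on one shared line. A toy sketch of time-division multiplexing (the terminal data is made up):

```python
from itertools import zip_longest

def tdm_mux(channels):
    """Round-robin time-division multiplexing: take one unit from each
    low-speed channel in turn and place it on the single shared line."""
    slots = zip_longest(*channels, fillvalue=None)
    return [unit for frame in slots for unit in frame if unit is not None]

terminal_a = ["A1", "A2", "A3"]
terminal_b = ["B1", "B2"]
line = tdm_mux([terminal_a, terminal_b])
# line == ["A1", "B1", "A2", "B2", "A3"]
```

The demultiplexer at the far end reverses the process, handing each time slot back to the right terminal's conversation.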
Example:
Using all the technology available, companies were able to team up with the phone
company and tie branch offices to the headquarters. The speeds of data transfer were
often slow and were still dependent on the speed and capacity of the host computers
at the headquarters site.
The phone company was also able to offer leased line and dial-up options. With
leased-lines, companies paid for a continuous connection to the host computer.
Companies using dial-up connections paid only for time used. Dial-up connections
were perfect for the small office or branch.
The '70s and '80s saw the beginnings of the Internet. The Internet as we know it
today began as the ARPANET, the Advanced Research Projects Agency Network,
built by a division of the Department of Defense in the late 1960s through
grant-funded research by universities and companies. The first actual packet-
switched network was built by BBN. It was used by universities and the federal
government to exchange information and research. Many local area networks
connected to the ARPANET with TCP/IP. TCP/IP, which stands for Transmission
Control Protocol / Internet Protocol, was developed in 1974. The ARPANET was shut
down in 1990 due to newer network technology and the need for greater bandwidth
on the backbone.
In the mid '80s the NSFNET, the National Science Foundation Network, was
developed. This network relied on supercomputers in San Diego, Boulder,
Champaign, Pittsburgh, Ithaca, and Princeton. Each of these six supercomputers
had a microcomputer tied to it which spoke TCP/IP. The microcomputer really
handled all of the access to the backbone of the Internet. Essentially this network
was overloaded from the word "go".
Further developments in networking led to the design of the ANSNET, the Advanced
Networks and Services Network. ANSNET was a joint effort by MCI, Merit, and IBM
specifically for commercial purposes. This large network was sold to AOL in 1995.
The National Science Foundation then awarded contracts to four major network
access providers: Pacific Bell in San Francisco, Ameritech in Chicago, MFS in
Washington DC, and Sprint in New York City. By the mid '80s the collection of
networks began to be known as the "Internet" in university circles. TCP/IP remains
the glue that holds it together.
In January 1992 the Internet Society was formed – a misleading name since the
Internet is really a place of anarchy. It is controlled by those who have the fastest
lines and can give customers the greatest service today.
The primary Internet-related applications used today include: Email, News retrieval,
Remote Login, File Transfer and World Wide Web access and development.
With the growth and development of the Internet came the need for speed – and
bandwidth. Companies want to take advantage of the ability to move information
around the world quickly. This information comes in the form of voice, data and video
– large files which increase the demands on the network. In the future, global
internetworking will provide an environment for emerging applications that will
require even greater amounts of bandwidth. If you doubt the future of global
internetworking consider this – the Internet is doubling in size about every 11
months.
In the previous section, we discussed how networking evolved and some of the
problems involved in the transmission of data such as conflict and multiple
terminals. In this section some of the basic elements needed to build local area
networks (LANs) will be described.
The term local-area network, or LAN, describes all the devices that communicate
together: printers, file servers, computers, and perhaps even a host computer.
However, the LAN is constrained by distance. The transmission technologies used in
LAN applications do not operate at high speed over long distances. LAN distances are
in the range of 100 meters (m) to 3 kilometers (km). This range can change as new
technologies emerge.
For systems from different manufacturers to interoperate, be it a printer, PC, or file
server, they must be developed and manufactured according to industry-wide
protocols and standards.
More details about protocols and standards will be given later, but for now, just keep
in mind they represent rules that govern how devices on a network exchange
information. These rules are developed by industry-wide special interest groups
(SIGs) and standards committees such as the Institute of Electrical and Electronics
Engineers (IEEE).
Most of the network administrator‘s tasks deal with LANs. Major characteristics of
LANs are:
- LANs provide multiple connected desktop devices (usually PCs) with access to
high-bandwidth media.
- An enterprise purchases the media and connections used in the LAN; the
enterprise can privately control the LAN as it chooses.
- LANs rarely shut down or restrict access to connected workstations; local services
are almost always available.
Components of LAN
In order for computers to be able to communicate with each other, they must first
have the networking software that tells them how to do so. Without the software, the
system will function simply as a "standalone," unable to utilize any of the resources
on the network.
Network operating software may be installed at the factory, eliminating the need for
you to purchase it (for example, AppleTalk), or you may install it yourself.
In addition to network operating software, each network device must also have a
network interface card. These cards today are also referred to as adapters, as in
"Ethernet adapter card" or "Token Ring adapter card."
The NIC card amplifies electronic signals which are generally very weak within the
computer system itself. The NIC is also responsible for packaging data for
transmission, and for controlling access to the network cable. When the data is
packaged properly, and the timing is right, the NIC will push the data stream onto
the cable.
The NIC also provides the physical connection between the computer and the
transmission cable (also called "media"). This connection is made through the
connector port. Ethernet, Token Ring, and FDDI are examples of network
technologies that define these connections and the media they use.
- Wiring Hub
In order to have a network, you must have at least two devices that communicate
with each other. In this simple model, it is a computer and a printer. The printer also
has an NIC installed (for example, an HP Jet Direct card), which in turn is plugged
into a wiring hub. The computer system is also plugged into the hub, which
facilitates communication between the two devices.
Additional components (such as a server, a few more PCs, and a scanner) may be
connected to the hub. With this connection, all network components would have
access to all other network components.
The benefit of building this network is that by sharing resources a company can
afford higher quality components. For example, instead of providing an inkjet printer
for every PC, a company may purchase a laser printer (which is faster, higher
capacity, and higher quality than the inkjet) to attach to a network. Then, all
computers on that network have access to the higher quality printer.
The wires connecting the various devices together are referred to as cables.
- Cable prices range from inexpensive to very costly and can constitute a
significant portion of the total cost of the network.
- Cables are one example of transmission media. Media are various physical
environments through which transmission signals pass. Common network media
include twisted-pair, coaxial cable, fiber-optic cable, and the atmosphere (through
which microwave, laser, and infrared transmission occurs). Another term for this
is ―physical media.‖ *Note that not all wiring hubs support all medium types.
- As the name implies, a connector is the physical location where the NIC
and the cabling connect.
- Registered jack (RJ) connectors were originally used to connect telephone lines.
RJ connectors are now used for telephone connections and for 10BaseT and other
types of network connections. Different connectors are able to support different
speeds of transmission because of their design and the materials used in their
manufacture.
- RJ-11 connectors are used for telephones, faxes, and modems. RJ-45 connectors
are used for NIC cards, 10BaseT cabling, and ISDN lines.
Network Cabling
Cable is the actual physical path upon which an electrical signal travels as it moves
from one component to another.
Transmission protocols determine how NIC cards take turns transmitting data onto
the cable. Remember that we discussed how LAN cables (baseband) carry one signal,
while WAN cables (broadband) carry multiple signals. There are three primary cable
types:
- Twisted-pair cable
- These are the least expensive media for data communication. UTP is cheaper than
STP.
- Because most buildings are already wired with UTP, many transmission standards
are adapted to use it to avoid costly re-wiring with an alternative cable type.
- Coaxial cable
- Fiber-optic cable
Throughput Needs
The throughput rate is the rate of information arriving at, and possibly passing
through, a particular point in a network.
In this chapter, the term bandwidth means the total capacity of a given network
medium (twisted pair, coaxial, or fiber-optic cable) or protocol.
- Bandwidth is also used to describe the difference between the highest and the
lowest frequencies available for network signals. This quantity is measured in
Megahertz (MHz).
Some of the available bandwidth specified for a given medium or protocol is used up
in overhead, including control characters. This overhead reduces the capacity
available for transmitting data.
This table shows the tremendous variation in transmission time with different
throughput rates. In years past, megabit (Mb) rates were considered fast. In today‘s
modern networks, gigabit (Gb) rates are possible. Nevertheless, there continues to be
a focus on greater throughput rates.
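As a back-of-the-envelope illustration of why throughput rates matter, and ignoring the protocol overhead just mentioned, ideal transfer time is simply file size divided by line rate (the file size and rates below are chosen for illustration):

```python
def transfer_time_seconds(size_bytes, rate_bits_per_sec):
    """Ideal transfer time: 8 bits per byte divided by the line rate.
    Real transfers are slower because of protocol overhead."""
    return size_bytes * 8 / rate_bits_per_sec

file_size = 100 * 10**6  # a 100-megabyte file
for label, rate in [("56 kb/s modem", 56_000),
                    ("10 Mb/s Ethernet", 10_000_000),
                    ("1 Gb/s Ethernet", 1_000_000_000)]:
    print(f"{label}: {transfer_time_seconds(file_size, rate):.1f} s")
```

The same file that ties up a modem line for roughly four hours crosses a gigabit link in under a second, which is the variation the table refers to.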
LAN Topologies
You may hear the word topology used with respect to networks. "Topology" refers to
the physical arrangement of network components and media within an enterprise
networking structure. There are four primary kinds of LAN topologies: bus, tree, star,
and ring.
Bus topology is a linear arrangement in which transmissions from network stations
propagate the length of a single shared medium and are received by all other stations.
Tree topology is
- Similar to bus topology, except that tree networks can contain branches with
multiple nodes. As in bus topology, transmissions from one component propagate
the length of the medium and are received by all other components.
The disadvantage of bus topology is that if the cable is broken at any point, the
entire network goes down, disrupting communication between all users. Because
of this problem, bus topology is rarely used today.
The advantage of bus topology is that it requires less cabling (therefore, lower cost)
than star topology.
Star topology
- The benefit of star topology is that even if the connection to any one user is broken,
the network stays functioning, and communication between the remaining users is
not disrupted.
- The disadvantage of star topology is that it requires more cabling (therefore, higher
cost) than bus topology.
Ring topology
Ring topology consists of a series of repeaters connected to one another by
unidirectional transmission links to form a single closed loop.
Redundancy is used to avoid collapse of the entire ring in the event that a connection
between two components fails.
LAN/WAN Devices
Let‘s now take a look at some of the devices that move traffic around the network.
Hub
Star topology networks generally have a hub in the center of the network that
connects all of the devices together using cabling. When bits hit a networking device,
whether a hub, switch, or router, the device will strengthen the signal and then
send it on its way.
A hub is simply a multiport repeater. There is usually no software to load and no
configuration required (i.e., network administrators don't have to tell the device what
to do).
Hubs operate very much the same way as a repeater. They amplify and propagate
signals received out all ports, with the exception of the port from which the data
arrived.
For example, in the image above, if system 125 wanted to print on printer 128, the
message would be sent to all systems on Segment 1, as well as across the hub to all
systems on Segment 2. Device 128 would see that the message is intended for it and
would process it.
Devices on the network are constantly listening for data. When a device senses a frame
of information that is addressed to it (and we will talk more about addressing later),
it accepts that information into memory found on the network interface card
(NIC) and begins processing the data.
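The two behaviors just described, a hub repeating a frame out every other port and each NIC filtering by address, can be sketched as follows (port numbers and addresses are made up):

```python
def hub_repeat(arrival_port, ports):
    """A hub repeats a frame out every port except the one it arrived on."""
    return [p for p in ports if p != arrival_port]

def nic_accepts(frame_dest, my_address):
    """Every NIC hears the frame, but only the addressed one accepts it."""
    return frame_dest == my_address

ports = [1, 2, 3, 4]
assert hub_repeat(arrival_port=2, ports=ports) == [1, 3, 4]
assert nic_accepts(frame_dest="128", my_address="128")       # device 128 accepts
assert not nic_accepts(frame_dest="128", my_address="126")   # device 126 ignores
```

Note that the hub itself never looks at addresses; all filtering happens at the receiving NICs.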
In fairly small networks, hubs work very well. However, in large networks the
limitations of hubs create problems for network managers. In this example, Ethernet
is the standard being used. The network is also baseband, so only one station can use
the network at a time. If the applications and files being used on this network are
large, and there are more nodes on the network, contention for bandwidth will slow
down the responsiveness of the network.
Bridges
Bridges improve network throughput and operate at a more intelligent level than
hubs do. A bridge is a store-and-forward device that uses unique hardware
addresses to filter traffic that would otherwise travel from one segment to
another. A bridge performs the following functions:
- Reads data frame headers and records source address/port (segment) pairs
- Reads the destination address of incoming frames and uses recorded addresses to
determine the appropriate outbound port for the frame.
- Uses memory buffers to store frames during periods of heavy transmission, and
forwards them when the medium is ready.
The bridge divides this Ethernet LAN into two segments in the above image, each
connecting to a hub and then to a bridge port. Stations 123-125 are on segment 1
and stations 126-128 are on segment 2.
When station 124 transmits to station 125, the frame goes into the hub (which repeats
it and sends it out all connected ports) and then on to the bridge. The bridge will not
forward the frame because it recognizes that stations 124 and 125 are on the same
segment. Only traffic between segments passes through the bridge. In this example, a
data frame from station 123, 124, or 125 to any station on segment 2 would be
forwarded, and so would a message from any station on segment 2 to stations on
segment 1.
When one station transmits, all other stations must wait until the line is silent again
before transmitting. In Ethernet, only one station can transmit at a time, or data
frames will collide with each other, corrupting the data in both frames.
Bridges will listen to the network and keep track of who they are hearing. For
instance, the bridge in this example will know that system 127 is on Segment 2, and
that 125 is on segment 1. The bridge may even have a port (perhaps out to the
Internet) where it will send all packets that it cannot identify a destination for.
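The learning-and-filtering behavior described above can be sketched as a small table-driven loop. Station numbers follow the example in the text; this is an illustration of the idea, not a real bridge implementation:

```python
class LearningBridge:
    """Learns source address/port pairs, then filters or forwards frames."""

    def __init__(self):
        self.table = {}  # hardware address -> port (segment)

    def receive(self, src, dst, in_port):
        self.table[src] = in_port        # learn where the sender lives
        out_port = self.table.get(dst)
        if out_port == in_port:
            return None                  # same segment: filter the frame
        return out_port                  # known port, or None (flood/unknown)

bridge = LearningBridge()
bridge.receive("124", "125", in_port=1)              # learns 124 is on segment 1
assert bridge.receive("125", "124", in_port=1) is None  # same segment: filtered
assert bridge.receive("127", "124", in_port=2) == 1     # cross-segment: forwarded
```

Only cross-segment traffic crosses the bridge, which is exactly the filtering the example with stations 123-128 describes.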
Switches
Switches use bridging technology to forward traffic between ports. They provide full
dedicated transmission rates between two stations that are directly connected to the
switch ports. Switches also build and maintain address tables just as bridges do.
These address tables are known as "content-addressable memory" (CAM).
Routers
A router has two basic functions: path determination using a variety of metrics, and
forwarding packets from one network to another. Routing metrics can include load on
the link between devices, delay, bandwidth, reliability, and hop count (i.e., the
number of devices a packet must go through in order to reach its destination).
In essence, routers will do all that bridges and switches will do, plus more. Routers
have the capability of looking deeper into the data frame and applying network
services based on the destination IP address. Destination and Source IP addresses
are a part of the network header added to a packet encapsulation at the network
layer.
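As a toy illustration of path determination, assume hop count is the only metric; a breadth-first search then finds the preferred path through a made-up set of routers (real routers weigh bandwidth, delay, load, and reliability as well):

```python
from collections import deque

# Hypothetical router topology: A connects to B and C, both reach D.
links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

def fewest_hops(start, goal):
    """Breadth-first search returns a path with the fewest hops."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

print(fewest_hops("A", "D"))  # ['A', 'B', 'D']
```

Once the path is determined, the forwarding function moves each packet to the interface that leads toward the next hop on that path.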
- SUMMARY -
* Key LAN components are computers, NOS, NICs, hubs, and cables
In this image, the goal is to get a message from Location A to Location B. The sender
doesn‘t know what language the receiver speaks – so the sender passes the message
on to a translator.
The translator, while not concerned with the content of the message, will translate it
into a language that may be globally understood by most, if not all translators – thus
it doesn‘t matter what language the final recipient speaks. In this example, the
language is Dutch. The translator also indicates what the language type is, and then
passes the message to an administrative assistant.
The administrative assistant, while not concerned with the language, or the message,
will work to ensure the reliable delivery of the message to the destination. In this
example, she will attach the fax number, and then fax the document to the
destination – Location B.
The document is received by an administrative assistant at Location B. The assistant
at Location B may even call the assistant at Location A to let her know the fax was
properly received.
The assistant at Location B will then pass the message to the translator at her office.
The translator will see that the message is in Dutch. The translator, knowing that the
person to whom the message is addressed only speaks French, will translate the
message so the recipient can properly read the message. This completes the process
of moving information from one location to another.
Upon closer study of the process employed to communicate, you will notice that
communication took place at different layers. At layer 1, the administrative assistants
communicated with each other. At layer 2, the translators communicated with each
other. And, at layer 3 the sender was able to communicate with the recipient.
That's essentially the same thing that goes on in networking with the OSI model. This
image illustrates the model.
So, why use a layered network model in the first place? Well, a layered network model
does a number of things. It reduces the complexity of the problems from one large
one to seven smaller ones. It allows the standardization of interfaces among devices.
It also facilitates modular engineering so engineers can work on one layer of the
network model without being concerned with what happens at another layer. This
modularity accelerates both the evolution of technology and teaching and learning,
by dividing the complexity of internetworking into discrete, more easily learned
subsets of operation.
Note that a layered model does not define or constrain an implementation; it provides
a framework. Implementations, therefore, do not conform to the OSI reference model,
but they do conform to the standards developed from the OSI reference model
principles.
Let's put this in some context. You are already familiar with different networking
devices such as hubs, switches, and routers. Each of these devices operates at a
different level of the OSI model.
NIC cards receive information from upper-level applications and properly package
data for transmission onto the network media. Essentially, NIC cards live at the
lower layers of the OSI model.
Hubs, whether Ethernet or FDDI, live at the physical layer. They are only concerned
with passing bits from one station to other connected stations on the network. They
do not filter any traffic.
Bridges and switches on the other hand, will filter traffic and build bridging and
switching tables in order to keep track of what device is connected to what port.
Routers, or the technology of routing, live at layer 3.
These are the layers people are referring to when they speak of "layer 2" or "layer 3"
devices.
Let‘s take a closer look at the model.
Host Layers
The upper four layers, Application, Presentation, Session, and Transport, are
responsible for accurate data delivery between computers. The tasks or functions of
these upper four layers must "interoperate" with the upper four layers in the system
being communicated with.
Media Layers
The lower three layers, Network, Data Link, and Physical, are called the media
layers. The media layers are responsible for seeing that the information does indeed
arrive at the destination for which it was intended.
Layer Functions
- Application Layer
If we take a look at the model from the top layer, the Application Layer, down, I think
you will begin to get a better idea of what the model does for the industry.
The applications that you run on a desktop system, such as PowerPoint, Excel, and
Word, work above the seven layers of the model.
The application layer of the model helps to provide network services to the
applications. Some of the application processes or services that it offers are electronic
mail, file transfer, and terminal emulation.
- Presentation Layer
The next layer of the seven-layer model is the presentation layer. It is responsible for
the overall representation of the data from the application layer to the receiving
system. It ensures that the data is readable by the receiving system.
- Session Layer
- Transport Layer
- Network Layer
The network layer is layer 3. This is the layer that is associated with addressing and
looking for the best path to send information on. It provides connectivity and path
selection between two systems.
The network layer is essentially the domain of routing. So when we talk about a
device having layer 3 capability, we mean that that device is capable of addressing
and best path selection.
- Data Link Layer
The link layer (formally referred to as the data link layer) provides reliable transit of
data across a physical link. In so doing, the link layer is concerned with physical (as
opposed to network or logical) addressing, network topology, line discipline (how end
systems will use the network link), error notification, ordered delivery of frames, and
flow control.
- Physical Layer
The physical layer is concerned with binary transmission. It defines the electrical,
mechanical, procedural, and functional specifications for activating, maintaining, and
deactivating the physical link between end systems. Such characteristics as voltage
levels, physical data rates, and physical connectors are defined by physical layer
specifications. Now you know the role of all 7 layers of the OSI model.
Peer-to-Peer Communications
Let‘s see how these layers work in a Peer to Peer Communications Network. In this
exercise we will package information and move it from Host A, across network lines to
Host B.
Each layer uses its own layer protocol to communicate with its peer layer in the other
system. Each layer‘s protocol exchanges information, called protocol data units
(PDUs), between peer layers.
This peer-layer protocol communication is achieved by using the services of the
layers below it. The layer below any current or active layer provides its services to the
current layer.
The transport layer ensures that data is segmented and kept separate from other
data streams. At the network layer, segments are assembled into packets. At the
data link layer those packets become frames, and at the physical layer those
frames go out on the wires from one host to the other as bits.
Data Encapsulation
This whole process of moving data from host A to host B is known as data
encapsulation – the data is being wrapped in the appropriate protocol header so it
can be properly received.
Let's say we compose an email that we wish to send from system A to system B. The
application we are using is Eudora. We write the letter and then hit send. Now, the
computer translates the characters into ASCII and then into binary (1s and 0s). If the
email is a long one, then it is broken up and mailed in pieces. This all happens by the
time the data reaches the transport layer.
At the network layer, a network header is added to the data. This header contains
information required to complete the transfer, such as source and destination logical
addresses.
The packet from the network layer is then passed to the data link layer where a frame
header and a frame trailer are added thus creating a data link frame.
Finally, the physical layer provides a service to the data link layer. This service
includes encoding the data link frame into a pattern of 1s and 0s for transmission on
the medium (usually a wire).
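The whole encapsulation walk-through can be condensed into a short sketch. The header strings below are placeholders to show the nesting, not real protocol fields:

```python
def encapsulate(data):
    """Wrap application data layer by layer, as described above."""
    segment = "TRANSPORT_HDR|" + data                      # transport: segment
    packet  = "NET_HDR(src,dst)|" + segment                # network: packet
    frame   = "FRAME_HDR|" + packet + "|FRAME_TRAILER"     # data link: frame
    # physical layer: encode the frame as a pattern of 1s and 0s
    bits = "".join(format(ord(c), "08b") for c in frame)
    return bits

bits = encapsulate("Hello Mom")
print(bits[:16])  # the first two characters of the frame, as bits
```

The receiving host performs the mirror image, stripping each header (de-encapsulation) as the data climbs back up its stack.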
Now let‘s take a look at each of the layers in a bit more detail and with some context.
For Layers 1 and 2, we‘re going to look at physical device addressing, and the
resolution of such addresses when they are unknown.
MAC Address
For multiple stations to share the same medium and still uniquely identify each
other, the MAC sub layer defines a hardware or data link address called the MAC
address. The MAC address is unique for each LAN interface.
On most LAN-interface cards, the MAC address is burned into ROM—hence the term,
burned-in address (BIA). When the network interface card initializes, this address is
copied into RAM.
The MAC address is a 48-bit address expressed as 12 hexadecimal digits. The first 6
hexadecimal digits of a MAC address contain a manufacturer identification (vendor
code) also known as the organizationally unique identifier (OUI). To ensure vendor
uniqueness the Institute of Electrical and Electronic Engineers (IEEE) administers
OUIs. The last 6 hexadecimal digits are administered by each vendor and often
represent the interface serial number.
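Splitting a MAC address into these two halves is straightforward; the address below uses Cisco's well-known 00000C OUI, with a made-up vendor-assigned portion:

```python
def split_mac(mac):
    """Split a 48-bit MAC address into its OUI (vendor code) and the
    vendor-administered portion."""
    digits = mac.replace(":", "").replace("-", "").lower()
    assert len(digits) == 12, "a MAC address is 12 hex digits (48 bits)"
    return digits[:6], digits[6:]

oui, serial = split_mac("00:00:0C:12:34:56")
print(oui)     # 00000c -- assigned to the vendor by the IEEE
print(serial)  # 123456 -- administered by the vendor itself
```

Tools that identify the maker of a network card do exactly this: look up the first six digits against the IEEE's OUI registry.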
Which path should traffic take through the cloud of networks? Path determination
occurs at Layer 3. The path determination function enables a router to evaluate the
available paths to a destination and to establish the preferred handling of a packet.
Data can take different paths to get from a source to a destination. At layer 3, routers
really help determine which path. The network administrator configures the router
enabling it to make an intelligent decision as to where the router should send
information through the cloud.
The network layer sends packets from source network to destination network.
After the router determines which path to use, it can proceed with switching the
packet: taking the packet it accepted on one interface and forwarding it to another
interface or port that reflects the best path to the packet‘s destination.
To be truly practical, an internetwork must consistently represent the paths of its
media connections. As the graphic shows, each line between the routers has a
number that the routers use as a network address. These addresses contain
information about the path of media connections used by the routing process to pass
packets from a source toward a destination.
The network layer combines this information about the path of media connections–
sets of links–into an internetwork by adding path determination, path switching, and
route processing functions to a communications system. Using these addresses, the
network layer also provides a relay capability that interconnects independent
networks.
The consistency of Layer 3 addresses across the entire internetwork also improves
the use of bandwidth by preventing unnecessary broadcasts which tax the system.
Each device in a local area network is given a logical address. The first part is the
network number – in this example that is a single digit – 1. The second part is a node
number, in this example we have nodes 1, 2, and 3. The router uses the network
number to forward information from one network to another.
The two-part network addressing scheme extends across all the protocols covered in
this course. How do you interpret the meaning of the address parts? What authority
allocates the addresses? The answers vary from protocol to protocol.
For example, in the TCP/IP address, dotted decimal numbers show a network part
and a host part. Network 10 uses the first of the four numbers as the network part
and the last three numbers–8.2.48 as a host address. The mask is a companion
number to the IP address. It communicates to the router the part of the number to
interpret as the network number and identifies the remainder available for host
addresses inside that network.
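The masking operation can be shown directly with the address from the text (network 10 with host 8.2.48, assuming a mask of 255.0.0.0):

```python
def split_ip(address, mask):
    """Apply the mask octet by octet to separate network from host part."""
    a = [int(octet) for octet in address.split(".")]
    m = [int(octet) for octet in mask.split(".")]
    network = [o & b for o, b in zip(a, m)]          # bits covered by the mask
    host    = [o & (255 - b) for o, b in zip(a, m)]  # the remaining bits
    return ".".join(map(str, network)), ".".join(map(str, host))

net, host = split_ip("10.8.2.48", "255.0.0.0")
print(net)   # 10.0.0.0 -- the part the router uses to forward
print(host)  # 0.8.2.48 -- identifies the host inside network 10
```

The router only examines the network part when forwarding; the host part matters only once the packet reaches the destination network.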
The Novell Internetwork Packet Exchange, or IPX, example uses a different variation of
this two-part address. The network address 1aceb0b is a hexadecimal (base 16)
number that cannot exceed a fixed maximum number of digits. The host address
0000.0c00.6e25 (also a hexadecimal number) is a fixed 48 bits long. This host
address derives automatically from information in hardware of the specific LAN
device.
These are the two most common Layer 3 address types.
Let's take a look at the flow of packets through a routed network. For example's sake,
let's say it is an e-mail message from you at Station X to your mother in Michigan
who is using System Y.
The message will exit Station X and travel through the corporate internal network
until it gets to a point where it needs the services of an Internet service provider. The
message will bounce through their network and eventually arrive at Mom‘s Internet
provider in Dearborn. Now, we have simplified this transmission to three routers,
when in actuality, it could travel through many different networks before it arrives at
its destination.
Let's take a look, from the OSI model's reference point, at what is happening to the
message as it bounces around the Internet on its way to Mom's.
As information travels from Station X it reaches the network level where a network
address is added to the packet. At the data link layer, the information is
encapsulated in an Ethernet frame. Then it goes to the router – here it is Router A –
and the router de-encapsulates and examines the frame to determine what type of
network layer data is being carried. The network layer data is sent to the appropriate
network layer process, and the frame itself is discarded.
The network layer process examines the header to determine the destination
network.
The packet is again encapsulated in the data-link frame for the selected interface and
queued for delivery.
This process occurs each time the packet switches through another router. At the
router connected to the network containing the destination host – in this case, C –
the packet is again encapsulated in the destination LAN's data-link frame type for
delivery to the protocol stack on the destination host, System Y.
Multiprotocol Routing
It is easy to confuse the similar terms routed protocol and routing protocol:
Routed protocols are what we have been talking about so far. They are any network
protocol suite that provides enough information in its network layer address to allow
a packet to be forwarded from host to host. Routed protocols define the format and
use of the fields within a packet. Packets generally are conveyed from end system to
end system. The Internet Protocol (IP) and Novell's IPX are examples of routed protocols.
Routers must be aware of what links, or lines, on the network are up and running,
which ones are overloaded, or which ones may even be down and unusable. There
are two primary methods routers use to determine the best path to a destination:
static and dynamic.
Static knowledge is administered manually: a network administrator enters it into the
router‘s configuration. The administrator must manually update this static route
entry whenever an internetwork topology change requires an update. Static
knowledge is private–it is not conveyed to other routers as part of an update process.
Dynamic knowledge works differently. After the network administrator enters
configuration commands to start dynamic routing, route knowledge is updated
automatically by a routing process whenever new topology information is received
from the internetwork. Changes in dynamic knowledge are exchanged between
routers as part of the update process.
Static route: a route that a network administrator enters into the router manually.
Dynamic route: a route that a network protocol adjusts automatically for topology or
traffic changes.
But what happens if the path between Router A and Router D fails? Obviously Router
A will not be able to relay the packet to Router D. Until Router A is reconfigured to
relay packets by way of Router B, communication with the destination network is
impossible.
Dynamic knowledge offers more automatic flexibility. According to the routing table
generated by Router A, a packet can reach its destination over the preferred route
through Router D. However, a second path to the destination is available by way of
Router B. When Router A recognizes the link to Router D is down, it adjusts its
routing table, making the path through Router B the preferred path to the
destination. The routers continue sending packets over this link.
When the path between Routers A and D is restored to service, Router A can once
again change its routing table to indicate a preference for the counter-clockwise path
through Routers D and C to the destination network.
LAN-to-LAN Routing
Example 01:-
The next two examples will bring together many of the concepts we have discussed.
The network layer must relate to and interface with various lower layers. Routers
must be capable of seamlessly handling packets encapsulated into different lower-
level frames without changing the packets‘ Layer 3 addressing.
Let‘s look at an example of this in a LAN-to-LAN routing situation. Packet traffic from
source Host 4 on Ethernet network 1 needs a path to destination Host 5 on Token
Ring Network 2. The LAN hosts depend on the router and its consistent network
addressing to find the best path.
When the router checks its router table entries, it discovers that the best path to
destination Network 2 uses outgoing port To0, the interface to a Token Ring LAN.
Although the lower-layer framing must change as the router switches packet traffic
from the Ethernet on Network 1 to the Token Ring on Network 2, the Layer 3
addressing for source and destination remains the same – in this example, Net 2,
Host 5 – despite the different lower-layer encapsulations.
The packet is then reframed and sent on to the destination Token Ring network.
LAN-to-WAN Routing
Example 02:-
The network layer must relate to and interface with various lower layers for LAN-to-
WAN traffic, as well. As an internetwork grows, the path taken by a packet might
encounter several relay points and a variety of data-link types beyond the LANs. For
example, in the graphic, a packet from the top workstation at address 1.3 must
traverse three data links to reach the file server at address 2.4 shown on the bottom:
The workstation sends a packet to the file server by encapsulating the packet in a
Token Ring frame addressed to Router A.
When Router A receives the frame, it removes the packet from the Token Ring frame,
encapsulates it in a Frame Relay frame, and forwards the frame to Router B.
Router B removes the packet from the Frame Relay frame and forwards the packet to
the file server in a newly created Ethernet frame.
When the file server at 2.4 receives the Ethernet frame, it extracts and passes the
packet to the appropriate upper-layer process through the process of de-
encapsulation.
The routers enable LAN-to-WAN packet flow by keeping the end-to-end source and
destination addresses constant while encapsulating the packet at the port to a data
link that is appropriate for the next hop along the path.
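The re-framing at each hop can be sketched in a few lines. This is a toy model – the frame types are just labels, not real Token Ring, Frame Relay, or Ethernet headers:

```python
# Illustrative sketch: the Layer 3 packet, with its end-to-end source and
# destination addresses, stays constant while each hop wraps it in a new
# data-link frame for the next link.

def encapsulate(packet, frame_type):
    return {"frame": frame_type, "payload": packet}

def decapsulate(frame):
    return frame["payload"]

packet = {"src": "1.3", "dst": "2.4", "data": "hello"}

# Workstation -> Router A over Token Ring
frame = encapsulate(packet, "TokenRing")
# Router A -> Router B over Frame Relay
frame = encapsulate(decapsulate(frame), "FrameRelay")
# Router B -> file server over Ethernet
frame = encapsulate(decapsulate(frame), "Ethernet")

delivered = decapsulate(frame)
print(delivered["src"], delivered["dst"])  # 1.3 2.4 -- unchanged end to end
```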
Let‘s look at the upper layers of the OSI seven layer model now. Those layers are the
transport, session, presentation, and application layers.
Transport Layer
The transport layer has several functions. First, it segments upper layer application
information. You might have more than one application running on your desktop at a
time. You might have electronic mail open while transferring a file from the
Web and running a terminal session. The transport layer helps keep straight all of
the information coming from these different applications.
Another function of the transport layer is to establish the connection from your
system to another system. When you are browsing the Web and double-click on a
link your system tries to establish a connection with that host. Once the connection
has been established, there is some negotiation that happens between your system
and the system that you are connected to in terms of how data will be transferred.
Once the negotiations are completed, data will begin to transfer. As soon as the data
transfer is complete, the receiving station will send you the end message and your
browser will say done. Essentially, the transport layer is responsible then for
connecting and terminating sessions from your host to another host.
Transport Layer— Sends Segments with Flow Control
Another important function of the transport layer is to send segments and maintain
the sending and receiving of information with flow control.
When a connection is established, the sending host begins to transmit segments to
the receiver. When segments arrive too quickly for a host to process, it stores them in
memory temporarily. If the segments are part of a small burst, this buffering solves
the problem. If the traffic continues, the host or gateway eventually exhausts its
memory and must discard additional segments that arrive.
Instead of losing data, the transport function can issue a not ready indicator to the
sender. Acting like a stop sign, this indicator signals the sender to discontinue
sending segment traffic to its peer. After the receiver has processed sufficient
segments that its buffers can handle additional segments, the receiver sends a ready
transport indicator, which is like a go signal. When it receives this indicator, the
sender can resume segment transmission.
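The not-ready/ready exchange might be modeled like this. The buffer size and indicator strings are invented for illustration; real transport protocols advertise window sizes rather than simple stop/go flags:

```python
# Toy model of transport-layer flow control: the receiver buffers segments
# and signals "not ready" when its buffer fills, "ready" once it drains.

class Receiver:
    def __init__(self, buffer_size):
        self.buffer = []
        self.buffer_size = buffer_size

    def accept(self, segment):
        if len(self.buffer) >= self.buffer_size:
            return "not ready"      # stop sign: sender must pause
        self.buffer.append(segment)
        return "ready"              # go signal: keep sending

    def process(self, n):
        # Consuming buffered segments frees space again.
        del self.buffer[:n]

rx = Receiver(buffer_size=2)
print(rx.accept("seg1"))  # ready
print(rx.accept("seg2"))  # ready
print(rx.accept("seg3"))  # not ready -- buffer full
rx.process(2)
print(rx.accept("seg3"))  # ready again after processing
```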
Reliable delivery guarantees that a stream of data sent from one machine will be
delivered through a functioning data link to another machine without duplication or
data loss. Positive acknowledgment with retransmission is one technique that
guarantees reliable delivery of data streams. Positive acknowledgment requires a
receiving system or receiver to communicate with the source, sending back an
acknowledgment message when it receives data. The sender keeps a record of each
packet it sends and waits for an acknowledgment before sending the next packet.
In this example, the sender is transmitting packets 1, 2, and 3. The receiver
acknowledges receipt of the packets by requesting packet number 4. The sender,
upon receiving the acknowledgment sends packets 4, 5, and 6. If packet number 5
does not arrive at the destination, the receiver acknowledges with a request to resend
packet number 5. The sender resends packet number 5 and must receive an
acknowledgment to continue with the transmission of packet number 7.
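The packet 5 scenario above can be sketched as a small simulation. The packet numbers and single-drop behavior follow the example in the text; everything else is invented for illustration:

```python
# Sketch of positive acknowledgment with retransmission: the receiver's ACK
# is the number of the next packet it needs, and a lost packet is simply
# requested again. Packet 5 is dropped on its first attempt.

def send_stream(last_packet, drop_first_attempt):
    received = []
    next_needed = 1          # the ACK value the receiver keeps sending back
    attempts = {}
    while next_needed <= last_packet:
        seq = next_needed
        attempts[seq] = attempts.get(seq, 0) + 1
        if seq in drop_first_attempt and attempts[seq] == 1:
            continue          # packet lost in transit; receiver re-requests it
        received.append(seq)
        next_needed += 1
    return received, attempts

received, attempts = send_stream(7, drop_first_attempt={5})
print(received)     # [1, 2, 3, 4, 5, 6, 7]
print(attempts[5])  # 2 -- packet 5 was retransmitted once
```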
Transport to Network Layer
The transport layer assumes it can use the network as a given "cloud" as segments
cross from sender source to receiver destination.
If we open up the functions inside the "cloud," we reveal issues like, "Which of several
paths is best for a given route?" We see the role that routers perform in this process,
and we see the segments of Layer 4 transport further encapsulated into packets.
Session Layer
The session layer establishes, manages, and terminates sessions among applications.
This layer is primarily concerned with coordinating applications as they interact on
different hosts. Some popular session layer protocols are listed here: Network File
System (NFS), Structured Query Language (SQL), and the X Window System; even
the AppleTalk Session Protocol is part of the session layer.
Presentation Layer
The presentation layer is primarily concerned with the format of the data. Data and
text can be formatted as ASCII files or as EBCDIC files, or can even be encrypted.
Sound may become a MIDI file. Video files can be formatted as MPEG video files or
QuickTime files. Graphics and visual images can be formatted as PICT, TIFF, JPEG,
or even GIF files. So that is really what happens at the presentation layer.
Application Layer
The application layer is the highest level of the seven layer model. Computer
applications that you use on your desktop everyday, applications like word
processing, presentation graphics, spreadsheets, and database management, all
sit above the application layer. Network applications and internetwork applications
allow you, as the user, to move computer application files through the network and
through the internetwork.
Examples:-
COMPUTER APPLICATIONS
- Word Processor
- Presentation Graphics
- Spreadsheet
- Database
- Design/Manufacturing
- Project Planning
- Others
NETWORK APPLICATIONS
- Electronic Mail
- File Transfer
- Remote Access
- Client-Server Process
- Information Location
- Network Management
- Others
INTERNETWORK APPLICATIONS
- SUMMARY -
- Layers 4–7 (host layers) provide accurate data delivery between computers
- Layers 1–3 (media layers) control physical delivery of data over the network
The OSI reference model describes what must transpire for program to program
communications to occur between even dissimilar computer systems. Each layer is
responsible to provide information and pointers to the next higher layer in the OSI
Reference Model.
The Application Layer (which is the highest layer in the OSI model) makes available
network services to actual software application programs.
The presentation layer is responsible for formatting and converting data and ensuring
that the data is presentable for one application through the network to another
application.
The session layer is responsible for coordinating communication interactions between
applications. The transport layer is responsible for segmenting and multiplexing
information – keeping straight all the various applications you might be using on
your desktop – as well as for synchronizing the connection, flow control, and error
recovery, and for reliability through the process of windowing. The network layer is
responsible for addressing and path determination.
The link layer provides reliable transit of data across a physical link. And finally the
physical layer is concerned with binary transmission.
This lesson provides an introduction to TCP/IP. I am sure you‘ve heard of TCP/IP…
though you may wonder why you need to understand it. Well, TCP/IP is the language
that governs communications between all computers on the Internet. A basic
understanding of TCP/IP is essential to understanding Internet technology and how
it can bring benefits to an organization.
We‘re going to explain what TCP/IP is and the different parts that make it up. We‘ll
also discuss IP addresses.
The Agenda
- What Is TCP/IP?
- IP Addressing
What Is TCP/IP?
TCP/IP is shorthand for a suite of protocols that run on top of IP. IP is the Internet
Protocol, and TCP is the most important protocol that runs on top of IP. Any
application that can communicate over the Internet is using IP, and these days most
internal networks are also based on TCP/IP.
Protocols that run on top of IP include: TCP, UDP and ICMP. Most TCP/IP
implementations support all three of these protocols. We‘ll talk more about them
later.
Protocols that run underneath IP include: SLIP and PPP. These protocols allow IP to
run across telecommunications lines.
TCP/IP protocols work together to break data into packets that can be routed
efficiently by the network. In addition to the data, packets contain addressing,
sequencing, and error checking information. This allows TCP/IP to accurately
reconstruct the data at the other end.
Here‘s an analogy of what TCP/IP does. Say you‘re moving across the country. You
pack your boxes and put your new address on them. The moving company picks
them up, makes a list of the boxes, and ships them across the country using the
most efficient route. That might even mean putting different boxes on different
trucks. When the boxes arrive at your new home, you check the list to make sure
everything has arrived (and in good shape), and then you unpack the boxes and
"reassemble" your house.
- A suite of protocols
- Rules that dictate how packets of information are sent across multiple networks
- Addressing
- Error checking
IP
After TCP/IP was invented and deployed, the OSI layered network model was
accepted as a standard. OSI neatly divides network protocols into seven layers; the
bottom four layers are shown in this diagram. The idea was that TCP/IP was an
interesting experiment, but that it would be replaced by protocols based on the OSI
model.
As it turned out, TCP/IP grew like wildfire, and OSI-based protocols only caught on
in certain segments of the manufacturing community. These days, while everyone
uses TCP/IP, it is common to use the OSI vocabulary.
TCP/IP Applications
- Application layer
- Transport layer
- Network layer
Roughly, Ethernet corresponds to both the physical layer and the data link layer.
Other media (T1, Frame Relay, ATM, ISDN, analog) and other protocols (SLIP, PPP)
are down here as well.
Roughly, IP corresponds to the network layer.
Roughly, TCP and UDP correspond to the transport layer.
TCP is the most important of all the IP protocols. Most Internet applications you can
think of use TCP, including: Telnet, HTTP (Web), POP & SMTP (email) and FTP (file
transfer).
Every TCP connection is uniquely identified by four values:
- source IP address
- source port
- destination IP address
- destination port
Typically, a client will use a random port number, but a server will use a
"well-known" port number, e.g. 25=SMTP (email), 80=HTTP (Web) and so on. Because every
TCP connection is unique, even though many people may be making requests to the
same Web server, TCP/IP can identify your packets among the crowd.
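A sketch of that demultiplexing, keyed on the four-value tuple (the addresses and port numbers here are made up):

```python
# Sketch: a host tells simultaneous connections to the same web server
# apart because each is keyed by (src IP, src port, dst IP, dst port).

connections = {}

def register(src_ip, src_port, dst_ip, dst_port, data):
    key = (src_ip, src_port, dst_ip, dst_port)
    connections.setdefault(key, []).append(data)

# Two clients on the same machine talking to the same server, port 80;
# only the client's ephemeral source port differs:
register("192.1.1.17", 49152, "10.0.0.1", 80, "GET /a")
register("192.1.1.17", 49153, "10.0.0.1", 80, "GET /b")

print(len(connections))  # 2 -- distinct connections despite the same server
```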
In addition to the port information, each TCP packet has a sequence number. Packets
may arrive out of sequence (they may have been routed differently, or one may have
been dropped), so the sequence numbers allow TCP to reassemble the packets in the
correct order and to request retransmission of any missing packets.
TCP packets also include a checksum to verify the integrity of the data. Packets that
fail checksum get retransmitted.
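Sequencing and checksumming together might be sketched like this. The checksum here is a simple stand-in, not the real Internet checksum, and where this toy discards a corrupted segment, real TCP would have it retransmitted:

```python
# Sketch of reassembly: packets carry a sequence number and a checksum;
# out-of-order packets are sorted, and corrupted ones are filtered out.

def checksum(data):
    # Toy checksum (sum of bytes mod 256), standing in for TCP's real one.
    return sum(data.encode()) % 256

def reassemble(packets):
    good = [p for p in packets if checksum(p["data"]) == p["sum"]]
    good.sort(key=lambda p: p["seq"])
    return "".join(p["data"] for p in good)

packets = [
    {"seq": 2, "data": "wor", "sum": checksum("wor")},
    {"seq": 1, "data": "hello ", "sum": checksum("hello ")},
    {"seq": 3, "data": "ld", "sum": checksum("ld")},
]
print(reassemble(packets))  # hello world
```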
UDP, the User Datagram Protocol, is the other major transport protocol that runs on
top of IP:
- Unreliable
- Fast
- Assumes application will retransmit on error
- Often used in diskless workstations
ICMP Ping
Ping is an example of a program that uses ICMP rather than TCP or UDP. Ping sends
an ICMP echo request from one system to another, then waits for an ICMP echo reply.
It is mostly used for testing.
IPv4 Addressing
Most IP addresses today use IP version 4—we‘ll talk about IP version 6 later.
IPv4 addresses are 32 bits long and are usually written in "dot" notation. An example
would be 192.1.1.17.
The Internet is actually a lot of small local networks connected together. Part of an IP
address identifies which local network, and part of an IP address identifies a specific
system or host on that local network.
What part of an IP address is for the "network" and what part is for the "host" is
determined by the class or the subnet.
IP Addressing—Three Classes
- Class A: NET.HOST.HOST.HOST
- Class B: NET.NET.HOST.HOST
- Class C: NET.NET.NET.HOST
Before the introduction of subnet masks, the only way to tell the network part of an
IP address from the host part was by its class.
Class A addresses have 8 bits (one octet) for the network part and 24 bits for the host
part. This allows for a small number of large networks.
Class B addresses have 16 bits each for the network and host parts.
Class C addresses have 24 bits for the network and 8 bits for the host. This allows for
a fairly large number of networks with up to 254 systems on each.
To summarize:
IPv4 addresses are 32 bits with a network part and a host part.
Unless you are using subnets, you divide an IP address into the network and host
parts based on the address class.
The network part of an address is used for routing packets over the Internet. The
host part is used for final delivery on the local net.
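The classful split can be expressed directly in code. This sketch follows the octet boundaries described above; the function names are ours:

```python
# Classful addressing sketch: the first octet alone determines how many
# octets belong to the network part.

def address_class(ip):
    first = int(ip.split(".")[0])
    if first < 128:
        return "A", 1      # one network octet, three host octets
    if first < 192:
        return "B", 2
    if first < 224:
        return "C", 3
    return "other", None   # multicast and reserved ranges

def split(ip):
    cls, n = address_class(ip)
    octets = ip.split(".")
    return cls, ".".join(octets[:n]), ".".join(octets[n:])

print(split("10.222.135.17"))    # ('A', '10', '222.135.17')
print(split("128.128.141.245"))  # ('B', '128.128', '141.245')
print(split("192.150.12.1"))     # ('C', '192.150.12', '1')
```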
IP Addressing—Class A
Here‘s an example of a class A address. Any IPv4 address in which the first octet is
less than 128 is by definition a class A address.
This address is for host #222.135.17 on network #10, although the host is always
referred to by its full address.
Example:- 10.222.135.17
- Network # 10
- Host # 222.135.17
- Range of class A network IDs: 1–126
- Number of available hosts: 16,777,214
IP Addressing—Class B
Here‘s an example of a class B address. Any IPv4 address in which the first octet is
between 128 and 191 is by definition a class B address
Example:- 128.128.141.245
- Network # 128.128
- Host # 141.245
- Range of class B network IDs: 128.1–191.254
- Number of available hosts: 65,534
IP Addressing—Class C
Here‘s an example of a class C address. Most IPv4 addresses in which the first octet
is 192 or higher are class C addresses, but some of the higher ranges are reserved for
multicast applications.
Example:- 192.150.12.1
-Network # 192.150.12
-Host # 1
-Range of class C network IDs: 192.0.1–223.255.254
-Number of available hosts: 254
IP Subnetting
As it turns out, dividing IP addresses into classes A, B and C is not flexible enough.
In particular, it does not make efficient use of the available IP addresses and it does
not give network administrators enough control over their internal LAN
configurations.
In this diagram, the class B network 131.108 is split (probably into 256 subnets),
and a router connects the 131.108.2 subnet to the 131.108.3 subnet.
IP Subnet Mask
A subnet mask tells a computer or a router how to divide a range of IP addresses into
the network part and the host part.
Given:
Address = 131.108.2.160
Subnet Mask = 255.255.255.0
Subnet = 131.108.2.0
In this example, without a subnet mask the address would be treated as class B and
the network number would be 131.108. But because someone supplied a subnet
mask of 255.255.255.0, the network number is actually 131.108.2.
These days, routers and computers always use subnet masks if they are supplied. If
there is no subnet mask for an address, then the class A, B, C scheme is used.
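The mask arithmetic in the example can be checked with Python's standard ipaddress module – the network number is simply the bitwise AND of the address and the mask:

```python
# The subnet calculation from the example above: address AND mask = subnet.
import ipaddress

addr = ipaddress.ip_address("131.108.2.160")
mask = ipaddress.ip_address("255.255.255.0")
subnet = ipaddress.ip_address(int(addr) & int(mask))
print(subnet)  # 131.108.2.0

# The same split expressed as a network object:
print(ipaddress.ip_network("131.108.2.160/255.255.255.0", strict=False))
# 131.108.2.0/24
```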
IP Address Assignment
IPv6 Addressing
- 128-bit addresses
- 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses
Example1:- 5F1B:DF00:CE3E:E200:0020:0800:5AFC:2B36
Example2:- 0:0:0:0:0:0:192.1.1.17
With the explosive growth of the Internet, there are not enough IPv4 addresses to go
around. IPv6 is now released, and many organizations are already migrating.
While IPv6 has a number of nice features, its biggest claim to fame is a huge number
of IP addresses. IPv4 was only 32 bits; IPv6 is 128 bits.
To ease migration, IPv6 completely contains all of IPv4, as shown in the second
example above.
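That containment can be verified with the standard ipaddress module: the all-zeros-prefix form of an IPv6 address carries the IPv4 address in its low 32 bits:

```python
# The second example above (0:0:0:0:0:0:192.1.1.17) written in shorthand:
import ipaddress

v6compat = ipaddress.IPv6Address("::192.1.1.17")
print(int(v6compat) == int(ipaddress.IPv4Address("192.1.1.17")))  # True

# A full 128-bit address parses the same way (printed in compressed form):
print(ipaddress.IPv6Address("5F1B:DF00:CE3E:E200:0020:0800:5AFC:2B36"))
```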
Most network applications will have to be modified slightly to accommodate IPv6.
In this lesson, we will cover the fundamentals of LAN technologies. We‘ll look at
Ethernet, Token Ring, and FDDI. For each one, we‘ll look at the technology as well as
its operations.
The Agenda
- Ethernet
- Token Ring
- FDDI
The three LAN technologies shown here account for virtually all deployed LANs:
The most popular local area networking protocol today is Ethernet. Most network
administrators building a network from scratch use Ethernet as a fundamental
technology.
Token Ring technology is widely used in IBM networks.
FDDI networks are popular for campus LANs – and are usually built to support high
bandwidth needs for backbone connectivity.
Ethernet
Ethernet was initially developed by Xerox. They were later joined by Digital
Equipment Corporation (DEC) and Intel to define the Ethernet 1 specification in
1980. There have been further revisions including the Ethernet standard (IEEE
Standard 802.3) which defines rules for configuring Ethernet as well as specifying
how elements in an Ethernet network interact with one another.
Ethernet is the most popular physical layer LAN technology because it strikes a good
balance between speed, cost, and ease of installation. These strong points, combined
with wide acceptance in the computer marketplace and the ability to support
virtually all popular network protocols, make Ethernet an ideal networking
technology for most computer users today.
The Fast Ethernet standard (IEEE 802.3u) has been established for networks that
need higher transmission speeds. It raises the Ethernet speed limit from 10 Mbps to
100 Mbps with only minimal changes to the existing cable structure. Incorporating
Fast Ethernet into an existing configuration presents a host of decisions for the
network manager. Each site in the network must determine the number of users that
really need the higher throughput, decide which segments of the backbone need to be
reconfigured specifically for 100BaseT and then choose the necessary hardware to
connect the 100BaseT segments with existing 10BaseT segments.
Gigabit Ethernet is an extension of the IEEE 802.3 Ethernet standard. It increases
speed tenfold over Fast Ethernet, to 1000 Mbps, or 1 Gbps.
- Ethernet is the most popular physical layer LAN technology because it strikes a
good balance between speed, cost, and ease of installation
- Supports virtually all network protocols
- Xerox initiated, then joined by DEC & Intel in 1980
- Fast Ethernet (IEEE 802.3u) raises speed from 10 Mbps to 100 Mbps
- Gigabit Ethernet is an extension of IEEE 802.3 which increases speeds to 1000
Mbps, or 1 Gbps
One thing to keep in mind with Ethernet is that several framing variations exist for
this common LAN technology.
These differences do not prevent manufacturers from developing network interface
cards that support the common physical layer, and software that recognizes the
differences between the data-link framings.
Ethernet protocol names follow a fixed scheme. The number at the beginning of the
name indicates the wire speed. If the word "base" appears next, the protocol is for
baseband applications. If the word "broad" appears, the protocol is for broadband
applications. The alphanumeric code at the end of the name indicates the type of
cable and, in some cases, the cable length. If a number appears alone, you can
determine the maximum segment length by multiplying that number by 100 meters.
For example 10Base2 is a protocol with a maximum segment length of approximately
200 meters (2 x 100 meters).
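The naming mnemonic can be captured in a short parser. Note this follows the rule of thumb in the text, not the letter of the standards (for instance, 10Base2's real maximum segment is about 185 m, despite the name):

```python
# Sketch of the Ethernet naming scheme: <speed><Base|Broad><cable code>,
# where a numeric cable code times 100 m gives the approximate segment length.
import re

def parse_ethernet_name(name):
    m = re.match(r"(\d+)(Base|Broad)(\w+)", name, re.IGNORECASE)
    speed_mbps = int(m.group(1))
    signaling = "baseband" if m.group(2).lower() == "base" else "broadband"
    suffix = m.group(3)
    max_segment_m = int(suffix) * 100 if suffix.isdigit() else None
    return speed_mbps, signaling, suffix, max_segment_m

print(parse_ethernet_name("10Base2"))   # (10, 'baseband', '2', 200)
print(parse_ethernet_name("100BaseT"))  # (100, 'baseband', 'T', None)
```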
Ethernet and Fast Ethernet
This chart gives you an idea of the range of Ethernet protocols including their data
rate, maximum segment length, and medium.
Ethernet has survived as an essential media technology because of its tremendous
flexibility and its relative simplicity to implement and understand. Although other
technologies have been touted as likely replacements, network managers have turned
to Ethernet and its derivatives as effective solutions for a range of campus
implementation requirements. To resolve Ethernet‘s limitations, innovators (and
standards bodies) have created progressively larger Ethernet pipes. Critics might
dismiss Ethernet as a technology that cannot scale, but its underlying transmission
scheme continues to be one of the principal means of transporting data for
contemporary campus applications.
The most popular today are 10BaseT and 100BaseT – 10 Mbps and 100 Mbps
respectively – using UTP wiring.
Example:-
Let‘s say in our example here that station A is going to send information to station D.
Station A will listen through its NIC card to the network. If no other users are using
the network, station A will go ahead and send its message out on to the network.
Stations B and C and D will all receive the communication.
At the data link layer, each station will inspect the frame's destination MAC address.
Upon inspection, station D will see that the MAC address matches its own and will
process the information up through the rest of the layers of the seven layer model.
As for stations B and C, they too will pull this frame up to their data link layers and
inspect the MAC address. Upon inspection they will see that the destination MAC
address does not match their own, and they will proceed to discard the frame.
Ethernet Broadcast
Broadcasting is a powerful tool that sends a single frame to many stations at the
same time. Broadcasting uses a data link destination address of all 1s. In this
example, station A transmits a frame with a destination address of all 1s, and
stations B, C, and D all receive it and pass the frame to their respective upper layers
for further
processing.
When improperly used, however, broadcasting can seriously impact the performance
of stations by interrupting them unnecessarily. For this reason, broadcasts should be
used only when the MAC address of the destination is unknown or when the
destination is all stations.
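The accept-or-discard decision at each NIC, including the all-1s broadcast case, might be sketched as follows (the MAC addresses are illustrative):

```python
# Sketch of the NIC filtering decision: accept a frame if its destination
# MAC matches our own address or is the all-1s broadcast address.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def nic_accepts(own_mac, frame_dst):
    return frame_dst.lower() in (own_mac.lower(), BROADCAST)

station_b = "00:00:0c:00:6e:25"
print(nic_accepts(station_b, "00:00:0c:00:6e:25"))  # True  -- addressed to B
print(nic_accepts(station_b, "00:00:0c:00:6e:26"))  # False -- B discards it
print(nic_accepts(station_b, BROADCAST))            # True  -- broadcast
```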
Ethernet Reliability
Ethernet is known as a very reliable local area networking protocol. In this
example, A is transmitting information and B also has information to transmit. Let's
say that A and B listen to the network, hear no traffic, and transmit at the same time.
A collision occurs when these two packets crash into one another on the network.
Both transmissions are corrupted and unusable.
When a collision occurs on the network, the NIC card sensing the collision, in this
case, station C sends out a jam signal that jams the entire network for a designated
amount of time.
Once the jam signal has been received and recognized by all of the stations on the
network, stations A and B will both back off for different amounts of time before they
try to retransmit. This type of technology is known as Carrier Sense Multiple Access
with Collision Detection – CSMA/CD.
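The "back off for different amounts of time" step is commonly implemented as truncated binary exponential backoff. A toy sketch, with abstract slot-time units:

```python
# Toy CSMA/CD backoff: after the nth collision in a row, a station waits a
# random number of slot times drawn from a window that doubles per attempt,
# capped at 1024 slots (truncated binary exponential backoff).
import random

def backoff_slots(attempt, rng):
    window = min(2 ** attempt, 1024)
    return rng.randrange(window)   # wait 0 .. window-1 slot times

rng = random.Random(42)  # seeded only so the sketch is reproducible
a = backoff_slots(1, rng)
b = backoff_slots(1, rng)
print(a, b)  # independently chosen waits make a repeat collision unlikely
```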
We‘ve mentioned that Ethernet also has high speed options that are currently
available. Fast Ethernet is used widely at this point and provides customers with 100
Mbps performance, a ten-fold increase. Fast EtherChannel is a Cisco value-added
feature that provides bandwidth up to 800 Mbps. There is now a standard for Gigabit
Ethernet as well and Cisco provides Gigabit Ethernet solutions with 1000 Mbps
performance.
Grouping of multiple Fast Ethernet interfaces into one logical transmission path
Fast EtherChannel provides a solution for network managers who require higher
bandwidth between servers, routers, and switches than Fast Ethernet technology can
currently provide.
Fast EtherChannel is the grouping of multiple Fast Ethernet interfaces into one
logical transmission path providing parallel bandwidth between switches, servers,
and Cisco routers. Fast EtherChannel provides bandwidth aggregation by combining
parallel 100-Mbps Ethernet links (200-Mbps full-duplex) to provide flexible,
incremental bandwidth between network devices.
For example, network managers can deploy Fast EtherChannel consisting of pairs of
full-duplex Fast Ethernet to provide 400+ Mbps between the wiring closet and the
data center, while in the data center bandwidths of up to 800 Mbps can be provided
between servers and the network backbone to provide large amounts of scalable
incremental bandwidth.
Cisco‘s Fast EtherChannel technology builds upon standards-based 802.3 full-duplex
Fast Ethernet. It is supported by industry leaders such as Adaptec, Compaq, Hewlett-
Packard, Intel, Micron, Silicon Graphics, Sun Microsystems, and Xircom and is
scalable to Gigabit Ethernet in the future.
The Gigabit Ethernet spec addresses three forms of transmission media, though not
all are available yet.
Token Ring
The Token Ring network was originally developed by IBM in the 1970s. It is still
IBM‘s primary LAN technology and is second only to Ethernet in general LAN
popularity. The related IEEE 802.5 specification is almost identical to and completely
compatible with IBM‘s Token Ring network.
Collisions cannot occur in Token Ring networks. Possession of the token grants the
right to transmit. If a node receiving the token has no information to send, it passes
the token to the next end station. Each station can hold the token for a maximum
period of time.
Token-passing networks are deterministic, which means that it is possible to
calculate the maximum time that will pass before any end station will be able to
transmit. This feature and several reliability features make Token Ring networks
ideal for applications where delay must be predictable and robust network operation
is important. Factory automation environments are examples of such applications.
Token Ring is more difficult and costly to implement than Ethernet. However, as the
number of users in a network rises, Token Ring's performance drops very little. In
contrast,
Ethernet‘s performance drops significantly as more users are added to the network.
Here are some of the speeds associated with Token Ring. Note that Token Ring runs
at 4 Mbps or 16 Mbps. Today, most networks operate at 16 Mbps. If a network
contains even one component with a maximum speed of 4 Mbps, the whole network
must operate at that speed.
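The slowest-component rule can be expressed as a trivial calculation (a hypothetical sketch; the function name is made up and this is not part of any real Token Ring tooling):

```python
# The ring runs at the speed of its slowest attached component (4 or 16 Mbps).
def ring_speed(component_speeds_mbps):
    """A Token Ring operates at the minimum speed of any attached component."""
    return min(component_speeds_mbps)

print(ring_speed([16, 16, 16]))  # all 16 Mbps gear -> 16
print(ring_speed([16, 4, 16]))   # one 4 Mbps component drags the ring to 4
```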
When Ethernet first came out, networking professionals believed that Token Ring
would die, but this has not happened. Token Ring is primarily used with IBM
networks running Systems Network Architecture (SNA) networking operating
systems. Token Ring has not yet left the market because of the huge installed base of
IBM mainframes being used in industries such as banking.
The practical difference between Ethernet and Token Ring is that Ethernet is much
cheaper and simpler. However, Token Ring is more elegant and robust.
Token Ring Topology
The logical topology of an 802.5 network is a ring in which each station receives
signals from its nearest active upstream neighbor (NAUN) and repeats those signals
to its downstream neighbor. Physically, however, 802.5 networks are laid out as
stars, with each station connecting to a central hub called a multistation access unit
or MAU. The stations connect to the central hub through shielded or unshielded
twisted-pair wire.
Typically, a MAU connects up to eight Token Ring stations. If a Token Ring network
consists of more stations than a MAU can handle, or if stations are located in
different parts of a building–for example on different floors–MAUs can be chained
together to create an extended ring. When installing an extended ring, you must
ensure that the MAUs themselves are oriented in a ring. Otherwise, the Token Ring
will have a break in it and will not operate.
Station access to a Token Ring is deterministic; a station can transmit only when it
receives a special frame called a token. One station on a Token Ring network is
designated as the active monitor, and it prepares the token, a short frame whose bits
are significant to every network interface card on the ring. The active monitor passes
the token into the multistation access unit, which then passes it to the first
downstream neighbor.
Let‘s say in this example that station A has something to transmit. Station A will
seize the token and append its data to the token. Station A will then send its token
back to the multistation access unit. The MAU will then grab the token and push it to
the next downstream neighbor. This process is followed until the token reaches the
destination for which it is intended.
If a station receiving the token has no information to send, it simply passes the token
to the next station. If a station possessing the token has information to transmit, it
claims the token by altering one bit of the frame, the T bit. The station then appends
the information it wishes to transmit and sends the information frame to the next
station on the Token Ring.
The information frame circulates the ring until it reaches the destination station,
where the frame is copied by the station and tagged as having been copied. The
information frame continues around the ring until it returns to the station that
originated it, and is removed.
Because frames proceed serially around the ring, and because a station must claim
the token before transmitting, collisions are not expected in a Token Ring network.
Broadcasting is supported in the form of a special mechanism known as explorer
packets. These are used to locate a route to a destination through one or more source
route bridges.
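The token-passing behavior described above can be sketched in a few lines of Python (an illustrative simulation with made-up class and function names, not a real Token Ring implementation; timers, the active monitor, and frame formats are omitted):

```python
# Hypothetical sketch of token passing on a ring: only the station holding
# the token may transmit, so collisions cannot occur.

class RingStation:
    def __init__(self, name, queued_frames=None):
        self.name = name
        self.queue = list(queued_frames or [])

def circulate_token(stations, rounds=1):
    """Pass the token around the ring in order; the holder transmits one frame."""
    transmissions = []
    for _ in range(rounds):
        for station in stations:      # the token visits each station in turn
            if station.queue:         # a holder with data seizes the token
                frame = station.queue.pop(0)
                transmissions.append((station.name, frame))
            # otherwise the token is simply passed to the next station
    return transmissions

ring = [RingStation("A", ["hello B"]), RingStation("B"), RingStation("C", ["hi A"])]
print(circulate_token(ring))  # [('A', 'hello B'), ('C', 'hi A')]
```

Because only the token holder transmits, the two queued frames go out one at a time, never simultaneously, which is why collisions are not expected on a Token Ring.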
- Token Ring Summary -
- 4- or 16-Mbps transport
FDDI uses a dual-ring architecture. Traffic on each ring flows in opposite directions
(called counter-rotating). The dual-rings consist of a primary and a secondary ring.
During normal operation, the primary ring is used for data transmissions, and the
secondary ring remains idle. The primary purpose of the dual rings is to provide
superior reliability and robustness.
One of the unique characteristics of FDDI is that multiple ways exist to connect
devices to the ring. FDDI defines three types of devices: single-attachment stations
(SAS) such as PCs, dual-attachment stations (DAS) such as routers and servers, and
concentrators.
- Dual-ring architecture
- Components
- FDDI concentrator
Example:-
- Features
- Security, reliability, and performance are enhanced because fiber does not emit
electrical signals
- Much higher bandwidth than copper
- Summary -
- Ethernet
- Token Ring
- FDDI
We'll begin by looking at traditional shared LAN technologies. We'll then look at LAN
switching basics, and then some key switching technologies, such as spanning tree
and multicast controls.
The earliest local area network technologies that were installed widely were either
thick Ethernet or thin Ethernet infrastructures, and it's important to understand
some of the limitations of these to see where we're at today with LAN switching. With
thick Ethernet installations there were some important limitations, such as distance.
Early thick Ethernet networks were limited to only 500 meters before the signal
degraded. In order to extend beyond the 500-meter distance, installers had to add
repeaters to boost and amplify that signal. There were also limitations on the
number of stations and servers we could have on our network, as well as the
placement of those workstations on the network.
The cable itself was relatively expensive. It was also large in diameter, which made it
more challenging to install throughout a building as it was pulled through walls and
ceilings. As far as adding new users, it was relatively simple: a technician could use
what was known as a non-intrusive tap to plug in a new station anywhere along the
cable. In terms of capacity, thick Ethernet provided 10 megabits per second, but this
was shared bandwidth, meaning that the 10 megabits was shared amongst all users
on a given segment.
As you can see indicated in the diagram on the left, Ethernet is fundamentally what
we call a shared technology; that is, all users of a given LAN segment are fighting for
the same amount of bandwidth. This is very similar to the cars in our diagram, all
trying to get onto the freeway at once, and it's really what our frames, or packets, do
as we try to make transmissions on our Ethernet network. The same thing occurs on
a hub. Even though each device has its own cable segment connecting into the hub,
all of the devices are still fighting for the same fixed amount of bandwidth. Some
common terms associated with hubs are Ethernet concentrators and Ethernet
repeaters; a hub is basically a self-contained Ethernet segment within a box. So while
physically it looks like every workstation has its own segment, they're all
interconnected inside the hub, and it's still a shared Ethernet technology. Hubs are
also passive devices, meaning that they're virtually transparent to the end users; they
play no role in forwarding decisions and provide no segmentation within the network.
This is because they work at Layer 1 of the OSI framework.
It's also important to understand fundamentally how transmissions can occur in the
network. There are basically three ways we can communicate in the network:
unicast, broadcast, and multicast. The most common is the unicast transmission, in
which one transmitter tries to reach one receiver. This is by far the most common, or
hopefully the most common, form of communication in our network.
Now, in terms of broadcast, it's relatively easy to broadcast in a network, and that's
a transmission mechanism many protocols use to communicate certain information,
such as address resolution. Address resolution is something all protocols need in
order to map logical Layer 3 addresses to Layer 2 MAC addresses. For example, in an
IP network we use ARP, the Address Resolution Protocol, which allows us to map
Layer 3 IP addresses down to Layer 2 MAC-layer addresses. Routing protocol
information is also distributed by way of broadcasting, and some key network
services rely on broadcast mechanisms as well.
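The address-resolution idea can be sketched as a small cache (a hypothetical illustration; the addresses and function names are made up, and real ARP involves actual broadcast frames and cache timeouts):

```python
# Sketch of an ARP-style cache mapping Layer 3 IP addresses to Layer 2 MACs.
arp_cache = {}

def resolve(ip, broadcast_reply):
    """Return the MAC for ip, 'broadcasting' an ARP request only on a cache miss."""
    if ip not in arp_cache:
        # In a real network this request goes to the broadcast MAC
        # ff:ff:ff:ff:ff:ff, and every station on the segment processes it.
        arp_cache[ip] = broadcast_reply(ip)
    return arp_cache[ip]

# Pretend the owner of the IP answers the broadcast (made-up addresses):
reply = lambda ip: {"10.0.0.5": "00:a0:c9:14:c8:29"}[ip]
print(resolve("10.0.0.5", reply))   # cache miss: a broadcast goes out
print(resolve("10.0.0.5", reply))   # cache hit: no broadcast needed
```

The second lookup is answered from the cache, which is exactly why broadcast load matters: without caching, every conversation would start with traffic that every station must process.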
And it doesn't really matter what our protocol is; whether it's AppleTalk, Novell IPX,
or TCP/IP, all of these Layer 3 protocols rely on the broadcast mechanism. In other
words, all of these protocols produce broadcast traffic in a network.
Broadcasts Consume Processor Performance
So you can see, we did this study based on a SPARC2 CPU, a SPARC5 CPU, and a
Pentium CPU. As the number of broadcasts increased, the amount of CPU cycles
consumed simply by processing and listening to that broadcast traffic increased
dramatically. The other thing we need to recognize is that much of the broadcast
traffic in our network is not needed by the stations that receive it. So what we have
in shared LAN technologies is broadcast traffic running throughout the network,
needlessly consuming bandwidth and needlessly consuming CPU cycles.
Hub-Based LANs
So hubs were introduced into the network as a better way to scale our thin and thick
Ethernet networks. It's important to remember, though, that these are still shared
Ethernet networks, even though we're using hubs.
Basically what we have is an individual desktop connection for each workstation or
server, and this allows us to centralize all of our cabling back to a wiring closet, for
example. There are still security issues here, though. It's still relatively easy to tap in
and monitor a network by way of a hub; in fact it's even easier, because all of the
resources are generally located centrally. And if we need to scale this type of network
beyond the workgroup, we're going to rely on routers.
It makes adds, moves, and changes easier, because we can simply go to the wiring
closet and move cables around; we'll see later that this is even easier with LAN
switching. Also, in a hub- or concentrator-based network, workgroups are determined
simply by the physical hub that we plug into. Once again, we'll see later how LAN
switching improves this as well.
Bridges
That's why we say that bridges are more intelligent than a hub, because they can
actually listen in, or eavesdrop on the traffic going through the bridge, they can look
at source and destination addresses, and they can build a table that allows them to
make intelligent forwarding decisions.
They actually collect and pass frames between two network segments and while
they're doing this they're making intelligent forwarding decisions. As a result, they
can actually provide greater control of the traffic within our network.
Switches—Layer 2
To provide even better control, we're going to look to switches, which provide the
most control in our network, at least at Layer 2. And as you can see in the diagram,
switches have improved the model of traffic going through our network.
Getting back to our traffic analogy, looking at the highway you can see that we've
subdivided the main highway so that each car has its own lane to drive through the
network. Fundamentally, this is what we can provide in our data networks as well.
When we look at our network, physically each station has its own cable into the
network; conceptually, we can think of each workstation as having its own lane
through the highway. This is known as micro-segmentation, which is a fancy way of
saying that each workstation gets its own dedicated segment through the network.
If we compare that with a hub or a bridge, we're limited in the number of
simultaneous conversations we can have at a time. Remember that if two stations
tried to communicate at once in a hubbed environment, that caused something
known as a collision. In a switched environment we're not going to expect collisions,
because each workstation has its own dedicated path through the network. What
that means in terms of bandwidth and scalability is that we have dramatically more
bandwidth in the network: each station now has a dedicated 10 megabits per second
of bandwidth.
So compare our switches with our hubs. In the top diagram, we're looking at a hub;
this is where all of our traffic is fighting for the same fixed amount of bandwidth.
Looking at the bottom diagram, you can see that we've improved traffic flow through
the network, because we've provided a dedicated lane for each workstation.
Now, how can you tell if you have congestion problems in your network? Some early
warning signs include increased delay on file transfers. If basic file transfers are
taking a very long time, that means we may need more bandwidth. Another thing to
watch out for is print jobs that take a very long time to print; if the time from when
we queue a job at the workstation until it actually prints is increasing, that's an
indication we may have LAN congestion problems. Also, if your organization is
looking to take advantage of multimedia applications, you're going to need to move
beyond basic shared LAN technologies, because those shared LAN technologies don't
have the multicast controls that multimedia applications need.
If we're seeing those early warning signs, one cause of congestion we might want to
look for is too many users on a shared LAN segment. Remember that a shared LAN
segment has a fixed amount of bandwidth; as we add users, we proportionally
degrade the bandwidth per user. At a certain number of users there is simply too
much congestion: too many collisions, too many simultaneous conversations trying
to occur at the same time.
And that's going to reduce our performance. Also, consider the newer technologies in
our workstations. With early LAN technologies, workstations were relatively limited
in the amount of traffic they could put on the network. With newer, faster CPUs,
buses, and peripherals, it's much easier for a single workstation to fill up a network
segment. Because we have much faster PCs, and can do more with the applications
on them, we can more quickly fill up the available bandwidth.
Network Traffic Impact from Centralization of Servers
Also, the way traffic is distributed on our network can have an impact as well. A very
common thing to do in many networks is to build what's known as a server farm. In
a server farm, we're effectively centralizing all of the resources that need to be
accessed by all of the workstations in our network. What happens is that we cause
congestion on those centralized, or backbone, segments within the network.
Servers are gradually moving into a central area (data center) versus being located
throughout the company to:
More centralized servers increase the bandwidth demands on campus and workgroup
backbones
Today’s LANs
When we look at today's LANs, the ones most commonly implemented are mostly
switched infrastructures; because of the price point of deploying switches, many
companies are bypassing shared hub technologies and moving directly to switches.
Even within switched networks, at some point we still need to look to routers to
provide scalability. We also see that the grouping of users is largely determined by
physical location. So that's a quick look at traditional shared LAN technologies. Now
that we know their limitations, we want to look at how we can fix some of those
issues by deploying LAN switches to take advantage of new, improved technologies.
First of all, it's important to understand the reasons we use LAN switching.
Basically, switches provide what we earlier called micro-segmentation: dedicated
bandwidth for each user on the network. This eliminates collisions and effectively
increases the capacity for each station connected to the network. It also supports
multiple simultaneous conversations at any given time, which dramatically improves
both the available bandwidth and the scalability of our network.
So let's take a look at the fundamental operation of a LAN switch to see what it can
do for us. As you can see indicated in the diagram, we have some data that we need
to transmit from Station A to Station B.
Now, as we watch this traffic go through the network, remember that the switch
operates at Layer 2. What that means is the switch has the ability to look at the
MAC-layer address, the Media Access Control address, that's on each frame as it goes
through the network.
And we're going to see that the switch actually looks at the traffic as it goes through,
picks off that MAC address, and stores it in an address table. So as the traffic goes
through, you can see that we've made an entry in this table recording the station and
the port it's connected to on the switch.
Now, once that frame of data is in the switch, the switch has no choice but to flood it
to all ports, because it does not yet know where the destination station resides.
Once that address entry is made into the table, though, when we have a response
coming back from Station B, going back to Station A, we now know where Station A
is connected to the network.
So we transmit our data into the switch, but notice the switch doesn't flood that
traffic this time; it sends it only out port number 3. The reason is that we know
exactly where Station A is on the network: on the original transmission we were able
to note where that MAC address came from. That allows us to deliver traffic more
efficiently in the network.
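The learn-then-forward behavior just described can be sketched as follows (a simplified illustration with invented names; real switches also age out entries, handle VLANs, and do this in hardware):

```python
# Sketch of Layer 2 switch learning and flooding.
mac_table = {}   # MAC address -> switch port

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    """Learn the source, then forward: flood on a miss, else use one port."""
    mac_table[src_mac] = in_port                   # learn where src lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]                # known: forward out one port
    return [p for p in all_ports if p != in_port]  # unknown: flood

ports = [1, 2, 3, 4]
# Station A (port 3) sends to B before the switch knows B: frame is flooded.
print(handle_frame("AA", "BB", 3, ports))   # [1, 2, 4]
# B (port 1) replies to A: the switch already learned A is on port 3.
print(handle_frame("BB", "AA", 1, ports))   # [3]
```

The first frame floods everywhere except the ingress port; the reply goes out exactly one port because the table now holds Station A's address.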
With 10 megabit per second connections, this effectively provides 10 megabits of
transmit capacity and 10 megabits of receive capacity, for 20 megabits of capacity on
a single connection. Likewise, a 100 megabit per second connection can effectively
provide 200 megabits per second of throughput.
Another concept in switching is that there are actually two different modes of
switching. This is important because the mode can actually affect the performance,
or latency, of switching through our network.
Cut-through
First of all, we have something known as cut-through switching. With cut-through
switching, as traffic flows through the switch, the switch simply reads the
destination MAC address; in other words, it finds out where the traffic needs to go.
As the data flows through, the switch doesn't look at all of the data. It reads only
that destination address and then, as the name implies, cuts the frame through to
its destination without reading the rest of it.
Store-and-forward
That allows cut-through to improve performance over another method known as
store-and-forward. With store-and-forward switching, the switch actually reads not
only the destination address but the entire frame of data. After reading the entire
frame, it makes a decision on where the frame needs to go and sends it on its way.
The obvious trade-off is that reading the entire frame takes longer.
But the reason we read the entire frame is that we can then perform error detection
on it, which may increase reliability if we're having problems in a switched network.
So cut-through switching is faster, but the trade-off is that it can't do any error
detection.
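The difference between the two modes can be illustrated with a short sketch (hypothetical; zlib's CRC-32 stands in for the Ethernet FCS, and the frame layout is simplified):

```python
import zlib

def make_frame(dst_mac, src_mac, payload):
    """Build a toy frame: dst + src + payload + a CRC-32 standing in for the FCS."""
    body = dst_mac + src_mac + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def cut_through(frame):
    """Read only the 6-byte destination MAC; no error detection is possible."""
    return frame[:6], None

def store_and_forward(frame):
    """Read the whole frame and verify its checksum before forwarding."""
    body, fcs = frame[:-4], frame[-4:]
    ok = zlib.crc32(body).to_bytes(4, "big") == fcs
    return frame[:6], ok

frame = make_frame(b"\xaa" * 6, b"\xbb" * 6, b"some data")
print(cut_through(frame)[1])        # None: forwarded without any check
print(store_and_forward(frame)[1])  # True: checksum verified
```

Cut-through touches only the header, so it forwards sooner; store-and-forward pays the latency of reading every byte but can drop a frame whose checksum fails.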
- Multicasting
Specifically, we'll look at the Spanning Tree Protocol and some multicasting controls
that we have in our network. As we build out large networks, one of the problems at
Layer 2 of the OSI model is that if we're making forwarding decisions only at Layer 2,
we cannot have any physical-layer loops in our network.
In a simple network like the one in the diagram, anytime these switches handle
multicast, broadcast, or unknown traffic, loops will create storms of traffic that
circulate endlessly through the network. So in order to prevent that situation, we
need to cut out any loops.
802.1d Spanning-Tree Protocol (STP)
The solution is the Spanning Tree Protocol, or STP. This is an industry standard
defined by the IEEE standards committee, known as the 802.1d Spanning Tree
Protocol. It allows us to have physical redundancy in the network while logically
disconnecting the loops.
It's important to understand that we logically disconnect the loops, because that
allows us to dynamically re-establish a connection if we need to in the event of a
failure within our network. The way the switches do this (and bridges can do this as
well) is by communicating back and forth with a protocol; they basically exchange
little hello messages.
If they stop hearing a given communication from a certain device on the network,
they know that a network device has failed, and when a network failure occurs they
re-establish a link in order to maintain that redundancy. Technically, these little
exchanges are known as BPDUs, or Bridge Protocol Data Units.
Now, the Spanning Tree Protocol works just fine, but one of its issues is that it can
take anywhere from half a minute to a full minute for the network to fully converge,
that is, for all devices to know the status of the network. To improve on this, Cisco
has introduced refinements such as PortFast and UplinkFast, which allow the
Spanning Tree Protocol to converge even faster.
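One small piece of 802.1d, the root bridge election carried in those BPDUs, can be sketched as follows (a simplification; real BPDUs also carry path costs, and timers drive convergence):

```python
# Simplified sketch of 802.1d root election: bridges advertise a bridge ID
# of (priority, MAC address) in their BPDUs, and the lowest ID becomes the
# root of the loop-free tree. Port states and timers are omitted here.

def elect_root(bridge_ids):
    """The lowest (priority, MAC) tuple wins, as in 802.1d."""
    return min(bridge_ids)

bpdus = [(32768, "00:10:7b:aa:aa:aa"),
         (32768, "00:10:7b:bb:bb:bb"),
         (4096,  "00:10:7b:cc:cc:cc")]   # a lower priority wins the election
print(elect_root(bpdus))   # (4096, '00:10:7b:cc:cc:cc')
```

With priorities tied, the lowest MAC address breaks the tie, which is why administrators lower the priority on the bridge they want to be root rather than leaving the election to chance.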
Multicasting
But without special controls, multicast traffic is going to quickly congest our
network. So what we need to do is add intelligent multicasting to the network.
Multipoint Communications
Now, again, let's understand that there are a few fundamental ways that we have in
order to achieve multipoint communications, because effectively, that's what we're
trying to do with our video based applications or any of our multimedia type
applications that use this mechanism.
One way is to broadcast our traffic, which effectively sends our messages everywhere.
The obvious downside is that not everybody necessarily needs to hear these
communications; while broadcasting will get the job done, it's not the most efficient
way. The better way is multicasting.
With multicasting, applications use a special group address to communicate with
only those stations, or groups of stations, that need to receive the transmissions.
That's what we mean by multipoint communications, and it's the more effective way
to do it.
Multicast
This also needs to be done dynamically, because these multicast groups change over
time. So in order to do this, we need some special protocols in our network. First of
all, in the wide area, we need what are known as multicast routing protocols.
Certainly, in our wide area we already have routing protocols such as RIP (the
Routing Information Protocol), OSPF, or IGRP, but we need to add multicast
extensions so that these routing protocols understand how to handle our multicast
groups.
End-to-End Multicast
And that's because IGMP works at Layer 3, but our LAN switch works at Layer 2, so
the switch has no concept of our Layer 3 group membership. What we need to do is
add some intelligence to the switch. The intelligence we're going to add is a protocol
such as CGMP, the Cisco Group Management Protocol. A similar technology we could
add is called IGMP snooping, which has the same effect in the local area network.
And that effect, as you see in the diagram, is to limit our multicast traffic to only
those stations that want to participate in the group. So now the red channel, channel
number 1, is delivered only to station 1 and station 3. Station 2 does not receive this
content because it doesn't wish to participate. The advantage of adding protocols
such as IGMP, CGMP, IGMP snooping, and Protocol Independent Multicast to our
network is that they achieve bandwidth savings for our multicast traffic.
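The snooping behavior in the diagram can be sketched like this (an illustrative model with invented names; real IGMP snooping parses actual IGMP join and leave messages):

```python
# Sketch of IGMP-snooping-style behavior: the switch tracks which ports
# joined each multicast group and forwards group traffic only to those ports.

group_members = {}   # multicast group address -> set of subscribed ports

def join(group, port):
    """Record that the station on `port` joined `group`."""
    group_members.setdefault(group, set()).add(port)

def forward_multicast(group, all_ports):
    """Without a table entry the frame floods; with one it reaches only members."""
    return sorted(group_members.get(group, set(all_ports)))

join("239.1.1.1", 1)   # station 1 joins "channel 1"
join("239.1.1.1", 3)   # station 3 joins too; station 2 never joins
print(forward_multicast("239.1.1.1", [1, 2, 3]))   # [1, 3]
```

As in the diagram, the channel reaches only ports 1 and 3; port 2 never sees the stream, which is exactly the bandwidth saving the text describes.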
What we see indicated in red is that, as we add stations to our multicast group, the
amount of bandwidth needed increases in a linear fashion. But by adding multicast
controls, you can see that the amount of bandwidth is reduced dramatically, because
these intelligent multicast controls make better use of the bandwidth in our network.
Adding multicast controls also reduces the cost of networking, because we've reduced
the bandwidth we need, so this provides a dramatic improvement to our local area
network.
- Summary -
In this Lesson, we‘ll discuss the WAN. We‘ll start by defining what a WAN is, and then
move on to talking about basic technology such as WAN devices and circuit and
packet switching.
We'll also cover transmission options, from POTS (plain old telephone service) to
Frame Relay to leased lines and more.
Finally, we‘ll discuss wide area requirements including a section on minimizing WAN
charges with bandwidth optimization features.
The Agenda
- WAN Basics
- Transmission Options
WAN Basics
What Is a WAN?
So, what is a WAN? A WAN is a data communications network that serves users
across a broad geographic area and often uses transmission facilities provided by
common carriers such as telephone companies. These providers are companies like
MCI, AT&T, UuNet, and Sprint. There are also many small service providers that
provide connectivity to one of the larger carriers‘ networks and may even have email
servers to store clients' mail until it is retrieved.
- WAN technologies function at the lower three layers of the OSI reference model: the
physical layer, the data link layer, and the network layer.
Common WAN network components include WAN switches, access servers, modems,
CSU/DSUs, and ISDN Terminals.
WAN Devices
A WAN switch is a multiport internetworking device used in carrier networks. These
devices typically switch traffic such as Frame Relay, X.25, and SMDS and operate at
the data link layer of the OSI reference model. These WAN switches can share
bandwidth among allocated service priorities, recover from outages, and provide
network design and management systems.
A modem is a device that interprets digital and analog signals, enabling data to be
transmitted over voice-grade telephone lines. At the source, digital signals are
converted to analog. At the destination, these analog signals are returned to their
digital form.
An ISDN terminal is a device used to connect ISDN Basic Rate Interface (BRI)
connections to other interfaces, such as EIA/TIA-232. A terminal adapter is
essentially an ISDN modem.
The WAN physical layer describes the interface between the data terminal equipment
(DTE) and the data circuit-terminating equipment (DCE). Typically, the DCE is the
service provider, and the DTE is the attached device (the customer‘s device). In this
model, the services offered to the DTE are made available through a modem or
channel service unit/data service unit (CSU/DSU).
A CSU/DSU (channel service unit/data service unit) is a device that connects the
end-user equipment to the local digital telephone loop or to the service provider's
data transmission loop. The DSU adapts the physical interface on a DTE device to a
transmission facility such as T1 or E1, and is also responsible for functions such as
signal timing for synchronous serial transmissions.
Unless a company owns (literally) the lines over which they transport data, they must
utilize the services of a Service Provider to access the wide area network.
Circuit Switching
- Example: ISDN
Service providers typically offer both circuit-switching and packet-switching services.
Circuit switching is a WAN switching method in which a dedicated physical circuit is
established, maintained, and terminated through a carrier network for each
communication session. Circuit switching accommodates two types of transmissions:
datagram transmissions and data-stream transmissions. Used extensively in
telephone company networks, circuit switching operates much like a normal
telephone call. Integrated Services Digital Network (ISDN) is an example of a circuit-
switched WAN technology.
Packet Switching
Packet switching is a WAN switching method in which network devices share a single
point-to-point link to transport packets from a source to a destination across a
carrier network. Statistical multiplexing is used to enable devices to share these
circuits. Asynchronous Transfer Mode (ATM), Frame Relay, Switched Multimegabit
Data Service (SMDS), and X.25 are examples of packet-switched WAN technologies.
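The statistical multiplexing that makes this sharing possible can be illustrated with a quick simulation (hypothetical numbers; the point is only that bursty senders rarely all transmit at once, so a shared link needs far less capacity than the sum of the senders' peak rates):

```python
import random

random.seed(7)  # deterministic run for the illustration

def offered_load(num_senders, peak, activity=0.2):
    """Each bursty sender is idle 80% of the time and sends at `peak` otherwise."""
    return sum(peak for _ in range(num_senders) if random.random() < activity)

# Dedicated circuits would need 10 senders * 50 units = 500 units of capacity.
loads = [offered_load(10, 50) for _ in range(1000)]
print(max(loads))                 # even the busiest slot needs well under 500
print(sum(loads) / len(loads))    # the average is near 10 * 50 * 0.2 = 100
```

This is the economic argument for packet switching over dedicated circuits: capacity is provisioned for typical aggregate demand, not for every subscriber's simultaneous peak.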
- Permanently established
- Save bandwidth for cases where certain virtual circuits must exist all the time
WAN Protocols
- LAN protocols: operate at the physical and data link layers and define
communication over the various LAN media
- WAN protocols: operate at the lowest three layers and define communication over
the various wide-area media.
- Network protocols: are the various upper-layer protocols in a given protocol suite.
SDLC:-
Synchronous Data Link Control. IBM‘s SNA data link layer communications protocol.
SDLC is a bit-oriented, full-duplex serial protocol that has spawned numerous
similar protocols, including HDLC and LAPB.
HDLC:-
High-Level Data Link Control. Bit-oriented synchronous data link layer protocol
developed by ISO. Specifies a data encapsulation method on synchronous serial links
using frame characters and checksums.
LAPB:-
Link Access Procedure, Balanced. Data link layer protocol in the X.25 protocol stack.
LAPB is a bit-oriented protocol derived from HDLC.
PPP:-
Point-to-Point Protocol. Provides router-to-router and host-to-network connections
over synchronous and asynchronous circuits with built-in security features. Works
with several network layer protocols, such as IP, IPX, & ARA.
X.25 PTP:-
Packet level protocol. Network layer protocol in the X.25 protocol stack. Defines how
connections are maintained for remote terminal access and computer
communications in PDNs. Frame Relay is superseding X.25.
ISDN:-
Integrated Services Digital Network. Communication protocol, offered by telephone
companies, that permits telephone networks to carry data, voice, and other source
traffic.
Frame Relay:-
Industry-standard, switched data link layer protocol that handles multiple virtual
circuits using HDLC encapsulation between connected devices. Frame Relay is more
efficient than X.25, and generally replaces it.
There are a number of transmission options available today. They fall either into the
analog or digital category. Next let‘s take a brief look at each of these transmission
types.
- Available everywhere
- Easy to set up
- Dial anywhere on demand
- The lowest cost alternative of any wide-area service
ISDN is a digital service that can use asynchronous or, more commonly, synchronous
transmission. ISDN can transmit data, voice, and video over existing copper phone
lines. Instead of leasing a dedicated line for high-speed digital transmission, ISDN
offers the option of dialup connectivity—incurring charges only when the line is
active.
ISDN provides a high-bandwidth, cost-effective solution for companies requiring light
or sporadic high-speed access to either a central or branch office.
Companies needing more permanent connections should evaluate leased-line
connections.
- High bandwidth
- Up to 128 Kbps per basic rate interface
- Dial on demand
- Multiple channels
- Fast connection time
- Monthly rate plus cost-effective, usage-based billing
- Strictly digital
ISDN comes in two flavors, Basic Rate Interface (BRI) and Primary Rate Interface
(PRI). BRI provides two "B" or bearer channels of 64 Kbps each and one additional
signaling channel called the "D" or delta channel.
While it requires only one physical connection, ISDN provides two channels that
remote telecommuters use to connect to the company network.
PRI provides up to 23 bearer channels of 64 Kbps each and one D channel for
signaling. That's 23 channels but with only one physical connection, which makes it
an elegant solution: there's no wiring mess. (PRI service typically provides 30 bearer
channels outside the U.S. and Canada.)
You‘ll want to use PRI at your central site if you plan to have many ISDN dial-in
clients.
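The BRI and PRI figures above can be checked with a little arithmetic. The sketch below assumes only the standard 64 Kbps B-channel size; the function name is illustrative:

```python
# Aggregate bearer bandwidth for the two ISDN flavors described above.
# Each B channel carries 64 Kbps; the D channel carries signaling only.

B_CHANNEL_KBPS = 64

def isdn_bandwidth_kbps(bearer_channels: int) -> int:
    """Total user-data bandwidth: B channels only, not the D channel."""
    return bearer_channels * B_CHANNEL_KBPS

bri = isdn_bandwidth_kbps(2)    # BRI: 2 B channels + 1 D channel
pri = isdn_bandwidth_kbps(23)   # PRI (U.S./Canada): 23 B channels + 1 D channel

print(bri)  # 128 -- matches the "up to 128 Kbps per basic rate interface" figure
print(pri)  # 1472
```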
Leased Line
Leased lines are most cost-effective if a customer‘s daily usage exceeds four to six
hours. Leased lines offer predictable throughput with bandwidth typically 56 Kbps to
1.544 Mbps. They require one connection per physical interface (namely, a
synchronous serial port).
Frame Relay provides a standard interface to the wide-area network for bridges,
routers, front-end processors (FEPs), and other LAN devices. A Frame Relay interface
is designed to act like a wide-area LAN: it relays data frames directly to their
destinations at very high speeds. Frame Relay frames travel over predetermined
virtual circuit paths, are self-routing, and arrive at their destination in the correct
order.
Frame Relay is designed to handle the LAN-type bursty traffic efficiently.
The guaranteed bandwidth (known as committed information rate or CIR) is typically
between 56 Kbps and 1.544 Mbps.
The cost is normally not distance-sensitive.
Frame Relay service is often less expensive than leased lines, and the cost is based
on:
- The committed information rate (CIR), which can be exceeded up to the port speed
when the capacity is available on your carrier‘s network.
- Port speed
- The number of permanent virtual circuits (PVCs) you require; a benefit to users
who need reliable, dedicated connections to resources simultaneously.
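The CIR rule described above can be summarized as a simple admission check. This is a toy illustration, not how a Frame Relay switch is actually implemented; all names and rates are illustrative:

```python
# Toy illustration of the CIR rule: traffic up to the CIR is guaranteed;
# bursts above the CIR are allowed only up to the port speed, and only
# when spare capacity exists on the carrier's network.

def frame_relay_admits(offered_kbps: float, cir_kbps: float,
                       port_speed_kbps: float, spare_capacity: bool) -> bool:
    if offered_kbps <= cir_kbps:
        return True                  # within the committed rate: guaranteed
    if offered_kbps <= port_speed_kbps:
        return spare_capacity        # bursting above CIR: best-effort only
    return False                     # cannot exceed the physical port speed

print(frame_relay_admits(56, 56, 1544, spare_capacity=False))   # True
print(frame_relay_admits(512, 56, 1544, spare_capacity=True))   # True
print(frame_relay_admits(2000, 56, 1544, spare_capacity=True))  # False
```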
X.25
X.25 networks implement the internationally accepted ITU-T standard governing the
operation of packet switching networks. Transmission links are used only when
needed. X.25 was designed almost 20 years ago when network link quality was
relatively unstable. It performs error checking along each hop from source node to
destination node.
The bandwidth is typically between 9.6 Kbps and 64 Kbps.
X.25 is widely available in many parts of the world including North America, Europe,
and Asia.
There is a large installed base of X.25 devices.
Digital subscriber line (DSL) technology is a high-speed service that, like ISDN,
operates over ordinary twisted-pair copper wires supplying phone service to
businesses and homes in most areas. DSL is often more expensive than ISDN in
markets where it is offered today.
Using special modems and dedicated equipment in the phone company's switching
office, DSL offers faster data transmission than either analog modems or ISDN
service, plus, in most cases, simultaneous voice communications over the same lines.
This means you don't need to add lines to supercharge your data access speeds. And
since DSL devotes a separate channel to voice service, phone calls are unaffected by
data transmissions.
DSL has several flavors. ADSL delivers asymmetrical data rates (for example, data
moves faster on the way to your PC than it does on the way out to the Internet). Other
DSL technologies deliver symmetrical data (same speeds traveling in and out of your
PC).
The type of service available to you will depend on the carriers operating in your area.
Because DSL works over the existing telephone infrastructure, it should be easy to
deploy over a wide area in a relatively short time. As a result, the pursuit of market
share and new customers is spawning competition between traditional phone
companies and a new breed of firms called competitive local exchange carriers
(CLECs).
ATM
Each cell contains 5 bytes of header information and 48 bytes of payload, for 53 bytes
total in every cell. Each cell contains identifiers that specify the data stream to which
it belongs. ATM is capable of T3 speeds (E3 speeds in Europe) as well as fiber speeds,
such as SONET (Synchronous Optical Network) speeds of OC-1 and up. ATM
technology is primarily used in enterprise backbones or in WAN links.
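The fixed 53-byte cell format makes the framing overhead easy to compute. A short sketch, using only the cell sizes stated above (the 1500-byte example payload is illustrative):

```python
import math

# Fixed ATM cell format: 5-byte header + 48-byte payload = 53 bytes per cell.
HEADER_BYTES, PAYLOAD_BYTES, CELL_BYTES = 5, 48, 53

def cells_needed(message_bytes: int) -> int:
    """Number of 48-byte-payload cells required to carry a message."""
    return math.ceil(message_bytes / PAYLOAD_BYTES)

def wire_bytes(message_bytes: int) -> int:
    """Total bytes on the wire, including the 5-byte header on every cell."""
    return cells_needed(message_bytes) * CELL_BYTES

print(cells_needed(1500))   # 32 cells for one Ethernet-sized payload
print(wire_bytes(1500))     # 1696 bytes on the wire -- roughly 13% overhead
```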
Analog services are the least expensive type of service. ISDN costs somewhat more
but improves performance over even the fastest current analog offerings. Leased lines
are the costliest of these three options, but offer dedicated, digital service for more
demanding situations. Which is right?
You‘ll need to answer a few questions:
The more times the answer is "yes", the more likely that leased line services are
required. It is also possible to mix and match services. For example, small branch
offices or individual employees dialing in from home might connect to the central
office using ISDN, while the main connection from the central office to the Internet
can be a T1.
Which service you select also depends on what the Internet service provider (ISP) is using.
If the ISP‘s maximum line speed is 128K, as with ISDN, it wouldn‘t make sense to
connect to that ISP with a T1 service. It is important to understand that as the
bandwidth increases, so do the charges, both from the ISP and the phone company.
Keep in mind that rates for different kinds of connections vary from location to
location.
Let‘s compare our technology options, assuming all services are available in our
region. To summarize:
- A leased-line service provides a dedicated connection with a fixed bandwidth at a
flat rate. You pay the same monthly fee regardless of how much or how little you use
the connection.
Because transmission costs are by far the largest portion of a network‘s cost, there
are a number of bandwidth optimization features you should be aware of that enable
the cost-effective use of WAN links. These include dial-on-demand routing,
bandwidth-on-demand, snapshot routing, IPX protocol spoofing, and compression.
Dial-on-demand ensures that you‘re only paying for bandwidth when it‘s needed for
switched services such as ISDN and asynchronous modem (and switched 56Kb in the
U.S. and Canada only).
Bandwidth-on-demand gives you the flexibility to add additional WAN bandwidth
when it‘s needed to accommodate heavy network loads such as file transfers.
Snapshot routing prevents unnecessary transmissions. It inhibits your switched
network from being dialed solely for the purpose of exchanging routing updates at
short intervals (e.g., every 30 seconds). Many of you are familiar with compression, which
is also a good method of optimization.
Let's take a closer look at a few features that will keep your WAN costs down.
- Dial-on-Demand Routing
- Bandwidth-on-Demand
By default, routing protocols such as RIP exchange routing tables every 30 seconds.
If placed as calls, these routine updates will drive up WAN costs unnecessarily;
Snapshot Routing limits these calls to the remote site.
A remote router with this feature only requests a routing update when the WAN link
is already up for the purpose of transferring user application data.
Without Snapshot Routing, your ISDN connection would be dialed every 30 seconds;
this feature ensures that the remote router always has the most up-to-date routing
information but only when needed.
Protocol spoofing allows the user to improve performance while providing the ability
to use lower line speeds over the WAN.
- Compression
Compression reduces the space required to store data, thus reducing the bandwidth
required to transmit. The benefit of these compression algorithms is that users can
utilize lower line speeds if needed to save costs. Compression also provides the ability
to move more data over a link than it would normally bear.
- Three types
Header
Link
Payload
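The bandwidth savings described above are easy to demonstrate with a general-purpose compressor. A minimal sketch using Python's standard zlib module; the sample message is illustrative, and real link-layer compression uses different algorithms:

```python
import zlib

# Compression reduces the bytes on the wire, so the same data fits a
# slower (cheaper) link. Repetitive text compresses well; data that is
# already compressed would not.
message = b"Routing table update. " * 200     # repetitive, like routine updates
compressed = zlib.compress(message)

print(len(message), len(compressed))          # compressed is far smaller
assert zlib.decompress(compressed) == message # lossless round trip
assert len(message) / len(compressed) > 2     # repetitive data shrinks a lot
```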
- Dial Backup
Dial backup addresses a customer‘s need for reliability and guaranteed uptime. Dial
backup capability offers users protection against WAN downtime by allowing them to
configure a backup serial line via a circuit-switched connection such as ISDN. When
the software detects the loss of a signal from the primary line device or finds that the
line protocol is down, it activates the secondary line to establish a new session and
continue the job of transmitting traffic over the backup line.
- Summary -
- The network operates beyond the local LAN‘s geographic scope. It uses the services
of carriers like regional bell operating companies (RBOCs), Sprint, and MCI.
- WANs use serial connections of various types to access bandwidth over wide-area
geographies.
- An enterprise pays the carrier or service provider for connections used in the WAN;
the enterprise can choose which services it uses; carriers are usually regulated by
tariffs.
- WANs rarely shut down, but since the enterprise must pay for services used, it
might restrict access to connected workstations. All WAN services are not available
in all locations.
Lesson 7: Understanding Routing
The objective of this lesson is to explain routing. We‘ll start by first defining what
routing is. We‘ll follow that with a discussion on addressing.
There is a section on routing terminology which covers subjects like routed vs.
routing protocols and dynamic and static routing.
Finally, we‘ll talk about routing protocols.
The Agenda
- What Is Routing?
- Network Addressing
- Routing Protocols
What Is Routing?
Routers—Layer 3
A router can perform LAN-to-LAN routing through its ability to route packet traffic
from one network to another. It checks its router table entries to determine the best
path to the destination network.
A router can perform LAN-to-WAN and remote access routing through its ability to
route packet traffic from one network to another while handling different WAN
services in between. Popular WAN service options include Integrated Services Digital
Network, or ISDN, leased lines, Frame Relay, and X.25.
Let‘s look at routing in more detail.
LAN-to-LAN Connectivity
This illustrates the flow of packets through a routed network using the example of an
e-mail message being sent from system X to system Y.
The message exits system X and travels through an organization's internal network
until it gets to a point where it needs an Internet service provider.
The message will bounce through their network and eventually arrive at system Y‘s
internet provider. While this example shows three routers, the message could
actually travel through many different networks before it arrives at its destination.
From the OSI model reference point of view, when the e-mail is converted into
packets and sent to a different network, a data-link frame is received on one of a
router's interfaces.
- The router de-encapsulates and examines the frame to determine what type of
network layer data is being carried. The network layer data is sent to the
appropriate network layer process, and the frame itself is discarded.
- The network layer process examines the header to determine the destination
network and then references the routing table that associates networks to outgoing
interfaces.
- The packet is again encapsulated in the link frame for the selected interface and
sent on.
This process occurs each time the packet transfers to another router. At the router
connected to the network containing the destination host, the packet is encapsulated
in the destination LAN‘s data-link frame type for delivery to the protocol stack on the
destination host.
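The per-hop process above (de-encapsulate, look up the destination network, re-encapsulate) can be sketched in a few lines. This is a minimal illustration, not a real forwarding engine; the table entries, interface names, and frame structure are all assumptions:

```python
# Minimal sketch of one routing hop: strip the inbound data-link frame,
# look up the destination network in the routing table, then re-encapsulate
# the packet for the chosen outgoing interface.

ROUTING_TABLE = {                 # destination network -> outgoing interface
    "10.0.0.0": "eth0",
    "172.16.0.0": "serial0",
}

def forward(frame: dict) -> dict:
    packet = frame["payload"]                       # de-encapsulate; the frame itself is discarded
    out_iface = ROUTING_TABLE[packet["dst_net"]]    # path determination via the table
    return {"iface": out_iface, "payload": packet}  # re-encapsulate for the selected interface

frame = {"iface": "eth1", "payload": {"dst_net": "172.16.0.0", "data": "email"}}
print(forward(frame)["iface"])   # serial0
```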
Path Determination
Routing involves two basic activities: determining optimal routing paths and
transporting information groups (typically called packets) through an internetwork.
In the context of the routing process, the latter of these is referred to as switching.
Although switching is relatively straightforward, path determination can be very
complex.
During path determination, routers evaluate the available paths to a destination and
establish the preferred handling of a packet.
- After the router determines which path to use, it can proceed with switching the
packet: Taking the packet it accepted on one interface and forwarding it to another
interface or port that reflects the best path to the packet‘s destination.
Multiprotocol Routing
Routing Tables
To aid the process of path determination, routing algorithms initialize and maintain
routing tables, which contain route information. Route information varies depending
on the routing algorithm used. Routing algorithms fill routing tables with a variety of
information. Two examples are
destination/next hop associations and path desirability.
Routers communicate with one another and maintain their routing tables through
the transmission of a variety of messages.
- Link-state advertisements inform other routers of the state of the sender‘s link so
that routers can maintain a picture of the network topology and continuously
determine optimal routes to network destinations.
Routing tables contain information used by software to select the best route. But
how, specifically, are routing tables built? What is the specific nature of the
information they contain? How do routing algorithms determine that one route is
preferable to others?
Routing algorithms often have one or more of the following design goals:
Optimality - the capability of the routing algorithm to select the best route,
depending on metrics and metric weightings used in the calculation. For example,
one algorithm may use a number of hops and delays, but may weight delay more
heavily in the calculation.
Robustness and stability - routing algorithm should perform correctly in the face of
unusual or unforeseen circumstances, such as hardware failures, high load
conditions, and incorrect implementations. Because of their locations at network
junctions, failures can cause extensive problems.
Routing algorithms have used many different metrics to determine the best route.
Sophisticated routing algorithms can base route selection on multiple metrics,
combining them in a single (hybrid) metric. All the following metrics have been used:
Path length - The most common metric. The sum of either an assigned cost per
network link or the hop count, a metric specifying the number of passes through
network devices between source and destination.
Reliability - dependability (bit-error rate) of each network link. Some network links
might go down more often than others. Also, some links may be easier or faster to
repair after a failure.
Delay - The length of time required to move a packet from source to destination
through the internetwork. Depends on bandwidth of intermediate links, port
queues at each router, network congestion, and physical distance. A common and
useful metric.
Load - Degree to which a network resource, such as a router, is busy (uses CPU
utilization or packets processed per second).
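The weighting idea mentioned under Optimality, combining several metrics into a single hybrid value, can be sketched as follows. The weights, hop counts, and delays are all illustrative assumptions:

```python
# Sketch of a hybrid routing metric: combine hop count and delay into one
# number, weighting delay more heavily, then pick the lowest-metric path.

def hybrid_metric(hops: int, delay_ms: float,
                  w_hops: float = 1.0, w_delay: float = 3.0) -> float:
    return w_hops * hops + w_delay * delay_ms

paths = {
    "via_A": hybrid_metric(hops=2, delay_ms=40),   # few hops, slow links
    "via_B": hybrid_metric(hops=4, delay_ms=10),   # more hops, fast links
}
best = min(paths, key=paths.get)
print(best)   # via_B: the heavier delay weighting outweighs the extra hops
```

A pure hop-count protocol like RIP would have chosen via_A; weighting delay reverses the decision.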
Network Addressing
Examples:-
For TCP/IP, dotted decimal numbers show a network part and a host part. Network
10 uses the first of the four numbers as the network part and the last three
numbers—8.2.48-as a host address. The mask is a companion number to the IP
address. It communicates to the router the part of the number to interpret as the
network number and identifies the remainder available for host addresses inside that
network.
For Novell IPX, the network address 1aceb0b is a hexadecimal (base 16) number that
cannot exceed eight digits (32 bits). The host address 0000.0c00.6e25 (also a
hexadecimal number) is a fixed 48 bits long. This host address derives
automatically from information in the hardware of the specific LAN device.
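The mask-as-companion-number idea for TCP/IP can be shown with Python's standard ipaddress module, using the network-10 example above (an 8-bit mask, so the first octet is the network part):

```python
import ipaddress

# The mask tells the router which bits of 10.8.2.48 are network vs. host.
iface = ipaddress.ip_interface("10.8.2.48/8")   # /8: first octet is the network
print(iface.network)                            # 10.0.0.0/8  -- the network part
print(iface.netmask)                            # 255.0.0.0

# Masking off the network bits leaves the host part, 8.2.48.
host_bits = int(iface.ip) & int(iface.network.hostmask)
print(ipaddress.ip_address(host_bits))          # 0.8.2.48
```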
Subnetwork Addressing
- Flat versus hierarchical: In a flat routing system, the routers are peers of all others.
In a hierarchical routing system, some routers form what amounts to a routing
backbone. In hierarchical systems, some routers in a given domain can
communicate with routers in other domains, while others can communicate only
with routers in their own domain.
- Static versus dynamic - this classification will be discussed in the following two
slides.
- Link state versus distance vector: will be discussed after static versus dynamic
routing.
Static Routing
Distance vector versus link state is another possible routing algorithm classification.
- Link state algorithms (also known as shortest path first algorithms) flood routing
information about their own links to all network nodes. The link-state approach
recreates the exact topology of the entire internetwork
(or at least the partition in which the router is situated).
- Distance vector algorithms send all or some portion of their routing table only to
neighbors. The distance vector routing approach determines the direction (vector)
and distance to any link in the internetwork.
- A third classification in this course, called hybrid, combines aspects of these two
basic algorithms.
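The distance vector behavior described above, where a router merges a neighbor's advertised table after adding the cost of the link to that neighbor, is essentially one Bellman-Ford step. A minimal sketch with illustrative network names:

```python
# One distance-vector update: merge a neighbor's advertised routing table,
# adding the cost of the link to that neighbor, and keep the cheaper route.

def dv_update(my_table: dict, neighbor_table: dict, link_cost: int) -> dict:
    merged = dict(my_table)
    for dest, cost in neighbor_table.items():
        candidate = cost + link_cost
        if dest not in merged or candidate < merged[dest]:
            merged[dest] = candidate
    return merged

mine = {"net1": 0}                        # directly connected to net1
neighbor = {"net1": 1, "net2": 2}         # what the neighbor advertises
print(dv_update(mine, neighbor, link_cost=1))   # {'net1': 0, 'net2': 3}
```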
Hybrid
RIP takes the path with the least number of hops, but does not account for the speed
of the links; it only counts hops. RIP is limited to 15 hops, which creates
a scalability issue when routing in large, heterogeneous networks.
IGRP was developed by Cisco and works only with Cisco products (although it has
been licensed to some other vendors). It accounts for the varying speeds of each link.
Additionally, IGRP can handle 224 to 252 hops, depending on the IOS version.
However, IGRP supports only IP.
OSPF and EIGRP
OSPF - Open Shortest Path First. Link-state, hierarchical IGP routing algorithm
proposed as a successor to RIP in the Internet community. OSPF features include
least-cost routing, multipath routing, and load balancing. OSPF was derived from an
early version of the IS-IS protocol.
EIGRP - Enhanced Interior Gateway Routing Protocol. Advanced version of IGRP
developed by Cisco. Provides superior convergence properties and operating
efficiency, and combines the advantages of link state protocols with those of distance
vector protocols.
- Summary -
The term Layer 3 switching makes many people‘s eyes glaze over. In this module,
we‘ll explain what Layer 3 switching is and how it compares with Layer 2 switching
and routing.
The Agenda
- What is the Difference Between Layer 2 Switching, Layer 3 Switching, and Routing?
Recently, the industry has been bombarded with terminology such as Layer 3
switching, Layer 4 switching, multilayer switching, routing switches, switching
routers, and gigabit routers. This "techno-jargon" can be confusing to customers and
resellers alike.
For purposes of this discussion, all these terms essentially represent the same
function, and, as such, the term Layer 3 switching is used to represent them all.
While the performance aspect of Layer 3 switching makes most of the headlines,
higher performance in switching packets does not, by itself, promise that all
problems are solved in a network. There must be a recognition that application
design, mix of network protocols, placement of servers, placement of networking
devices, and management, as well as the implementation of end-to-end intelligent
network services, are at least as important as simply adding more bandwidth and
switching capability to the network, and maybe more so.
What is the difference between a Layer 2 switch, a Layer 3 switch, and a router?
A Layer 2 switch is essentially a multiport bridge. Switching and filtering are based
on the Layer 2 MAC addresses, and, as such, a Layer 2 switch is completely
transparent to network protocols and users‘ applications.
Layer 2 switching is the number one choice for providing plug-and-play performance.
What Is Routing?
How does Layer 3 switching differ from Layer 2 switching? Layer 3 switching requires
rewriting the packet. This implies decrementing the TTL field, modifying the MAC
addresses, changing the VLAN-ID and recomputing the FCS. Doing all these actions
at wire speed is difficult, which is why an ASIC is necessary.
True Layer 3 switching has all the advantages of routing, and is therefore rich in
features and performance.
Layer 2 switching, by contrast, does not require packet rewriting. Without packet
rewriting, no matter what you call it (e.g., virtual routing), it is NOT routing.
ASICs:
Routing software:
- Backbone redundancy
- Dynamic load balancing and fast convergence in the backbone
- Reachability information
Let‘s look more closely at when a customer might choose a Layer 3 switch over a
traditional Layer 2 switch. Layer 3 switches offer considerable advantages depending
on the customer‘s requirements.
Scalability— For customers with large networks that need increased performance to
handle the changing traffic patterns of today‘s new applications, Layer 3 switches
offer increased scalability. Clearly a network of hubs does not scale. While bridges
helped, they were not sufficient to handle networks of many thousands of users and
devices. Routers were the solution as they kept broadcasts local to a segment. Layer
3 switches avoid the problems associated with flat bridged or switched designs using
traditional routing mechanisms allowing customers to scale their network
infrastructure.
Layer 3 switches also utilize routing protocols thus avoiding the slow convergence
problem of Spanning Tree Protocol and lack of load-balancing across multiple paths.
Advanced services— Layer 3 switches also offer the benefit of broader intelligent
network services. These services permit applications to run on the network as well as
enable the creation of a cost-effective, operational environment to support day-to-day
operations and management of the enterprise intranet.
Other Advantages
While there are obvious advantages to a Layer 3 switch over a Layer 2 switch, other
factors need to be considered as well. Layer 3 switches are more expensive than
Layer 2 switches and are more complex. Depending on the size of a customer‘s
network, the cost and complexity may not justify a Layer 3 switch. However, for
customers with larger networks in need of enhanced scalability, Layer 3 switches will
actually simplify network infrastructure.
At its most basic, Layer 3 packet switching or forwarding is common across all
vendors' platforms, with perhaps exceptions in their multicast or DHCP services
behavior.
The more scalable, flexible, and adaptable Layer 3 switches also offer a variety of
routing protocols and services for topology discovery, load balancing, and resiliency.
Buying a Layer 3 switch without the richness and depth of routing protocols is
somewhat akin to a driverless car. The car can certainly travel very fast in the
direction that it is pointed, but the intelligence lies in the driver, who needs to make
all the decisions about where it should go and when to stop and turn. The more
flexible and resilient these capabilities, the better reliability and adaptability the
switch offers.
Finally, there are services. All the queuing, filtering, classification, multiprotocol,
route summarization and redistribution functions, plus additional debugging,
statistics gathering, and event logging services, are what let network managers deploy
solutions that rise to the future challenges of mobility, multiservice, multimedia, and
service-level agreements for business-critical applications.
- Summary -
- Layer 2 switches are more appropriate when the additional cost and complexity are
not warranted
Lesson 9: Understanding Virtual LANs
This lesson covers virtual LANs or VLANs. We‘ll start by defining what a VLAN is and
then explaining how it works. We‘ll conclude the lesson by talking about some key
VLAN technologies such as ISL and VTP.
The Agenda
- What Is a VLAN?
- VLAN Technologies
What Is a VLAN?
Well, the reality of the work environment today is that personnel are always changing.
Employees move departments; they switch projects. Keeping up with these changes
can consume significant network administration time. VLANs address the end-to-end
mobility needs that businesses require.
Traditionally, routers have been used to limit the broadcast domains of workgroups.
While routers provide well-defined boundaries between LAN segments, they introduce
the following problems:
Virtual LAN, or VLAN, technology solves these problems because it enables switches
and routers to configure logical topologies on top of the physical network
infrastructure. Logical topologies allow any arbitrary collection of LAN segments
within a network to be combined into an autonomous user group, appearing as a
single LAN.
Virtual LANs
A VLAN can be defined as a logical LAN segment that spans different physical LANs.
VLANs provide traffic separation and logical network partitioning.
VLANs logically segment the physical LAN infrastructure into different subnets
(broadcast domains for Ethernet) so that broadcast frames are switched only between
ports within the same VLAN.
A VLAN is a logical grouping of network devices (users) connected to the port(s) on a
LAN switch. A VLAN creates a single broadcast domain and is treated like a subnet.
Unlike a traditional segment or workgroup, you can create a VLAN to group users by
their work functions, departments, the applications used, or the protocols shared
irrespective of the users‘ work location (for example, an AppleTalk network that you
want to separate from the rest of the switched network).
VLAN implementation is most often done in the switch software.
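The broadcast containment described above can be sketched in a few lines: a broadcast arriving on one port is flooded only to the other ports in the same VLAN. The port-to-VLAN mapping is illustrative:

```python
# Sketch of VLAN broadcast containment: a broadcast entering on one port
# is flooded only to the other ports that belong to the same VLAN.

PORT_VLAN = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}   # switch port -> VLAN ID

def flood_broadcast(in_port: int) -> list:
    vlan = PORT_VLAN[in_port]
    return [p for p, v in PORT_VLAN.items() if v == vlan and p != in_port]

print(flood_broadcast(1))   # [2, 5] -- the VLAN 20 ports never see it
```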
VLAN Benefits
- The power of VLANs comes from the fact that adds, moves, and changes can be
achieved simply by configuring a port into the appropriate VLAN. Expensive, time-
consuming recabling to extend connectivity in a switched LAN environment, and host
reconfiguration and re-addressing, are no longer necessary, because network
management can be used to logically "drag and drop" a user from one VLAN group
to another.
Better management and control of broadcast activity—A VLAN solves the scalability
problems often found in a large flat network by breaking a single broadcast domain
into several smaller broadcast domains or VLAN groups. All broadcast and multicast
traffic is contained within each smaller domain.
VLAN Components
Servers — Servers are not required within VLAN environments specifically; however,
they are a staple within any network. Within a VLAN environment, users can utilize
servers in several different ways, and we‘ll discuss them momentarily. Because
VLANs are used throughout the network, users from multiple VLANs will most likely
need their services.
Switches provide the means for users to access a network and join a VLAN. Various
approaches exist for establishing VLAN membership.
Each of these methods has its positive and negative points.
Membership by Port
Let‘s look at the first method for determining or assigning VLAN membership:
Port-based — In this case, the port is assigned to a specific VLAN independent of the
user or system attached to the port. This VLAN assignment is typically done by the
network administrator and is not dynamic. In other words, the port cannot be
automatically changed to another VLAN without the personal supervision and
processing of the network administrator.
This approach is quite simple and fast, in that no complex lookup tables are required
to achieve this VLAN segregation. If this port-to-VLAN association is done via ASICs,
the performance is very good.
This approach is also very easy to manage, and a graphical user interface, or GUI,
illustrating the VLAN-to-port association is normally intuitive for most users.
As in other VLAN approaches, the packets within this port-based method do not leak
into other VLAN domains on the network. The port is assigned to one and only one
VLAN at any time, and no other packets from other VLANs will "bleed" into or out of
this port.
Membership by MAC Addresses
The other methods for determining VLAN membership provide more flexibility and are
more "user-centric" than the port-based model. However, these methods are
conducted with software in the switch and require more processing power and
resources within the switches and the network. These solutions require a packet-by-
packet lookup method that decreases the overall performance of the switch. (Software
solutions do not run as fast as hardware/ASIC-based solutions.)
In the MAC-based model, the VLAN assignment is linked to the physical media
address or MAC address of the system accessing the network. This approach provides
enhanced security benefits over the more "open" port-based approach, because all MAC
addresses are unique.
From an administrative aspect, the MAC-based approach requires slightly more work,
because a VLAN membership table must be created for all of the users within each
VLAN on the network. As a user attaches to a switch, the switch must verify and
confirm the MAC address with a central/main table and place it into the proper
VLAN.
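The port-based and MAC-based membership methods can be shown side by side. This is a simplified sketch for comparison only, with illustrative addresses and VLAN numbers; a real switch would use one method or the other, implemented in hardware or switch software:

```python
# Two VLAN membership methods, side by side: port-based is a static
# port -> VLAN map set by the administrator; MAC-based consults a central
# membership table so the VLAN follows the station wherever it attaches.

PORT_VLAN = {1: 10, 2: 20}                   # static, set by the administrator
MAC_VLAN = {"00:00:0c:00:6e:25": 30}         # central MAC membership table

def vlan_for(port: int, mac: str) -> int:
    if mac in MAC_VLAN:            # MAC-based lookup wins for known stations
        return MAC_VLAN[mac]
    return PORT_VLAN[port]         # otherwise fall back to the port's static VLAN

print(vlan_for(1, "00:00:0c:00:6e:25"))   # 30: the VLAN follows the user
print(vlan_for(1, "aa:bb:cc:dd:ee:ff"))   # 10: unknown MAC gets the port's VLAN
```

Note the extra cost the text describes: the MAC-based path needs a per-packet table lookup, while the port-based path is a fixed association.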
The network address and user ID approaches are also more flexible than the port-
based approach, but they also require even more overhead than the MAC-based
method, because tables must exist throughout the network for all the relevant
network protocols, subnets, and user addresses. With the user ID method, another
large configuration/policy table must exist containing all authorized user login IDs.
Within both of these methods, the switches typically do not have enough resources
(CPU, memory) to accommodate such large tables. Therefore, these tables must exist
within servers located elsewhere in the network. Additionally, the latencies resulting
from the lookup process would be more significant in these approaches.
From an administrative aspect, the network and user ID-based approaches require
more resources (memory and bandwidth) to use distributed tables on several
switches or servers throughout the network. These two approaches also require
slightly more bandwidth to share this information between switches and servers.
Multiple VLANs per Port
When addressing these various methods for implementing VLANs, customers always
question the use of multiple VLANs per switch port. Can this be done? Does this
make sense?
The means for implementing this type of design is based on using shared hubs off of
switch ports. Members using the hub belong to different VLANs, and thus, the switch
port must also support multiple VLANs.
While this method does offer the flexibility of having VLANs completely port
independent, this method also violates one of the general principles of implementing
VLANs: broadcast containment. An incoming broadcast on any VLAN would be sent
to all hub ports — even though they may belong to a different VLAN. The switch, hub,
and all endstations will have to process this broadcast even if it belongs to a different
VLAN. This ―bleeding‖ of VLAN information does not provide true segmentation nor
does it effectively use resources.
In general, there are two approaches to using routers as communication points for
VLANs:
- Logical connection method— Using ISL within the router, a trunk can be
established between the switch and the router. One high- speed port is used, and
multiple VLAN information runs across this trunk link. (We‘ll explain ISL in just a
minute.)
- Physical connection method— Multiple independent links are used between the
router and the switch. Each link contains its own VLAN. This scenario does not
require ISL to be implemented on the router and also allows lower-speed links to be
used.
The proper method to implement depends on the customer‘s needs and requirements.
(Does the customer need to conserve router and switch ports? Does the customer
need a high-speed ISL port?) In both instances, the router still supports inter-VLAN
communication.
Server Connectivity
The network server is another key component of VLANs. Servers provide file, print,
and storage services to users throughout the network regardless of VLANs.
To optimize their network environments, many customers deploy centralized server
farms in their networks.
This eases administration of the servers and Network Operating System, or NOS,
significantly. These server farms contain servers that support the entire network, but
each server supports a specific VLAN or number of VLANs.
As in the use of routers within VLANs, there are two approaches to using servers as
common access within a VLAN environment:
The proper method to implement depends on the customer‘s needs and requirements.
(Does the customer need to conserve switch ports? Does the customer need a high-
speed ISL port? Does the customer want to use ISL server adapters?) In both
methods, the server still supports multiple VLANs.
VLAN Technologies
Let‘s take a look at some technologies that are essential for VLAN implementations.
Inter-Switch Link
Cisco‘s Inter-Switch Link protocol (ISL) enables VLAN traffic to cross LAN segments.
ISL is used for interconnecting multiple switches and maintaining VLAN information
as traffic goes between switches. ISL uses "packet tagging" to send VLAN packets
between devices on the network without impacting switching performance or
requiring the use and exchange of complex filtering tables. Each packet is tagged
depending on the VLAN to which it belongs.
The benefits of packet tagging include manageable broadcast domains that span the
campus; bandwidth management functions such as load distribution across
redundant backbone links and control over spanning tree domains; and a substantial
cost reduction in the number of physical switch and router ports required to
configure multiple VLANs.
The ISL protocol enables in excess of 1000 VLANs concurrently without requiring any
fragmentation or reassembly of the packets.
Additionally, ISL wraps a 48-byte "envelope" around the packet that handles
processing, priority, and quality-of-service (QoS) features. ISL is not limited to Fast
Ethernet/Ethernet packet sizes (1518 bytes) and can even accommodate large packet
sizes up to 16,000 bytes, which is appropriate for Token Ring. It is important to
understand that ISL (and 802.1Q, a format used by some other vendors, for that
matter) are both just packet-tagging formats. Neither sets up a standard for
administration.
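The general tagging idea can be sketched in a few lines. This is a minimal illustration of prepending and stripping a VLAN identifier on each frame; the field sizes and valid VLAN range here are assumptions for illustration, not the actual ISL envelope layout.

```python
# Sketch of VLAN packet tagging in general: the sending switch prepends
# a small header carrying the VLAN ID, and the receiving switch strips it.
# The 2-byte field and the 1..1005 range are illustrative, not real ISL.

def tag_frame(frame, vlan_id):
    """Prepend a VLAN ID to the frame before it crosses the trunk."""
    if not 1 <= vlan_id <= 1005:
        raise ValueError("VLAN ID out of range")
    return vlan_id.to_bytes(2, "big") + frame

def untag_frame(tagged):
    """Recover the VLAN ID and original frame at the far switch."""
    return int.from_bytes(tagged[:2], "big"), tagged[2:]

tagged = tag_frame(b"payload", vlan_id=10)
vlan, frame = untag_frame(tagged)
assert vlan == 10 and frame == b"payload"
```

Because the tag travels with the packet, no filtering tables have to be exchanged between switches; each device reads the VLAN membership directly off the frame.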
VLAN Standardization
While Cisco was first to market with its revolutionary packet tagging schemes for
Fast Ethernet and FDDI, they are proprietary solutions. Other vendors implemented
their own unique methods for sharing VLAN information across the network. As a
result, a standards body was created within the IEEE to provide one common VLAN
communication standard. This ultimately benefits customers using switches from
various vendors in the marketplace.
Within the 802.1Q standard, packet tagging is the exchange vehicle for VLAN
information.
Because ISL is so widely deployed in our installed customer base, Cisco will continue
to support both ISL and 802.1Q. It is important to note that Cisco‘s dual mode
support of both methods will be implemented via hardware ASICs, which will provide
tremendous performance.
In addition to the ISL packet tagging method, Cisco also created the VLAN Trunking
Protocol, or VTP, for dynamically configuring VLAN information across the network
regardless of media type (for example, Fast Ethernet, ATM, FDDI, and so on).
This VTP protocol is the software that makes ISL usable.
Conceptually, VTP works like this: When you add a new VLAN to the network, let's
say VLAN 1, VTP automatically goes out and configures the trunk interfaces across
the backbone for that VLAN. This includes the mapping of ISL to LANE or to 802.1Q.
Adding a second VLAN is just as easy. VTP sends out new advertisements and maps
the VLAN across the appropriate interfaces. The important thing to remember about
this second VLAN, is that VTP keeps track of the VLANs that already exist and
eliminates any cross configurations between these two, especially if this configuration
were to be done manually.
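The bookkeeping VTP automates can be sketched like this. The class name, revision counter, and advertisement shape are illustrative assumptions, not the real VTP message format or state machine.

```python
# Minimal sketch of what a VTP-style protocol automates: one database
# of VLANs, advertised to every trunk, with duplicate/cross
# configurations rejected centrally instead of being hand-typed on
# every switch.

class VTPServer:
    def __init__(self):
        self.vlans = {}        # vlan_id -> name
        self.revision = 0      # advertisements carry a revision number

    def add_vlan(self, vlan_id, name):
        if vlan_id in self.vlans or name in self.vlans.values():
            raise ValueError("cross configuration: VLAN already exists")
        self.vlans[vlan_id] = name
        self.revision += 1
        return self.advertise()

    def advertise(self):
        # Switches apply an advertisement only if its revision is newer
        # than the one they already hold.
        return {"revision": self.revision, "vlans": dict(self.vlans)}

server = VTPServer()
server.add_vlan(1, "engineering")
ad = server.add_vlan(2, "marketing")
assert ad["revision"] == 2 and set(ad["vlans"]) == {1, 2}
```

The point of the revision counter is exactly what the text describes: VTP keeps track of the VLANs that already exist, so a stale or conflicting manual configuration cannot silently win.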
- Summary -
The Agenda
- What Is QoS?
- QoS in Action
Basically, QoS comprises the mechanisms that give network managers the ability to
control the mix of bandwidth, delay, variances in delay (jitter), and packet loss in the
network in order to deliver a network service such as voice over IP; define different
service-level agreements (SLAs) for divisions, applications, or organizations; or simply
prioritize traffic across a WAN.
QoS provides the ability to prioritize traffic and allocate resources across the network
to ensure the delivery of mission-critical applications, especially in heavily loaded
environments. Traffic is usually prioritized according to protocol.
So what does this really mean...
An analogy is the carpool lane on the highway. For business applications, we want to
give high priority to mission-critical applications. All other traffic can receive equal
treatment.
Mission-critical applications are given the right of way at all times. Multimedia
applications take a lower priority. Bandwidth-consuming applications, such as file
transfers, can receive an even lower priority.
There are two broad application areas that are driving the need for QoS in the
network:
- Mission-critical applications need QoS to ensure delivery and that their traffic is
not impacted by misbehaving applications using the network.
Voice and data convergence is the first compelling application requiring delay-
sensitive traffic handling on the data network. The move to save costs and add new
features by converging the voice and data networks--using voice over IP, VoFR, or
VoATM--has a number of implications for network management:
- Users will expect the combined voice and data network to be as reliable as the voice
network: 99.999% availability
- Order entry
- Finance
- Manufacturing
- Human resources
- Supply-chain management
- Sales-force automation
- SNA applications
- Selected physical ports
- Selected hosts/clients
QoS Benefits
It ensures the WAN is being used efficiently by the mission-critical applications and
that other applications get "fair" service, but take a back seat to mission-critical
traffic.
It also provides an infrastructure that delivers the service levels needed by new
mission-critical applications, and lays the foundation for the "rich media"
applications of today and tomorrow.
QoS is required wherever there is congestion. QoS has been a critical requirement for
the WAN for years. Bandwidth, delay, and delay variation requirements are at a
premium in the wide area.
LAN QoS requirements are emerging with the increased reliance on mission critical
applications and the growing popularity of voice over LAN and WAN.
The importance of end-to-end QoS is increasing due to the rapid growth of intranets
and extranet applications that have placed increased demands on the entire network.
QoS Example
Hopefully this image provides a little context. It demonstrates a real example of how
QoS can be used to manage network applications.
QoS Building Blocks
There are a wide range of QoS services. Queuing, traffic shaping, and filtering are
essential to traffic prioritization and congestion control, determining how a router or
switch handles incoming and outgoing traffic.
QoS signaling services determine how network nodes communicate to deliver the
specific end-to-end service required by applications, flows, or sets of users.
Classification
- IP Precedence
- Committed Access Rate (CAR)
- Diff-Serv Code Point (DSCP)
- IP-to-ATM Class of Service
- Network-Based Application Recognition (NBAR)
- Resource Reservation Protocol (RSVP)
Policing
Shaping
Congestion Avoidance
Weighted fair queuing is another queuing mechanism that ensures high priority for
sessions that are delay sensitive, while ensuring that other applications also get fair
treatment.
For instance, in the Cisco network, Oracle SQLnet traffic, which consumes relatively
low bandwidth, jumps straight to the head of the queue, while video and HTTP are
serviced as well. This works out very well because these applications do not require a
lot of bandwidth as long as they meet their delay requirements.
- Priority queuing assigns different priority levels to traffic according to traffic types
or source and destination addresses. Priority queuing does not allow any traffic of
a lower priority to pass until all packets of high priority have passed. This works
very well in certain situations. For instance, it has been very successfully
implemented in Systems Network Architecture (SNA) environments, which are
very sensitive to delay.
This has been implemented especially effectively in applications where SNA leased
lines have been replaced, to provide guaranteed transmission times for very time-
sensitive SNA traffic. What does "no bandwidth wasted" mean? Traffic loads are
redirected when and if space becomes available. If there is space and there is traffic,
the bandwidth is used.
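The strict behavior described above, where no lower-priority traffic passes while any higher-priority packet waits, can be sketched as follows. The number of levels and the packet labels are illustrative.

```python
# Sketch of strict priority queuing: the scheduler always drains the
# highest-priority non-empty queue before looking at any lower one.
from collections import deque

class PriorityQueuing:
    def __init__(self, levels=4):
        self.queues = [deque() for _ in range(levels)]  # 0 = highest

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        for q in self.queues:        # scan from high to low priority
            if q:
                return q.popleft()
        return None                  # nothing queued

pq = PriorityQueuing()
pq.enqueue("ftp-data", priority=3)
pq.enqueue("sna-frame", priority=0)
assert pq.dequeue() == "sna-frame"   # delay-sensitive SNA goes first
assert pq.dequeue() == "ftp-data"
```

The design trade-off is visible in the loop: high-priority traffic gets guaranteed precedence, but a constant stream of it can starve the lower queues, which is why priority queuing suits situations like SNA rather than general traffic mixes.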
Random Early Detection (RED)
By randomly dropping packets before periods of high congestion, RED signals the
packet source to decrease its transmission rate. Assuming the packet source is using
TCP, it will decrease its transmission rate until all the packets reach their
destination, indicating that the congestion is cleared. You can use RED as a way to
cause TCP to back off traffic. TCP not only pauses, but it also restarts quickly and
adapts its transmission rate to the rate that the network can support.
RED distributes losses in time and maintains normally low queue depth while
absorbing spikes. When enabled on an interface, RED begins dropping packets when
congestion occurs at a rate you select during configuration.
RED is recommended only for TCP/IP networks. RED is not recommended for
protocols, such as AppleTalk or Novell Netware, that respond to dropped packets by
retransmitting the packets at the same rate.
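RED's drop decision can be sketched as a simple function of the average queue depth: below a minimum threshold nothing is dropped, above a maximum threshold everything is, and in between the drop probability climbs linearly toward a configured maximum. The threshold values here are illustrative configuration choices, not defaults.

```python
# Sketch of the RED drop decision on an interface.
import random

def red_drop_probability(avg_depth, min_th=20, max_th=40, max_p=0.1):
    if avg_depth < min_th:
        return 0.0                      # light load: never drop
    if avg_depth >= max_th:
        return 1.0                      # past max threshold: tail drop
    # Linear ramp between the two thresholds.
    return max_p * (avg_depth - min_th) / (max_th - min_th)

def should_drop(avg_depth, rng=random.random):
    return rng() < red_drop_probability(avg_depth)

assert red_drop_probability(10) == 0.0
assert red_drop_probability(30) == 0.05   # halfway between thresholds
assert red_drop_probability(50) == 1.0
```

Because the drops are random and spread out in time, different TCP sources back off at different moments, which is how RED keeps the queue shallow while absorbing spikes.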
Weighted RED
By randomly dropping packets prior to periods of high congestion, WRED tells the
packet source to decrease its transmission rate. Assuming the packet source is using
TCP, it will decrease its transmission rate until all the packets reach their
destination, indicating that the congestion is cleared. WRED generally drops packets
selectively based on IP Precedence. Packets with a higher IP Precedence are less likely
to be dropped than packets with a lower precedence. Thus, higher priority traffic is
delivered with a higher probability than lower priority traffic. However, you can also
configure WRED to ignore IP precedence when making drop decisions so that non
weighted RED behavior is achieved. WRED is also RSVP-aware, and can provide
integrated services controlled-load QoS service.
WRED reduces the chances of tail drop by selectively dropping packets when the
output interface begins to show signs of congestion. By dropping some packets early
rather than waiting until the buffer is full, WRED avoids dropping large numbers of
packets at once and minimizes the chances of global synchronization. Thus, WRED
allows the transmission line to be used fully at all times. In addition, WRED
statistically drops more packets from large users than small. Therefore, traffic
sources that generate the most traffic are more likely to be slowed down than traffic
sources that generate little traffic.
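How IP Precedence weights that decision can be sketched by giving each precedence its own minimum threshold, so higher-precedence traffic starts being dropped later. The per-precedence threshold formula and values are illustrative assumptions, not actual IOS defaults.

```python
# Sketch of WRED: same ramp as RED, but the minimum threshold rises
# with IP Precedence, so higher-priority packets are less likely to be
# dropped at the same average queue depth.

def wred_drop_probability(avg_depth, precedence, max_th=40, max_p=0.1):
    min_th = 20 + 2 * precedence     # illustrative: prec 0 -> 20, prec 7 -> 34
    if avg_depth < min_th:
        return 0.0
    if avg_depth >= max_th:
        return 1.0
    return max_p * (avg_depth - min_th) / (max_th - min_th)

# At the same average depth, low-precedence packets are dropped more
# aggressively than high-precedence ones.
assert wred_drop_probability(30, precedence=0) > wred_drop_probability(30, precedence=5)
assert wred_drop_probability(30, precedence=7) == 0.0
```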
QoS Signalling
Here‘s an example of how RSVP works. Let‘s first look at what the problem would be
without RSVP.
In this example, the video traffic still gets through, but it is impacted by a large file
transfer in progress. This causes a negative effect on the quality of the video and the
picture comes out all jittery.
What we need is a method to reserve bandwidth from end-to-end on a per-application
basis. RSVP can do this.
RSVP reserves bandwidth from end-to-end on a per-application basis for each user.
This is especially important for delay-sensitive applications, such as video.
As shown here, with RSVP, the client‘s application requests bandwidth be reserved at
each of the network elements on the path. These elements will reserve the requested
bandwidth using priority and queuing mechanisms.
Once the server receives the OK, bandwidth has been reserved across the whole path,
and the video stream can start being transmitted. RSVP ensures clear video
reception.
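The hop-by-hop reservation just described can be sketched like this: the reservation succeeds only if every element on the path can set aside the requested bandwidth, and partial reservations are released on failure. Hop names, capacities, and the API are assumptions for illustration, not the RSVP message format.

```python
# Sketch of the RSVP idea: reserve bandwidth at each network element
# along the path; if any hop refuses, roll back and report failure.

class Hop:
    def __init__(self, name, free_kbps):
        self.name, self.free_kbps = name, free_kbps

    def reserve(self, kbps):
        if kbps > self.free_kbps:
            return False
        self.free_kbps -= kbps
        return True

def rsvp_reserve(path, kbps):
    """Reserve end to end; release partial reservations on failure."""
    done = []
    for hop in path:
        if not hop.reserve(kbps):
            for h in done:
                h.free_kbps += kbps   # roll back what we already took
            return False
        done.append(hop)
    return True

path = [Hop("access", 1000), Hop("core", 500), Hop("egress", 1000)]
assert rsvp_reserve(path, 384) is True    # video call fits everywhere
assert rsvp_reserve(path, 384) is False   # core can no longer fit a second call
```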
The good news is that RSVP is becoming widely accepted by industry leaders, such as
Microsoft and Intel, who are implementing RSVP support in their applications. These
applications include Intel‘s Proshare and Microsoft‘s NetShow. To provide support on
a network, Cisco routers also run RSVP.
End-to-End QoS
End-to-end QoS is essential. The following image provides a context for the different
QoS features we have looked at.
QoS in Action
- SUMMARY -
The goal of QoS is to provide better and more predictable network service by
providing dedicated bandwidth, controlled jitter and latency, and improved loss
characteristics. QoS achieves these goals by providing tools for managing network
congestion, shaping network traffic, using expensive wide-area links more efficiently,
and setting traffic policies across the network.
- classification
- policing
- shaping
- congestion avoidance
Lesson 11: Security Basics
Welcome to Lesson 11. Our goal here is to give you the terminology: the words that
your customers are going to expect you to know and be able to converse with.
The Agenda
- Why Security?
- Security Technology
- Identity
- Integrity
- Active Audit
Security is very important. The Internet is a wonderful tool. Meteoric growth like that
of Cisco from nowhere to a multi-billion dollar company in a decade would not be
possible without leveraging the tools available with the internet and intranet.
But without well defined security, the Internet can be a dangerous place. The good
news is that the tools are available to make the Internet a safe place for your
business. Some people think that only large sites are hacked. In reality, even small
company sites are hacked.
There‘s a false impression from many small company owners that, "Hey, who would
want to break into my company? I‘m a nobody.
I‘m not a big corporation like IBM or the Pentagon or something like that, so why
would somebody want to break into my company?"
The reality is that even small companies are hacked into very, very often.
Why Security?
Why network security? There are three primary reasons to explore network security.
And the bottom line is there are people that are willing and eager to take advantage of
these vulnerabilities.
Security Threats
So these are some of the different things that we need to protect against:
Impersonation: You must also be careful to protect your identity on the Internet.
Many security systems today rely on IP addresses to uniquely identify users.
Unfortunately this system is quite easy to fool and has led to numerous break-ins.
Denial of service: And you must ensure that your systems are available. Over the
last several years, attackers have found deficiencies in the TCP/IP protocol suite that
allow them to arbitrarily cause computer systems to crash.
Loss of integrity: Even for data that is not confidential, one must still take measures
to ensure data integrity. For example, if you were able to securely identify yourself to
your bank using digital certificates, you would still want to ensure that the
transaction itself is not modified in some way, such as by changing the amount of the
deposit.
Objectives for security need to balance the risks of providing access with the need to
protect network resources. Creating a security policy involves evaluating the risks,
defining what‘s valuable, and determining whom you can trust. The security policy
plays three roles to help you specify what must be done to secure company assets.
-It specifies what is being protected and why, and the responsibility for that
protection.
-It provides grounds for interpreting and resolving conflicts in implementation,
without listing specific threats, machines, or individuals. A well-designed policy
does not change much over time.
-It addresses scalability issues
Identity:
Links user authentication and authorization on the network infrastructure; verifies
the identity of those requesting access and prescribes what users are allowed to do.
Integrity:
Provides data confidentiality through firewalls, management control, routing, privacy
and encryption, and access control.
Active Audit:
Provides data on network activities and assists network administrators in accounting
for network usage, discovering unauthorized activities, and scanning the network for
security vulnerabilities.
Identity
Let‘s start by looking at some Identity technologies. Again, identity is the recognition
of each individual user, and mapping of their identity, location and the time to policy;
authorization of their network services and what they can do on the network.
The key to centralized identity and security policy management is the "combination"
of all key authentication mechanisms, from SecurID and DES Dial cards to MS Login,
and their internetworking with one common identity repository.
To truly be centralized and configured once only, the identity mechanism must also
be media independent; equally applicable to dial-users and campus users for
example.
For basic security, user IDs and passwords can be used to authenticate remote
users.
First, a remote user dials into the network access server. The NAS, or network access
server, negotiates data link setup with the user using (most likely) PPP. As part of
this negotiation, the user must send a password to the NAS. This is usually handled
by either the PAP or CHAP protocols, which we‘ll cover in more detail in a little bit.
Next, the NAS forwards the user‘s password to a AAA server to verify that it is
legitimate. The protocol used between the NAS and AAA server is (most likely) either
TACACS+ or RADIUS. I‘ll be covering these protocols in more detail in a minute.
When the AAA server gets the user id and password, it checks its database of
legitimate users and looks for a match. If a match is found, the AAA server sends the
NAS a call accept message. If not, the AAA server sends the NAS a call reject
message.
If the call is accepted, the user is connected to the campus network.
Now let‘s back up for a minute and explain a little more about the process of dial in
connections.
Many of you have probably heard of PPP (Point-to-Point Protocol) before. PPP is used
primarily on dial-in connections since it provides a standard mechanism for passing
authentication information such as a password from a remote user to the NAS.
Two protocols are supported to carry the authentication information: PAP (Password
Authentication Protocol) and CHAP (Challenge/Handshake Authentication Protocol).
These protocols are well documented in IETF RFCs and widely implemented in
vendor products.
PAP provides a simple password protocol. User ID and password are sent at the
beginning of the call, then validated by the access server using a central PAP
database. The PAP password database is encrypted, but the password is sent in clear
text through the public network. A AAA server may be used to hold the password
database.
The problem with PAP is that it is subject to sniffing and replay attacks. A hacker
could intercept the communication and use the information to spoof a legitimate user.
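By contrast, CHAP never sends the password itself. A minimal sketch of the CHAP response computation, per RFC 1994, is an MD5 hash over the packet identifier, the shared secret, and a random one-time challenge:

```python
# Sketch of CHAP challenge/response (RFC 1994): the shared secret never
# crosses the wire; the peer proves knowledge of it by hashing a random
# one-time challenge together with it.
import hashlib
import os

def chap_response(identifier, secret, challenge):
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# NAS side: issue a random challenge, then verify the peer's reply by
# recomputing the same hash and comparing.
secret = b"shared-secret"
challenge = os.urandom(16)
reply = chap_response(1, secret, challenge)           # computed by the dial-in peer
assert reply == chap_response(1, secret, challenge)   # NAS accepts
assert reply != chap_response(1, b"wrong", challenge) # wrong secret fails
```

Because each challenge is fresh, a sniffed response is useless for replay, which is exactly the weakness PAP suffers from.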
One-Time Password
- Token cards are the most common way. The 2 most common token cards are the
SecurID card by Security Dynamics and the DES Gold card by Enigma Logic. In
one, the user enters a PIN into the card and the card displays the one-time
password, which the user types in at their terminal. In the other, the user
appends a PIN to the random number displayed on the token card, and enters this
new password at their terminal.
- Soft tokens are the same as token cards except the user doesn‘t have to carry
around a physical card. Software runs on the user‘s PC that performs the same
function as the token card, and the user need only enter a PIN.
- S-key is a PC application that presents a dialog box to the user upon login into
which the user must enter the correct combination of six key words.
The process used to send the one-time password to the NAS is virtually the same as
that used for the password example described in the previous slide. When the NAS
receives the one-time password, it forwards it to the AAA server using either
TACACS+ or RADIUS protocol. When the AAA server receives the one-time password,
it forwards it to a token server for authentication. The accept or reject message flows
back to the NAS through the AAA server.
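The SecurID and DES Gold algorithms themselves are proprietary, but the same one-time-password idea is standardized openly as HOTP in RFC 4226: a shared secret plus a moving counter yields a short code that is useless if replayed. A minimal sketch:

```python
# HOTP (RFC 4226): HMAC-SHA-1 over an 8-byte counter, dynamically
# truncated to a short decimal one-time password.
import hashlib
import hmac

def hotp(secret, counter, digits=6):
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Token and server share the secret and stay in step on the counter,
# so each login uses a fresh password.
assert hotp(b"12345678901234567890", 0) == "755224"   # RFC 4226 test vector
assert hotp(b"12345678901234567890", 1) == "287082"
```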
We've mentioned AAA servers. What does this mean? AAA stands for authentication,
authorization, and accounting.
Authentication provides exact end-user verification. I need to know exactly who this
person is, and how they prove it to me.
Authorization is the second step. Now that I know who you are, what can you do? I
need to assign IP addresses, provide routes, and block access to certain resources. All
the things I can do for a local user, I should be able to control for a remote user.
Accounting is the last step. I need to create an accurate record of this user's
transactions. How long were they connected? How much data did they FTP? What
was the cause of their disconnection? This allows me not only to bill my customers
accurately, but to understand my user base.
AAA Services
A AAA server provides a centralized security database that offers per-user access
control. It supports protocols such as TACACS+ and RADIUS, which we'll discuss in
a minute, as well as other services.
RADIUS
RADIUS is an access server authentication and accounting protocol that has gained
wide support.
There are several flavors of TACACS: the original TACACS, extended TACACS
(XTACACS), and TACACS+. The primary difference is that TACACS+ provides more
information when a user logs in, thus allowing more control than the original
TACACS.
Lock-and-Key Security
Lock and Key challenges users to respond to a login and password prompt before
loading a unique access list into the local or remote router.
In this example, Lock and Key security allows only authorized users to access
services beyond the firewall at the corporate site.
Calling Line Identification
Caller ID is another security mechanism for dial-in access. It allows routers to look at
the ISDN number of a calling device and compare it with a list of known callers. If the
number is not in the list, the call is rejected and no charges are incurred by the
calling party.
Kerberos is another technology. It is one that has been broken into historically;
however, it provides a good level of security. With Kerberos you create a ticket that‘s
going to have a specific time allocated to it.
So with Kerberos, once a ticket is issued to me, that ticket together with my login
ensures that I have access to the system. The tickets, or credentials, are issued by a
trusted Kerberos server that admits you based on a specific ID that you hold.
You‘ll hear a term called a Public Key. This is how a Public Key works. A Public Key
works in conjunction with something called a Private Key.
This is technology that was actually developed back in the ‘70s. The Private Key is
going to be something that you‘re going to keep to yourself.
The Private Key is going to be something that exists perhaps on your PC or perhaps
as a piece of code that you have.
A Public Key is something that you publish to the outside world. You encrypt your
document using your Private Key, and a user that receives the document can use
your published Public Key to unlock it. Because only your Private Key could have
produced something your Public Key unlocks, the receiving user can verify that the
document you sent is, in fact, the document you thought it was.
So the two keys together, in essence, create a unique pair, something that's uniquely
known by the combination of the Private and the Public Key.
Digital Signatures
Now, Digital Signatures take us a little bit further. With a Digital Signature, we take
the original document, compute a small fingerprint of it called a hash, and encrypt
that hash with the Private Key. The result is a new, smaller document: the Digital
Signature.
That signature is sent along with the document, and the receiver uses your Public
Key to check it against the document. If the signature checks out against the
document, then you know the integrity of the original document is in place.
So here we've verified both the user that sent the document and the document itself,
confirming it is, in fact, the document we thought was sent. In this way, we know
that the document hasn't been altered.
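The sign-and-verify flow above can be sketched with textbook RSA using deliberately tiny toy keys (p=61, q=53). Real systems use large keys and padded hashes, so treat this purely as an illustration of the mechanics:

```python
# Toy sketch of hash-then-sign: the signature is the hash encrypted
# with the private key; anyone with the public key can check it.
import hashlib

n, e, d = 3233, 17, 2753          # public key (n, e), private exponent d

def digest(document):
    # "The hash": a fixed-size fingerprint of the document, reduced
    # mod n so the toy key can sign it.
    return int.from_bytes(hashlib.sha256(document).digest(), "big") % n

def sign(document):
    return pow(digest(document), d, n)        # created with the private key

def verify(document, signature):
    return pow(signature, e, n) == digest(document)  # checked with the public key

doc = b"Pay supplier $100"
sig = sign(doc)
assert verify(doc, sig)                        # document is intact
assert not verify(b"Pay supplier $900", sig)   # alteration is detected
```

Note that altering even one character of the document changes the hash, so the old signature no longer checks out, which is exactly the integrity guarantee described above.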
Certificate Authority
You might want to ensure that important documents go out with some kind of
encryption or digital signature so you know they are exactly what the sender
intended. A Certificate Authority allows you to do just that. It relies on a trusted
third party to issue the kinds of certificates that ensure you are who you say
you are.
Why would you want a third party to do that? Well, there‘s a number of reasons. One
may be cost. Maybe it‘s more cost effective to have a third party do it rather than
issue Certificate Authority yourself. But another reason is if you‘re involved with third
parties. Say I‘m a manufacturer and I have a supplier. Well, that same supplier may
issue supplies to a competitor of mine.
Let‘s explore another methodology of making sure that your system is safe. This is
different than the other ones we‘ve been touching on. Network Address Translation
means security through obscurity. It means by not advertising my IP address to the
outside world, I can ensure that nobody can come in and pretend that they‘re me or
pretend that they‘re somebody trusted to me.
The way that would work is that your device, which might be a firewall or a router,
has a pool of IP addresses that you use toward the outside world. So whatever the
address is on the inside, it's never seen. It's always translated when it reaches your
perimeter device.
The way it does that is by putting the different requests on a different port number,
keeping track of that information, and changing the port number when it comes
back. The reason that you might want to implement port address translation is if you
have difficulty getting enough IP addresses for all of the users on your network.
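The translation table described above can be sketched like this: the perimeter device rewrites each outbound flow to the shared public address on a unique port, records the mapping, and reverses it for replies. Addresses and port numbers are illustrative.

```python
# Sketch of port address translation (PAT): many inside hosts share
# one public address, distinguished by the port the translator assigns
# and tracks.

class PAT:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}            # public port -> (inside ip, inside port)

    def outbound(self, inside_ip, inside_port):
        port = self.next_port
        self.next_port += 1
        self.table[port] = (inside_ip, inside_port)
        return (self.public_ip, port)       # what the outside world sees

    def inbound(self, public_port):
        return self.table.get(public_port)  # translate the reply back

nat = PAT("203.0.113.5")
seen = nat.outbound("10.0.0.7", 51000)
assert seen == ("203.0.113.5", 40000)       # inside address never exposed
assert nat.inbound(40000) == ("10.0.0.7", 51000)
assert nat.inbound(40001) is None           # unsolicited traffic has no entry
```

The last line shows the security-through-obscurity side effect: traffic from outside that doesn't match a tracked flow simply has nowhere to go.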
Integrity
Integrity—Network Availability
One of the functions of integrity is making sure the network is up. You need to
guarantee that data in fact gets where it's supposed to go. This is job one! Your
network isn't worth a thing if your routers go down. If the network infrastructure
isn't reliable, business doesn't happen. Let's look at a few features.
TCP Intercept
Route Authentication
Route authentication enables routers to identify one another and verify each other‘s
legitimacy before accepting route updates. So route authentication ensures that you
have trusted devices talking to trusted devices.
Integrity—Perimeter Security
Integrity also means ensuring the safety of the network devices and the flows of
information between them, including payload data, configuration and configuration
updates.
Everyone is connecting to the Internet, so networks are vulnerable: you need to
defend your perimeters. There are several kinds of network perimeter, and you may
need some kind of firewall protection at each perimeter access point to reflect your
security policy. Perimeter security gives customers the ability to leverage the Internet
as a business resource, while protecting internal resources.
The key to network integrity is that it be implemented across all types of devices with
full internetworking, so that every device in the network can participate and not be a
weak link in the security implementation chain.
Access Lists
So Access Control Lists are often the first wave of defense. Security is a multi-step
thing, and Access Control Lists can play an important part in this. Standard Access
Control Lists can filter addresses.
So you can say, "Hey, I don't want traffic from particular places," maybe people that
are known spammers or something like that. It may be anything. It's not part of your
extranet. So you can do permit and denies on an entire protocol suite.
Maybe you don't want to see a particular class of service flowing through this
particular router. There's also extended Access Control Lists where we can filter the
source and destination address. So if you have a list of people that you don't want to
be making connections, you can tell that to your ACL, as Access Control Lists are
called.
You can sort these both inbound and outbound, and on port number. For example,
maybe you want to create a demilitarized zone, or DMZ, and you only want traffic
on the Web port where HTTP traffic goes, which is port 80.
And also time based. Maybe you have a different set of rules during business hours
as opposed to after business hours.
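First-match ACL evaluation, including the implicit "deny all" at the end of every list, can be sketched as follows; the rule fields are simplified for illustration.

```python
# Sketch of extended-ACL matching: rules are checked top to bottom,
# the first match wins, and an unmatched packet hits the implicit
# deny at the end.

def acl_check(rules, packet):
    for action, src, dst, port in rules:
        if (src in ("any", packet["src"])
                and dst in ("any", packet["dst"])
                and port in ("any", packet["port"])):
            return action
    return "deny"                 # implicit deny at the end of every ACL

dmz_rules = [
    ("deny",   "198.51.100.9", "any", "any"),   # known bad source
    ("permit", "any", "10.0.0.80", 80),         # web traffic to the DMZ host
]

assert acl_check(dmz_rules, {"src": "1.2.3.4", "dst": "10.0.0.80", "port": 80}) == "permit"
assert acl_check(dmz_rules, {"src": "198.51.100.9", "dst": "10.0.0.80", "port": 80}) == "deny"
assert acl_check(dmz_rules, {"src": "1.2.3.4", "dst": "10.0.0.80", "port": 23}) == "deny"
```

Because order matters, the deny for the known bad source must come before the broader permit; swapping the two rules would let that source reach the web server.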
Now we're going to look at policy enforcement using Access Control Lists.
We want the ability to stop and reroute traffic based on packet characteristics, based
on the information that's flowing across the network.
We can do this with access control lists on incoming or outgoing interfaces. In other
words, depending on if this is going to be your connection to the outside world, or to
an intranet, you can define where this control is going to be. You can do this together
with NetFlow to provide high-speed enforcement on network access points.
If you had an Access Control List that simply dropped packets that were
unacceptable but without a way of logging that and telling you about it, then you
may miss some alerts today to potentially more malicious behavior in the future. And
so it's very important to have logs that you review periodically.
Importance of Firewalls
Firewalls are used to build trusted perimeters around information and services. Your
Internet security solution must be able to allow employees to access Internet
resources, while keeping out unauthorized traffic. The most common way of
protecting the internal network is by using a firewall between the intranet and the
Internet.
What Is a Firewall?
So what are the basic requirements of an Internet firewall? First, a firewall needs to
be able to analyze all the traffic passing between the internal user community and
the external network. In this way it can ensure that only authorized traffic, as defined
by the security policy, is permitted through. It can also ensure that content which
could be potentially harmful to the internal network is filtered out.
A firewall also needs to be designed to resist attacks, since once a hacker gains
control of the firewall, the internal network could be compromised. And finally, it
should be able to hide the addresses of the internal network from the outside world,
making the life of a potential hacker much more difficult.
Importantly, a firewall needs to support all these requirements and have the ability to
support the constantly increasing Internet connection speeds and traffic loads, so
that it doesn‘t become a bottleneck.
Packet-Filtering Routers
The traditional approach was access routers, using access control lists to control
network access. This was a low-cost, high-performance solution: it didn't require
UNIX expertise and was transparent to users, with no requirement for them to change
their behavior or configuration.
The issue was that internal addresses were exposed to the Internet. If users were
logging on to servers susceptible to attack or snooping, someone could see the host
addresses. This is often the first step in finding holes in a network: once an attacker
knows a host's address, the host itself can be attacked. It is important to hide
internal addresses.
In most cases, it was also possible to spoof in. Spoofing means someone represents
themselves as a trusted host in the network, thus gaining free access to it. Complex
ACLs are also difficult to maintain, so it's easy to make a mistake. These
shortcomings brought about the development of proxy servers, which in turn
introduced statefulness, discussed in more detail later.
Proxy Service
Proxy servers are also sometimes known as "bastion hosts". As the name suggests,
this kind of firewall acts as a "proxy" for internal computers accessing the Internet.
To the outside world, it appears as if all sessions terminate at a single host, which is
carefully configured for maximum security.
Proxy servers hide IP addresses, so they are not exposed to the outside world. Certain
proxy servers can also examine content, limiting what can or cannot be done, such
as blocking FTP gets, or inspecting higher up the application layer to decide what is
allowed. They can also run other services (e.g., your mail services).
The problem is that you're buying a dedicated box, plus software, plus maintaining
its operating system. You must follow CERT alerts and make changes quickly,
because hackers follow the same alerts and can use those techniques to break in
before you patch. This requires a lot of administration and time spent monitoring
advisories, which is difficult in today's busy environment.
This was also a very intrusive method for users, since they had to configure their
applications to use the firewall and go through two- or three-step logins to gain
access. It was not at all transparent to the user.
Stateful Sessions
Many firewalls talk about being stateful, but what does this mean and why is it
important? If you know what traffic to expect on your network, you can ensure that
only that traffic is admitted. For example, when Mary sends a web request to a
homepage (www.e-tutes.com), a stateful firewall will remember this. When a page
comes back from e-tutes.com to Mary, the firewall will expect it and let the traffic
pass.
Each time a TCP connection is established from an inside host accessing the Internet
through the firewall, the information about the connection is logged in a stateful
session flow table. The table contains the source and destination addresses, port
numbers, TCP sequencing information, and additional flags for each TCP connection
associated with that particular host.
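A bare-bones version of that session flow table can be sketched as follows. This is an illustration only (the addresses are invented, and a real table would also track the sequence numbers and flags mentioned above), but it shows the core idea: inbound traffic is allowed only when it mirrors a recorded outbound flow.

```python
# Minimal sketch (not a real firewall) of a stateful session flow table:
# outbound TCP connections are recorded, and an inbound packet is only
# accepted if it matches a known flow in reverse.

sessions = set()  # entries: (src_ip, src_port, dst_ip, dst_port)

def outbound(src_ip, src_port, dst_ip, dst_port):
    """Record the new connection in the session flow table."""
    sessions.add((src_ip, src_port, dst_ip, dst_port))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port):
    """An inbound packet is expected only if it mirrors a recorded flow."""
    return (dst_ip, dst_port, src_ip, src_port) in sessions

# Mary requests a web page (the server address here is illustrative)
outbound("10.0.0.5", 40001, "203.0.113.7", 80)
print(inbound_allowed("203.0.113.7", 80, "10.0.0.5", 40001))   # True: reply expected
print(inbound_allowed("198.51.100.1", 80, "10.0.0.5", 40001))  # False: unsolicited
```

The unsolicited packet in the last line is exactly the kind of traffic a stateless packet filter would struggle to distinguish from a legitimate reply.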
Performance Requirements
High performance in a firewall is critical. This is driven not only by your end-user
community, but by some of the applications people plan to use. Today's performance
requirements are being driven by new technologies.
For instance, some of the multimedia applications like video or audio over the
Internet require a high performance firewall.
Integrity—Privacy
Next, let's look at some of the different privacy requirements people might have. The
following are some of the methodologies used to ensure privacy on the
network.
- VPNs: IPSec, IKE; encryption: DES, 3DES; digital certificates: CET, CEP
Encryption and Decryption
Encryption can also be performed at the network layer by general networking devices
for specific protocols. This has the advantage of operating transparently between
subnet boundaries and being reliably enforceable from a network administrator's
perspective.
Finally, encryption can be performed at the link layer by specific encryption devices
for a given media or interface type. This has the advantage of being protocol
independent, but has to be performed on a link-by-link basis.
Institutions such as the military have been using link-level encryption for years. With
this scheme, every communications link is protected with a pair of encrypting
devices, one on each end of the link. While this system provides excellent data
protection, it is quite difficult to provision and manage. It also requires that each end
of every link in the network be secure, because the data is in clear text at those
points. Of course, this scheme doesn't work at all on the Internet, where possibly
none of the intermediate links are accessible to you or trusted.
What Is IPSec?
IPSec provides network layer encryption. IPSec is a framework of open standards for
ensuring secure private communications over the Internet. Based on standards
developed by the IETF, IPSec ensures confidentiality, integrity, and authenticity of
data communications across a public network. IPSec provides a necessary
component of a standards-based, flexible solution for deploying a network-wide
security policy.
In other words, you can just set it up at the router level or the level that makes sense
to you, and your users don't necessarily have to know that they're implementing
IPSec.
For example, you can define that all transactions between your company and a
partner company, say between ordering and manufacturing, travel across IPSec,
while other traffic does not. It's an end-to-end security solution that incorporates
routers, firewalls, PCs, and servers.
IPSec Everywhere!
IPSec can run in any device with an IP stack, as shown in the picture. This is an
important point, as customers can deploy IPSec where they are most comfortable:
On the gateway/router: much easier to install and manage, since you are dealing
with only a limited set of devices. The network infrastructure provides the security.
On the host/server: the best end-to-end security, but the hardest to install and
manage. Good for applications that really need this level of control.
The encryption we're utilizing here with IPSec, DES and Triple DES, consists of
widely adopted standards. They encrypt plain text, which then becomes cipher text.
DES performs 16 rounds of encryption; Triple DES does considerably more, running
the DES algorithm three times.
Triple DES applies that encryption again and again, with three separate 56-bit keys,
for an effective key length of 168 bits. You can do this on the client, on the server, on
the router, or on the firewall.
Now, obviously, when you're doing three passes of encryption with a 168-bit key,
you're going to introduce some latency. You need to consider the performance
implications when using Triple DES.
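The three-pass structure of Triple DES is encrypt-decrypt-encrypt (EDE) with three keys. The sketch below shows only that composition, using a toy XOR "cipher" in place of the real 16-round DES algorithm; XOR is emphatically not secure encryption, it just makes the key-combining structure visible.

```python
# The encrypt-decrypt-encrypt (EDE) structure of Triple DES, sketched with a
# toy XOR "cipher" standing in for the real DES rounds. Only the three-key
# composition is the point here; XOR is NOT secure encryption.

def toy_encrypt(key: int, block: int) -> int:
    return block ^ key

def toy_decrypt(key: int, block: int) -> int:
    return block ^ key  # XOR is its own inverse

def triple_ede_encrypt(k1, k2, k3, block):
    # 3DES runs the base cipher three times: encrypt with k1, decrypt with
    # k2, encrypt with k3. Three independent 56-bit keys give the 168-bit
    # effective key length discussed above.
    return toy_encrypt(k3, toy_decrypt(k2, toy_encrypt(k1, block)))

def triple_ede_decrypt(k1, k2, k3, block):
    # Undo the three passes in reverse order.
    return toy_decrypt(k1, toy_encrypt(k2, toy_decrypt(k3, block)))

ct = triple_ede_encrypt(0x1A, 0x2B, 0x3C, 0x55)
print(hex(triple_ede_decrypt(0x1A, 0x2B, 0x3C, ct)))  # 0x55: round-trips correctly
```

A nice property of the EDE ordering: if you set k1 equal to k2, the first two passes cancel and the whole thing degenerates to single encryption with k3, which is how real 3DES hardware stays backward compatible with plain DES.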
How secure is DES? The common way to break it is an exhaustive search: you simply
try every possible key until you find the one that decrypts the traffic.
So researchers took a big network that had some Crays on it plus a whole bunch of
PCs, and instead of a screen saver they ran a little program that tried keys whenever
a PC was idle. This led to the realization that the Internet itself is made up of lots of
computers that could work on the problem simultaneously.
In fact, the Electronic Frontier Foundation and distributed.net did just this: they
cracked a 56-bit DES challenge in just 22 hours and 15 minutes. So even if DES is
not completely insecure today, it soon will be. This is why we need to start thinking
about Triple DES.
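The exhaustive search itself is trivial to express in code. The sketch below brute-forces a deliberately tiny 16-bit keyspace against the same toy XOR cipher used earlier (real DES has 2^56 keys, which is why the EFF effort needed custom hardware plus distributed.net); all the values here are invented for the demo.

```python
# Exhaustive key search, the attack described above, on a toy 16-bit
# keyspace. Real DES has 2**56 keys, so the same loop at this scale
# required purpose-built hardware and thousands of machines.

def toy_encrypt(key: int, block: int) -> int:
    return (block ^ key) & 0xFFFF  # placeholder cipher, not DES

secret_key = 0xBEEF
plaintext = 0x1234
ciphertext = toy_encrypt(secret_key, plaintext)

def brute_force(plain, cipher):
    """Try every key until one maps the known plaintext to the ciphertext."""
    for key in range(2 ** 16):
        if toy_encrypt(key, plain) == cipher:
            return key
    return None

print(hex(brute_force(plaintext, ciphertext)))  # 0xbeef
```

Note the attack needs only one known plaintext/ciphertext pair; the rest is raw compute, which is why key length (and hence Triple DES) is the defense.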
Now, does this mean that 56-bit DES isn't safe enough for your client who runs a
local hardware shop? It probably is safe enough.
Again, you need to take into account the particular costs that you have and how
motivated someone's going to be to break into your particular stream.
Active Audit
Why is active audit necessary? Many companies rely on their perimeter security, but
once the perimeter is breached, most of the network and its systems are virtually
unprotected.
First, hackers are quite likely to be employees, or may have breached the security
perimeter through a business partner or a modem. Because they are considered
'trusted', they have already bypassed most network security, such as firewalls,
encryption, and authentication. Note: the company network is usually considered the
'trusted' network while the Internet is 'untrusted'. However, with up to 80% of
security breaches occurring in the 'trusted' network, companies may want to rethink
their strategies for protecting systems and data.
Second, the defense may be ineffective. Aging, mismanaged security is no match for
today‘s hacker, who is constantly improving techniques.
Third, most security breaks down due to human error. People make mistakes
programming firewalls, they enable services on the network and forget to turn them
off, they are not diligent about changing passwords, they add modems and forget to
remove them -- the list goes on and on.
Fourth, the network is always growing and changing. Every change is a new
opportunity for the patient hacker, who may spend months or even years waiting for
an opening. Firewalls, authorization, and encryption provide policy enforcement, but
they do not monitor behavior. And with hacking, it is the behavior that is the problem.
These problems can be alleviated by creating a security process that includes
visibility into the network.
1) User behaviors -- are your employees, business partners, and anyone else
misusing the network?
2) System vulnerabilities -- if a 'bad guy' gets into your network, have your
systems been secured to lock him out?
This is where a strong firewall gives a false sense of security. You must consider what
would happen if your firewall is compromised.
The most effective security strategy for your network defense is 'defense in depth', or
a 'layered defense'. This means augmenting your point solutions with dynamic
systems that monitor users as they use the network and measure network resources
for changes and vulnerabilities. These technologies should be used to help secure
the network perimeter as well as the intranet.
Often organizations have a tactical approach to network security and do not treat it
with the same importance as network operations. However, more companies today
are taking a strategic approach to network security and treating it as part of the
network operation. This includes development of processes that constantly measure,
monitor and improve the security posture.
Active audit is the systematic implementation of the security policy: actively
auditing, verifying, detecting intrusions and anomalies, and reporting the findings.
For true enterprise-wide security policy management, active audit capability must be
in place and applicable to all access ports, devices, and media.
Proactive network auditing tools provide preventative maintenance by detecting
security weak points before they can be exploited by intruders.
Intrusion detection tools recognize when the security of the network is in jeopardy.
Intrusion detection provides the burglar alarms that notify you in real-time when
break-in attempts are detected.
For example, you want to be able to see that a bunch of port scans are happening on
your system, and the IP address they originate from. That is somebody who could
potentially be doing bad things to your network.
You want to be able to watch suspect behavior. You also want to watch for things
like: is that person in data entry going back into the data warehouse? Are they going
into our accounting system?
An IDS architecture consists of several parts. There is an IDS engine, something
analogous to a sniffer, watching the line for violations of policy. There is a security
management system, where you supply the instructions about what adheres to your
security policy and what doesn't. And there is real-time alarm notification, some way
to tell the people in the organization: this is what's going on in your network,
something bad is about to happen or is already happening, and it's time to take
action.
Among the things an Intrusion Detection System, or IDS, might detect are attacks
carried in the context of the data, such as denial-of-service attacks on your network.
For example, a Ping of Death has the following signature: it's a ping, but with a
super-large packet size. So you can watch for that kind of traffic and take
appropriate action against it.
The same goes for port sweeps. Other than testing your own network, there is no
reason to do a port sweep except to look for ways to break into a system.
SYN attacks and TCP hijacking fall into that same category. There would be no
reason to do those other than to do malicious activity on your network. So you want
to be able to watch for those.
For the content itself, you want to be able to look at DNS attacks; Internet Explorer
attacks would be another example of a content attack. You also want to watch for
composite scans, telnet attacks, and character-mode attacks. These are all the
kinds of things we can be looking for on the network.
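Two of the signatures named above, the oversized Ping of Death and the port sweep, are simple enough to sketch as per-packet checks an IDS engine might run. The thresholds and addresses below are illustrative assumptions, not values from any real IDS product.

```python
# Sketch of the per-packet signature checks an IDS engine might run,
# covering two attacks discussed above: a Ping of Death (oversized ICMP
# echo) and a port sweep (one source probing many ports).

from collections import defaultdict

MAX_IP_PACKET = 65535       # an ICMP echo claiming more than this is a Ping of Death
PORT_SWEEP_THRESHOLD = 100  # distinct ports from one source before we alarm

ports_seen = defaultdict(set)  # src_ip -> set of destination ports probed

def inspect(src_ip, dst_port, protocol, size):
    """Return the list of alarm names this packet triggers (possibly empty)."""
    alerts = []
    if protocol == "icmp-echo" and size > MAX_IP_PACKET:
        alerts.append("ping-of-death")
    ports_seen[src_ip].add(dst_port)
    if len(ports_seen[src_ip]) > PORT_SWEEP_THRESHOLD:
        alerts.append("port-sweep")
    return alerts

print(inspect("198.51.100.7", 0, "icmp-echo", 70000))  # ['ping-of-death']
for port in range(1, 102):  # one host probing 101 ports trips the sweep alarm
    alerts = inspect("203.0.113.5", port, "tcp", 60)
print(alerts)  # ['port-sweep']
```

In a real deployment these alerts would feed the alarm-notification component described above rather than being printed.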
Active Audit
Authentication and authorization occur on the front end. Equally important is the
"back-end" side of security. Accounting is the systematic and dynamic verification
that the security policy, as defined, is properly implemented. It provides assurance
that the security policy is consistent and operating correctly.
Accounting should be handled by a system that is totally separate from the network
security solutions that are installed. Currently, there aren't many tools available for
active audit, which explains why many companies hire outside auditors to check
their security implementations.
- SUMMARY -
The Agenda
- VPN Technologies
- VPN Examples
A VPN can be built on the Internet or on a service provider's IP, Frame Relay, or ATM
infrastructure. Businesses that run their intranets over a VPN service enjoy the same
security, QoS, reliability, and scalability as they do in their own private networks.
VPNs based on IP can naturally extend the ubiquitous nature of intranets over wide-
area links, to remote offices, mobile users, and telecommuters. Further, they can
support extranets linking business partners, customers, and suppliers to provide
better customer satisfaction and reduced manufacturing costs. Alternatively, VPNs
can connect communities of interest, providing a secure forum for common topics of
discussion.
Building a virtual private network means using the "public" Internet (or a service
provider's network) as your "private" wide-area network.
Since it's generally much less expensive to connect to the Internet than to lease your
own data circuits, a VPN may allow you to connect remote offices or employees who
wouldn't ordinarily justify the cost of a regular WAN connection.
VPNs may be useful for conducting secure transactions, or transferring highly
confidential data between offices that have a WAN connection.
- Tunneling
- Encryption
- QoS
- Comprehensive security
- Reduce costs
- Internet-based VPNs offer low-cost connectivity from anywhere in the world, and
can be considered a viable replacement for leased-line or Frame Relay services.
Using the Internet as a replacement for expensive WAN services can cut costs by
as much as 60 percent, according to Forrester Research.
- VPNs also lower remote access costs by connecting mobile users over the Internet
(often referred to as virtual private dial-up networking, or VPDN).
- A VPN can provide more connectivity options (for example, over cable, DSL,
telephone, or Ethernet)
- Increased speed of deployment
- Extranets can be created more easily (you don't wait for suppliers). This keeps
the customer in control of their own destiny.
The strain on today's corporate networks is greater than ever before. Network
managers must continually find ways to connect geographically dispersed work
groups in an efficient, cost-effective manner. Increasing demands from feature-rich
applications used by a widely dispersed workforce are causing businesses of all sizes
to rethink their networking strategies. As companies expand their networks to link
up with partners, and as the number of telecommuters and remote users continues
to grow, building a distributed enterprise becomes ever more challenging.
To meet this challenge, VPNs have emerged, enabling organizations to outsource
network resources on a shared infrastructure. Access VPNs in particular appeal to a
highly mobile work force, enabling users to connect to the corporate network
whenever, wherever, or however they require.
Networked Applications
The traditional drivers of network deployment are also driving the deployment of
VPNs.
Example of a VPN
This is what a VPN might look like for a company with offices in Munich, New York,
Paris, and Milan.
VPN Technologies
Let's take a look at some of the technologies that are integral to virtual private
networks.
Business-ready VPNs rely on both security and QoS technologies. Let's look at each
of these in more detail.
Security
Layer 2 Forwarding (L2F) enables remote clients to gain access to corporate networks
through existing public infrastructures, while retaining control of security and
manageability. Cisco has submitted this new technology to the IETF for approval as a
standard. It supports scalability and reliability features as discussed in later sections
of this document.
L2F achieves private network access through a public system by building a secure
"tunnel" across a public infrastructure to connect directly to a home gateway. The
service requires only local dialup capability, reducing user costs and providing the
same level of security found in private networks.
Using L2F tunneling, service providers can create a virtual tunnel to link customer
remote sites or remote users with corporate home networks. In particular, a network
access server at the POP exchanges PPP messages with the remote users and
communicates by L2F requests and responses with the customer's home gateway to
set up tunnels. L2F passes protocol-level packets through the virtual tunnel between
endpoints of a point-to-point connection.
Frames from remote users are accepted by the service provider POP, stripped of any
linked framing or transparency bytes, encapsulated in L2F, and forwarded over the
appropriate tunnel. The customer's home gateway accepts these L2F frames, strips
the L2F encapsulation, and processes incoming frames for the appropriate interface.
One of the most significant advantages of this approach is that Service Providers can
offer application-level QoS. This is possible because the routers still have visibility
into the additional IP header information needed for fine-grained QoS (this is hidden
in an IPSec packet).
- Encryption-optional tunneling.
- Fine-grained QoS service capabilities, including application-level QoS.
- IP-level visibility makes this the platform of choice for building value-added
services such as application-level bandwidth management.
What Is IPSec?
IPSec assumes that a security association (SA) is in place, but does not itself have a
mechanism for creating that association. The IETF chose to break the process into
two parts: IPSec provides the packet-level processing, while IKE negotiates security
associations. IKE is the mechanism IPSec uses to set up SAs.
IKE can be used for more than just IPSec; IPSec is simply its first application. It can
also be used with S/MIME, SSL, and so on.
- Negotiates its own policy. IKE has several methods it can use for authentication
and encryption. It is very flexible. Part of this is to positively identify the other side
of the connection.
- Once it has negotiated an IKE policy, it will perform an exchange of key-material
using authenticated Diffie-Hellman.
- After the IKE SA is established, it will negotiate the IPSec SA. It can derive the
IPSec key material with a new Diffie-Hellman exchange or by a permutation of
existing key material.
- Identification
- Negotiation of policy
- Exchange key material
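The "exchange key material" step rests on Diffie-Hellman, which the bullets above mention. Here is the exchange at its barest, in Python. The prime is far too small for real use and the exchange is unauthenticated; real IKE uses standardized groups of 1024 bits or more and authenticates the exchange (with certificates or pre-shared secrets) to identify the other side.

```python
# A bare Diffie-Hellman exchange of the kind IKE performs to establish key
# material. The tiny public prime here is for illustration only; IKE's
# standardized groups are 1024+ bits, and IKE authenticates the exchange.

import secrets

p = 0xFFFFFFFB  # small public prime (largest prime below 2**32), demo only
g = 5           # public generator

a = secrets.randbelow(p - 2) + 1   # each side keeps its exponent private
b = secrets.randbelow(p - 2) + 1

A = pow(g, a, p)  # these public values are what crosses the untrusted network
B = pow(g, b, p)

shared_initiator = pow(B, a, p)   # g^(a*b) mod p, computed independently
shared_responder = pow(A, b, p)   # by each side from what it received
print(shared_initiator == shared_responder)  # True: both derive the same secret
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret from those is the discrete logarithm problem, which is why the exchange can run safely over a public network once the endpoints are authenticated.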
Now that you understand both IPSec and IKE, let's look at what really happens from
the client's perspective.
An IPSec client is a software component that allows a desktop user to create an IPSec
tunnel to a remote site. IPSec provides privacy, integrity, and authenticity for VPN
client operations. With IPSec, no one can see what data you are sending and no one
can change it.
Data entered by a remote user dialing in via the public Internet is encrypted all the
way to corporate headquarters, from the IPSec client to a router at the home
gateway. First, the remote user dials into the corporate network. The client uses
either an X.509 certificate or a one-time password with a AAA server to negotiate an
Internet Key Exchange. Only after it is authenticated is a secure tunnel created.
Then all data is encrypted.
IPSec is transparent to the network infrastructure and is scalable from very small
applications to very large networks. As you can see, this is an ideal way to connect
remote users or telecommuters to corporate networks in a safe and secure
environment.
Another thing that people often get confused about is the relationship between L2TP
and IPSec. Remember that L2TP is Layer 2 Tunneling Protocol. Some people think
that the two technologies are exclusive of each other. In fact, they are
complementary.
So you can use both of these together. IPSec can create remote tunnels. L2TP can
provide tunnel and end-to-end authentication.
So IPSec maintains the encryption, but often you want to tunnel non-IP traffic in
addition to IP traffic. L2TP is useful for that.
DES stands for Data Encryption Standard. It is a widely adopted standard created to
protect unclassified computer data and communications. DES has been incorporated
into numerous industry and international standards since its approval in the late
1970s.
DES and 3DES are strong forms of encryption that allow sensitive information to be
transmitted over untrusted networks. They enable customers to utilize network layer
encryption.
Firewalls
A key component of VPN security is making sure authorized users gain access to
enterprise computing resources they need, while unauthorized users are shut out of
the network entirely. AAA services (that stands for authentication, authorization, and
accounting) provide the foundation to authenticate users, determine access levels,
and archive all the necessary audit and accounting data. Such capabilities are
paramount in the dial access and extranet applications of VPNs.
So how does QoS play a role in VPNs? The goal of QoS is to control the utilization of
bandwidth so that you can support mission-critical applications. Here's how it
works: the customer premises equipment, or CPE, assigns packet priority based on
the network policy. Packets are marked and bandwidth is managed so that the VPN
WAN links don't choke out the important traffic.
One example could be an employee streaming television from the Internet to his PC,
where the video traffic clogs a small 56K WAN line, making it impossible for
mission-critical financial application data to pass.
With QoS, you can take advantage of a service provider's differentiated services to
maximize network resources and minimize congestion at peak times.
For example, e-mail traffic doesn't care about latency, but video and mission-critical
applications do. Several components of bandwidth management and QoS apply to
VPNs.
These QoS features complement each other, working together in different parts of the
VPN to create a comprehensive bandwidth management solution. Bandwidth
management solutions must be applied at multiple points on the VPN to be effective;
single point solutions cannot ensure predictable performance.
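The mark-then-manage idea described above can be sketched as a priority queue: the CPE classifies each packet from policy, and the link drains the highest-priority traffic first so bulk traffic cannot choke out mission-critical data. The class names and priority values below are invented for illustration, not taken from any real QoS scheme.

```python
# Sketch of CPE packet classification and priority queuing: packets are
# marked with a priority from policy, then drained highest-priority first.
# Class names and priority numbers are illustrative assumptions.

import heapq

POLICY = {"financial-app": 0, "voice": 1, "email": 5, "video-stream": 7}

queue = []  # heap entries: (priority, sequence, packet); lower value drains first
seq = 0     # tie-breaker preserving arrival order within a class

def enqueue(app, payload):
    global seq
    heapq.heappush(queue, (POLICY.get(app, 9), seq, payload))
    seq += 1

enqueue("video-stream", "tv frame")      # arrives first, but lowest priority
enqueue("email", "newsletter")
enqueue("financial-app", "trade order")  # arrives last, but drains first

while queue:
    _, _, packet = heapq.heappop(queue)
    print(packet)  # trade order, then newsletter, then tv frame
```

This mirrors the 56K-line example above: the trade order goes out ahead of the video frames no matter when each arrived.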
Intranet VPNs link corporate headquarters, remote offices, and branch offices over a
shared infrastructure using dedicated connections. The VPN typically is an
alternative to a leased line. It provides the benefit of extended connectivity and lower
cost.
Access VPNs
Remote access VPNs extend the corporate network to telecommuters, mobile workers,
and remote offices with minimal WAN traffic. They enable users to connect to their
corporate intranets or extranets whenever, wherever, or however they require.
Remote access VPNs provide connectivity to a corporate intranet or extranet over a
shared infrastructure with the same policies as a private network. Access methods
are flexible: asynchronous dial, ISDN, DSL, mobile IP, and cable technologies are
supported. Migrating from privately managed dial networks to remote access VPNs
offers several benefits:
- Reduced capital costs associated with modem and terminal server equipment
- Ability to utilize local dial-in numbers instead of long distance or 800 numbers,
thus significantly reducing long distance telecommunications costs
- Greater scalability and ease of deployment for new users added to the network
L2TP Network Server (LNS): A device such as a Cisco router located in the customer
premises. Remote dial users access the home LAN as if they were dialed into the
home gateway directly, although their physical dialup is via the ISP network access
server. Home gateway is the Cisco term for LNS.
An LNS operates on any platform capable of PPP termination. LNS handles the server
side of the L2TP protocol. Because L2TP relies only on the single media over which
L2TP tunnels arrive, LNS may have only a single LAN or WAN interface, yet still be
able to terminate calls arriving at any LAC's full range of PPP interfaces (async,
synchronous ISDN, V.120, and so on). LNS is the initiator of outgoing calls and the
receiver of incoming calls. LNS is also known as HGW in L2F terminology.
L2TP Access Concentrator (LAC): A device such as a Cisco access server attached to
the switched network fabric (for example, PSTN or ISDN) or colocated with a PPP end
system capable of handling the L2TP protocol. A LAC needs to implement only the
media over which L2TP is to operate in order to pass traffic to one or more local
network servers (LNSs). It may tunnel any protocol carried within PPP. The LAC is
the initiator of incoming calls and the receiver of outgoing calls. LAC is also known
as NAS in L2F terminology.
Client-Initiated VPNs
An advantage of a client-initiated model is that the "last mile" service provider access
network used for dialing to the point of presence (POP) is secured. An additional
consideration in the client-initiated model is whether to utilize operating system
embedded security software or a more secure supplemental security software
package. While supplemental security software installed on the client offers more
robust security, a drawback to this approach is that it entails installing and
maintaining tunneling/encryption software on each client accessing the remote
access VPN, potentially making it more difficult to scale.
In a NAS-initiated scenario, client software issues are eliminated. A remote user dials
into a service provider's POP using a PPP/SLIP connection, is authenticated by the
service provider, and, in turn, initiates a secure, encrypted tunnel to the corporate
network from the POP using L2TP or L2F. With a NAS-initiated architecture, all VPN
intelligence resides in the service provider network---there is no end-user client
software for the corporation to maintain, thus eliminating client management
burdens associated with remote access. The drawback, however, is lack of security
on the local access dial network connecting the client to the service provider network.
In a remote access VPN implementation, these security/management trade-offs must
be balanced.
NAS-Initiated VPNs
IPSec provides encryption only, in contrast with the client-initiated model where
IPSec enables both tunneling and encryption. Premium service examples include
reserved modem ports, guarantees of modem availability, and priority data transport.
The NAS can simultaneously be used for Internet as well as VPN access.
All traffic to a given destination travels over a single tunnel from a NAS, making
larger deployments more scalable and manageable.
Con: NAS-initiated Access VPN connections are restricted to POPs that can support
VPNs.
Intranet VPNs: Link corporate headquarters, remote offices, and branch offices over a
shared infrastructure using dedicated connections. Businesses enjoy the same
policies as a private network, including security, quality of service (QoS),
manageability, and reliability.
One of the primary benefits of a VPN WAN architecture is the ease of extranet
deployment and management. Extranet connectivity is deployed using the same
architecture and protocols utilized in implementing intranet and remote access VPNs.
The primary difference is the access permission extranet users are granted once
connected to their partner's network.
Intranet and extranet VPN services based on IPSec, GRE, and mobile IP create secure
tunnels across an IP network. These technologies leverage industry standards to
establish secure, point-to-point connections in a mesh topology that is overlaid on
the service provider's IP network or the Internet. They also offer the option to
prioritize applications. An IPSec architecture, however, includes the IETF proposed
standard for IP-based encryption and enables encrypted tunnels from the access
point to and across the intranet or extranet.
Finally, in addition to IP tunnels and virtual circuits, intranet and extranet VPNs can
be deployed with a Tag Switching/MPLS architecture. Tag Switching is a switching
mechanism created by Cisco Systems and introduced to the IETF under the name
MPLS. MPLS has been adopted as an industry standard for converging IP and ATM
technologies.
A VPN built with Tag Switching/MPLS affords broad scalability and flexibility across
any backbone choice whether IP, ATM, or multivendor. With Tag Switching/MPLS,
packets are forwarded based on a VPN-based address that is analogous to mail
forwarded with a postal office zip code. This VPN identifier in the packet header
isolates traffic to a specific VPN. Tag Switching/MPLS solves peer adjacency
scalability issues that occur with large virtual circuit topologies. It also offers
granularity to the application for priority and bandwidth management, and it
facilitates incremental multiservice offerings such as Internet telephony, Internet fax,
and videoconferencing.
Access VPNs are differentiated from intranet and extranet VPNs primarily by the
connectivity method into the network. While an access VPN refers to dialup (or part-
time) connectivity, an intranet or extranet VPN may contain both dialup and
dedicated links.
The distinction between intranet and extranet VPNs is essentially in the users that
will be connecting to the network and the security restrictions that each will be
subject to.
VPN Examples
Well, why would they care so much about security? Your health records are
something that you want to be secure; this is information that you don't want
unauthorized personnel to have access to.
So you can see in the figure, the company has a number of remote centers.
In this case, these are doc-in-the-box clinics, those little new medical centers that
are springing up. They are linked back over the primary network to the main
hospital that these medical centers are associated with.
The more sophisticated databases and so on can stay back at the hospital, and the
sites can share the Internet and, with confidence, exchange medical data that they
don't want published to the outside world.
This isn't just encrypting mail or just encrypting a database. You can encrypt all
traffic if you want to, and you can configure, right in the router and right in your
client, exactly which traffic you want to encrypt.
So using this, telecommuters can have full access safely to the corporation.
To illustrate the savings an Access VPN can provide, compare the cost of
implementing one with that of supporting a dial-up remote access application.
Suppose a small manufacturing firm must support 20 mobile users dialing into the
corporate network to access the company database and e-mail for approximately 90
minutes per day (per user).
In the traditional dial-up model, the 20 mobile workers use a modem to dial long
distance directly into their corporate remote access server. Most of the cost in this
scenario comes from the monthly toll charges and the time and effort required to
manage modem pools (access server) that accrue on an on-going basis over the life of
the application.
By using an access VPN, the manufacturing firm‘s monthly toll charges can be
significantly reduced. The mobile users will dial into a service provider‘s local point of
presence (POP) and initiate a tunnel back to the corporate headquarters over the
Internet. Instead of paying long distance/800 toll charges, users pay only the cost
equivalent to making a local call to the ISP. The initial investment in equipment and
installation of an access VPN may be recaptured quickly by the savings in monthly
toll charges.
How long will it take the manufacturing firm to realize a payback of the initial capital
investment, then realize recurring monthly savings?
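As a rough sketch of that payback question, the comparison can be modeled in a few lines. All of the dollar figures below (toll rate, local-call rate, ISP fees, equipment cost) are illustrative assumptions, not quoted prices:

```python
# Illustrative Access VPN payback model. Every dollar figure here is an
# assumption for the sake of the example, not a quoted rate.

def monthly_toll(users, minutes_per_day, rate_per_minute, workdays=22):
    """Monthly toll cost for all users dialing in at the given per-minute rate."""
    return users * minutes_per_day * workdays * rate_per_minute

def payback_months(equipment_cost, old_monthly, new_monthly):
    """Months until monthly savings recover the initial investment."""
    savings = old_monthly - new_monthly
    if savings <= 0:
        raise ValueError("no monthly savings; payback never occurs")
    return equipment_cost / savings

# 20 mobile users, 90 minutes per day each
dialup = monthly_toll(20, 90, 0.10)        # assumed $0.10/min long distance
vpn = monthly_toll(20, 90, 0.01) + 600     # assumed local-call rate plus ISP fees

print(payback_months(11000, dialup, vpn))  # assumed $11,000 initial equipment
```

With these assumed numbers the payback lands in the three-to-four-month range.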
VPN Payback
This chart shows us the return on investment. You can see that the payback is right
about three months.
So you can see that VPNs save money in the long run.
- Summary -
Lower cost: VPNs save money because they use the Internet, not costly leased lines,
to transmit information to and from authorized users. Prior to VPNs, many
companies with remote offices communicated through wide area networks (WANs), or
by having remote workers make long-distance calls to connect to the main-office
server. Both can be expensive propositions. WANs require establishing dedicated and
inflexible leased lines between various business locations, which can be costly or
impractical for smaller offices.
Improved communications: A VPN provides a robust level of connectivity
comparable to a WAN. With increased geographic coverage, remote offices, mobile
employees, clients, vendors, telecommuters, and even international business
partners can use a VPN to access information on a company's network. This level of
interconnectivity allows for a more effective flow of information between a large
number of people. The VPN provides access to both extranets and wide-area
intranets, which opens the door for improved client service, vendor support, and
company communications.
Security: VPNs maintain privacy through the use of tunneling protocols and
standard security procedures. A secure VPN encrypts data before it travels through
the public network and decrypts it at the receiving end. The encrypted information
travels through a secure "tunnel" that connects to a company's gateway. The gateway
then identifies the remote user and lets the user access only the information he or
she is authorized to receive.
Increased flexibility: With a VPN, customers, suppliers and remote users can be
added to the network easily and quickly. Some VPN solutions simplify the process of
administering the network by allowing the system's manager to implement changes
from any desktop computer. Once the equipment is installed, the company simply
signs up with a service provider that activates the network by giving the company a
slice of its bandwidth. This is much easier than establishing a WAN, which must be
designed, built and managed by the company that creates it. VPNs also easily adapt
to a company's growth. These systems can connect 2,000 people as easily as 25.
Reliability: A secure VPN can be used for the authorization of orders from suppliers,
the forwarding of revised legal documents, and many other confidential business
processes. Recent improvements in VPN technology have also increased the system's
reliability. Many service providers will guarantee 99% VPN uptime and will offer
credits for unanticipated outages.
Welcome to the Voice Technology Basics lesson. Combined voice and data networks
are definitely a hot topic these days. In this module, we‘ll start by discussing the
convergence of voice and data. We‘ll present a bit of history as well so that you
understand how this all came about.
We‘ll then move into discussing actual voice technology. There‘s a lot to cover here
and a lot of vocabulary you‘ll need to be familiar with. We‘ll start with understanding
the traditional telephony equipment. We‘ll also discuss voice quality issues as well as
enabling technologies such as compression that are making voice/data networks
possible.
After we cover the technology, we‘ll discuss Voice over IP, Voice over Frame Relay,
and Voice over ATM. We‘ll then cover some of the new applications that are possible
on combined voice/data networks.
Finally, we‘ll look at how a company might migrate from traditional telephony to an
integrated voice/data network.
The Agenda
- Applications
- Sample Migration
Today, voice and data typically exist in two different networks. Data networks use
packet-switching technology, which sends packets across a network. All packets
share the available network bandwidth. At the same time, voice networks use circuit
switching, which seizes a trunk or line for dedicated use. But this is all changing...
Data/Voice Convergence—Why?
There is a lot of talk today about merging voice and data networks. You may hear this
referred to as multiservice networking or data/voice/video integration or just
voice/data integration. They all refer to the same thing. Merging multiple
infrastructures into one that carries all data, regardless of type.
In this new world order, voice is just plain data. The trends driving this integration
are cost initially--saving money. Significant amounts of money can be saved by doing
away with parallel infrastructures. In the long run, though, new business
applications are what will drive the integration of data and voice. Applications such
as:
- Integrated messaging
- Voice-enabled desktop applications
- Internet telephony
- Desktop video (Intel ProShare, Microsoft NetMeeting, etc.)
The place where you can realize the greatest savings is in the wide-area network
(WAN), where the bandwidth and services are very expensive.
The concept here is that at some point, you want voice data "to ride for free." If you
look at the overall bandwidth requirements of voice compared to the rest of the
network, it is minuscule. If you had to charge per-packet or per-kilobit, voice is
basically "free."
Companies should experience several kinds of cost savings. Traditionally, the overall
telecom budget includes three basic sections: capital equipment, support overhead
such as wages and salaries, and facilities. The majority of costs are incurred in the
facilities. Facilities charges are recurring, such as leased-line charges which occur
every month, as opposed to capital equipment, which can be amortized over a couple
of years.
Because facilities are the largest expense, this can also be the place where the most
money can be saved. The largest part of the facilities charge is the telecom budget. If
the telecom budget can be reduced, money can be leveraged out of that to pay for
network expansion.
People tell Cisco, "We have to leverage our budget to converge data, voice, and video.
We have applications that demand exponential growth, and we don't know how to
finance that." Cisco advises customers to look at their established budgets and see if
there is any way to squeeze money out of them by putting in a more efficient
infrastructure with features such as compression, and move all traffic over a single
transport mechanism. On average, users can expect a 30 to 50 percent reduction in
their IT budgets with convergence.
New applications that include voice are becoming increasingly important as they
drive competitive advantage.
Before we get into the nuts and bolts of voice technology, let‘s take a look at just a
couple of these applications that multiservice networks enable.
There is a lot of technology and a lot of issues that are important to understand with
voice/data integration. There‘s also a lot of jargon and vocabulary. Pace yourself as
we move through this section.
We‘ll start by looking at TDM versus packet-based networks. Then we‘ll cover the
traditional telephony equipment. Voice quality issues are essential and we‘ll discuss
these, along with the technologies that are making voice/data convergence a
possibility.
Traditional Separate Networks
Many organizations operate multiple separate networks, because when they were
created that was the best way to provide various types of communication services
that were both affordable and at a level of quality acceptable to the user community.
For example, many organizations currently operate at least three wide-area networks,
one for voice, one for SNA, and another for LAN-to-LAN data communications. This
traffic can be very "bursty."
The traditional model for voice transport has been time-division multiplexing (TDM),
which employs dedicated circuits.
Dedicated TDM circuits are inefficient for the transport of "bursty" traffic such as
LAN-to-LAN data. Let‘s look at TDM in more detail so that you can understand why.
- LAN traffic can typically be supported by TDM in the WAN only by allocating
enough bandwidth to support the peak requirement of each connection or traffic
type. The trade-off is between poor application response time and expensive
bandwidth.
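To make that trade-off concrete, here is a small sketch (with invented traffic figures) comparing the bandwidth a TDM network must reserve against what a statistically multiplexed packet network might be sized for:

```python
# Sketch: why dedicated TDM circuits waste bandwidth on bursty traffic.
# All traffic figures are invented for illustration.

connections = [
    # (average_kbps, peak_kbps) for three bursty LAN-to-LAN flows
    (64, 512),
    (32, 256),
    (128, 768),
]

# TDM must reserve each connection's peak rate for the life of the circuit.
tdm_kbps = sum(peak for _, peak in connections)

# A packet network can be sized closer to the sum of the averages, plus
# headroom for coincident bursts (an assumed 50% here).
packet_kbps = sum(avg for avg, _ in connections) * 1.5

print(tdm_kbps, packet_kbps)
```

The gap between the two totals is the bandwidth that sits idle on TDM circuits whenever the flows are not bursting.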
With a multiservice network, all data is run over the same infrastructure. We no
longer have three or four separate networks, some TDM, some packet. One packet-
based network carries all the data. How does this work? Let‘s look at packet-based
networking.
Packet-Based Networking
As we have just seen, TDM networking allocates fixed time slots through the network.
Packet-based networking instead shares bandwidth statistically among all traffic.
However, this efficiency is not without its cost: in our effort to increase efficiency, we
run the risk of a surge in offered traffic that exceeds our trunk capacity.
In that case, there are two options: we can discard the traffic or buffer it. Buffering
helps us reduce the potential of discarded data traffic, but increases the delay of the
data. Large amounts of oversubscription and large amounts of buffering can result in
long variable delays.
Traditional Telephony
You can‘t really understand voice/data integration unless you understand telephony.
This section covers that.
In a typical voice/analog telephone network, users make an outside phone call from
the phone on their desk. The call then connects to the company‘s internal phone
system or directly to the Public Switched Telephone Network (PSTN) over a basic
telephone service analog trunk or a T1/E1 digital trunk. From the PSTN, the call is
routed to the recipient, such as an individual at home.
If a call connects to a company‘s internal phone system, the call may be routed
internally to another phone on the corporate voice network without ever going
through a PSTN.
The PSTN may contain a variety of transmission media, including copper cable, fiber-
optic cable, microwave communications, and satellite communications.
Traditional Telephony Equipment
EKTS: Electronic key telephone systems improve upon KTS systems. EKTSs often
provide switching capabilities and impressive functionality, crossing into the PBX
world.
PBX: A private branch exchange system allows the sharing of pooled trunks (outside
lines) to which the user typically gains access by dialing an access digit such as "9."
Software in the PBX manages contention for pooled lines. The PBX system has many
features, including simultaneous voice call and data screen, automated dial-outs
from computer databases, and transfers to experts based on responses to questions
rather than phone numbers.
The historical differences between a PBX and a key system have blurred, and both
product lines offer comparable feature sets for station-to-station calling, voice mail,
and so on. Either the customer owns the PBX or it can be owned and operated by a
third party as a service to the end customer. To blur things further, key systems are
beginning to offer selected trunk interfaces.
The major differences between a PBX and a key system are the following:
- A PBX looks to the network like another switch—it connects via trunk (PBX-to-
PBX) interfaces to the network.
- A key system looks like a phone set (station) and connects via lines (station to
PBX).
- PBXs serve the high end of the market.
- Key systems serve the low end of the market.
CO: The central office is the phone company facility that houses the switches.
We will now consider how phone calls are created and sent through the traditional
telephone network.
Signaling
- Off-hook signaling - how a phone call gets started
- Signaling paths
- Signaling types
Addressing
Routing
In any telephone system, some form of signaling mechanism is required to set up and
tear down calls. When a caller from an office desk calls someone across the country
at another office desk, many forms of signaling are used, including the following:
All of these signaling forms may be different. Simple examples of signaling include
ringing of a telephone, dial tone, ringing, and so on.
Control—Interface signals that are used to announce, start, stop, or modify a call.
Control signals are used in interoffice trunk signaling.
A telephone can be in one of two states: off-hook or on-hook. A line is seized when
the phone goes off-hook.
Off-hook—A telephone is off-hook when the telephone handset is lifted from its
cradle. When you lift the handset, the hook switch is moved by a spring and alerts
the PBX that the user wants to receive an incoming call or dial an outgoing call. A
dial tone indicates "Give me an order."
On-hook—A telephone is on-hook when its handset is resting in the cradle and the
phone is not connected to a line. Only the bell is active, that is, it will ring if a call
comes in.
The phone company can provision a Private Line, Automatic Ringdown (PLAR)
between two devices. A PLAR is a leased voice circuit that connects two single
instruments. When either handset is lifted, the other instrument automatically rings.
Typical PLAR applications include a telephone at a bank ATM, phones at an airport
that ring a selected hotel, and emergency phones.
Signaling Between the PBX and CO
- Ground start—A signaling method in which one side of the two-wire line
(typically the "ring" conductor of the tip and ring) is momentarily grounded to get
dial tone.
With a DID trunk, a wink signal from the CO indicates that additional digits will be
sent. After the PBX acknowledges the wink, the DID digits are sent by the CO.
PBXs work best on ground start trunks, though many will work on both loop start
and ground start. Normal single-line phones and key systems typically work on loop
start trunks.
- Supervisory signaling
- Alerting
- Addressing
SS7 is an integral part of ISDN. It enables companies to extend full PBX and Centrex-
based services—such as call forwarding, call waiting, call screening, call transfer, and
so on—outside the switch to the full international network.
Foreign Exchange (FX) trunk signaling can be provided over analog or T1/E1 links.
Connecting basic telephone service telephones to a computer telephony system via T1
links requires a channel band configured with FX type connections.
To generate a call from the basic telephone service set to a computer telephony
system, a foreign exchange office (FXO) connection must be configured. To generate a
call from the computer telephony system to the basic telephone service set, a foreign
exchange station (FXS) connection must be configured.
When two PBXs communicate over a tie trunk, they use E&M signaling (stands for
Earth and Magneto or Ear and Mouth). E&M is generally used for two-way (either
side may initiate actions) switch-to-switch or switch-to-network connections. It is
also frequently used for the computer telephony system to switch connections.
- On-net calling refers to calls that stay on a customer‘s private network, traveling
by private line from beginning to end.
- Off-net calling refers to phone calls that are carried in part on a network but are
destined for a phone that is not on the network. That is, some part of the
conversation‘s journey will be over the PSTN or someone else‘s network.
International and national numbering plans are described by the ITU‘s E.164
recommendation. It is expected that the local telephone company adheres to this
recommendation.
E.164 is only the public network addressing system. There are also private dialing
plans, which are nonstandardized and can be considered highly effective by their
users.
This slide depicts a trunk group that bypasses the PSTN. Selection of this trunk has
been predefined and mapped to the number 8. The access number could be part of
the E.164 addressing scheme or part of a private dialing plan.
Alternate numbering schemes are employed by users and providers of PSTN service
for specific reasons. An example of a non-E.164 plan is the carrier identification code
(CIC), used for selecting different long-distance carriers, tie lines, trunk groups,
WATS lines, and private numbering plans, such as seven-digit dialing.
For integrating voice and data networks, each of these areas must be considered.
Voice Routing
Routing is closely related to the numbering plan and signaling that we just described.
At its most basic level, routing enables the establishment of a call from the source
telephone to the destination telephone. However, most routing is much more
sophisticated and allows subscribers to select specific services.
In terms of implementation, routing is a result of establishing a set of tables or rules
within each switch. As a call comes in, the path to the desired destination and the
type of features available will be derived from these tables or rules.
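The table-driven lookup described above can be sketched as a longest-prefix match over a routing table. The prefixes and trunk names below are invented for illustration:

```python
# Sketch of table-driven call routing: the switch matches the dialed digits
# against configured prefixes and picks the longest match. The prefixes and
# trunk names are hypothetical examples.

ROUTES = {
    "9":    "PSTN trunk group",     # outside line
    "8":    "private tie trunk",    # bypasses the PSTN
    "8555": "branch-office trunk",  # more specific private route
}

def route(dialed):
    """Return the trunk for the longest matching prefix, if any."""
    best = None
    for prefix, trunk in ROUTES.items():
        if dialed.startswith(prefix):
            if best is None or len(prefix) > len(best):
                best = prefix
    return ROUTES[best] if best else "no route"

print(route("85551234"))   # longest match wins: branch-office trunk
print(route("915551234"))  # PSTN trunk group
```

Real switch routing also folds in class-of-service restrictions and alternate routes, but the core mechanism is this kind of rule lookup.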
It is important to know how routing is done in the telephone network, because this
function will be required in an integrated data/voice network.
Now that you understand how today‘s voice networks work, let‘s take a look at how
real-time voice over a data network works.
Voice over packet networks provide techniques for sending real-time voice over data
networks, including IP, Frame Relay, and Asynchronous Transfer Mode (ATM)
networks.
Analog voice is converted into digital voice packets, sent over the data network as
data packets, and converted to analog voice on the other end.
The data network can be an IP LAN, or a leased-line, ATM, or Frame Relay network.
Digital voice packets are converted back to analog voice with the following steps:
What‘s made this all possible is that in the last ten years, a lot of things have
happened in voice technology:
What makes voice compression possible is the power of Digital Signal Processors.
DSPs have continued to increase in performance and decrease in price over time, and
as they have, it has made it possible to use new compression schemes that offer
better quality and use less bandwidth. The power of the DSP makes it possible to
combine this traffic onto a line that formerly supported perhaps only a LAN
connection, but now can support voice, data, and LAN integration.
Looking at this chart, quality and bandwidth tend to trade off. PCM is the standard
64Kbps scheme for coding voice; it is the standard for toll quality. The other
compression schemes - ADPCM at 32Kbps, 24Kbps and 16Kbps - offer less quality
but more bandwidth efficiency. The newer compression schemes - LDCELP at 16Kbps
and CS-ACELP at 8Kbps - offer even higher efficiency with very high quality that is
quite acceptable in a business environment.
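The trade-off can be summarized numerically. This sketch counts only raw codec payload rates (real packet or cell overhead would reduce the call counts); the G.7xx designations are the standard ITU-T names for these schemes:

```python
# Rough calls-per-trunk comparison for the coding schemes above.
# Only the codec payload rate is counted; headers and framing
# overhead would lower these numbers in practice.

CODEC_KBPS = {
    "PCM (G.711)":      64,
    "ADPCM (G.726-32)": 32,
    "ADPCM (G.726-24)": 24,
    "ADPCM (G.726-16)": 16,
    "LDCELP (G.728)":   16,
    "CS-ACELP (G.729)":  8,
}

T1_KBPS = 1536  # 24 usable 64-kbps channels on a T1

for name, kbps in CODEC_KBPS.items():
    print(f"{name:18s} {T1_KBPS // kbps:3d} calls per T1")
```

Moving from PCM to CS-ACELP multiplies the call capacity of the same trunk by eight.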
Voice activity detection (VAD) provides for additional savings beyond that achieved by
voice compression.
Telephone conversations are half duplex by nature, because we listen and pause
between sentences. Sixty percent of a 64-kbps voice channel typically contains
silence. VAD enables traffic from other voice channels or data circuits to make use of
this silence.
The benefits of VAD increase with the addition of more channels, because the
statistical probability of silence increases with the number of voice conversations
being combined.
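A back-of-envelope sketch of the VAD savings, using the 60 percent silence figure cited above:

```python
# Back-of-envelope VAD savings: roughly 60% of a voice channel is silence,
# so the average rate of N channels drops accordingly. The 60% figure is
# the typical value cited above, not a measurement.

def average_kbps(channels, codec_kbps=64, silence_fraction=0.60):
    """Average bandwidth of `channels` conversations with VAD enabled."""
    return channels * codec_kbps * (1 - silence_fraction)

print(average_kbps(1))   # one PCM channel averages about 25.6 kbps
print(average_kbps(24))  # a T1's worth of calls averages about 614 kbps
```

The averages improve as channels are added because the silent periods of the combined conversations interleave more smoothly.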
The advantages of reduced cost and bandwidth savings of carrying voice over packet
networks are associated with some quality of service issues that are unique to packet
networks.
In a circuit-switched or TDM environment, bandwidth is dedicated, making QoS—
quality of service—implicit, whereas, in a packet-switched environment, all kinds of
traffic are mixed in a store-and-forward manner.
So, in a packet-switched environment, there is the need to devise schemes to
prioritize real-time traffic.
So… in an integrated voice data network, QoS is essential to ensure the same high
quality as voice transmissions in the traditional circuit-switched environment.
Some of the quality of service issues customers face include the following:
Delay—Delay causes two problems: echo and talker overlap. Echo is caused by the
signal reflections of the speaker‘s voice from the far-end telephone equipment back
into the speaker‘s ear. Echo becomes a significant problem when the round-trip delay
becomes greater than 50 milliseconds (ms). Talker overlap becomes significant if the
one-way delay becomes greater than 250 ms.
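Those two thresholds can be captured in a small check. The function below is purely illustrative:

```python
# Sketch of the two delay thresholds cited above: echo becomes a problem
# past ~50 ms round trip, talker overlap past ~250 ms one way.

def delay_problems(one_way_ms):
    """Return the voice-quality problems expected at this one-way delay."""
    problems = []
    if 2 * one_way_ms > 50:   # round-trip delay drives echo
        problems.append("echo (needs cancellation)")
    if one_way_ms > 250:      # one-way delay drives talker overlap
        problems.append("talker overlap")
    return problems

print(delay_problems(30))   # echo only
print(delay_problems(300))  # echo and talker overlap
```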
Quality of service issues for voice may be handled by the H.323, VoIP, VoATM, or
VoFR standards, or by an internetworking device. Following are some solutions to
quality of service issues:
Jitter—Adjust the jitter buffer size to minimize jitter. On an ATM network, the
approach is to measure the variation of packet levels over a period of time and
incrementally adapt the buffer size to match the calculated jitter. On an IP network,
the approach is to count the number of packets successfully processed and adjust
the jitter buffer to target a predetermined allowable late packet ratio.
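The IP-side adaptation described above can be sketched as follows; the target late-packet ratio, step size, and bounds are illustrative assumptions:

```python
# Sketch of adaptive jitter buffering for an IP network: grow the buffer
# when too many packets arrive late, shrink it when the late ratio is
# comfortably under target. The target ratio, step size, and bounds are
# illustrative assumptions, not values from any particular product.

def adapt_jitter_buffer(buffer_ms, late, total,
                        target_ratio=0.01, step_ms=5,
                        min_ms=10, max_ms=200):
    """Return the new jitter-buffer depth after one measurement window."""
    late_ratio = late / total
    if late_ratio > target_ratio:        # too many late packets: deepen
        buffer_ms += step_ms
    elif late_ratio < target_ratio / 2:  # comfortably early: shallow out,
        buffer_ms -= step_ms             # reducing delay for the listener
    return max(min_ms, min(max_ms, buffer_ms))

print(adapt_jitter_buffer(40, late=5, total=100))  # grows to 45 ms
print(adapt_jitter_buffer(40, late=0, total=100))  # shrinks to 35 ms
```

The buffer thus trades a little extra delay for a bounded late-packet ratio, which is exactly the jitter/delay trade-off the text describes.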
Lost packets—While dropped packets are not a problem for data (due to
retransmission), they cause a significant problem for voice applications. To
compensate, voice over packet software can interpolate for lost speech packets by
replaying the last packet, or can send redundant information at the expense of
bandwidth utilization.
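The simplest of these concealment strategies, replaying the last good packet, can be sketched in a few lines:

```python
# Minimal packet-loss concealment in the spirit described above: when a
# voice packet is missing, replay the last good one.

def conceal(received):
    """received: list of voice payloads, with None where a packet was lost."""
    out, last = [], b"\x00"        # silence until the first good packet
    for payload in received:
        if payload is None:
            out.append(last)       # interpolate by replaying the last packet
        else:
            out.append(payload)
            last = payload
    return out

print(conceal([b"A", None, b"B", None, None]))
# [b'A', b'A', b'B', b'B', b'B']
```

Real codecs do smarter interpolation, but even this replay trick keeps short, isolated losses from being audible.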
Echo—Echo cancellation techniques are used to compare voice data received from
the packet network with voice data being transmitted to the packet network. The
echo from the telephone network hybrid is removed by a digital filter on the transmit
path into the packet network.
With all of the "marketing hype" around QoS today, many customers have become
skeptical of the claims some vendors are making.
Here‘s one way to look at the actual effect of Cisco QoS technologies on voice quality.
The blue line represents the total network data load. The green line represents voice
quality without QoS. As you can see, the quality of a voice call rises and falls in
response to varying levels of background traffic.
The red line represents voice quality with QoS enabled, showing that high voice
quality remains constant as background traffic fluctuates.
We‘ve covered the building blocks for voice/data integration. Now, let‘s take a look at
the different transports customers can consider.
The most widely used is Voice over IP. Voice over Frame Relay and Voice over ATM
are also important so we‘ll cover these as well.
VoIP:
VoFR:
VoATM:
- ATM Forum:
- Traffic Management Specification Version 4.0—af-tm-0056.000
- Circuit Emulation Service 2.0—af-vtoa-0078.000
- ATM UNI Signaling, Version 4.0—af-sig-0061.0000
- PNNI V1.0—af-pnni-0055.000
Voice over Data Transports
All types of packetized voice implementations lend themselves well to both corporate
and service provider use.
The Voice over IP (VoIP) approach provides Internet service providers (ISPs) with a
competitive weapon against telecommunications companies, while
telecommunications companies prefer a virtual circuit environment using Voice over
Frame Relay (VoFR) or Voice over ATM (VoATM).
In terms of quality, voice over Frame Relay (VoFR), voice over ATM (VoATM), and voice
over IP (VoIP) differ. However, they also differ in terms of cost and in terms of general
usability.
Frame Relay‘s variance does have an impact on voice quality, but Frame Relay can
maintain a business-quality level of communication at lower cost. Therefore, VoFR is
slightly lower cost than VoATM, but VoFR provides some usually undetectable
variations in quality.
VoIP can range anywhere from utility quality, if used over the Internet, to toll quality,
if used over an intranet with QoS mechanisms enabled. Yet it will generally provide the
lowest cost for connectivity. Thus, VoIP in intranets is highly viable for the business
user today and provides the most attractive cost option of the three.
VoATM, meaning voice over real-time variable bit rate (rt-VBR) or constant bit rate
(CBR), is fully deterministic in terms of QoS. Voice quality never varies. However,
VoATM is generally more costly to implement than is, say, VoFR.
All three options offer significantly lower costs than building a private network or
using the PSTN, and usually require a fraction of the bandwidth.
Voice over IP Components
ITU-T H.323 is a standard approved by the ITU-T that defines how audiovisual
conferencing data is transmitted across networks.
H.323 provides a foundation for audio, video, and data communications across IP
networks, including the Internet.
H.323-compliant multimedia products and applications can interoperate, allowing
users to communicate without concern for compatibility.
H.323 provides important building blocks for a broad new range of collaborative,
LAN-based applications for multimedia communications.
H.323 sets multimedia standards for the existing infrastructure (for example, IP-
based networks). Designed to compensate for the effect of highly variable LAN
latency, H.323 allows customers to use multimedia applications without changing
their network infrastructure.
- Internet phones
- Desktop conferencing
- Multimedia Web sites
- Internet commerce
- And many others
H.323 Infrastructure
The H.323 standard specifies four kinds of components, which when networked
together, provide the point-to-point and point-to-multipoint multimedia
communication services: terminals, gateways, gatekeepers, multipoint control units
(MCUs).
A gatekeeper can be considered the "brain" of the H.323 network. Although they are
not required, gatekeepers provide important services such as addressing,
authorization, and authentication of terminals and gateways, bandwidth
management, accounting, billing, and charging. Gatekeepers may also provide call-
routing services.
MCUs provide support for conferences of three or more H.323 terminals. All terminals
participating in the conference establish a connection with the MCU. The MCU
manages conference resources, negotiates between terminals for the purpose of
determining the audio or video CODEC to use, and may handle the media stream.
The gatekeepers, gateways, and MCUs are logically separate components of the H.323
standard, but can be implemented as a single physical device.
H.323 Interoperability
VoIP works with a company‘s existing telephony architecture, including its private
branch exchanges (PBXs) and analog phones.
VoIP and H.323 enable companies to complete office-to-office telephone and fax calls
across data networks, significantly reducing tolls. New applications are available,
including unified messaging that integrates e-mail with voice mail and fax.
Choosing VoIP
Customers may choose VoIP as their voice transport medium when they need a
solution that is simple to implement, offers voice and fax capabilities, and handles
phone-to-computer voice communications. IP networks are proliferating throughout
the marketplace. Thus, many customers can use VoIP today.
The Voice over IP and H.323 standards define how analog voice is converted to data
packets and back again. The next step is to use a company‘s existing wide-area
network (WAN) to transport voice traffic with data traffic.
Serial (Leased Line) Services
Frame Relay offers very high access speeds. In North America, initial Frame Relay
access rates start at 56 Kbps and go up to 1.544 Mbps. In Europe, the initial Frame
Relay access rates start at 64 Kbps and go up to 2.048 Mbps. Companies can
contract with their service provider for a committed information rate (CIR).
The Frame Relay standard today uses permanent virtual circuits (PVCs). All traffic for
a PVC uses the same path through the Frame Relay network. The endpoints of the
PVC are defined by a data-link connection identifier (DLCI). The CIR, DLCIs, and
PVCs are defined when the user initially subscribes to a Frame Relay service.
Frame Relay allows remote host access for applications such as the following:
Frame Relay supports multiple virtual connections over a single physical interface.
This means that Frame Relay is often the ideal solution to provide many users with
simultaneous access to a remote location. In these cases, the Frame Relay connection
helps optimize the return on investment of the host system.
Voice over Frame Relay (VoFR) technology consolidates voice and voice-band data
(including fax and analog modems) with data services over a Frame Relay network.
The VoFR standard is specified in FRF.11 by the Frame Relay Forum.
VoFR allows PBXs to be connected using Frame Relay PVCs. The goal is to replace
leased lines and lower costs. With VoFR, customers can easily increase their link
speeds to their Frame Relay service or their CIR to support additional voice, fax, and
data traffic.
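As a rough capacity check, one can sketch whether a planned mix of compressed voice calls and data fits under a given CIR. All rates here are example values:

```python
# Illustrative check of whether planned voice calls fit under a Frame Relay
# CIR alongside existing data traffic. All rates are example values; the
# 8 kbps per-call figure assumes CS-ACELP-class compression.

def voice_fits(cir_kbps, data_kbps, calls, per_call_kbps=8):
    """True if `calls` compressed voice calls fit inside the CIR with the data."""
    return data_kbps + calls * per_call_kbps <= cir_kbps

print(voice_fits(256, 128, 10))  # 128 + 80 kbps fits in a 256-kbps CIR
print(voice_fits(256, 128, 20))  # 128 + 160 kbps does not
```

When a check like this fails, the text's remedy applies: increase the link speed or the contracted CIR.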
A voice-capable router connects both a PBX and a data network to a public Frame
Relay network. A voice-capable router includes a Voice Frame Relay Adapter (VFRAD)
or a voice/fax module that supports voice traffic on the data network.
Choosing VoFR
Frame Relay provides another popular transport for multiservice networks since
Frame Relay networks are common in many areas. Frame Relay is a cost-effective
service that supports bursty traffic well.
Frame Relay enables customers to prioritize voice frames over data frames to
guarantee quality of service (QoS).
Asynchronous Transfer Mode (ATM) Services
Asynchronous Transfer Mode (ATM) is a technology that can transmit voice, video,
data, and graphics across LANs, metropolitan-area networks (MANs), and WANs.
ATM is an international standard defined by ANSI and ITU-T that implements a high-
speed, connection-oriented, cell-switching, and multiplexing technology that is
designed to provide users with virtually unlimited bandwidth. Many in the
telecommunications industry believe that ATM will revolutionize the way networks
are designed and managed.
Today‘s networks are running out of bandwidth. Network users are constantly
demanding more bandwidth than their network can provide. In the mid 1980s,
researchers in the telecommunications industry began to investigate the technologies
that would serve as the basis for the next generation of high-speed voice, video, and
data networks. The researchers took an approach that would take advantage of the
anticipated advances in technology and enable support for services that might be
required in the future. The result of this research was the development of the ATM
standard.
Using a WAN switch for ATM, customers can connect their PBX network and data
network to a public or private ATM network.
One attractive aspect of ATM is its ability to support different QoS, as appropriate for
various applications. The QoS spectrum ranges from circuit-style service, where
bandwidth, latency, and other parameters are guaranteed for each connection, to
packet-style service, where best-effort delivery allocates bandwidth for each active
connection.
The ATM Forum developed a set of terms for describing requirements placed on the
network by particular types of traffic. These five terms (AAL1 through AAL5) are
referred to as adaptation layers, and are used as a common language for discussing
what kinds of traffic requirements an application will present to the network.
Choosing VoATM
One attractive aspect of ATM is its ability to support different QoS features as
appropriate for various applications.
Constant bit rate (CBR)—An ATM service type for nonvarying, continuous streams of
bits or cell payloads. Applications, such as voice circuits, generate CBR traffic
patterns. The ATM network guarantees to meet the transmitter's bandwidth and
other QoS requirements. Many voice and circuit emulation applications can use CBR.
Variable bit rate (VBR)—An ATM service type for information flows with irregular
but fully characterized traffic patterns. VBR is divided into real-time VBR and non-
real-time VBR, in which the ATM network guarantees to meet the bandwidth
and other QoS requirements. Many applications, particularly compressed video, can use VBR service. In practice, a VBR connection rarely transmits at its ceiling (peak) rate for sustained periods.
Unspecified bit rate (UBR)—An ATM service type that provides "best effort" delivery of transmitted data. It is similar to the datagram service available from today's internetworks. Many data applications can use UBR service.
Available bit rate (ABR)—An ATM service type that provides "best effort" delivery of transmitted data. ABR differs from other "best effort" service types, such as UBR, because it employs feedback to notify users to reduce their transmission rate to alleviate congestion. Hence, ABR offers a qualitative guarantee to minimize undesirable cell loss. Many data applications can use ABR service.
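The four service types above can be summarized as a simple mapping from traffic type to ATM service category. The specific application-to-category pairings below are illustrative assumptions, not a definitive design rule:

```python
# Sketch: choosing an ATM service category for a traffic description.
# Category names follow the ATM Forum terms discussed above; the
# mapping itself is an assumption for illustration.

def atm_service_category(traffic):
    """Return a plausible ATM service category for a traffic type."""
    mapping = {
        "voice": "CBR",                 # constant, delay-sensitive stream
        "circuit-emulation": "CBR",
        "compressed-video": "VBR-rt",   # bursty but characterized, real time
        "transaction-data": "VBR-nrt",
        "bulk-data": "ABR",             # best effort with congestion feedback
        "email": "UBR",                 # plain best effort
    }
    return mapping.get(traffic, "UBR")  # default to best effort

print(atm_service_category("voice"))             # CBR
print(atm_service_category("compressed-video"))  # VBR-rt
```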
Because Frame Relay technology was originally designed and optimized as a data
solution, you could dedicate a public or private Frame Relay network to data and pay
separate dialup or Virtual Private Network (VPN) rates for intracompany phone calls.
Provided you can afford the different types of equipment, services, and staff resources
required to manage both networks, this choice assures you of the highest quality for
each type of traffic today. This option is most likely desirable for sites that are very
data-heavy.
Another option is to achieve some level of integration by using one piece of circuit-
switching equipment, such as a time-division multiplexer (TDM), to connect both the
PBX and LAN server to a wide-area network. Customers gain economies by running
all WAN traffic over a single service (rather than receiving multiple WAN bills) and
avoiding paying phone company rates for intra-enterprise phone calls.
The costly downside is that within the network, bandwidth is likely to be wasted,
because you are still reserving circuits for certain types of traffic, and those circuits
sit idle when nothing travels across them.
Applications
Now let's put it all together. How does it actually work? Let's look at the voice
applications on an integrated voice/data network that replace traditional telephony.
- Inter-office calling
- Toll bypass
- On-net to off-net call rerouting
- PLAR replacement
- Tie trunk replacement
A voice-capable router can function as a local phone system for intra-office calls. In
the example, a user dials a phone extension, which is located in the same office. The
voice-capable router routes the call to the appropriate destination.
A voice-capable router can function as a phone system for inter-office calls to route
calls within an enterprise network.
In the example, a user dials a phone extension, which is located in another office
location. Notice that the extension number begins with a different leading number
than the on-net, intra-office call. The voice-capable router routes the call to another
voice-capable router over an ATM, Frame Relay, or HDLC network. The receiving
router then routes the call to the PBX, which routes the call to the appropriate phone
extension.
This solution eliminates the need for tie trunks between office locations, or eliminates
long-distance toll charges between locations.
A voice-capable router can provide off-net dialing to a location outside the local office,
through the PSTN.
In the example, a user dials 9 to indicate an outbound call, then dials the remaining
7-digit number (this is a local phone call). The voice-capable router routes the call to
another voice-capable router over a Frame Relay or HDLC network. The receiving
router recognizes that this is an outbound call and routes it to the company's PBX in
New York. Finally, the PBX routes the call to the PSTN and the call is routed to the
appropriate destination.
This solution places the call on-net as far as possible, allowing a local PBX to place a
local call. This saves significantly on toll charges.
Keep in mind that a PBX cannot reroute a call after a line is "seized." Therefore, a
voice-capable router can seize an off-net trunk and route a call. This solution
guarantees that a phone call is placed, regardless of the load on the network.
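The routing decisions described above (intra-office, inter-office, and off-net "dial 9" calls) can be sketched as digit-prefix matching. The specific prefixes and extension lengths below are assumptions for illustration; a real voice-capable router uses a configured dial plan:

```python
# Sketch: digit-prefix call routing in a voice-capable router.
# Assumed plan: 4-digit local extensions starting with 5; "9" means
# off-net; any other leading digit means another office over the WAN.

def route_call(dialed):
    if dialed.startswith("9"):
        # Off-net: strip the access digit and hand off toward the PSTN
        return f"off-net via PSTN: {dialed[1:]}"
    if dialed.startswith("5") and len(dialed) == 4:
        # Intra-office extension, handled locally
        return f"local extension {dialed}"
    # Inter-office: forward across the ATM/Frame Relay/HDLC network
    return f"inter-office via WAN to extension {dialed}"

print(route_call("5123"))      # local extension 5123
print(route_call("95551234"))  # off-net via PSTN: 5551234
print(route_call("6123"))      # inter-office via WAN to extension 6123
```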
A voice-capable router can replace a Private Line, Automatic Ringdown (PLAR) service
from a telephone service provider.
In the example, a user takes the phone off-hook, causing another telephone
extension to ring. The voice-capable router recognizes that the phone is off-hook, and
routes the call over an ATM, Frame Relay, or HDLC network to the remote router. The
remote router then routes the call to the PBX, which rings the appropriate extension.
This solution eliminates the need for dedicated PLAR lines.
The next slides graphically illustrate the migration from traditional circuit-switched
voice networking to the new packet-switched integrated data/voice/video networking.
Here you see two offices… one in Vancouver and one in Toronto. Each has a PBX to
handle the office but all calls inter-office go through the PSTN.
By adding voice-capable routers to the existing data network and connecting them to the existing PBXs, the company can first do toll bypass. This represents the bandwidth no longer needed for voice traffic, which now goes through the routers.
The PBX tie line also goes away now that its function has been replaced by a path
between the voice-capable routers.
You can see here the end result. A much simplified network and considerable cost
savings.
- Summary -
Reduce costs: Phone toll charges; cost of multiple management methods and
multiple types of expertise required to support multiple types of networks; capital
expenditures on multiple networks
Simplify network design: Through strategic convergence of data, voice, and video
networks
Customers want networks that keep up with new technologies, yet many are still performing ad hoc device management on evolving networks and struggling with the transition to proactive, business-oriented service-level management.
Network Management Process
The following figure gives a clear view of how the management process should work. A few phases are most important when conducting network management:
Plan / Design:
- Build history
- Baseline
- Trend analysis
- Capacity planning
- Procurement
- Topology design
Implement / Deploy:
- Define thresholds
- Monitor exceptions
- Notify
- Correlate
- Isolate problems
- Troubleshoot
- Bypass/resolve
- Validate and report
In a network management system, the management station manages agents with the help of a network management protocol, drawing on a management database (MIB), as the figure shows.
SNMP is a protocol that is one of the management building blocks. It is used to provide status messages and problem reports across a network to the management system. SNMP uses the User Datagram Protocol (UDP) as a transport mechanism. It employs different terms from TCP/IP, working with managers and agents instead of clients and servers. An agent usually provides information about a device; the manager communicates across the network with the agents.
SNMP V2
SNMP V3
SNMP messages are the requests and responses exchanged between the manager and the agent. When the agent receives a request from the manager naming a MIB variable, it returns a response containing that variable's value. The agent can also send a trap to report unsolicited alarm conditions.
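The request/response/trap exchange can be illustrated with a toy in-memory model. Real SNMP runs over UDP with ASN.1-encoded PDUs; this sketch only shows the manager/agent roles and a tiny MIB of OID-to-value pairs, with names chosen for illustration:

```python
# Toy model of the SNMP exchange: a manager's Get request is answered
# from the agent's MIB; a trap is sent without any request.

class Agent:
    def __init__(self, mib):
        self.mib = mib  # the agent's management database (OID -> value)

    def get(self, oid):
        """Answer a manager's Get request for one MIB variable."""
        return {"oid": oid, "value": self.mib.get(oid, "noSuchObject")}

    def trap(self, message):
        """Unsolicited alarm condition, sent without a manager request."""
        return {"trap": message}

agent = Agent({"sysUpTime.0": 123456, "ifInOctets.1": 98765})
print(agent.get("sysUpTime.0"))              # {'oid': 'sysUpTime.0', 'value': 123456}
print(agent.trap("linkDown on interface 1")) # {'trap': 'linkDown on interface 1'}
```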
MIB is a database of objects for a specific device within the network agent.
Types of MIBs:
MIB I
- 114 standard objects
- Objects included are considered essential for either fault or configuration
management
MIB II
- Extends MIB I
- 185 objects defined
Proprietary MIBs
- Extensions to standard MIBs
The NMS plays a central role in the management system: it polls agents on the network, receives traps, gathers and displays information about the status of the network, and serves as the platform for integration.
Example: HP OpenView
Polling does create traffic overhead. To reduce it, the NMS should set polling intervals wisely; this matters most on lower-speed links, where management traffic competes for scarce bandwidth.
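A back-of-the-envelope calculation shows why the polling interval matters on slow links. The device counts and the 200-byte average exchange size below are assumptions for illustration:

```python
# Sketch: average bandwidth consumed by periodic SNMP polling.

def polling_overhead_bps(devices, objects_per_device, interval_s,
                         bytes_per_exchange=200):
    """Average bits per second consumed by periodic polls."""
    exchanges_per_s = devices * objects_per_device / interval_s
    return exchanges_per_s * bytes_per_exchange * 8

# 50 devices, 10 objects each, polled every 60 s, over a 64 kbps link:
load = polling_overhead_bps(50, 10, 60)   # ~13333 bps
print(f"{load:.0f} bps, {100 * load / 64000:.1f}% of a 64 kbps link")
```

Roughly a fifth of a 64 kbps link would be consumed by management traffic alone, which is why polling intervals are usually lengthened on low-speed links.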
RMON or Remote MONitoring MIB was designed to manage the network itself. MIB
I/II could be used to check each machine's network performance, but doing so would consume large amounts of bandwidth for management traffic. Using RMON you see the wire view of the network and not just a single host's view. RMON has the capability to set
performance thresholds and only report if the threshold is breached, again helping to
reduce management traffic (effectively distributing the network management smarts!).
RMON agents can reside in routers, switches, and dedicated boxes. The agents will
gather up to 19 groups of statistics. The agents then forward this information upon
request from a client.
Because RMON agents must look at every frame on the network, performance is critical. Early RMON agents could be classified by performance, based on processing power and memory.
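The threshold behavior described above can be sketched as follows: the agent watches a statistic locally and reports only when a rising threshold is crossed, re-arming once the value falls back below it. The sample values are invented for illustration:

```python
# Sketch: RMON-style rising-threshold alarms. Only crossings are
# reported, not every sample, which keeps management traffic low.

def threshold_events(samples, rising_threshold):
    """Return (time, value) alarms for samples crossing the threshold."""
    armed = True   # re-arm only after the value falls back below
    events = []
    for t, value in enumerate(samples):
        if armed and value >= rising_threshold:
            events.append((t, value))
            armed = False
        elif value < rising_threshold:
            armed = True
    return events

utilization = [20, 45, 80, 85, 60, 30, 90]   # % utilization samples
print(threshold_events(utilization, 75))      # [(2, 80), (6, 90)]
```

Seven samples produce only two alarms; the consecutive above-threshold reading at t=3 is suppressed because the alarm has not re-armed.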
CDP (Cisco Discovery Protocol) provides automatic network discovery. ISL (Inter-Switch Link) provides switch-to-switch trunking, carrying traffic for multiple VLANs between switches. VTP (VLAN Trunking Protocol) distributes VLAN configuration information across switches.
The traditional management model cannot keep pace with today's evolving networks, which is driving interest in Web-based management.
Management Intranet Basics
For the Web model to deliver substantial value for the management software
industry, however, the vendors must agree on content standards for sharing of
management information. Such a set of Web-oriented standards for exchanging basic
management information is being defined under the Web-Based Enterprise
Management (WBEM) initiative, spearheaded by vendors such as Cisco, HP, Intel,
Compaq, BMC, Microsoft, IBM/Tivoli and others. The Desktop Management Task
Force (DMTF) is now leading the effort to standardize the technologies of WBEM. The
first of these, the Common Information Model (CIM), provides an extensible data model of the enterprise computing
environment. Recent work by the DMTF makes the CIM model the basis for Web-
based integration using XML (see sidebar on Web-Based Enterprise Management
Standards for details).
Role of Directories
- Single-user identity
- User profiles, applications, and network services
- Integrated policies
- Common information model
The future of the Directory Enabled Network is to extend the directory throughout the
elements of the network.
We can then provide a unified view of all the network resources at our disposal. From
a user perspective, you'll not need to be authenticated on a half a dozen different
devices just to get your job done.
Policy management is one of the most important functions within network management.
Aligning Network Resources with Business Objectives
- Application-aware network
- Intelligent network services
- Network-wide service policy
- Control by application & user
A network policy is a set of high-level business directives that control the deployment of network services (e.g., security and QoS). Policies are created on the basis of, and in terms of, established business practices.
Example: Allow all members of the Engineering department access to corporate
resources using Telnet, FTP, HTTP, and e-mail, 24 x 7
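The Engineering policy above could be expressed as data for a policy server to evaluate. The field names and the use of SMTP to stand in for "e-mail" are assumptions for illustration:

```python
# Sketch: the example policy as data, plus a check against it.

POLICY = {
    "group": "Engineering",
    "services": {"telnet", "ftp", "http", "smtp"},  # smtp stands in for e-mail
    "schedule": "24x7",
}

def allowed(user_group, service):
    """Check one access request against the single policy rule above."""
    return user_group == POLICY["group"] and service in POLICY["services"]

print(allowed("Engineering", "ftp"))   # True
print(allowed("Marketing", "ftp"))     # False
```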
Role of QoS
There are two broad application areas where QoS technologies are needed:
- Mission-critical applications need QoS to ensure delivery and that their traffic is
not impacted by misbehaving applications using the network.
- Real-time applications such as multimedia and voice need QoS to guarantee
bandwidth and minimize jitter. This ensures the stability and reliability of existing
applications when new applications are added.
Voice and data convergence is the first compelling application requiring delay-
sensitive traffic handling on the data network. The move to save costs and add new
features by converging the voice and data networks--using voice over IP, VoFR, or
VoATM--has a number of implications for network management:
- Users will expect the combined voice and data network to be as reliable as the
voice network: 99.999% availability
- To even approach such a level of reliability requires a sophisticated management
capability; policies come into play again
Cisco's unique service is the ability to offer products that let network managers prioritize applications in today's evolving networks.
Let's take a look at QoS in more detail.
- Classification
- Policing
- Shaping
- Congestion avoidance
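The first of these stages, classification, can be sketched as marking traffic into classes so the later policing and shaping stages can treat each class differently. The port-to-class table is an assumption for illustration, not a standard:

```python
# Sketch: classify traffic into QoS classes by port / payload type.

CLASS_BY_PORT = {
    5060: "voice-signaling",   # SIP (assumed mapping)
    1719: "voice-signaling",   # H.323 RAS (assumed mapping)
    80:   "web",
}

def classify(port, is_rtp=False):
    """Assign a QoS class; RTP voice payloads get the top class."""
    if is_rtp:
        return "voice-bearer"  # highest priority, jitter-sensitive
    return CLASS_BY_PORT.get(port, "best-effort")

print(classify(80))              # web
print(classify(0, is_rtp=True))  # voice-bearer
print(classify(12345))           # best-effort
```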
Enterprises are more aware of security issues than ever before, with business
globalization, growing numbers of remote users, and especially the press buzz about
the Internet and VPNs forcing security to their attention. Security needs to be tied to
policies, so that it can be applied consistently, without leaving hidden holes subject
to hacker penetration.
- SUMMARY -
- SNMP, MIBs, RMON, and network management systems are the building blocks
of network management tools
In this lesson, we're going to discuss the Internet. We'll cover how the Internet has created a new business model that's changing how companies do business today. We'll look at intranets, extranets, and e-commerce. Finally, we'll look at the
technology implications of the new Internet applications such as the need for higher
bandwidth technologies and security.
The Agenda
- Intranets
- Extranets
- E-Commerce
- The number of hosts (or computers) connected to the Internet has grown from a
handful in 1989 to hundreds of millions today.
- The MIT Media Lab says that the size of the World Wide Web is doubling every 50
days, and that a new home page is created every 4 seconds.
Internet Hierarchy
- The "wires" are arranged in a loose hierarchy, with the fastest wires located in the middle of the cloud on one of the Internet's many "backbones."
- Regional networks connect to the Internet backbone at one of several Network
Access Points (NAPs), including MAE-EAST, in Herndon, Virginia; and MAE-
WEST, in Palo Alto, California.
- Internet service providers (ISPs) administer or connect to the regional networks,
and serve customers from one or more points of presence (POPs).
- Dynamic adaptive routing allows Internet traffic to be automatically rerouted
around circuit failures.
- Dataquest estimates that up to 88 percent of all traffic on the Internet touches a
Cisco router at some point.
From simple electronic mail to extensive intranets that include online ordering and
extranet services, the Internet is changing the way everyone does business. Small
and medium-sized companies seeking to remain competitive into the next century
must leverage the Internet as a business asset.
The Internet is forcing companies to adopt technology faster. You'll discover several themes that are driving the new Internet economy, as follows.
Compression—Everything happens faster: business cycles are shorter, and time and
distances are less relevant to your customers.
Market turbulence—Customers suddenly have more choices. They can shop farther
afield in search of good values. You have to compete even harder to retain customers.
- Companies today must be able to swiftly "go to market" in new and expanded
locations.
- Moreover, the rigid boundaries of manufacturers are changing:
manufacturers are becoming retailers and distributors.
The need to "do more with less" is essential to accommodate narrowing margins,
intensifying competition, and industry convergence. The network must raise the
productivity of the workforce.
Traditional Business Model Versus New Business Model
The Internet is transforming the way companies can use information and information
systems. Historically, businesses have "protected" company information and allowed
limited sharing of systems.
Creating these "silos" of information has meant that each "link" of the "extended" traditional business has lacked access to relevant information to make profit-maximizing decisions. That means your employees, suppliers, customers, and
partners were kept from information, not always by intention, but because limited
access created barriers to sharing it. The result was:
The Internet and networked applications have changed all that. They allow all
companies, no matter the size, to break the information barriers—to "let loose the power of information."
Accelerating this shift is the explosive growth and rapid adoption of Internet usage.
Let's take a look at some of the Internet business solutions that companies are driven
to implement in order to improve their productivity and stay competitive. These
include:
- Intranets
- Extranets
- E-commerce
Intranets
What Is an Intranet?
An intranet is an internal network based on Internet and World Wide Web technology
that delivers immediate, up-to-date information and services to networked employees
anytime, anywhere.
Intranet applications are platform-independent, so they are less costly to deploy than
traditional client/server applications, and they bear no installation and upgrade
costs since employees access them from the network using a standard Web browser.
Finally, and perhaps most important, intranets enhance employees' productivity by
equipping them with powerful, consistent tools.
Most companies can benefit from an intranet. Here are some sample applications:
Employee self-service—Employee self-service provides your employees with the
ability to access information at any time from anywhere they want. It enables
employees to independently access vital company information. Employee self-service
allows companies to save on labor costs as well as increase employee productivity
and communication. We‘ll look at this in more detail.
Another example is corporate travel. Many employees travel frequently. New intranet
applications that store an employee‘s travel preferences can make it easy for
employees to request or even book travel arrangements at any time of the day or
night, enabling companies to provide this vital service at a lower cost.
As you can see, intranet applications are a win/win for both employees and the
company.
Benefits of Intranets
Intranets are rapidly gaining wide acceptance because they make network
applications much easier to access and use. Intranets enable self-service.
Intranets allow you to:
- Share or access vital information at any time, from any location. For example, you
can extend intranets around the world, for instance, to sales offices in London
and Tokyo. Now sales teams or manufacturing plants in Asia can quickly access
information on servers at the central office in the United States—and it's easier to
use.
- Minimize downtime and cut maintenance costs by providing work teams with
complete electronic work packages.
Extranets
What Is an Extranet?
An extranet allows you to extend your company intranet to your supply chain.
- Supply-chain management
- Customer communications
- Distributor promotions
- Online continuing education/training
- Customer service
- Order status inquiry
- Inventory inquiry
- Account status inquiry
- Warranty registration
- Claims
- Online discussion forums
The traditional business fulfillment model is linear, with communication flowing from supplier to manufacturer in a step-by-step process. Communication does not flow down the supply chain, resulting in inefficiencies and time-consuming processes.
Effectively managing the supply chain is more critical now than ever. Customers
today are looking for a total solution—they want ease of purchase and
implementation, they want customized products, and they want them yesterday.
Today, in order to better service and retain customers, companies realize that they
need to improve their business processes in order to deliver products to customers in
reduced time. One effective way to do this is to improve the system processes that
make up the overall supply chain.
- Enable suppliers to see real-time market demand and inventory levels, thus
providing them with the necessary information to alter their production mix
accordingly.
- Give suppliers access to customer order information, so they can fulfill those
orders directly without having to route product through you.
- Using the network, demand forecasts can be updated in real time, and
manufacturing line statuses, and product fulfillment can be queried by any
member of the supply chain.
- Use the network to hold online meetings where product design teams work
together with suppliers to discuss prototype development, resulting in reduced
cycle times.
Benefits of Extranets
You can decrease inventories and cycle times, while improving on-time delivery.
You can increase customer satisfaction and, at the same time, more effectively
manage the supply chain.
You can improve sales channel performance by providing dealers and distributors
with product and promotional information online, while it's hot.
You can reduce costs by automating everyday processes.
You can improve customer satisfaction by streamlining processes and improving
productivity.
E-Commerce
However, the revenues that business-to-consumer companies are realizing are just
the tip of the iceberg. The bulk of business on the Internet is actually business-to-
business e-commerce which, as you can see by this chart, is skyrocketing.
In the last two years alone, the amount of business conducted over the Internet has
gone from $1 billion to $30 billion, with an 80 to 20 business-to-business and
business-to-consumer mix. The projections for the next two years and beyond are
even more dramatic. Internet commerce will likely reach from $350 to $400 billion in
2002. Some estimates are even more aggressive and place the size of Internet
commerce by 2002 at almost a trillion dollars.
And, most of us generally think that only big businesses are conducting e-commerce.
In fact, over 97 percent of businesses conducting electronic commerce are companies with 499 employees or fewer, and 71 percent of those companies have fewer than 49
employees. As you can see, e-business has become a critical component of many
businesses.
- Online catalog
- Order entry
- Configuration
- Pricing
- Order verification
- Credit authorization
- Invoicing
- Payment and receivables
Benefits of E-Commerce
We also recognize how online ordering can cut costs significantly by reducing the
staff needed to man an 800 number or physically write up orders.
Additionally, we understand that the Internet allows companies to extend their reach and sell into new markets without incurring global headcount costs.
What most of us don't realize is that these are only a few of the benefits of e-commerce.
- You can manage your inventory levels better. For example, an automobile
manufacturer has its suppliers linked via the Web for online ordering. A supplier
can place an order directly and can see immediately if the part is in stock or will
need to be back ordered.
- By putting valuable information on your Web site, customers can get answers
quickly to most of their questions at any time of the day, from any location.
Customer satisfaction soars when customers can get critical information at any
time, from any location. It allows them to do business when they want to, not
during the traditional 8 to 5 business day.
First is the need for increased bandwidth. The Internet, intranets, and extranets have reversed the 80/20 rule so that now 80% of the traffic is going over the
backbone and only 20% is local. Everyone is clamoring for Fast Ethernet and even
Gigabit Ethernet connections.
The need for security is obvious once a company is connected to the Internet. You
cannot read the paper without hearing about the latest hacking job.
- Individual users connecting to the Internet for e-mail or casual Web browsing can
usually get by using a simple modem.
- Power users or small offices should consider ISDN or Frame Relay.
- Larger offices or businesses that expect high levels of Internet traffic should look
into Frame Relay or leased lines.
- New technologies like asymmetric digital subscriber line (ADSL) and high-data-
rate digital subscriber line (HDSL) will make high-speed Internet access even more
affordable in the future.
One of the most vulnerable points in a customer's network is its connection to the Internet. To secure the communication between a corporate headquarters and the Internet, a customer needs all the security tools at its disposal. These tools include firewalls, Network Address Translation (NAT), encryption, token cards, and others.
Virtual Private Network
Virtual Private Networks (VPNs) can bring the power of the Internet to the local
enterprise network. Here is where the distinction between Internet and intranet starts
to blur. By building a VPN, an enterprise can use the "public" Internet as its own "private" WAN.
- Tunneling
- Encryption
- Resource Reservation Protocol (RSVP)
Electronic commerce can streamline regular business activities in new ways. Have
any of you used a fax machine to send purchase orders to vendors?
A fax machine turns your PO into bits, transmits them across a network, and then
turns them back into atoms on the other end. The disadvantage is that the atoms on
the other end can only be read by a human being, who probably has to retype the
data into another computer.
EDI provides a way for many companies to reduce their operating costs by
eliminating the atoms and keeping the bits.
For example, RJR Nabisco reduced PO processing costs from $70 to 93 cents by
replacing its paper-based system with EDI.
Public key/private key encryption is used by programs such as PGP (Pretty Good Privacy). PGP creates a public key and a private key. Anyone can encrypt a file with your public key, but only you can decrypt the file. To ensure security, an enterprise may issue its public key to customers, but only the enterprise will be able to decrypt messages, using the private key.
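The asymmetry described above can be illustrated with a toy RSA example: anyone holding the public key can encrypt, but only the private key decrypts. The tiny textbook primes below are for illustration only; real systems use keys of thousands of bits plus padding schemes:

```python
# Toy RSA sketch of public/private key encryption (illustration only).

p, q = 61, 53
n = p * q                           # 3233, part of both keys
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (2753)

def encrypt(m):
    return pow(m, e, n)             # uses only the public key (e, n)

def decrypt(c):
    return pow(c, d, n)             # requires the private key (d, n)

c = encrypt(65)
print(c)            # 2790
print(decrypt(c))   # 65
```

Encrypting the message 65 with the public key and recovering it with the private key shows the one-way property: without d, reversing the encryption requires factoring n.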
- SUMMARY -
The Internet has created the capability for almost ANY computer system to
communicate with any other. With Internet business solutions, companies can
redefine how they share relevant information with the key constituents in their
business—not just their internal functional groups, but also customers, partners,
and suppliers.
- Internet access can take a business into new markets, decrease costs, and
increase revenue through e-commerce applications. It can attract retail customers
by providing them with company information and the ability to order online.
- Intranets can provide your employees with access to information and help
compress business cycles.