This chapter covers the exam objective "Categorize WAN technology types and properties," along with the MPLS concepts from blueprint topic 3.5, "Describe different network topologies." It examines different WAN characteristics and technologies in detail, starting with T1 and E1 and moving up to more recent technologies such as WiMAX and LTE.
You learn WAN types in detail in our Cisco CCNA video and lab course.
WAN Overview
Wide Area Networks (WANs) span across large geographical distances to provide connectivity for various parts of the network infrastructure. Most service providers are well trained in properly supporting not only traditional data traffic but also voice and video services (which are more delay-sensitive) over these large geographical distances.
Unlike the Local Area Network (LAN) environment, not all of the WAN components are owned by the specific enterprise they serve. Instead, WAN equipment and/or connectivity can be leased from service providers. Another interesting thing about WANs is that, unlike LANs, there is typically an initial fixed cost and periodic recurring fees for the services provided. These costs are one reason to avoid overprovisioning the network, and implementing effective Quality of Service mechanisms can keep you from buying additional WAN bandwidth that you might not need.
The design requirements for WAN technologies are typically derived from the following:
- Application types
- Availability of applications
- Reliability of applications
- Costs associated with a particular WAN technology
- Usage levels for the applications
WAN Categories
An essential concept in WAN categorization is circuit-switched technology, the most relevant example of which is the Public Switched Telephone Network (PSTN). One of the technologies that falls under this category is Integrated Services Digital Network (ISDN). Circuit-switched WAN connections are established when needed and terminated when they are no longer required. Another example that reflects this circuit-switching behavior is the old-fashioned dial-up connection (i.e., dial-up modem analog access over the PSTN).
Note: Not too long ago, dial-up technology was the only way to access Internet resources, offering an average usable bandwidth of around 40Kbps. Nowadays, this technology is almost extinct.
The opposite of the circuit-switched option is leased-line technology. This is a fully dedicated connection that is permanently up and is reserved for the exclusive use of the company that leases it. Examples of leased lines include Time-Division Multiplexing (TDM)-based leased lines. These are usually very expensive because a single customer has full use of the connectivity offered.
Another popular category of WAN technology involves packet-switched concepts. In a packet-switched infrastructure, bandwidth is shared among customers using virtual circuits. The customer can create a virtual path (similar to a leased line) through the service provider’s infrastructure cloud. This virtual circuit has a dedicated bandwidth, even though technically it is not a real leased line. Frame Relay is an example of this type of technology.
Some legacy WAN technologies include X.25, which is the predecessor of Frame Relay. This technology is still present in some implementations but it is very rare to find.
Another WAN category relates to cell-switched technology. This is often included in packet-switched technologies, as they are very similar. A cell-switched technology example is Asynchronous Transfer Mode (ATM). This operates using fixed-size cells, instead of using packets like in Frame Relay. Cell-switched technologies form a shared bandwidth environment from the service provider standpoint that guarantees customers some level of bandwidth through their infrastructure.
Broadband is another growing category for WAN and this includes technologies such as:
- DSL
- Cable
- Wireless
Broadband implies taking an existing connection, such as the old-fashioned coaxial cable that carries TV signals, and using different portions of its bandwidth for different purposes. For example, using multiplexing, an additional data signal can be transmitted along with the original TV signal.
Figure 16.1 – WAN Categories
As detailed in Figure 16.1 above, there are many options when discussing WAN categories. All of these technologies can support the needs of modern networks that operate under the 20/80 rule, meaning 80% of the network traffic uses some kind of WAN technology to access remote resources.
NBMA Technologies
A special technology that appears in wide area networking is Non-Broadcast Multi-Access (NBMA). This presents some challenges that are not present in traditional Broadcast networking. The need for NBMA appears when there is no native Broadcast support for a group of systems that want to communicate over the same network. Some issues appear when the devices cannot natively send a packet destined for all the devices on the Multi-Access segment. Frame Relay, ATM, and ISDN are examples of technologies that are NBMA by default.
All of these technologies have no native ability to support Broadcasts. This prevents them from running, for example, routing protocols that use Broadcasts in their operation. Native Multicast support is also missing in Non-Broadcast networks. In the case of a routing protocol, all of the nodes that participate must get the Multicast updates. When using an NBMA network, one approach is to send the Multicast or Broadcast packets as replicated Unicast packets so that the Broadcast/Multicast frames are individually sent to every node in the topology. The tricky part in this scenario is that the device has to come up with a way to solve Layer 3-to-Layer 2 resolution, since particular packets have to be addressed for the specific machines that need to receive them.
Methodologies must exist for resolving this Layer 3-to-Layer 2 resolution. The Layer 3 address is typically the IP address and the Layer 2 address is usually variable, based on the technology used. In the case of Frame Relay, this will consist of the Data Link Connection Identifier (DLCI), so a way to resolve the DLCI to the IP address must be found.
In Broadcast networks, the Layer 2 addresses are MAC addresses, and these must be resolved to IPv4 addresses. This is accomplished by the Address Resolution Protocol (ARP). In a Broadcast-based network, a device broadcasts a request that identifies the device it wants to communicate with (typically learned via DNS) and asks for that device's MAC address. The reply is Unicast and includes the requested MAC address.
In NBMA environments, you still need to bind the Layer 3 address (IP address) to the Layer 2 address (DLCI). This can be done in an automated fashion using a technology called Inverse ARP. This is used to resolve the remote Layer 3 address to a Layer 2 address only used locally. Inverse ARP can be utilized in Frame Relay environments. The issue with Inverse ARP as the solution for Layer 3-to-Layer 2 resolution in an NBMA environment is that it is limited to directly connected devices. This creates issues in partial-mesh NBMA networks.
Figure 16.2 – NBMA Interface Types
As can be seen in Figure 16.2 above, different types of NBMA interfaces exist. One of these types is the Multipoint NBMA interface. As its name implies, it can be the termination point for multiple Layer 2 circuits. Multipoint interfaces require some kind of Layer 3-to-Layer 2 resolution methodology.
If Frame Relay is configured on the main physical interface of a device, that interface will be Multipoint by default. If a subinterface is created on a Frame Relay physical interface, the option of creating it as Multipoint exists. Layer 3-to-Layer 2 resolution has to be configured for both the physical interfaces and the subinterfaces. There are two options for doing this in Frame Relay:
- Dynamically (Inverse ARP)
- Statically (frame-relay map command)
Layer 3-to-Layer 2 resolution is not always an issue on NBMA interfaces because Point-to-Point WAN interfaces can be created. A Point-to-Point interface can only terminate a single Layer 2 circuit, so if the interface communicates with only one device, Layer 3-to-Layer 2 resolution is not necessary. With only one circuit, there is only one Layer 2 address to communicate with. Layer 3-to-Layer 2 resolution issues disappear when running, for example, a Frame Relay Point-to-Point subinterface type or an ATM Point-to-Point subinterface.
T1/E1
Standards for T1 and E1 WAN connectivity have been around for a very long time. T1 stands for T-Carrier Level 1 and describes a line that uses TDM, in which digital signals are assigned to different channels based on time. T1 is a standard often used in the following geographical regions:
- North America
- Japan
- South Korea
T1 operates using 24 separate channels at a 1.544Mbps line rate, thus allocating 64Kbps per individual channel. You can use the 24 channels any way you want to, and you can even buy just a few channels from the service provider based on your needs. In general terms, consider a T1 connection a trunk/bundle carrying 24 separate lines.
E1 (E-Carrier Level 1) is a standard similar to T1, used primarily in Europe and in most other parts of the world outside North America and Japan. The main difference between E1 and T1 is that E1 uses 32 channels instead of 24, with each channel also operating at 64Kbps, thus offering a total line rate of 2.048Mbps. E1 also functions based on TDM, just like T1, so all other functionalities are common between the two standards.
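As a quick sanity check of these line rates, the totals can be derived from the channel counts (the extra 8Kbps on a T1 is framing overhead, while E1 dedicates timeslot 0 to framing and typically timeslot 16 to signaling):
- T1: 24 channels x 64Kbps = 1.536Mbps; 1.536Mbps + 8Kbps framing = 1.544Mbps
- E1: 32 timeslots x 64Kbps = 2.048Mbps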
T3/E3
T3 and E3 standards offer higher bandwidth than the T1 and E1 standards. T3 stands for T-Carrier Level 3 and is a type of connection usually based on a coaxial cable and a BNC connector. This is different from the T1 connection, which is usually offered over twisted-pair media.
T3 connections are often referred to as DS3 connections, which relates to the data carried on the T3 line. T3 offers additional throughput because it uses the equivalent of 28 T1 circuits, meaning 672 T1 channels. This offers a total line rate of 44.736Mbps.
E3 connections are similar to T3 connections, except they are equivalent to 16 E1 circuits, meaning 512 E1 channels and a total line rate of 34.368Mbps.
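The same arithmetic explains the T3/E3 line rates; the difference between the sum of the tributary rates and the final line rate is multiplexing and framing overhead:
- T3: 28 x 1.544Mbps = 43.232Mbps, plus roughly 1.5Mbps of overhead = 44.736Mbps
- E3: 16 x 2.048Mbps = 32.768Mbps, plus roughly 1.6Mbps of overhead = 34.368Mbps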
T3/E3 connections are usually used in large data centers because they offer the ability to increase the total amount of throughput when needed.
SONET, SDH, and OCX
Modern networks usually require WAN connectivity at speeds higher than T1/E1 or T3/E3. Optical connections that come directly into data centers represent a valid alternative that can offer increased bandwidth. This approach is known as synchronous optical networking; many digital signals can be carried over a single fiber into the customer’s facility, which means that separate connections are not needed to carry different services.
The digital signals transmitted over the fiber are multiplexed and the connection is called synchronous because the signals are using the same clock over the connection, regardless of the type of signal. There are two primary standards associated with synchronous optical networking that operate in similar fashion:
- SONET
- SDH
Synchronous Optical Networking (SONET) is one of the most common standards, and it was developed by the American National Standards Institute (ANSI). SONET is primarily used in the United States and Canada.
SONET is a circuit-based technology that delivers very high-speed service over optical networks using a ring topology. This topology type offers fault tolerance, redundancy, and the capability of being highly available. SONET functions using two Layer 2 technologies:
- ATM
- Packet over SONET (POS)
Synchronous Digital Hierarchy (SDH) is an international standard developed by the International Telecommunications Union (ITU). SDH is used everywhere around the world, except for the United States and Canada.
Although SONET and SDH are similar standards, they have different ways of calculating throughput and bandwidth values. They also use different terms to quantify bandwidth levels:
- SONET uses the Synchronous Transport Signal (STS)
- SDH uses the Synchronous Transport Module (STM)
Note: Different SONET (STS) bandwidth levels are equivalent to different Optical Carrier (OCx) bandwidth levels.
The different OCx, SONET, and SDH bandwidth levels and the similarities between them are presented in Table 16.1 below:
Table 16.1 – OCx, SONET, and SDH
OCx Standard | SONET Standard | SDH Standard | Capacity |
OC-1 | STS-1 | STM-0 | 50Mbps |
OC-3 | STS-3 | STM-1 | 150Mbps |
OC-12 | STS-12 | STM-4 | 600Mbps |
OC-24 | STS-24 | – | 1.2Gbps |
OC-48 | STS-48 | STM-16 | 2.4Gbps |
OC-192 | STS-192 | STM-64 | 9.6Gbps |
OC-768 | STS-768 | STM-256 | 38Gbps |
OC-3072 | STS-3072 | STM-1024 | 153Gbps |
Some of the considerations that must be taken into account when new SONET connections are purchased include:
- Details about transport usage (whether the link will be used for data or voice transport)
- Details about the topology (linear or ring-based)
- Details about single points of failure in the transport
- Customer needs
- Costs
- Implementation scenarios (multiple providers, multiple paths, etc.)
- The type of oversubscription offered by the service provider
You should also know whether you are getting dedicated bandwidth or are sharing the bandwidth with other users. If you are getting services from two providers to achieve high availability and redundancy, they may have different SONET implementations and may follow different paths.
Service providers often share the same physical fiber paths (e.g., along gas pipelines or public electrical rights-of-way), so even if two service providers are used, the risk of failure does not decrease when the physical fiber path is the same. If something happens to the pipes that carry the fiber links, all the providers that follow that specific path will suffer. The recommended scenario is having two service providers with different physical cabling paths.
Dark Fiber and CWDM/DWDM
Fiber optic cable was heavily installed as a WAN technology before other technologies emerged, and fiber that has been installed but is not actively carrying traffic is referred to as dark fiber. Most of the expense of dark fiber went into the labor involved in its physical installation. Although dark fiber provides high bandwidth, it is currently not used much in modern networks.
Service providers usually implement SONET technology or CWDM/DWDM networks over existing dark fiber infrastructure. This allows end-user enterprises to extend their Ethernet LANs across much larger distances. This concept of Ethernet over large distances is also known as Metro Ethernet, which led to the creation of Metropolitan Area Networks (MANs).
Coarse Wavelength Division Multiplexing (CWDM) and Dense Wavelength Division Multiplexing (DWDM) are two different types of Wavelength Division Multiplexing (WDM). Both of these technologies use a multiplexer (MUX) at the transmitter in order to put several optical signals on the fiber. A demultiplexer (DEMUX) installed at the receiver performs the inverse operation. This concept is similar to a modem (modulator-demodulator).
CWDM transmits up to 16 channels, with each channel operating on a different wavelength. CWDM boosts the bandwidth of the existing GigabitEthernet optical infrastructure without having to add new fiber optic strands. CWDM has wider spacing between the channels than DWDM, so it is a much cheaper technology for transmitting multiple Gigabit signals on a single fiber strand. Cisco offers broad support for this equipment, including many SFP transceivers that can be used on CWDM links.
CWDM is often used by enterprises on leased dark fiber topologies in order to boost the capacity from 1 to 8 or even 16Gbps over metropolitan area distances. The downside to CWDM is that it is not compatible with modern fiber amplifier technologies, like Erbium Doped Fiber Amplifier (EDFA). EDFA is a method used to amplify light signals, which is making repeaters obsolete. CWDM is also used in cable television implementations.
DWDM is a core technology for optical transport networks that is similar to CWDM in many ways. However, with DWDM, the wavelengths are a lot tighter so you get up to 160 channels as opposed to 16 channels with CWDM. This makes the transceivers and other equipment a lot more expensive. Even though you have 160 channels, the Cisco DWDM cards can support 32 different wavelengths. In addition, DWDM is compatible with EDFA, so you can achieve longer distances when using this technology. This technology also can support MAN and WAN applications better over longer distances; for example, if you are using EDFA with DWDM technology, you can achieve distances up to 120 km between amplifiers. This makes DWDM a high-speed enterprise WAN and MAN connectivity service.
Figure 16.3 – DWDM Topology
Figure 16.3 above shows a sample topology of a DWDM optical network that connects three locations. This type of solution typically includes three components:
- Transponders, which receive the optical signal from a client, convert it into the electrical domain, and retransmit it using a laser
- Multiplexers, which take the various signals and put them into a single-mode fiber (the multiplexer may support EDFA technology)
- Amplifiers, which provide powered amplification of the multi-wavelength optical signal
Metro Ethernet and Long Reach Ethernet
Metro Ethernet is a rapidly emerging solution that defines a network infrastructure based on the Ethernet standard (as opposed to Frame Relay or ATM) delivered over a MAN. The MAN technology is extended over the enterprise WAN at Layer 2 or Layer 3.
This flexible transport architecture can include some combination of optical networking, Ethernet, and IP technologies, and these infrastructure details are transparent to the user, who sees a service at the customer edge but not the underlying technology being used.
Metro Ethernet technologies are not really visible to customers; the service provider is responsible for provisioning these services across its core network from the Metro Ethernet access ring. This represents a huge market for service providers because there are many customers who have existing Ethernet interfaces. The more customers know about their provider and the core, the more informed they are about the different types of services they can receive and the problems that might occur with those services. In these situations it is critical to think about the appropriate Service Level Agreement (SLA) for advanced WAN services as provided to the customer via Metro Ethernet.
A generic Metro Ethernet infrastructure contains multiple blocks that provide a wide variety of services and they are all connected to the Metro Ethernet core. The core can be based on multiple technologies (e.g., TDM, MPLS, or IP services) that operate on top of GigabitEthernet. The service provider can use SONET/SDH rings, Point-to-Point links, DWDM, or RPR. The connection points of the different blocks use edge aggregation devices or User Provider Edge (UPE) devices that can multiplex multiple customers on a single optical circuit to Network Provider Edge (NPE) devices.
Long Reach Ethernet (LRE) is a WAN/MAN technology based on VDSL that supports 5 to 15Mbps performance over telephone-grade Category 1/2/3 wiring at distances of up to 1.5 km (approximately 5,000 feet). LRE is also known as Ethernet in the First Mile (EFM). This technology is not used very much in modern infrastructure architectures.
Satellites
One way of achieving wireless WAN connectivity is using satellite networking, also called non-terrestrial communication. This type of connection is usually used in remote, isolated areas that ISPs cannot easily reach via cable connections. This type of connectivity functions using a satellite dish that allows communication from the customer facility to a satellite (which sends and receives signals, as shown in Figure 16.4 below).
Figure 16.4 – Satellite Communication
Using satellite technology to ensure WAN connectivity is generally more expensive than using traditional terrestrial network connections. The speeds offered by such a connection can reach 5Mbps download and 1Mbps upload, which is usually enough for remote small sites.
A significant disadvantage of using satellite connectivity is the increased traffic latency, which can reach up to 250 ms one way (from the sending antenna up to the satellite and back down to the receiving antenna) due to the use of radio signals over a very long distance. This should be carefully analyzed when planning to install a satellite WAN connection because the increased latency could prevent delay-sensitive applications from functioning, while having no impact on other applications.
Another challenge with satellite connectivity is that the satellite dish has to have a line of sight to the satellite. This means that you have to make use of high frequency ranges (2 GHz), and any type of interference (such as rain or storm clouds) may affect the connection throughput and availability.
ISDN
Integrated Services Digital Network (ISDN) is a technology that allows digital communication over a traditional analog phone line, so both voice and data can be digitally transmitted over the PSTN. ISDN never had the popularity that it was expected to have because it came along at a time when alternative technologies were being developed. There are two flavors of ISDN:
- ISDN BRI (Basic Rate Interface)
- ISDN PRI (Primary Rate Interface)
ISDN-speaking devices are referred to as terminal equipment, and they can be categorized as either native ISDN equipment or non-native ISDN equipment. Native ISDN equipment is made up of devices that are built to be ISDN-ready and are called TE1 (Terminal Equipment 1) devices. Non-native ISDN equipment is made up of TE2 devices. Non-native ISDN equipment can be integrated with native ISDN equipment using special Terminal Adapters (TAs), so only TE2 devices require TA modules.
Moving toward the ISDN provider, you will find Network Termination 2 (NT2) devices and Network Termination 1 (NT1) devices. These are translation devices for the media, transforming four-wire connections into two-wire connections (i.e., the local loop). The local loop is the user connection line and it is a two-wire link.
An interesting thing about the network termination devices is that in North America the customer is responsible for NT1 devices, while in other parts of the world this is the service providers’ responsibility. Because of this issue, some Cisco routers provide built-in NT1 functionality. These routers feature a visible “U” under the port number so that the user can quickly see this capability. The “U” notation comes from the ISDN reference point terminology that describes where you might have a problem in the ISDN infrastructure. These reference points are illustrated in Figure 16.5 below:
Figure 16.5 – ISDN Reference Points
These reference points are important during the troubleshooting or maintenance processes in an ISDN network. The ISDN switch is usually located at the service provider’s location. The different ISDN reference points are as follows:
- U reference point – between the ISDN switch and the NT1 devices
- T reference point – between the NT2 devices and the NT1 devices
- S reference point – between terminals (TE1 or TA) and the NT2 devices
- R reference point – between non-native ISDN devices and the TA
An ISDN BRI connection contains two B (bearer) channels for carrying data and one D (delta) channel for signaling. The BRI connection is abbreviated as 2B+D as a reminder of the number of channels of each type. Each of the bearer channels in ISDN operates at a speed of 64Kbps. Multilink PPP can be configured on top of these interfaces to allow the user to reach a bandwidth of 128Kbps. This bandwidth is considered to be very low according to modern network requirements.
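As a minimal illustration, bonding the two B channels with Multilink PPP on a Cisco router might look like the following sketch; the interface number is hypothetical, and the ISDN switch type, dialer mapping, and authentication details (which depend on the provider) are omitted:

interface BRI0/0
 encapsulation ppp
 ppp multilink
 dialer load-threshold 1 either

The dialer load-threshold command tells the router to bring up the second B channel as soon as there is any load, so both 64Kbps channels are bundled into a single 128Kbps logical link.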
The D channel in BRI ISDN is dedicated at 16Kbps for traffic control. There are also 48Kbps available overall for framing control and other overhead in the ISDN environment. The total ISDN bandwidth for BRI is 192Kbps (128Kbps from the B channels + 16Kbps from the D channel + 48Kbps overhead).
ISDN PRI has 23 B channels and one D channel in the United States and Japan. The bearer channels and the delta channels all support 64Kbps. Including the overhead, the total PRI bandwidth is 1.544Mbps. In other parts of the world (Europe and Australia), the PRI connection contains 30 B channels and one D channel.
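For reference, the PRI totals work out the same way as the BRI total described above:
- T1 PRI: (23 B channels x 64Kbps) + (1 D channel x 64Kbps) + 8Kbps framing = 1.544Mbps
- E1 PRI: (30 B channels x 64Kbps) + (1 D channel x 64Kbps) + 64Kbps framing (timeslot 0) = 2.048Mbps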
ISDN PRI connections are commonly used as connectivity from the PSTN to large phone systems (PBX). Each of the 23 or 30 B channels can be used as a single phone line, so the entire PRI connection can be considered a trunk that carries multiple lines. The main advantage of using a PRI connection instead of multiple individual lines is that it is easier to manage and it offers scalability.
The technologies described above are also TDM technologies. TDM refers to being able to combine multiple channels over a single overall transmission medium and using these different channels for voice, video, and data. Time division refers to splitting the connection into small windows of time for the various communication channels.
In the PSTN, you need to be able to transmit multiple calls along the same transmission medium, so TDM is used to achieve this goal. TDM actually started in the days of the telegraph and later on gained popularity with fax machines and other devices that use TDM technologies.
With leased lines (i.e., buying dedicated bandwidth), the circuits that are sold are measured in terms of bandwidth. A DS1 or T1 circuit in North America provides 24 time slots of 64Kbps each plus 8Kbps of framing overhead (for a total of 1.544Mbps, as mentioned earlier). In this sense, TDM terminology is tightly connected with the leased line purchasing process.
DSL
Digital Subscriber Line (DSL) is used as an alternative to ISDN for home users. There are different types of DSL connections, but the most important ones include the following:
- ADSL
- HDSL
- VDSL
- SDSL
Asymmetric Digital Subscriber Line (ADSL) is the most common form of DSL connection that functions over standard telephone lines. The reason it is called asymmetric is that it offers unequal download and upload throughput, with the download rate being higher than the upload rate. A standard ADSL connection usually offers a maximum of 24Mbps download throughput and a maximum of 3.5Mbps upload throughput over a distance of up to 3 km.
With Asymmetric DSL, the customer is connected to a Digital Subscriber Line Access Multiplexer (DSLAM) located at the service provider. DSLAM is a DSL concentrator device that aggregates connections from multiple users.
Note: One of the issues with ADSL is the limited distance a subscriber can be from a DSLAM.
High Bitrate DSL (HDSL) and Very High Bitrate DSL (VDSL) are other DSL technologies used on a large scale. They offer increased throughput compared with ADSL, and VDSL can operate at rates up to 100Mbps.
Symmetric DSL (SDSL) offers the same download and upload throughput, but it was never standardized or used on a large scale.
Cable
Digital signals can also be received by home users over standard TV cable connections. Internet access can be provided over cable using the Data Over Cable Service Interface Specification (DOCSIS) standard. This is usually a low-cost service, as the provider does not need to install a new infrastructure for the data services. The only upgrade to the existing network is the installation of a low-cost cable modem at the customer premises that usually offers RJ45 data connectivity for the user’s devices.
Data traffic transmission rates over cable technology can go up to 100Mbps, which is more than enough for home users and even small businesses.
Note: In addition to TV and data signals, cable connections can also carry voice traffic.
Point-to-Point Protocol over Ethernet (PPPoE) is another technology that can be used in conjunction with cable. It can be used between the cable modem and the endpoint devices to add security to the cable modem infrastructure. This allows the user to log on with a username and a password that must be authenticated before the cable service can be used. The credentials are carried across the Ethernet connection to the cable modem and beyond using PPPoE.
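As an illustration, a minimal Cisco IOS PPPoE client configuration might look like the following sketch; the interface numbers and the user@isp credentials are hypothetical placeholders, and the exact requirements depend on the provider:

interface FastEthernet0/0
 no ip address
 pppoe enable
 pppoe-client dial-pool-number 1
interface Dialer1
 mtu 1492
 ip address negotiated
 encapsulation ppp
 dialer pool 1
 ppp chap hostname user@isp
 ppp chap password MySecret

The MTU is lowered to 1492 bytes because the PPPoE header consumes 8 bytes of the standard 1500-byte Ethernet payload.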
Cellular Networks
Cellular networks are used in conjunction with mobile devices (e.g., cell phones, tablets, PDAs, etc.) to send and receive data traffic, in addition to classic voice service. These networks cover large geographical areas by splitting them into cells. Antennas are strategically placed to ensure optimal coverage across these cells and to ensure seamless cell roaming for users who go from one location to another.
The traditional connectivity type is called 2G and includes the following:
- GSM (Global System for Mobile Communications)
- CDMA (Code Division Multiple Access)
Depending on the carrier you use and the country you live in, you might use GSM or CDMA communication; both are commonly referred to as 2G networks. These networks were designed around circuit switching for voice and were not originally designed to send data. Because data connections use packet-switching technology, 2G connections offer limited data transmission support.
Newer connection types over cellular networks, which allow full-featured packet switching and proper data transmission, include the following:
- HSPA+ (High Speed Packet Access)
- LTE (Long Term Evolution)
LTE and HSPA+ are standards created by the 3rd Generation Partnership Project (3GPP), which is a collaboration between a number of telecommunications companies that decided they needed a standardized way of sending data on cellular networks.
HSPA+ is a standard based on CDMA and it offers download rates up to 84Mbps and upload rates up to 22Mbps. LTE is a standard based on GSM/EDGE and it offers download rates up to 300Mbps and upload rates up to 75Mbps.
Note: Each of these standards continues to develop, so the throughput rates might increase in the future.
WiMAX
Worldwide Interoperability for Microwave Access (WiMAX) is a recent standard that aims to provide wireless high-speed Internet access over large areas. WiMAX can offer up to a 30-mile signal radius. Although WiMAX uses a different technology than standard 802.11 Wi-Fi networks, from an end-user perspective the connectivity method is similar.
WiMAX networks can be either fixed or mobile. Fixed WiMAX uses the IEEE 802.16 standard and offers 37Mbps download and 17Mbps upload rates. On the other hand, mobile WiMAX is based on the newer IEEE 802.16e-2005 standard and offers a theoretical throughput of 1Gbps for fixed stations and 100Mbps for mobile devices.
Dial-up
Dial-up is a legacy standard that transmits data traffic over telephone lines. Dial-up falls under the category of circuit switching and it utilizes the PSTN. A connection is established when the user wants to use the dial-up option and the connection is terminated when the user is done using the link.
Because dial-up connections share the same media as the telephone system, they can only use a limited range of frequencies, which translates to limited bandwidth. Because dial-up connections use an analog signal, you need to use a modem to take the digital signal from the computer and convert it into analog communication on the PSTN, and vice versa, so it modulates and demodulates the digital information into an analog signal.
Dial-up connections can usually offer transmission rates of up to 56Kbps, and in special conditions even up to 320Kbps if data compression is used. This technology is often associated with fax machines, which often use integrated modems to transmit data.
Because of the low transmission rates offered by dial-up connections, they are not very popular in home and enterprise environments. Dial-up access offers very limited bandwidth capabilities but the advantage is that this option is available just about everywhere, because the PSTN spans across almost every geographical location. The technologies used over this connection type should not utilize much bandwidth because the theoretical throughput that can be achieved is 56Kbps; however, the real bandwidth is even less because of interference and other factors.
Modern networks may use dial-up technology as a backup connection that can be activated in an emergency when no other WAN connection type is available.
PON
A Passive Optical Network (PON) is a network in which a single provider might be sending different traffic streams to many receivers. The advantage is that the sender transmits the traffic on a single link and the different streams arrive at the receivers because the network splits the light beams using unpowered optical splitters (mirrors and prisms). This is why it is also described as unpowered (passive) networking. The following components can be found in a PON infrastructure:
- OLT (Optical Line Terminal): the device at the service provider's end of the connection
- ONT (Optical Network Terminal): the device that terminates the fiber at the end-user premises, after passive splitters have divided the light beams
PON uses WDM or DWDM to send multiple frequencies over the same connection and splits them as they are received by the end-users. PON uses the IEEE 802.3ah standard developed in 2004 and can offer 1Gbps throughput in both directions.
Note: PON makes use of encryption technologies to prevent end-users from seeing other customers’ data.
Frame Relay
Frame Relay is a Non-Broadcast Multi-Access (NBMA) technology. This means that you have to deal with address resolution issues, except in situations where you use Point-to-Point interfaces. The local Layer 2 addresses in Frame Relay are called Data Link Connection Identifiers (DLCI) and these are only locally significant. For example, in a hub-and-spoke environment, the hub device should have a unique DLCI to communicate to each of its spokes, as illustrated in Figure 16.6 below:
Figure 16.6 – Frame Relay DLCI Example
Note: The DLCI numbers at the two ends of each link may or may not be identical. For ease of understanding, they are considered identical in Figure 16.6 above.
The DLCI is the (Layer 2) Frame Relay address, so this is what you need to resolve to a Layer 3 IP address. Another fundamental Frame Relay component is the Local Management Interface (LMI). The service provider operates a DCE Frame Relay device (usually a switch) and the customer provides the DTE Frame Relay device (usually a router). The LMI is the language that permits these two devices to communicate. One of its duties is to report the status (health) information of the virtual circuit that makes up the Frame Relay communication. The LMI also provides DLCI information. LMI is enabled automatically when Frame Relay is initially enabled on a Cisco device interface.
When you inspect the Frame Relay PVC (Permanent Virtual Circuit) status on a Cisco device, you will see a status code defined by LMI that will be one of the following:
- Active (everything is okay)
- Inactive (no problems on the local node, but possible problems on the remote node)
- Deleted (problem in the service provider network)
Cisco devices support three flavors of LMI:
- Cisco
- ANSI
- Q.933a
Cisco routers are configured to automatically try all three of these LMI types (starting with the Cisco LMI) and use the one that matches whatever the service provider is using. This should not be much of a concern in the design phase.
An important aspect that needs to be considered in the design phase is the address resolution methodology used. If you are utilizing Multipoint interfaces in your design (i.e., interfaces that can terminate multiple Layer 2 circuits), you need to find a way to provide Layer 3-to-Layer 2 resolution. As discussed, you have two options that can help you achieve this:
- Dynamically, utilizing Inverse ARP
- Statically, via the frame-relay map static configuration command on Cisco devices
Note: To verify that Layer 3-to-Layer 2 resolution has succeeded, you can issue the show frame-relay map command.
On a Multipoint interface, Inverse ARP happens automatically. This functionality is enabled right after adding an IP address on an interface configured for Frame Relay. At that moment, requests start being sent out of all the circuits assigned to that specific interface for any supported protocol the interface is running.
The request process can be disabled with the no frame-relay inverse-arp command, but Inverse ARP replies cannot be disabled by design, so a Frame Relay speaker will always assist a neighbor that attempts to perform Layer 3-to-Layer 2 resolution via Frame Relay Inverse ARP. The Inverse ARP behavior in the Frame Relay design automatically assists with Broadcasts through the replicated Unicast approach discussed before, so when using Inverse ARP, Broadcast support exists by default.
If you connect two routers to the Frame Relay cloud using physical interfaces, this means that the specific interfaces are Multipoint from a Frame Relay perspective, because a physical Frame Relay interface by default is a Multipoint structure. Even though the connection between the two routers may appear to be Point-to-Point, it is a Frame Relay Multipoint connection. This is illustrated in Figure 16.7 below:
Figure 16.7 – Frame Relay Multipoint Example
Because they use Multipoint interfaces by default, the two devices handle Layer 3-to-Layer 2 resolution dynamically using Inverse ARP.
If you want to design a solution where Inverse ARP is not used, you can turn off the dynamic mapping behavior on each device and then configure static Frame Relay mappings. The static mapping command has the following format in Cisco devices:
frame-relay map protocol address dlci [broadcast]
The protocol parameter is usually ip, the address parameter is the remote Layer 3 address, and the dlci parameter represents the local DLCI. The broadcast keyword can optionally be added to activate the replicated Unicast behavior that supports Broadcast functionality. The static mapping must be configured in order to override or turn off the default dynamic Inverse ARP behavior. This gives the administrator full control over the Layer 3-to-Layer 2 resolution process in a Frame Relay environment.
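For example, a minimal sketch of a static mapping on a Multipoint physical interface could look like the following; the IP addresses and DLCI 102 are hypothetical values:

interface Serial0/0
 encapsulation frame-relay
 ip address 10.1.1.1 255.255.255.0
 no frame-relay inverse-arp
 frame-relay map ip 10.1.1.2 102 broadcast

Here, traffic destined for 10.1.1.2 is sent out on local DLCI 102, and the broadcast keyword enables the replicated Unicast behavior so that routing protocol traffic can cross the circuit.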
A common problem on Cisco equipment is that once the physical interfaces come up and Inverse ARP starts to operate, you might find dynamic mappings to 0.0.0.0. These mappings occur because of a clash between two features: Inverse ARP and Cisco AutoInstall. To discard these mappings, issue the clear frame-relay inarp command and then restart the device. Such mappings can break communication paths in the Frame Relay environment.
Point-to-Point configurations are the ideal choice when it comes to Layer 3-to-Layer 2 resolution because this process does not occur when using such interface types. When configuring Point-to-Point Frame Relay, use Point-to-Point subinterfaces, as these subinterfaces will not get the DLCI assignments from the LMI as they would in the Multipoint situation. The DLCI must be manually assigned to the subinterfaces with the frame-relay interface-dlci command.
The previous example can be modified so that Point-to-Point subinterfaces are created between the two routers and then manually assigned DLCI ids in order for Frame Relay to function correctly, as illustrated in Figure 16.8 below:
Figure 16.8 – Frame Relay Point-to-Point Example
There is no concern about Layer 3-to-Layer 2 resolution because each router has only one remote device it sends data to and it does this using the subinterface associated with the DLCI.
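A minimal sketch of this Point-to-Point approach on one of the routers might be the following (the addressing and DLCI are hypothetical; the remote router mirrors the configuration with its own local DLCI):

interface Serial0/0
 encapsulation frame-relay
interface Serial0/0.102 point-to-point
 ip address 10.1.1.1 255.255.255.252
 frame-relay interface-dlci 102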
Another option would be to create subinterfaces and declare them Multipoint interfaces. These types of interfaces behave exactly like the physical Multipoint interfaces so you need to decide on the resolution method to be used: Inverse ARP or static mappings. A combination of these can be used, for example, by implementing Inverse ARP on one end of the connection and defining static maps on the other end.
The interface type settings and the selected Layer 3-to-Layer 2 resolution method are only locally significant. This means that you can have all kinds of variations in your Frame Relay design, such as the ones shown in Table 16.2 below:
Table 16.2 – Frame Relay Design Variations
Local Interface | connected to | Remote Interface |
Main interface | connected to | Main interface |
Main interface | connected to | Multipoint subinterface |
Main interface | connected to | Point-to-Point subinterface |
Multipoint subinterface | connected to | Multipoint subinterface |
Multipoint subinterface | connected to | Point-to-Point subinterface |
Point-to-Point subinterface | connected to | Point-to-Point subinterface |
Partial-mesh designs and configurations will be the most challenging. This implies that Layer 2 circuits will not be provisioned between all the endpoints involved in the Frame Relay environment.
Note: The hub-and-spoke topology is just a special type of partial-mesh configuration.
In a hub-and-spoke environment, the spokes are not directly connected to each other and this means that they cannot resolve each other via Inverse ARP. In order to solve these issues, you can do any of the following:
- Provide additional static mappings
- Configure Point-to-Point subinterfaces
- Design the hub-and-spoke infrastructure so that the Layer 3 routing design can solve the resolution problems (e.g., using the OSPF Point-to-Multipoint network type)
Frame Relay supports markings that can impact Quality of Service (QoS). For example, the Frame Relay header contains a Discard Eligible (DE) bit, so frames can be marked with the DE bit to inform the service provider that those specific frames are less important and can be discarded in case of congestion. This behavior effectively prioritizes frames that do not have the DE bit set.
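As a hedged illustration, the legacy DE-list feature on Cisco routers can set the DE bit on traffic matched by an access list (the access list, list number, and DLCI below are hypothetical examples):

access-list 100 permit tcp any any eq ftp
frame-relay de-list 1 protocol ip list 100
interface Serial0/0
 frame-relay de-group 1 102

In this sketch, FTP traffic matched by access list 100 is marked as discard eligible on DLCI 102, so the provider drops it first during congestion.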
Other parameters that can be configured in the Frame Relay environment are Forward Explicit Congestion Notifications (FECNs) and Backward Explicit Congestion Notifications (BECNs). The Frame Relay equipment, if configured to do so, can notify devices of congestion and can cause the slowing down of the sending rates, as illustrated in Figure 16.9 below:
Figure 16.9 – Frame Relay Congestion Notifications
If you have a chain of Frame Relay nodes that support FECNs and BECNs, a device that detects congestion can set the FECN bit to inform downstream devices about the congestion and the need for slower transmission rates. The FECN marking travels forward (toward the destination), which can be a problem when there is no return traffic flowing backward toward the sender. To make sure the sender also learns about the congestion, BECNs are used; these can be carried in otherwise empty frames that carry the BECN bit backward along the return path. Devices respond to FECNs and BECNs by slowing their transmission rates to avoid further congestion.
ATM
Asynchronous Transfer Mode (ATM) was a WAN technology that used a combination of cell-based technology and SONET. ATM used 53-byte cells spaced evenly apart, which contained the following:
- 48 bytes for data
- 5 bytes for the cell header
These cells were transmitted in a constant stream over the network, and traffic was transmitted at high throughput with low latency, no matter whether data, voice, or video was sent. ATM offered throughput up to OC-192, but one disadvantage was that traffic had to be segmented due to the small cell size. This meant additional effort at the receiving end, where reassembly had to occur. Other ATM disadvantages include the following:
- Complex installation and configuration
- Expensive ATM network equipment
All of these disadvantages led to the ATM technology being replaced by cheaper and more efficient technologies.
Virtual Private Networks
Even though the Virtual Private Network (VPN) concept implies security most of the time, unsecured VPNs also exist. Frame Relay is an example in this regard because it provides private communications between two locations but it might not have any security features on top of it. Whether you should add security to the VPN connection depends on the specific requirements for that connection.
VPN troubleshooting is difficult to manage because of the lack of visibility in the service provider infrastructure. The service provider is usually seen as a cloud that aggregates all the network location connections. When engaging in VPN troubleshooting, you should first make sure that the problem does not reside on your devices and only then should you contact your service provider.
There are many types of VPN technologies:
- Site-to-Site VPNs or Intranet VPNs, for example, Overlay VPNs (like Frame Relay) or Peer-to-Peer VPNs (like MPLS). These are used when you want to connect different locations over the public infrastructure. When using Peer-to-Peer infrastructures, you can seamlessly communicate between sites without worrying about IP addressing overlap.
- Remote Access VPNs, for example, Virtual Private Dial-up Network (VPDN), which is a dial-up approach to the VPN with security in mind.
- Extranet VPNs, when you want to connect to business partners or customer networks.
With VPNs the traffic is often tunneled when it is sent over an infrastructure. One tunneling methodology for Layer 3 is called Generic Routing Encapsulation (GRE). GRE allows you to tunnel traffic but it does not provide security. In order to tunnel traffic and also provide security, you can use a technology called IP Security (IPSec). This is a mandatory implementation component of IPv6 but it is not a requirement for IPv4. IPSec is also used in conjunction with Authentication, Authorization, and Accounting (AAA) services that allow the tracking of user activity.
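For example, a minimal GRE tunnel configuration on one end could be sketched as follows; the tunnel and public IP addresses are hypothetical (documentation ranges), and IPSec protection would be layered on separately if encryption is required:

interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source 203.0.113.1
 tunnel destination 198.51.100.1

GRE is the default tunnel mode on Cisco routers, so no explicit tunnel mode command is needed; traffic routed into Tunnel0 is encapsulated and sent across the underlying network to 198.51.100.1.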
The main benefits of VPNs include the following:
- Scalability (you can continuously add more sites to the VPN)
- Flexibility (you can use very flexible technologies like MPLS)
- Cost (you can tunnel traffic through the Internet without much expense)
MPLS
Multiprotocol Label Switching (MPLS) functions by inserting a label into any type of packet. The forwarding of the packet through the network infrastructure is then accomplished based on this label value instead of any Layer 3 information. Labeling the packet provides very efficient forwarding and makes MPLS work with a wide range of underlying technologies. Because it simply adds a label to the packet header, MPLS can be used over many Physical and Data Link Layer WAN implementations.
The MPLS label is positioned between the Layer 2 header and the Layer 3 header. With MPLS, overhead is added a single time, when the packet goes into the service provider’s cloud. After entering the MPLS network, packet switching is done much faster than in traditional Layer 3 networks because this is based on only swapping the MPLS label instead of stripping the entire Layer 3 header.
MPLS-capable routers are also called Label Switched Routers (LSRs) and these routers come in two flavors:
- Edge LSR (PE routers)
- LSR (P routers)
PE routers are Provider Edge devices that take care of label distribution. They forward packets based on labels and they are responsible for label insertion and removal. P routers are Provider routers and their responsibility consists of label forwarding and efficient packet forwarding based on labels.
With MPLS there is a separation between the control plane and the data plane. This leads to great efficiency in how the LSR routers work. Resources that are constructed for the efficiency of control plane operations include the routing protocol, the routing table, and the exchange of labels, and these are completely separated from resources that are designed to only forward traffic as quickly as possible in the data plane.
Forwarding Equivalence Class (FEC) describes the class of packets that receives the same forwarding treatment (e.g., traffic forwarded based on a specific QoS marking through the service provider cloud).
Figure 16.10 – MPLS Label Fields
The MPLS label is 4 bytes in length and it consists of the following fields, as shown in Figure 16.10 above:
- 20-bit Label Value field
- 3-bit Experimental field (QoS marking)
- 1-bit Bottom of the Stack Indicator field (useful when multiple labels are used; it is set to 1 for the last label in the stack)
- 8-bit TTL field (to avoid looping)
You might need to use a stack of labels when dealing with MPLS VPNs. MPLS VPNs are the most important application of MPLS; much of the technology was developed specifically to support MPLS VPNs.
Figure 16.11 – MPLS VPN Example
An example of an MPLS VPN application is illustrated in Figure 16.11 above, where a service provider offers MPLS VPN services. The PE routers connect to different customers; a single customer can have multiple sites, with each site connected to a different PE router. With the MPLS approach, two sites belonging to the same customer receive transparent, secure communication capabilities based on the unique customer labels assigned. The service provider uses MPLS to carry the traffic between the PE routers, through the P devices.
Note: An important advantage of the MPLS VPN technology is that this secure connectivity is ensured without the customer having to run MPLS on any device. The customer just runs a standard routing protocol with the service provider and all the MPLS VPN logic is located in the ISP cloud.
With MPLS VPNs, a stack of labels is used: a VPN label is used to identify the customer (VPN identification) and an IGP label is used to forward packets through the ISP cloud (egress router location).
Layer 3 MPLS VPN technology is a very powerful and flexible option for service providers to give customers the transparent WAN access connectivity they need. This is very scalable for the service provider because it is very easy for them to add customers and sites.
MPLS comes in two different flavors:
- Frame Mode MPLS
- Cell Mode MPLS
Frame Mode MPLS is the most popular MPLS type, and in this scenario the label is placed between the Layer 2 header and the Layer 3 header (for this reason, MPLS is often considered a Layer 2.5 technology). Cell Mode MPLS is used in ATM networks and uses fields in the ATM cell header as the label.
One of the important issues that must be solved with MPLS is determining the devices that will take care of inserting and removing the labels. The creation of labels (label pushing) is done on the Ingress Edge LSR and label removal (label popping) is done on the Egress Edge LSR. The label-switched routers in the interior of the MPLS topology are only responsible for label swapping (i.e., replacing the label with another label) to forward the traffic on a specific path.
MPLS devices need a way to exchange the labels that will be utilized for making forwarding decisions. This label exchange process is accomplished using a protocol, the most popular of which is the Label Distribution Protocol (LDP). LDP initially uses UDP and Multicast hello messages to discover neighbors and set up the peering, and then a TCP session ensures reliable transmission of the label information.
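As a brief sketch, enabling basic MPLS forwarding with LDP on a Cisco router involves turning on CEF, selecting LDP, and enabling MPLS on the core-facing interfaces (the interface name below is hypothetical):

ip cef
mpls label protocol ldp
interface GigabitEthernet0/1
 mpls ip

Once neighboring routers are configured the same way, LDP discovers them, brings up the TCP session, and begins exchanging labels for the prefixes in the routing table.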
A technology that improves MPLS efficiency is Penultimate Hop Popping (PHP). This allows for the second to last LSR in the MPLS path to be the one that pops out the label. This adds efficiency to the overall operation of MPLS.
The concept of Route Distinguisher (RD) describes the way in which the service provider can distinguish between the traffic of different customers. This allows different customers who are participating in the MPLS VPN to use the exact same IP address space, so you can have both customer A and customer B using the 10.10.10.0/24 range with the traffic differentiated between customers using route distinguishers.
Devices can create their own virtual routing tables, called VPN Routing and Forwarding (VRF) tables, so a PE router can store each customer's data in a separate isolated table, which provides increased security.
Prefixes are carried through the MPLS cloud by relying on Multiprotocol BGP (MP-BGP). MP-BGP carries the VPNv4 prefixes (i.e., the prefixes that result after the RD is prepended to the normal prefixes). Import and export route targets control whether customers can access each other's prefixes.
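Tying these concepts together, a minimal sketch of a customer VRF on a PE router might look like the following; the VRF name, RD, route-target values, and addressing are hypothetical examples:

ip vrf CUSTOMER_A
 rd 65000:10
 route-target export 65000:10
 route-target import 65000:10
interface GigabitEthernet0/2
 ip vrf forwarding CUSTOMER_A
 ip address 10.10.10.1 255.255.255.0

The RD makes the customer's 10.10.10.0/24 prefix unique as a VPNv4 route carried by MP-BGP, and the import/export route targets control which VRFs (and therefore which customers) receive that prefix.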
Speed, Distance, and Transmission Media
Regardless of the WAN technology used, the physical media that supports that type of connection is critical for the network to operate at optimal parameters. Unless you make sure that you have a good physical infrastructure (Layer 1), you might end up having problems with the entire network.
When working with different types of media, you should take into consideration the following points:
- Type of media
- Speed
- Distance
All of the factors mentioned above work together, as different types of media offer different speeds and work over different distances. Depending on your specific needs, you should carefully choose the proper media type for your connection. The three most commonly used types of media include the following:
- Fiber
- Twisted-pair
- Coaxial cable
Optical fiber has been around for a very long time and functions by the principle of light reflection within a glass environment. Fiber functions over very long distances and it can offer high speeds over many kilometers. The light in optical fiber can travel such a long distance because it is not susceptible to electromagnetic interference and it experiences low degradation.
Twisted-pair cabling is often used in LAN environments and its major advantage is that the twisted wires cancel most interference. Other advantages of twisted-pair cables include the following:
- It is thin
- It is flexible
- It offers high throughput
- It is inexpensive
Coaxial cables are very thick and contain a copper wire inside. This type of cable was patented in 1880 and it can carry signals over very long distances; however, it has steadily been replaced by fiber. Coaxial cables are generally used in industrial environments due to the increased interference protection they offer. However, one disadvantage of coaxial cables is that they can experience signal leakage.
Summary
Remote office locations, such as branch offices or the homes of teleworkers, connect to the enterprise campus via the Enterprise Edge and Enterprise WAN areas. When selecting an appropriate WAN technology to extend to these remote locations, design considerations include ownership of the link, reliability of the link, and a backup link if the primary link were to fail.
A WAN spans a relatively broad geographical area and a wide variety of connectivity options exist. When designing WAN solutions, you should consider the characteristics of the following modern WAN technologies:
- Time-Division Multiplexing (TDM): A TDM circuit is a dedicated point-to-point connection that is constantly connected. T1 and E1 circuits are examples of TDM circuits.
- Integrated Services Digital Network (ISDN): ISDN uses digital phone connections to support the simultaneous transmission of voice and data. ISDN is considered a circuit-switched technology.
- Frame Relay: Frame Relay is considered a packet-switched technology, which uses the concept of virtual circuits to potentially create multiple logical connections using a single physical connection.
- Multiprotocol Label Switching (MPLS): MPLS is considered a label-switching technology, where packets are forwarded based on a 32-bit label, as opposed to an IP address.
- Digital Subscriber Line (DSL): DSL provides high-bandwidth links over existing phone lines. A variety of DSL implementations exist. The most popular type of DSL found in homes is Asynchronous DSL (ADSL), which allows home users to simultaneously use their phone line for both high-speed data connectivity and traditional analog telephone access.
- Cable: Cable technology leverages existing coaxial cable, used for the delivery of television signals, to simultaneously deliver high-speed data access to the WAN.
- Wireless: Wireless technologies use radio waves to connect devices, such as cell phones and computers. An example of a wireless application is wireless bridges connecting two buildings that have a line-of-sight path between them.
- Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH): SONET and SDH both use TDM technology to provide services over an optical network. Because of the optical transport used by these technologies, relatively high-bandwidth solutions are available.
- Dense Wavelength Division Multiplexing (DWDM): DWDM increases the bandwidth capacity of an optical cable by sending multiple traffic flows over the same fiber, with each flow using a different wavelength.
Configure WAN in our 101 Labs – CompTIA Network+ book.