Enterprise WAN Design
This chapter will cover the following topics:
WAN Design Overview
WAN technologies operate at the enterprise edge in the modular Cisco enterprise infrastructure. WANs span across large geographical distances in order to provide connectivity for various parts of the network infrastructure. Unlike the LAN environment, some WAN components are not owned by the specific enterprise they serve. Instead, WAN equipment or connectivity can be rented or leased from service providers. Most service providers are well trained in supporting not only traditional data traffic but also voice and video services (which are more delay-sensitive) over large geographical distances.
In addition, unlike LANs, WANs typically have an initial fixed cost and, thereafter, recurring fees for service. This cost structure is a strong argument against over-provisioning: implementing effective QoS mechanisms to make the most of existing links is usually far cheaper than buying additional WAN bandwidth.
WAN technologies’ design requirements are typically derived from the following:
- Application types
- Application availability
- Application reliability
- Costs associated with a particular WAN technology
- Usage levels for the applications
The enterprise edge represents a large block (or several blocks) of equipment. This large module is typically split into smaller blocks, each with specialized functionality, such as the following components:
- Internet connectivity block, which offers robust Internet access with some level of availability and redundancy
- DMZ block
- WAN block for branch offices/remote access connectivity
- E-commerce block, if this is a part of the organization
- Remote access VPN block, which provides secure connectivity for a large number of employees who work out of their home office
An important topic for CCDA certification is the set of common categories into which WAN technologies fall. An essential concept is circuit-switched technology, the most relevant example of which is the Public Switched Telephone Network (PSTN). ISDN is one of the technologies in this category. Circuit-switched WAN connections are established when needed and terminated when they are no longer required. Another example of this circuit-switching behavior is the old-fashioned dial-up connection (analog modem access over the PSTN).
Note: Before current technologies were established, dial-up technology was the only way to access Internet resources, offering an average usable bandwidth of around 40 kbps. Nowadays, this technology is almost extinct.
The opposite of the circuit-switched option is leased-line technology: a fully dedicated connection that is permanently up and reserved for the exclusive use of the company that leases it. Examples of leased lines include Time Division Multiplexing (TDM) based leased lines. These are usually very expensive because a single customer has full use of the offered connectivity.
Another popular category of wide area networking technology involves packet-switched concepts. In a packet-switched infrastructure, shared bandwidth utilizes virtual circuits. The customer can create a virtual path (similar to a leased line) through the service provider’s infrastructure cloud. This virtual circuit has a dedicated bandwidth, even though, technically, this is not a real leased line. Frame Relay is an example of this type of technology.
Some legacy WAN technologies include X.25, which is the predecessor of Frame Relay. This technology is still present in some implementations but it is very rare to find it in use.
Another WAN category relates to cell-switched technology. This is often included in packet-switched technologies, as they are very similar. An example of cell-switched technology is Asynchronous Transfer Mode (ATM). This operates by using fixed-size cells, instead of using packets such as in Frame Relay technology. Cell-switched technologies form a shared bandwidth environment from the service provider’s standpoint that can guarantee customers some level of bandwidth through their infrastructure.
Broadband is another growing category for wide area networking, and this includes technologies such as the following:
- DSL
- Cable
Broadband involves taking an existing connection, such as the old-fashioned coaxial cable that carries TV signals, and using otherwise idle portions of its bandwidth. For example, by using multiplexing, an additional data signal can be transmitted alongside the original TV signals.
Figure 5.1 – WAN Categories
As detailed above and summarized in Figure 5.1, there are many options when discussing WAN categories. All of these technologies can support the needs of modern networks, in which the traditional 80/20 traffic rule has largely reversed and most traffic crosses some kind of WAN technology to access remote resources.
WAN topologies are categorized as follows:
- Full-mesh
- Hub-and-spoke
- Partial-mesh
As presented in previous chapters, full-mesh topologies require a large number of links and add extra overhead. According to the formula n*(n-1)/2, where "n" denotes the number of nodes, connecting four nodes in a full-mesh topology requires six different connections. The full-mesh topology is the best option when considering availability and reliability because, if any link or device fails, traffic can fail over to the remaining links/devices. The downside of the full-mesh topology is the extra overhead associated with building and maintaining all the connections and the high cost of installing all the links.
Figure 5.2 – WAN Hub-and-Spoke Topology
The hub-and-spoke topology, illustrated in Figure 5.2 above, is one of the most popular WAN topologies. The hub router is usually located at the headquarters location and connects to branch office routers in a hub-and-spoke fashion. The hub-and-spoke topology is not the best topology as far as redundancy and availability are concerned, as the hub device is the most common point of failure. In order to achieve some form of high availability, the hub device and/or the connections between the hub and the spokes should be duplicated. Hub-and-spoke topologies are less complex and less expensive than full-mesh topologies.
Note: In a hub-and-spoke topology, the minimum number of required connections is equal to the number of spokes.
Partial-mesh is another WAN topology, and this involves a combination of full-mesh and hub-and-spoke topologies within a larger area. The partial-mesh topology falls between full-mesh and hub-and-spoke topologies in terms of availability and cost. This topology is useful when a high level of availability and redundancy is required only in some areas (i.e., for certain nodes).
Non-Broadcast Multi-Access Technologies
A special class of technology used in wide area networking is Non-Broadcast Multi-Access (NBMA). NBMA describes a group of systems that communicate over the same multi-access network but lack native broadcast support, meaning a device cannot natively send a single packet destined for all other devices on the segment. Frame Relay, ATM, and ISDN are NBMA technologies by default, since they have no native ability to support broadcast. This prevents them, for example, from running routing protocols that rely on broadcast for their operation.
Native multicast support is also missing in non-broadcast networks, which is a problem because, in the case of a routing protocol, all the participating nodes must receive multicast updates. One approach used on NBMA networks is sending the multicast or broadcast packets as replicated unicast packets, forcing the broadcast/multicast frames to be sent individually to every node in the topology. The tricky part of this scenario is that the device must solve the Layer 3 to Layer 2 resolution problem, as the replicated packets must be addressed to the specific machines that need to receive them.
In this Layer 3 to Layer 2 resolution process, the Layer 3 address is typically an IP address, while the Layer 2 address varies based on the technology used. With Frame Relay, for example, the Layer 2 address is the Data Link Connection Identifier (DLCI), and each remote IP address must be mapped to a DLCI.
In broadcast networks, Layer 3 to Layer 2 resolution maps IPv4 addresses to MAC addresses, and this is accomplished using the Address Resolution Protocol (ARP). A device broadcasts a request specifying the IP address of the device it wants to communicate with (typically learned via DNS) and asks for the corresponding MAC address. The reply is sent in unicast and includes the requested MAC address.
In NBMA environments, you still need to bind the Layer 3 address (IP address) to the Layer 2 address (e.g., the DLCI). This can be accomplished in an automated fashion using Inverse-ARP, a technology that resolves a remote Layer 3 address to the locally significant Layer 2 address; it is most commonly used in Frame Relay environments. The issue with using Inverse-ARP for Layer 3 to Layer 2 resolution in an NBMA environment is that it works only between directly connected devices, which creates issues for partial-mesh NBMA networks.
Figure 5.3 – NBMA Interface Types
Different types of NBMA interfaces exist, as illustrated in Figure 5.3 above. One of these types is the multipoint NBMA interface. As its name implies, it acts as the termination point for multiple Layer 2 circuits. Multipoint interfaces require some kind of Layer 3 to Layer 2 resolution methodology.
If Frame Relay is configured on the main physical interface of a device, that interface will be multipoint by default. If a subinterface is created on a Frame Relay physical interface, the option of creating it as multipoint exists. Layer 3 to Layer 2 resolution must be configured for both the physical interfaces and for the subinterfaces. The following are two options for doing this in Frame Relay:
- Statically, via Frame Relay map commands
- Dynamically, via Inverse-ARP
Layer 3 to Layer 2 resolution is not always an issue on NBMA interfaces because point-to-point WAN interfaces can be created. A point-to-point interface can terminate only a single Layer 2 circuit. Therefore, if the interface communicates with only one device, Layer 3 to Layer 2 resolution is not necessary. With only one circuit, there is only one Layer 2 address with which to communicate. For example, Layer 3 to Layer 2 resolution issues disappear when running a Frame Relay or an ATM point-to-point subinterface.
Although dial-up technologies are not very common in modern networks, dial-up is still a valid topic for CCDA certification. It falls under the category of circuit switching and utilizes the PSTN. A connection is established when the user wants to use the dial-up link and is torn down when the user is done using it.
Because dial-up connections use an analog signal, users need a modem (modulator-demodulator) to convert the digital signal from the computer into analog communication on the PSTN, and vice versa.
Dial-up access offers very limited bandwidth capabilities but its advantage is that it is available just about everywhere, because PSTNs span across almost every geographical location. The technologies used over this connection type should not utilize much bandwidth, because the theoretical throughput that can be achieved is 56 Kbps; however, actual bandwidth throughput is almost always less than that because of interference and other factors.
Modern networks may use dial-up technology as a backup connection that can be activated in an emergency when no other WAN connection type is available.
Integrated Services Digital Network (ISDN) is a technology that allows digital communication over a traditional analog phone line, so that both voice and data can be transmitted digitally over the PSTN. ISDN never reached the level of popularity it was expected to reach because it emerged when alternative technologies were also being developed.
The two flavors of ISDN include the following:
- ISDN BRI (Basic Rate Interface)
- ISDN PRI (Primary Rate Interface)
ISDN-speaking devices are called terminal equipment, and they can be categorized as either native or non-native ISDN equipment. Native ISDN equipment comprises devices that were built to be ISDN-ready, called TE1 (Terminal Equipment 1) devices. Non-native ISDN equipment comprises TE2 devices, which require special Terminal Adapters (TAs) in order to be integrated with native ISDN equipment.
The ISDN service provider uses termination devices called Network Termination 1 (NT1) and Network Termination 2 (NT2). These devices translate between media types, converting the 4-wire subscriber wiring into the 2-wire local loop. The local loop is the 2-wire connection line that reaches the user.
In North America, the customer is responsible for the NT1 device, while in other parts of the world this falls under the service provider's responsibility. For this reason, some Cisco routers provide built-in NT1 functionality, indicated by a visible "U" next to the port number so the user can identify this capability quickly. The "U" notation comes from the ISDN reference point terminology that describes the locations in the ISDN infrastructure where a problem might occur, as illustrated in Figure 5.4 below:
Figure 5.4 – ISDN Reference Points
These reference points are important for troubleshooting or maintenance issues in an ISDN network. The ISDN switch is usually located at the service provider’s location. The different ISDN reference points are as follows:
- U reference point – between the ISDN switch and the NT1 device
- T reference point – between the NT2 device and the NT1 device
- S reference point – between terminals (TE1 or TA) and the NT2 device
- R reference point – between non-ISDN native devices and TAs
ISDN BRI connectivity contains two B (bearer) channels for carrying data and one D (delta) channel for signaling, abbreviated as 2B+D. Each bearer channel operates at 64 Kbps. Multilink PPP can be configured on top of these channels to give the user a combined bandwidth of 128 Kbps, which is considered very low by modern network requirements.
The delta (D) channel in ISDN BRI is allocated 16 Kbps for signaling and traffic control. There are also 48 Kbps overall for framing control and other overhead in the ISDN environment. Therefore, the total ISDN bandwidth for BRI is 192 Kbps (128 Kbps from the B channels + 16 Kbps from the D channel + 48 Kbps of overhead).
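As a rough illustration, bonding the two B channels with Multilink PPP might look like the following sketch on a Cisco router. The switch type, addresses, dialer string, and names are placeholders, not values from this chapter:

```
interface BRI0
 encapsulation ppp
 ip address 192.168.1.1 255.255.255.0
 ! Switch type is provider-specific; basic-ni is just an example
 isdn switch-type basic-ni
 dialer-group 1
 dialer map ip 192.168.1.2 name REMOTE 5551234
 ! Bring up the second B channel almost immediately (low load threshold)
 ppp multilink
 dialer load-threshold 1 either
!
dialer-list 1 protocol ip permit
```

With both B channels active, the bundle reaches the 128 Kbps figure discussed above.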
ISDN PRI has 23 B channels and one D channel in the U.S. and Japan. The bearer channels and the delta channel all run at 64 Kbps; including framing overhead, the total PRI bandwidth is 1.544 Mbps. In other parts of the world (e.g., Europe and Australia), the PRI connection contains 30 B channels and one D channel, for a total of 2.048 Mbps.
The technologies described above are called Time Division Multiplexing (TDM) technologies. TDM combines multiple channels over a single overall transmission medium and uses these different channels for voice, video, and data. Time division refers to splitting the connection into small windows of time allocated to the various communication channels.
In the PSTN, you need to be able to transmit multiple calls along the same transmission medium, so TDM is used to achieve this goal. TDM actually started in the days of the telegraph and later gained popularity with fax machines and other devices that use TDM technologies.
With leased lines (i.e., buying dedicated bandwidth), the circuits that are sold are measured in terms of bandwidth. A DS1 or T1 circuit in North America provides 24 time slots of 64 Kbps each plus 8 Kbps of framing overhead (24 x 64 Kbps + 8 Kbps = 1.544 Mbps, as mentioned earlier). For this reason, TDM terminology is tightly connected with the leased-line purchasing process.
As discussed before, Frame Relay is an NBMA technology, so address resolution issues must be dealt with, except in situations where point-to-point interfaces are used.
The local Layer 2 addresses in Frame Relay are called Data Link Connection Identifiers (DLCIs) and they are only locally significant. For example, in a hub-and-spoke environment, the hub device should have a unique DLCI to communicate to each of its spokes, as illustrated in Figure 5.5 below:
Figure 5.5 – Example of Frame Relay DLCI
Note: The DLCI number at the end of each link may or may not be identical. For ease of understanding, they are considered identical in the figure above.
The DLCI is the Frame Relay address, so this needs to be resolved to a Layer 3 IP address. Another fundamental Frame Relay component is the Local Management Interface (LMI). The service provider operates a DCE Frame Relay device (usually a switch) and the customer provides a DTE Frame Relay device (usually a router). The LMI is the language that permits these two devices to communicate. One of its duties is to report the status (health) information of the virtual circuit that makes up the Frame Relay communication. The LMI also provides the DLCI information. The LMI is enabled automatically when Frame Relay is initially enabled on a Cisco device interface.
When you inspect the Frame Relay Permanent Virtual Circuit (PVC) status on a Cisco device, you will see a status code defined by the LMI that will be one of the following:
- Active (everything is okay)
- Inactive (no problems on the local node but possible problems on the remote node)
- Deleted (problem in the service provider network)
The three flavors of LMI are as follows:
- Cisco
- ANSI (Annex D)
- Q.933a (Annex A)
Cisco routers are configured to try all three of these LMI types automatically (starting with the Cisco LMI) and use the one that matches whatever the service provider is using. This should not be much of a concern in the design phase.
One of the most important aspects that must be considered in the design phase is the address resolution methodology used. If you are utilizing multipoint interfaces in your design (i.e., interfaces that can terminate multiple Layer 2 circuits), you need to find a way to provide the Layer 3 to Layer 2 resolution. As discussed, two options can help you achieve this, as follows:
- Dynamically, utilizing Inverse-ARP
- Statically, via “frame-relay map” static configuration commands
Note: In order to verify that the Layer 3 to Layer 2 resolution has succeeded, use the “show frame-relay map” command.
On a multipoint interface, Inverse-ARP happens automatically. This functionality is enabled right after adding an IP address on an interface configured for Frame Relay. At that moment, requests are sent out of all the circuits assigned to that specific interface for any supported protocol the interface is running.
The request process can be disabled with the "no frame-relay inverse-arp" command, but you cannot design a network that stops responding to requests: Inverse-ARP replies cannot be disabled, so a Frame Relay speaker will always assist any neighbor that attempts to perform a Layer 3 to Layer 2 resolution via Frame Relay Inverse-ARP.
The Inverse-ARP behavior in the Frame Relay design assists automatically with broadcasts through the replicated unicast approach discussed before. Therefore, when using Inverse-ARP, broadcast support exists by default.
When connecting two routers to the Frame Relay cloud using physical interfaces, the specific interfaces are multipoint from a Frame Relay perspective, because a physical Frame Relay interface by default is a multipoint structure. Therefore, even though the connection between the two routers may appear to be point-to-point, it is a Frame Relay multipoint connection, as illustrated in Figure 5.6 below:
Figure 5.6 – Example of Frame Relay Multipoint Connection
Because they use multipoint interfaces, by default, the two devices will handle the Layer 3 to Layer 2 resolution dynamically using Inverse-ARP.
If you would like to design a solution not using Inverse-ARP, you can turn off the dynamic mapping behavior on each device and then configure static Frame Relay mappings. The static mapping command is formatted as follows:
frame-relay map protocol address dlci [broadcast]
The "protocol" is usually IP, the "address" is the remote Layer 3 address, and the "dlci" parameter represents the local DLCI. The optional "broadcast" keyword activates the replicated unicast behavior needed to support broadcast functionality. Static mappings must be configured when overriding or turning off the default dynamic Inverse-ARP behavior. This lets the administrator maintain full control over the Layer 3 to Layer 2 resolution process in a Frame Relay environment.
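For example, a hub router with two spokes might be configured with static mappings along these lines. The IP addresses and DLCI numbers below are purely illustrative:

```
interface Serial0/0
 encapsulation frame-relay
 ip address 10.1.1.1 255.255.255.0
 ! Turn off dynamic resolution so only the static maps are used
 no frame-relay inverse-arp
 ! Map each spoke's IP address to the local DLCI that reaches it
 frame-relay map ip 10.1.1.2 102 broadcast
 frame-relay map ip 10.1.1.3 103 broadcast
```

The "show frame-relay map" command can then be used to verify that each remote IP address resolves to the expected local DLCI.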
A common error on Cisco equipment is that once the physical interfaces come up and Inverse-ARP starts to operate, dynamic mappings to 0.0.0.0 appear. These mappings occur because of a clash between two features: Inverse-ARP and Cisco Auto Install. To discard these mappings, issue the "clear frame-relay inarp" command and then restart the device. If left in place, these mappings can break communication paths in the Frame Relay environment.
Point-to-point interfaces are the ideal choice when it comes to Layer 3 to Layer 2 resolution because the resolution process required on multipoint interfaces does not occur at all with this interface type. When configuring point-to-point Frame Relay, use point-to-point subinterfaces; these do not receive DLCI assignments from the LMI as in the multipoint situation, so the DLCI must be assigned manually to each subinterface with the "frame-relay interface-dlci" command.
Figure 5.6 above can be modified such that point-to-point subinterfaces are created between the two routers and then manually assigned DLCI ids in order for Frame Relay to function correctly, as illustrated in Figure 5.7 below:
Figure 5.7 – Example of Frame Relay Point-to-Point Connection
There is no concern about Layer 3 to Layer 2 resolution because each router has only one remote device to which it sends data, and it does so through the subinterface associated with its manually assigned DLCI.
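A point-to-point subinterface configuration of this kind might look as follows; the subinterface number, addressing, and DLCI are placeholders chosen for illustration:

```
interface Serial0/0
 encapsulation frame-relay
!
interface Serial0/0.102 point-to-point
 ip address 10.1.2.1 255.255.255.252
 ! The DLCI must be assigned manually on a point-to-point subinterface
 frame-relay interface-dlci 102
```

Because the subinterface terminates a single circuit, no map statements or Inverse-ARP are needed.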
Another option would be creating subinterfaces and declaring them as multipoint. These types of interfaces behave exactly like the physical multipoint interfaces, but you need to decide on the resolution method to be used: Inverse-ARP or static mappings. A combination of these can be used, for example, by implementing Inverse-ARP on one end of the connection and defining static maps on the other end.
The interface type and the selected Layer 3 to Layer 2 resolution method are only locally significant. This means there can be all kinds of variations in your Frame Relay design, such as the following:
| Local Interface | Remote Interface |
|---|---|
| Main interface | Main interface |
| Main interface | Multipoint subinterface |
| Main interface | Point-to-point subinterface |
| Multipoint subinterface | Multipoint subinterface |
| Multipoint subinterface | Point-to-point subinterface |
| Point-to-point subinterface | Point-to-point subinterface |
Partial-mesh designs and configurations will be the most challenging. This implies Layer 2 circuits are not provisioned between all endpoints involved in the Frame Relay environment.
Note: The hub-and-spoke topology is a special type of partial-mesh configuration.
In a hub-and-spoke environment, the spokes are not directly connected to each other, which means they cannot resolve each other via Inverse-ARP. In order to solve these issues, you can do any of the following:
- Provide additional static mappings
- Configure point-to-point subinterfaces
- Design the hub-and-spoke infrastructure so that the Layer 3 routing design can solve resolution problems (e.g., by using the OSPF point-to-multipoint network type)
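To illustrate the first option, a spoke can be given an extra static mapping that reaches the other spoke through the DLCI it already uses toward the hub. All addresses and DLCI numbers below are invented for this sketch:

```
interface Serial0/0
 encapsulation frame-relay
 ip address 10.1.1.2 255.255.255.0
 ! Map the hub (10.1.1.1) to the local DLCI toward the hub
 frame-relay map ip 10.1.1.1 201 broadcast
 ! Reach the other spoke (10.1.1.3) through that same DLCI;
 ! the hub forwards the traffic between the spokes
 frame-relay map ip 10.1.1.3 201
```

The same pattern is mirrored on the other spoke, so spoke-to-spoke traffic transits the hub.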
Frame Relay supports markings that can impact QoS. For example, the Frame Relay header contains a Discard Eligible (DE) bit. In Frame Relay QoS environments, frames can be marked with the DE bit to inform the service provider that those specific frames are less important and can be discarded in case of congestion. This behavior effectively prioritizes frames that do not have the DE bit set.
Other parameters that can be configured in the Frame Relay environment are Forward Explicit Congestion Notifications (FECNs) and Backwards Explicit Congestion Notifications (BECNs). The Frame Relay equipment, if configured to do so, can notify devices of congestion and slow down the sending rates, as illustrated in Figure 5.8 below:
Figure 5.8 – Frame Relay Congestion Notifications
In sum, in a chain of Frame Relay nodes that support FECNs and BECNs, a device that detects congestion can set the FECN bit on frames moving forward to inform downstream devices of the congestion and of the need for slower transmission rates. This works only if there is return traffic, however; when no traffic flows back toward the sender, empty frames carrying the BECN bit are sent backward to notify the return path about the congestion. Devices respond to FECNs and BECNs by slowing down their transmission rates in order to avoid further congestion.
Multiprotocol Label Switching
The Multiprotocol Label Switching (MPLS) approach leverages the intelligence of the IP routing infrastructure and the efficiency of Cisco Express Forwarding (CEF). MPLS functions by appending a label to any type of packet. The packet will then be forwarded through the network infrastructure based on this label’s value instead of any Layer 3 information. This ability to label a packet for efficient forwarding allows MPLS to work with a wide range of underlying technologies. By simply adding a label in the packet header, MPLS can be used in many Physical and Data Link Layer WAN implementations.
The MPLS label is positioned between the Layer 2 header and the Layer 3 header. In MPLS, overhead is added a single time, when the packet enters the service provider cloud. Inside the MPLS network, packet switching is performed much faster than in traditional Layer 3 networks because each hop only needs to swap the MPLS label instead of performing a full Layer 3 routing lookup.
MPLS-capable routers are also called Label Switched Routers (LSRs), and they come in the following two flavors:
- Edge LSR (PE routers)
- LSR (P routers)
PE routers are Provider Edge devices that ensure label distribution, forward packets based on labels, and are responsible for label insertion and removal. P routers are Provider routers, and they are responsible for label forwarding and efficient packet forwarding based on labels.
MPLS separates the control plane from the data plane. This leads to a great efficiency in how the LSR routers work. Resources that are constructed for efficient control plane operations include the routing protocol, the routing table, and the exchange of labels, and these are completely separated from resources that are designed only to forward traffic in the data plane as quickly as possible.
CEF contains a Forwarding Information Base (FIB), a copy of the routing table information kept in cache memory and used for fast forwarding. MPLS adds a Label Forwarding Information Base (LFIB), which is used for label-based traffic exchange.
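On a Cisco LSR, these structures can be inspected with standard show commands; this is only a sketch of typical verification steps, and the exact output varies by platform:

```
! Inspect the CEF FIB
show ip cef
! Inspect the labels exchanged via LDP
show mpls ldp bindings
! Inspect the LFIB used for label switching
show mpls forwarding-table
```

Comparing the FIB and the LFIB is a quick way to confirm that the control plane and data plane agree on a prefix.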
The term Forwarding Equivalence Class (FEC) describes a class of packets that receives the same forwarding treatment (e.g., traffic forwarded based on a specific QoS marking through the service provider cloud).
Figure 5.9 – MPLS Label Fields
The MPLS label has a length of 4 bytes and it consists of the following fields (as illustrated in Figure 5.9 above):
- 20-bit Label Value field
- 3-bit Experimental field (QoS marking)
- 1-bit Bottom of the Stack field (useful when multiple labels are used, it is set to 1 for the last label in the stack)
- 8-bit TTL field (to avoid looping)
You might need to use a stack of labels when dealing with MPLS VPNs. MPLS VPN is the most important application of MPLS; in fact, much of MPLS was developed to serve MPLS VPN technology. This concept is illustrated in Figure 5.10 below:
Figure 5.10 – Example of MPLS VPN
An example of an MPLS VPN application would be an ISP that offers MPLS VPN services. The PE routers connect to different customers, with the same customer having multiple sites, each connected to a different PE router. With the MPLS approach, two sites of the same customer receive transparent, secure communication capabilities based on the unique customer labels assigned. The ISP uses MPLS to carry the traffic between the PE routers, through the P devices.
Note: An important advantage of MPLS VPN technology is that its secure connectivity is assured without the customer having to run MPLS on any device. The customer only needs to run a standard routing protocol with the ISP because all the MPLS VPN logic is located in the ISP cloud.
When using MPLS VPNs, a stack of two labels is used: an inner label identifies the customer (VPN identification), while the outer label drives the forwarding through the ISP cloud (i.e., toward the egress router location).
Layer 3 MPLS VPN technology is a very powerful and flexible option that allows service providers to give customers the transparent WAN access connectivity they need. This is very scalable for the ISP because it is very easy for them to add customers and sites.
MPLS comes in the following two flavors:
- Frame Mode MPLS
- Cell Mode MPLS
Frame Mode MPLS is the most popular MPLS type; in this scenario, the label is placed between the Layer 2 header and the Layer 3 header (for this reason, MPLS is often considered a Layer 2.5 technology). Cell Mode MPLS is used in ATM networks, where fields in the ATM header serve as the label.
One important issue that must be solved with MPLS is determining the devices that will ensure the insertion and removal of labels. The creation of labels (i.e., label push) is performed on the Ingress Edge LSR and the label removal (i.e., label popping) is performed on the Egress Edge LSR. The LSRs in the interior of the MPLS topology are only responsible for label swapping (i.e., replacing the label with another label) in order to forward the traffic on a specific path.
The MPLS devices need a way to exchange the labels that will be utilized for making forwarding decisions, and this label exchange is executed using a dedicated protocol. The most popular of these protocols is the Label Distribution Protocol (LDP). LDP initially uses UDP and multicast to discover neighbors and set up the peering, and then TCP ensures reliable transmission of the label information.
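Enabling MPLS with LDP on a Cisco LSR is typically a short configuration, sketched below; the interface name and addressing are placeholders:

```
! CEF is a prerequisite for MPLS forwarding
ip cef
!
! Use LDP as the label exchange protocol
mpls label protocol ldp
!
interface GigabitEthernet0/0
 ip address 10.0.0.1 255.255.255.252
 ! Enable label switching on this interface
 mpls ip
```

Once "mpls ip" is enabled on both ends of a link, the routers discover each other and bring up an LDP session over TCP.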
A technology that improves MPLS efficiency is Penultimate Hop Popping (PHP), which allows the second-to-last LSR in the MPLS path to pop the label. This spares the egress router from performing both a label lookup and a Layer 3 lookup, adding efficiency to the overall operation of MPLS.
Route Distinguisher (RD) is a way in which the ISP can distinguish between the traffic of different customers. This allows different customers who are participating in the MPLS VPN to use the exact same IP address space. For example, you can have both customer A and customer B using the 10.10.10.0/24 range, with the traffic being differentiated between customers by RDs.
Devices can create their own virtual routing tables, called VPN Routing and Forwarding (VRF) tables, so a PE router can store each customer's specific data in a separate, isolated table, providing increased security.
Prefixes are carried through the MPLS cloud by relying on Multiprotocol BGP (MP-BGP). MP-BGP carries the VPNv4 prefixes (the prefix that results after the RD is prepended to the normal prefix). You can filter customers' access to each other's prefixes with import and export route targets.
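On a PE router, the RD, route targets, and VRF assignment described above come together in a configuration along these lines; the customer name, RD value, and addresses are invented for illustration:

```
ip vrf CUSTOMER_A
 ! The RD makes this customer's 10.10.10.0/24 unique as a VPNv4 prefix
 rd 65000:1
 ! Route targets control which VRFs import/export these routes
 route-target export 65000:1
 route-target import 65000:1
!
interface GigabitEthernet0/1
 ! Place the customer-facing interface into the VRF
 ip vrf forwarding CUSTOMER_A
 ip address 10.10.10.1 255.255.255.0
```

A second customer could then reuse the same 10.10.10.0/24 range in its own VRF under a different RD without any conflict.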
Other WAN Technologies
Synchronous Optical Networking and Synchronous Digital Hierarchy
Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH) are both circuit-based technologies that deliver very high-speed service over optical networks using a ring topology. This type of topology offers fault tolerance, redundancy, and the capability of being highly available. SONET functions using the following Layer 2 technologies:
- Packet over SONET (POS)
- Asynchronous Transfer Mode (ATM)
Speed measurements for SONET are offered in the form of Optical Carrier (OC) rates, where OC-n corresponds to n × 51.84 Mbps. Common rates include the following:
- OC-1: 51.84 Mbps
- OC-3: 155.52 Mbps
- OC-12: 622.08 Mbps
- OC-48: 2.49 Gbps
- OC-192: 9.95 Gbps
- OC-255: 13.21 Gbps
Digital Subscriber Line
Digital Subscriber Line (DSL) is a way in which high-speed Internet access is achieved over traditional copper PSTN wires. This is accomplished by taking advantage of the frequencies available on the copper voice lines that are not utilized in voice calls.
DSL is often generically referred to as xDSL because of the wide range of DSL variants, such as the following:
- ADSL (Asymmetric DSL)
- SDSL (Symmetric DSL)
- HDSL (High Data Rate DSL)
- VDSL (Very High Data Rate DSL)
- RADSL (Rate Adaptive DSL)
- IDSL (ISDN DSL)
The most popular form of DSL is ADSL, which is considered the residential DSL technology because, like many broadband technologies, its download rate differs from its upload rate (the download rate is typically higher). With ADSL, the customer is connected to a Digital Subscriber Line Access Multiplexer (DSLAM) located at the ISP. The DSLAM is a DSL concentrator device that aggregates connections from multiple users.
Note: One of the issues with ADSL is the limited distance a subscriber can be from a DSLAM (typically around 18,000 feet, or about 5.5 km, for standard ADSL).
Cable
Cable is another broadband WAN technology still used today. With cable technology, high-speed data is carried alongside the television signal within the coaxial cable. The customer accesses the cable network via a cable modem located on the premises, which connects to a Cable Modem Termination System (CMTS) located at the ISP. Cable modems use the Data Over Cable Service Interface Specifications (DOCSIS) standard to ensure interoperable operation over the cable WAN technology.
Point-to-Point Protocol over Ethernet (PPPoE) is another technology that can be used in conjunction with cable. It can be used between the cable modem and the endpoint devices in order to add security to the cable modem infrastructure. It allows the user to log on with a username and a password that must be authenticated before the cable service can be used. The credentials are carried across the Ethernet connection to the cable modem and beyond using PPPoE. PPPoE is even more common in DSL deployments, where it serves the same authentication purpose.
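On a Cisco router acting as the broadband client, PPPoE is configured with a dialer interface; a minimal sketch (the username, password, and interface names are placeholders):

```
! Physical interface facing the modem
interface GigabitEthernet0/0
 no ip address
 pppoe enable
 pppoe-client dial-pool-number 1
!
! Logical dialer interface carrying the PPP session and credentials
interface Dialer1
 mtu 1492
 ip address negotiated
 encapsulation ppp
 dialer pool 1
 ppp chap hostname user@example.com
 ppp chap password 0 SecretPassword
```

The reduced MTU of 1492 bytes accounts for the 8 bytes of PPPoE/PPP overhead added to each Ethernet frame.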
Another option for carrying data over wide areas is using wireless technologies that come in different flavors, such as the following:
- Mobile wireless technologies (e.g., 3G)
- Wireless LAN (802.11 b/g/n)
- Bridged wireless (e.g., line-of-sight connectivity between two close buildings)
Wireless networks will be further analyzed in Chapter 10.
Fiber optic cable was heavily installed as a WAN technology before other technologies emerged. Fiber that was laid but never lit is known as dark fiber; most of its expense went into the labor of physical installation. Although dark fiber provides high bandwidth, it is not used much on its own in modern networks.
Service providers usually implement SONET or Dense Wavelength Division Multiplexing (DWDM) networks over existing dark fiber infrastructure. This allows end-user enterprises to extend their Ethernet LANs over large distances. This concept of Ethernet over large distances is also known as Metro Ethernet, which led to the creation of Metropolitan Area Networks (MANs).
Dense Wavelength Division Multiplexing
DWDM technology allows the use of different wavelengths of light over optical fibers in order to add new bandwidth and services to existing channels of fiber. This is similar to the concept of coaxial cable technology, which carries both TV and data signals.
Long Reach Ethernet
Long Reach Ethernet (LRE) is a Cisco technology, closely related to VDSL, that supports 5 to 15 Mbps performance over telephone-grade Category 1/2/3 wiring at distances up to 1.5 km (about 5,000 feet). LRE is a MAN technology also known as Ethernet in the First Mile (EFM). This technology is not used much in modern infrastructure architectures.
WAN Design Methodologies
The enterprise edge design process must follow the PPDIOO (Prepare, Plan, Design, Implement, Operate, Optimize) process steps described previously. The designer should carefully analyze the following network requirements:
- The types of applications and their WAN requirements
- Traffic volume
- Traffic patterns (including possible points of congestion)
As part of the preparing and planning phases, you should analyze the existing network to investigate which technologies are currently being used for wide area networking and the issues regarding those technologies, and you should document the location of key hosts, servers, and network infrastructure devices.
The next phase is designing the particular topology that will be used in the enterprise edge module. You should analyze the existing technology and then choose the most appropriate topology; you should also project traffic usage and think about reliability, availability, and performance. You must also make sure to address the company’s constraints (i.e., financial or related to resources) and then build a solid implementation plan.
When designing the enterprise edge infrastructure, ensure that you have flexibility in mind at all times. An example of design flexibility is VoIP. Considering the strict requirements of this technology, make sure VoIP can function over the designed solution at any time, even if this is not an initial requirement from the customer. Flexibility in enterprise edge design consists of the ability to incorporate other technologies easily at any given time.
Other key design criteria when considering WAN design include the following:
- Response time
- Window size
- Data compression
Response times are of great importance to the WAN, as well as to its supported applications. Many modern applications will give an indication of the necessary response times. Again, VoIP is an excellent example. When a VoIP call traverses many network devices, you should know the response time required for proper voice communications (per ITU-T G.114, one-way latency should not exceed 150 ms). You can test response time using a feature on Cisco devices called IP SLA, as illustrated in Figure 5.11 below:
Figure 5.11 – Example of IP SLA
As an example of using IP SLA, consider a router (R1) to be the first hop in the traffic path and another device (R3) to be the last hop. You can configure IP SLA in an active configuration to generate synthetic traffic that flows over the intermediary devices and then measure the parameters needed. R1 is considered the IP SLA sender and R3 is considered the responder. The responder functionality on R3 assures that there are no false measurements, offering a much more accurate test.
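A minimal IOS sketch of this setup (addresses and the operation number are placeholders) uses a UDP jitter probe, which reports the delay and jitter figures relevant to VoIP:

```
! R1 - the IP SLA sender: UDP jitter probe toward R3, emulating G.711 voice
ip sla 10
 udp-jitter 10.1.3.3 16384 codec g711alaw
 frequency 60
ip sla schedule 10 life forever start-time now
!
! R3 - the responder, which timestamps probes for accurate measurements
ip sla responder
```

The results can then be inspected on R1 with `show ip sla statistics 10`.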
Note: IP SLA used to be called the Service Assurance Agent (SAA) or the Response Time Reporter (RTR).
The most important parameters measured with the IP SLA feature are delay and jitter. Delay represents the amount of time required for a packet to reach the destination and jitter is the variation in delay. These parameters are of great importance, especially in highly congested WAN environments.
Another important design parameter is overall available bandwidth (i.e., throughput). This measures the amount of data that can be sent in a particular timeframe through a specific WAN area.
Reliability is another aspect to consider. This gives information about the health of the WAN connection and its resources (i.e., whether the connection is up or down), as well as detailed information about how often the WAN functions as efficiently as possible.
Window size influences the amount of data that can be sent across the WAN in one “chunk”. TCP uses a sliding window concept: a sender transmits an amount of data, waits for an acknowledgement of receipt, and then increases the amount of data sent until it reaches the maximum window size. On a congested WAN link, every TCP sender in the network increases its rate until the interface starts dropping packets, causing all the senders to back off and shrink their windows. After the congestion clears, all the senders start increasing their rates again at the same time until a new congestion event occurs. This repeating process is called TCP global synchronization, and it wastes bandwidth during the periods in which all the hosts decrease their window sizes simultaneously.
Another key WAN factor is whether traffic can be compressed. If the data is already highly compressed (e.g., JPEG images), any additional compression mechanisms are inefficient.
WAN QoS Considerations
One of the key aspects that must be considered when designing WAN solutions is Quality of Service. There are many different categories of the DiffServ QoS approach, including the following:
- Congestion management
- Congestion avoidance
- Link efficiency mechanisms
QoS techniques are often used in WANs because of their bandwidth characteristics, which are usually low compared to LAN implementations.
Shaping and Policing
Shaping is not the same as policing. Shaping is the process that tries to control the way in which traffic is sent, by buffering excess packets. Policing, on the other hand, will drop or remark (i.e., penalize) packets that exceed a given rate.
Policing might be used when fast WAN access is available but its full rate is not needed; this prevents certain applications from consuming all the connection resources. Another scenario is offering applications with well-defined bandwidth requirements only as many resources as they need.
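As a hedged MQC sketch of the second scenario (the class name, DSCP value, rate, and interface are placeholders), a policer caps a bulk traffic class at 128 kbps:

```
class-map match-all BULK
 match dscp af11
!
policy-map LIMIT-BULK
 class BULK
  ! Drop anything above 128 kbps from this class
  police 128000 conform-action transmit exceed-action drop
!
interface Serial0/0
 service-policy output LIMIT-BULK
```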
Shaping is often used to prevent WAN congestion in situations involving asymmetric bandwidth. An example of this is a headquarters router that connects to a branch router that has a lower bandwidth connection. In this type of environment, you can set up shaping when the HQ router sends data so it does not overwhelm the branch office router.
Many times, the contract between an ISP and its customers specifies a Committed Information Rate (CIR) value. This represents the amount of bandwidth purchased from the ISP. Shaping can be used to ensure that data sent conforms to the specified CIR.
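A minimal shaping sketch, assuming a purchased CIR of 512 kbps (the rate and interface are placeholders):

```
policy-map SHAPE-TO-CIR
 class class-default
  ! Buffer and delay excess traffic so the sending rate conforms to the CIR
  shape average 512000
!
interface GigabitEthernet0/0
 service-policy output SHAPE-TO-CIR
```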
When comparing shaping and policing, note that shaping is executed only in the outbound direction, while policing can be performed both in the ingress and egress direction. Another key distinction is that policing will drop or remark the packet, while shaping queues its excess traffic. Because of this behavior, policing will use less buffering. With shaping, you have the advantage of supporting Frame Relay congestion indicators by responding to FECN and BECN messages.
In situations in which the WAN link is constantly congested, the link may need to be upgraded. However, when experiencing occasional congestion on a particular link, you can use QoS congestion management techniques. Congestion occurs for many different reasons in modern networks. Speed mismatches are one such reason for congestion in the WAN link.
The congestion management approach is called queuing. Applying queuing techniques means using methods other than the default First In First Out (FIFO) behavior. An interface consists of the following two queue areas (as illustrated in Figure 5.12 below):
- Hardware Queue (or Transmit Ring – TX Ring)
- Software Queue
Figure 5.12 – Interface Queue Types
The hardware queue on the interface always uses the FIFO method for packet treatment. This mode of operation ensures that the first packet in the hardware queue is the first packet that will leave the interface. The only TX Ring parameter that can be modified on most Cisco devices is the queue length.
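On platforms that support it, the hardware queue depth is tuned per interface; the value below is an arbitrary example, and the command’s availability varies by platform:

```
interface Serial0/0
 ! Shrink the hardware FIFO so the software queue takes control sooner
 tx-ring-limit 3
```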
The software queue is where most of the congestion management manipulations are carried out. The software queue is used to order packets before they use the hardware queue, and they can be configured with different queuing strategies.
Congestion might occur because of the high-speed LAN connections that aggregate into the lower-speed WAN connections. Aggregation refers to being able to support the cumulative effect of all the users who want to use the connection.
Many different approaches (i.e., queuing strategies) can be used in congestion management, such as the following:
- FIFO (First In First Out)
- PQ (Priority Queuing)
- RR (Round Robin)
- WRR (Weighted Round Robin)
- DRR (Deficit Round Robin)
- MDRR (Modified Deficit Round Robin)
- SRR (Shaped Round Robin)
- CQ (Custom Queuing)
- FQ (Fair Queuing)
- WFQ (Weighted Fair Queuing)
- CBWFQ (Class Based Weighted Fair Queuing)
- LLQ (Low Latency Queuing)
Note: All of the techniques mentioned above are used on the interface’s software queue. The hardware queue always uses FIFO.
FIFO is a technique used in the hardware queue and is the least complex method of queuing. It operates by transmitting packets in the order in which they are received. This is also the default queuing mechanism on software queues for high-speed Cisco interfaces. If you have a sufficient budget to over-provision the congested links, you can use FIFO on all your interfaces (both hardware and software queues). However, in most situations, this is not the case, so you will need to use some kind of advanced queuing technique, such as WFQ, CBWFQ, or LLQ. These are the most modern queuing strategies that will enable you to ensure that important packets are getting priority during times of congestion.
FIFO used in the software queue will not make a determination on packet priorities that are usually signaled using QoS markings. If you rely on FIFO and experience congestion, traffic could be affected by delay or jitter, and important traffic might be starved and might not reach its destination.
WFQ is the Cisco default technique on slow-speed interfaces (E1 speed, 2.048 Mbps, and below) because it is considered more efficient than FIFO in this case. WFQ functions by dynamically sorting the traffic into flows and then dedicating a queue to each flow while trying to allocate the bandwidth fairly. It does this by inspecting the QoS markings and giving precedence to higher-priority traffic.
WFQ is not the best solution in every scenario because it does not provide enough control on the configuration (i.e., it does everything automatically), but it is far better than the FIFO approach because interactive flows that generally use small packets (e.g., VoIP) get prioritized to the front of the software queue. This ensures that high volume talkers do not use all the interface bandwidth. WFQ’s fairness component also ensures that high priority interactive conversations are not starved by high volume traffic flows. This concept is illustrated in Figure 5.13 below:
Figure 5.13 – Weighted Fair Queuing Logic
The different WFQ flows are placed in different queues before they hit a WFQ scheduler, which will allow them to pass to the hardware queue based on defined logic. If one queue fills, the packets will be dropped, but this will also be based on a WFQ approach (lower priority packets are dropped first), as opposed to the FIFO approach of tail dropping.
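Enabling WFQ on an interface requires a single command (it is already the default on slow serial interfaces):

```
interface Serial0/0
 fair-queue
```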
Because WFQ lacks a certain level of control, another congestion management technology was created called Custom Queuing (CQ). Even though CQ is a legacy technology, it is still implemented in some environments. CQ is similar to WFQ but it operates by manually defining 16 static queues. The network designer can assign a byte count for each queue (i.e., the number of bytes that are to be sent from each queue). Queue number 0 is reserved for the system to avoid starvation on key router messages. CQ allows for manual allocation of the number of bytes or packets for each queue.
Even though CQ provides flexible congestion management, it does not work well with VoIP implementations because of the round-robin nature of Custom Queuing. Consider an example with four queues which are allocated a different number of packets (Q1: 10 packets, Q2: 20 packets, Q3: 50 packets, and Q4: 100 packets) over a time interval. Even though Queue 4 has priority, the interface is still using a round-robin approach (Q4-Q3-Q2-Q1-Q4…and so on). This is not appropriate for VoIP scenarios because voice traffic needs strict priority to allow for a constant traffic flow that will minimize jitter. This is why another legacy technology was invented called Priority Queuing (PQ).
PQ places packets into the following four priority queues:
- High
- Medium
- Normal
- Low
Usually, VoIP traffic is placed in the high-priority queue in order to maintain absolute priority. This can even lead to the starvation of other queues. For this reason, PQ is not recommended for use in modern networks.
If you do not use VoIP in your network, the most recommended congestion management technique is CBWFQ. Using CBWFQ defines the amount of bandwidth that the various forms of traffic will receive. Minimum bandwidth reservations are defined for different classes of traffic. This concept is illustrated in Figure 5.14 below:
Figure 5.14 – Class Based Weighted Fair Queuing Logic
CBWFQ logic is based on a WFQ scheduler that services queues defined for different classes of traffic. Traffic that does not match any manually defined class automatically falls into the “class-default” queue. Minimum bandwidth guarantees can be assigned to all traffic classes, so CBWFQ offers powerful control over exactly how much bandwidth each classification receives. Within each individual queue, packets are handled FIFO, so you must be careful not to combine too many forms of traffic inside a single class.
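A hedged CBWFQ sketch (the class names, DSCP values, and bandwidth figures are placeholders):

```
class-map match-all CRITICAL-DATA
 match dscp af31
class-map match-all BULK-DATA
 match dscp af11
!
policy-map CBWFQ-WAN
 class CRITICAL-DATA
  ! Minimum guarantee of 512 kbps during congestion
  bandwidth 512
 class BULK-DATA
  bandwidth 256
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output CBWFQ-WAN
```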
Considering CBWFQ is not efficient when using VoIP, another QoS technique was developed called Low Latency Queuing (LLQ), which is illustrated below in Figure 5.15. This adds a priority queue (usually for voice traffic) to the CBWFQ system, so LLQ is often referred to as an extension of CBWFQ.
Figure 5.15 – Low Latency Queuing Logic
Adding a priority queue to CBWFQ does not lead to starvation because the priority queue is policed: the bandwidth guaranteed for voice cannot exceed a configured value. Voice traffic gets its own priority treatment, while the remaining traffic classes rely on WFQ according to their bandwidth reservation values.
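Converting a CBWFQ policy to LLQ only requires the priority keyword on the voice class; a hedged sketch (class names and rates are placeholders):

```
class-map match-all VOICE
 match dscp ef
class-map match-all CRITICAL-DATA
 match dscp af31
!
policy-map LLQ-WAN
 class VOICE
  ! Strict priority queue, policed to 256 kbps so it cannot starve other classes
  priority 256
 class CRITICAL-DATA
  bandwidth 512
 class class-default
  fair-queue
```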
Congestion avoidance is another category of Differentiated Services QoS often deployed in WANs. When both the hardware and the software queues fill up, packets will tail drop at the end of the queue, which can lead to voice traffic starvation and/or to the TCP global synchronization process described earlier. Using congestion avoidance techniques helps to guard against global synchronization problems.
The most popular congestion avoidance mechanism is called Random Early Detection (RED). Cisco’s implementation is called Weighted Random Early Detection (WRED). This QoS tool tries to prevent congestion from occurring by randomly dropping unimportant traffic before the queue gets full. As the queue fills up, more and more unimportant packets will be dropped randomly.
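WRED is enabled per class in an MQC policy; a minimal sketch (the bandwidth value is a placeholder):

```
policy-map WRED-WAN
 class class-default
  bandwidth 1024
  ! Drop packets randomly, and earlier for lower DSCP values, as the queue fills
  random-detect dscp-based
```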
Link Efficiency Mechanisms
Link efficiency mechanisms are composed of the following two categories:
- Compression
- Link Fragmentation and Interleaving (LFI)
Compression involves reducing the size of certain packets in order to increase the available bandwidth and decrease delay. The following types of compression exist:
- TCP header compression (compresses the IP and TCP headers, reducing the overhead from 40 bytes to 3 to 5 bytes)
- RTP header compression (compresses the IP, UDP, and RTP headers of voice packets, reducing the overhead down to 2 to 4 bytes)
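Both compression types can be enabled per interface in IOS; note that they must be configured on both ends of the link (the interface name is a placeholder):

```
interface Serial0/0
 ip tcp header-compression
 ip rtp header-compression
```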
The three different flavors of LFI used today are as follows:
- Multilink PPP with interleaving (used in PPP environments)
- FRF.12 (used with Frame Relay data connections)
- FRF.11 Annex C (used with Voice over Frame Relay – VoFR)
LFI techniques are efficient on slow links where certain problems might appear, even when applying congestion management features. These problems are generated by big data packets that arrive at the interface before other smaller important packets. If a big packet enters the FIFO TX Ring before a small VoIP packet arrives at the software queue, the VoIP packet will be stuck behind the data packet, where it may wait a long time before its transmission is finished. This concept is illustrated in Figure 5.16 below:
Figure 5.16 – Link Fragmentation and Interleaving
In order to solve this problem, LFI splits the large data packet into smaller pieces (fragments) so the voice packets can be interleaved between them. In this way, the voice packets do not have to wait behind a large packet until it is completely transmitted.
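A hedged sketch of Multilink PPP with interleaving (the addresses and the 10 ms maximum fragment delay are placeholders; the fragment-delay keyword spelling varies across IOS versions):

```
interface Multilink1
 ip address 10.1.2.1 255.255.255.252
 ppp multilink
 ! Fragment large packets so no fragment delays the link more than ~10 ms
 ppp multilink fragment delay 10
 ! Allow small (e.g., voice) packets to be sent between the fragments
 ppp multilink interleave
 ppp multilink group 1
!
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```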
Other Enterprise Edge Components
Remote Access Design
When designing the remote access block, you must ensure that the network users have transparent access to the network from wherever they are, just as when they are connected to the actual network. The users must be able to reach the resources they are authorized to use as they would from the enterprise campus.
In order to provide these services, the connection requirements must be analyzed carefully in order to ensure that they are fulfilled. Typical requirements include the following:
- VoIP support
- VPN support
- High-volume traffic or low-volume traffic
- Permanent connection (needed or not?)
- Type of flows
VPN Network Design
Even though the VPN concept involves security most of the time, unsecured VPNs also exist. Frame Relay is one example because it provides private communications between two locations but it might not have any security features on top of it. Whether you add security to the VPN connection depends on the specific requirements for that connection.
VPN troubleshooting is difficult to manage because of the lack of visibility in the provider infrastructure. The service provider is usually seen as a cloud that aggregates all the network locations’ connections. When performing VPN troubleshooting, you should first make sure the problem does not reside on your devices, and only then should you contact your ISP.
Types of VPN technologies include the following:
- Site-to-site VPNs or Intranet VPNs, for example, Overlay VPNs (such as Frame Relay) or Peer-to-Peer VPNs (such as MPLS). These are used to connect different locations over the public infrastructure. When using peer-to-peer infrastructure, you can communicate seamlessly between sites without worrying about IP addressing overlap.
- Remote Access VPNs, for example, Virtual Private Dial-up Network (VPDN), which is a dial-up approach for the VPN that is usually accomplished with security in mind.
- Extranet VPNs, to connect to business partners or customer networks.
With VPNs, traffic is often tunneled in order to send it over a shared infrastructure. The Layer 3 tunneling methodology is called Generic Routing Encapsulation (GRE). GRE tunnels traffic but does not provide security. In order to tunnel traffic securely, you can use a technology called IP Security (IPSec). IPSec support is a mandatory component of IPv6 implementations, but it is not a requirement for IPv4. IPSec is often used in conjunction with AAA services, which allow tracking of user activity.
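A minimal GRE tunnel sketch (all addresses and interface names are placeholders); IPSec protection could then be layered on top of the tunnel with an IPSec profile:

```
interface Tunnel0
 ip address 172.16.1.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.2
```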
The main benefits of VPNs include the following:
- Scalability (you can continuously add more sites to the VPN)
- Flexibility (you can use very flexible technologies, such as MPLS)
- Cost (you can tunnel traffic through the Internet without much expense)
WAN Backup Design
WAN connectivity can achieve backup through the following approaches:
- Dial-up backup activated when the primary link fails
- Secondary WAN link used for backup and/or load balancing
- Shadow VPN, used when the ISP establishes a second permanent virtual circuit (PVC) but the user is only charged for its usage; this can be useful when the main PVC fails, or in situations where more bandwidth is needed (bandwidth overflow)
Enterprise Branch Module Design
Branch modules are sized based on the number of users they need to accommodate, for example:
- Enterprise teleworker (1 user)
- Single-tier (tens of users)
- Dual-tier (hundreds of users)
- Multi-tier (thousands of users)
As the number of users in the branch modules grows, additional layers might be needed. Branch offices can even have a full-layer architecture (i.e., Access, Distribution, and Core Layers), as in the enterprise campus module.
Remote office locations, such as branch offices or the homes of teleworkers, connect to the enterprise campus via the enterprise edge and the enterprise WAN. When selecting an appropriate WAN technology to extend to these remote locations, design considerations include ownership of the link, reliability of the link, and a backup link if the primary link were to fail.
In the Cisco enterprise architecture model, the enterprise edge module allows the enterprise campus module to connect to remote offices using a variety of WAN, Internet-access, and remote-access technologies (e.g., secure virtual private network access). A WAN spans a relatively broad geographical area in which a wide variety of connectivity options exist.
The primary goals of WAN design include the following:
- The WAN must achieve the goals, meet the characteristics, and support the policies of the customer.
- The WAN must use a technology to meet present requirements, in addition to requirements for the near future.
- The expense of the WAN (one-time and recurring expenses) should not exceed customer-specified budgetary constraints.
When designing WAN solutions, you should consider the characteristics of the following modern WAN technologies:
- Time-Division Multiplexing (TDM): A TDM circuit is a dedicated point-to-point connection that is constantly connected. T1 and E1 circuits are examples of TDM circuits.
- Integrated Services Digital Network (ISDN): ISDN uses digital phone connections to support the simultaneous transmission of voice and data. ISDN is considered a circuit-switched technology.
- Frame Relay: Frame Relay is considered a packet-switched technology, which uses the concept of virtual circuits to create, potentially, multiple logical connections using a single physical connection.
- Multi-Protocol Label Switching (MPLS): MPLS is considered a label-switching technology in which packets are forwarded based on a label carried in a 32-bit MPLS header (the label value itself is 20 bits), as opposed to an IP address.
- Metro Ethernet: Metro Ethernet uses Ethernet technology to provide high-speed yet cost-effective links for some Metropolitan Area Networks (MANs) and WANs.
- Digital Subscriber Line (DSL): DSL provides high-bandwidth links over existing phone lines. A variety of DSL implementations exist. The most popular type of DSL found in homes is ADSL, which allows home users to use their phone line for both high-speed data connectivity and traditional analog telephone access simultaneously.
- Cable: Cable technology leverages existing coaxial cable, used for delivery of television signals, to deliver high-speed data access to the WAN simultaneously.
- Wireless: Wireless technologies use radio waves to connect devices, such as cell phones and computers. An example of a wireless application is wireless bridges, which can connect two buildings that have a line-of-sight path between them.
- Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH): SONET and SDH both use TDM technology to provide services over an optical network. Because of the optical transport used by these technologies, relatively high-bandwidth solutions are available.
- Dense Wavelength Division Multiplexing (DWDM): DWDM increases the bandwidth capacity of an optical cable by sending multiple traffic flows over the same fiber, with each flow using a different wavelength.
Enterprise edge design uses the PPDIOO approach discussed previously. Specifically, the WAN designer should do the following:
- Determine network requirements
- Evaluate existing network technology
- Design the network topology
When designing networks to traverse the WAN, a primary design consideration is making the most efficient use of the relatively limited WAN bandwidth. Cisco and other vendors provide a variety of QoS mechanisms that can help in this regard, including the following:
- Compression: By making the packet smaller, it requires less bandwidth for transmission across a WAN. Therefore, compressing traffic is much like adding WAN bandwidth.
- Link aggregation: Cisco routers support the bonding of physical links into a virtual link.
- Window size: Throughput can be improved by increasing the window size (i.e., sending more TCP segments before expecting an acknowledgment).
- Queuing: When a router is receiving traffic (e.g., from a LAN interface) faster than it can transmit that traffic (e.g., out of a WAN interface), the router delays the excess traffic in a buffer called a queue. To prevent bandwidth-intense applications from consuming too much of the limited WAN bandwidth, various queuing technologies can place different types of traffic into different queues, based on the traffic priority. Then, different amounts of bandwidth can be given to the different queues.
- Traffic shaping and policing: To prevent some types of traffic (e.g., music downloads from the Internet) from consuming too much WAN bandwidth, a traffic conditioner called policing can be used to set a “speed limit” on those specific traffic types and drop any traffic exceeding that limit. Similarly, to prevent a WAN link from becoming oversubscribed (e.g., oversubscribing a remote office’s 128 kbps link when receiving traffic from the headquarters, which is transmitting at a speed of 768 kbps), another traffic conditioner called shaping can be used to prevent traffic from exceeding a specified bandwidth. With shaping, compared to policing, excessive traffic is delayed and transmitted when bandwidth becomes available, instead of being dropped. Unlike shaping, policing mechanisms can also remark traffic, giving lower-priority QoS markings to traffic exceeding a bandwidth limit.
When considering design elements for the enterprise WAN, the following design categories exist:
- Traditional WAN design:
- Leased lines
- Circuit switched
- Packet/cell switched
- Remote-access network design
- Virtual private network (VPN) design
- WAN backup design:
- Dial-up backup routing
- Redundant WAN link
- Shadow PVC
- IPSec tunnel