Modular Network Solutions
This chapter introduces the modular network models provided by Cisco that assist in constructing the Intelligent Information Network (IIN), as presented in Chapter 2.
Two network models will be analyzed in this chapter. The first is a classic model that Cisco has taught for a long time, called the Cisco hierarchical network model. The second is an expanded and improved version of the basic model, called the Cisco enterprise architecture model.
Cisco Hierarchical Network Model
The most important idea concerning the Cisco hierarchical network model is the step-by-step construction of the network, which implements one module at a time starting with the foundation. The implementation of each module can be supervised by the network architect, but the details are covered by specialized teams (e.g., routing, security, voice, and so on). This modular approach is the key to simplifying the network.
The main advantages of the Cisco hierarchical network model are as follows:
- Easy to understand and implement
- Cost savings
- Easily modified
- Easy network growth
- Facilitates summarization
- Fault isolation
This model was created in order to make the construction of the IIN easier to understand. Cisco has always tried to make efficient and cost-effective networks with a modular structure so they could easily be divided into building blocks. The modular network design facilitates modifications in certain modules, after their implementation, and makes it easy to track faults in the network.
A special feature promoted by the hierarchical network model is summarization. This facilitates smaller routing tables and smaller convergence domains, as well as translates into many advantages, such as summarizing routes from an OSPF area as they enter the backbone, or having a more stable network by not advertising specific network changes to other areas or domains. For example, a network failure or modification in an OSPF area means a specific prefix will not be advertised within that area, but this does not impact the rest of the network because that prefix is part of a larger, summarized network whose state does not change. This behavior results in efficiency in network functionality and allows for optimal network design.
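To illustrate summarization, the following is a minimal sketch of area-range summarization on a hypothetical OSPF Area Border Router (the process ID, area number, and prefixes are examples only, not a prescribed design):

```
router ospf 1
 ! Advertise a single summary for all of Area 10's subnets into the backbone
 area 10 range 172.16.0.0 255.255.0.0
```

With a configuration along these lines, a flapping /24 inside Area 10 does not trigger updates in other areas, because only the stable 172.16.0.0/16 summary is advertised toward the backbone.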
The Cisco hierarchical network model is defined by the following three layers (as illustrated in Figure 3.1 below):
- The Core (backbone) Layer
- The Distribution Layer
- The Access Layer
Figure 3.1 – Cisco Hierarchical Network Model Layers
These three layers might sometimes collapse in order to simplify the network, especially in small networks. For example, a network might have only two layers, an Access Layer and a collapsed Distribution and Core Layer. Although the Distribution and Core Layers may be considered united in this scenario, there is a clear difference between the functionalities of the Distribution Layer and the functionalities of the Core Layer within the compressed layer.
The Core Layer describes the backbone of the network, and its main purpose is to move data as fast as possible through core devices. This is where all the data flows between network users or company departments. The design purpose of the Core Layer, which contains Layer 2 and Layer 3 switches or high-speed routers, is to provide high bandwidth with low overhead. This is often referred to as wire speed. For example, if the core devices have 10 Gbps interfaces, they should send data at 10 Gbps on those interfaces, utilizing 100% of the interfaces’ capacity.
However, you cannot connect to all necessary resources through the local network because some resources are located remotely. A legacy network concept, the 80/20 rule, states that 80% of the network resources are located locally and 20% of the resources are located remotely. The Cisco hierarchical network model actually inverts this rule, so only 20% of the network resources are local while 80% are remote. The Distribution Layer allows access to remote resources located in other networks (different locations) or on the Internet, as fast as possible, usually using multilayer (Layer 3) switches. Because of this functionality, the Distribution Layer is a critical component in network design.
The Access Layer connects the users to the network using Layer 2 and Layer 3 switches. The Access Layer is often called the desktop or workstation layer because it connects the users’ stations to the network infrastructure.
The Cisco hierarchical network model can be mapped to the OSI model, as shown below in Figure 3.2:
Figure 3.2 – Cisco Hierarchical Network Model Mapped to the OSI Model
Each layer of the hierarchical network model will involve physical and data link processes, but the higher OSI layers used in the Access, Distribution, and Core Layers might be different. The Core Layer will use mainly transport and network services, but there is some overlap between the Core and Distribution Layers in this regard. The Distribution Layer also handles transport and network services, but with more emphasis on the Network Layer (Layer 3). The Access Layer is involved with the upper protocols, the Transport, Session, Presentation, and Application Layers, since the speed decreases going from the Core Layer to the Access Layer due to added layers of protocols, applications, and services.
In summary, users connect to the network through the Access Layer, users access remote resources through the Distribution Layer, and data moves through the network as fast as possible through the Core Layer. Each of these layers will be covered in more detail in the following sections.
The Core Layer
The Core Layer of the network refers to the following aspects:
- High speed
- Reliability and availability
- Fault tolerance
- Load balancing
- Manageability and scalability
- No filters, packet handling, or other overhead
- Limited, consistent diameter
- Quality of Service
Speed refers to the ports operating at true wire speed on the core devices. Reliability, another important consideration, describes how consistently the equipment operates within normal parameters. Redundancy and fault tolerance influence the recovery time after a network device or service stops functioning. With advanced redundancy technologies, the network might be back up and running seamlessly (i.e., transparently) without impacting users or their work.
The Core Layer tends to be very manageable when using tools such as CiscoWorks. Another feature of the core operations is that it does not influence the traffic in any way, thus obtaining a small overhead. An example of this is not configuring complex security features and filtering on core devices, because those filters were already configured in the Access and Distribution Layers. Overhead should be avoided to allow traffic to move through core devices as quickly as possible.
Another important aspect to monitor carefully is the growth of the Core Layer. The network backbone usually consists of a limited and consistent diameter, and a solid network design should support Distribution and Access Layer upgrades without impacting the Core Layer.
A very powerful feature used in the Core Layer is called Quality of Service (QoS); however, the QoS parameters implemented in the Core Layer are usually different from the QoS parameters used at the network edge. QoS is a complex and sophisticated topic that will be covered in detail later in this manual. The major QoS techniques found in the Core Layer are as follows:
- Best effort: This means over-provisioning the network, for example, by adding more bandwidth than is currently necessary. This is the simplest QoS approach, but it is also the most expensive.
- Integrated Services approach (IntServ): This is based on the resource reservation concept on the network devices. For example, when someone uses a VoIP telephone, all the devices in the path to the destination can communicate with each other and reserve the amount of bandwidth necessary for that call. In order to achieve this, IntServ uses protocols such as Resource Reservation Protocol (RSVP).
- Differentiated Services approach (DiffServ): This is the most modern and popular QoS approach. DiffServ classifies and marks traffic, and treats it a certain way on a hop-by-hop basis. The DiffServ approach comprises the following disciplines:
- Congestion management
- Congestion avoidance
- Traffic shaping and policing
- Traffic compression
- Link Fragmentation and Interleaving (LFI)
Usually, the classification and marking is carried out at the Access Layer, and other features, such as congestion avoidance, are configured at the Core Layer. When using DiffServ, each layer is assigned specific QoS mechanisms that integrate and work with QoS mechanisms at other layers.
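As an illustration of this division of labor, below is a hedged Modular QoS CLI (MQC) sketch: classification and marking applied at an access port, and queuing plus congestion avoidance applied on a core uplink. The class and policy names (VOICE, ACCESS-MARK, CORE-QUEUE) and the bandwidth percentage are hypothetical examples, not recommended values:

```
! Access Layer: classify voice traffic and mark it with DSCP EF
class-map match-any VOICE
 match protocol rtp
policy-map ACCESS-MARK
 class VOICE
  set dscp ef
!
! Core Layer: give EF traffic a priority queue, apply WRED to the rest
class-map match-any VOICE-EF
 match dscp ef
policy-map CORE-QUEUE
 class VOICE-EF
  priority percent 20
 class class-default
  fair-queue
  random-detect dscp-based
```

The access-edge policy would be applied inbound on user-facing ports, while the core policy would be applied outbound on high-speed uplinks, so that core devices only act on markings set earlier.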
Note: The QoS features presented here are not specific to the Core Layer. Rather, they are generic concepts that are applied to all three layers and are presented in this section only as a reference.
The Distribution Layer
Unlike the Core Layer, the Distribution Layer usually includes features that add some kind of overhead, because this is where policies are implemented in a network (e.g., who can access what resources from outside of the network). This is also where many security features are implemented due to the growing number of attacks from the Internet.
The Distribution Layer shares some features with the Core Layer, such as QoS techniques, redundancy, and load balancing. Beyond these aspects, the Distribution Layer performs the following unique set of functions:
- Access control to core devices
- Redundancy to access devices
- Routing protocol boundaries
- Route summarization
- Policy routing
- Separate multicast and broadcast domains
- Routing between VLANs
- Media translation and boundaries (e.g., FastEthernet to GigabitEthernet)
The value of this separation of functions is that it is easier to understand, troubleshoot, and manage the network when different jobs are occurring in these hierarchical network layers.
The Access Layer
In a campus design, the Access Layer is characterized by shared LANs, switched LANs, and virtual LANs (VLANs) to the workstations and servers.
The Access Layer should provide high availability and flexible security features, such as port security, ARP inspection, and VLAN Access Control Lists (VACLs). These security features will be covered in detail in Chapter 8.
Other functions of the Access Layer are implementing authentication (using a TACACS+ or RADIUS server), broadcast control, and defining QoS trust boundaries (i.e., which devices are trusted in the QoS settings, or how far from the actual desktop stations classification and marking takes place). In addition, the Access Layer implements rate limiting techniques to mitigate Denial of Service (DoS) attacks on the network.
Other features implemented in the Access Layer include Spanning Tree Protocol (to ensure that the Layer 2 logical topology is loop-free), PoE (Power over Ethernet), and voice VLAN settings, if the network functionality requires them. An example would be automatically placing the voice traffic in a voice VLAN when plugging an IP phone into a Cisco switch. In addition, the IP phones need power in order to function, and this power can be taken off the Ethernet cable when using PoE-capable Cisco Catalyst switches.
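A hedged example of a typical Access Layer port configuration combining several of the features above (the interface, VLAN numbers, and port-security limit are illustrative only and would be adapted to the actual design):

```
interface FastEthernet0/5
 switchport mode access
 switchport access vlan 10
 ! Place IP phone traffic in a dedicated voice VLAN
 switchport voice vlan 110
 ! Port security: limit the port to the phone and the PC behind it
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation restrict
 ! Edge port optimization for Spanning Tree Protocol
 spanning-tree portfast
 ! Supply PoE to the attached IP phone (on PoE-capable switches)
 power inline auto
```

With a port configured this way, an attached Cisco IP phone is powered over the Ethernet cable and its traffic is placed in VLAN 110, while the workstation connected behind the phone remains in data VLAN 10.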
From a WAN standpoint, the Access Layer can provide access to PSTN, Frame Relay, or ATM networks, and ISDN, DSL, or cable services.
Cisco Enterprise Architecture Model
Cisco’s legacy three-layer hierarchical network model must be mastered in order to understand the changes that led to the new and improved enterprise architecture model.
The enterprise architecture model leverages the hierarchical model by adding new modules, submodules, and components (i.e., building blocks) to make it more modular. The key aspect regarding this model is modularity, which offers a convenient building block approach to building an IIN. The modules in the enterprise architecture model include the following:
- Enterprise campus (LAN)
- Enterprise edge
- Enterprise WAN
- Enterprise data center
- Enterprise branch
- Enterprise teleworker
- Service provider edge
The purpose of using this model is to build modules that coordinate with the typical components of the IIN. The three-layer hierarchical model is often fully included in the enterprise campus module. This can be seen in Figure 3.3 below.
Figure 3.3 – Cisco Enterprise Architecture Model Components
In addition to the Access, Distribution, and Core Layers found in the enterprise campus, there is also a server farm block. Furthermore, the edge distribution submodule connects the enterprise campus module to other building blocks in the enterprise edge module (e.g., WAN, e-commerce, Internet connectivity, and remote access). Different networks will require different modules based on their particular role and function. The service provider edge module provides external connectivity services to the enterprise edge module; however, this module is not included in the diagram above because it is not considered part of the organization’s network.
The following sections will briefly describe each module and its components, while the remainder of this manual will cover all the technologies used in each of the building blocks.
Enterprise Campus Module
The enterprise campus module is often referred to as the LAN. It consists of the campus Core, Distribution, and Access Layers and the server farm, and it connects to the edge distribution submodule. As in the hierarchical model, the Core and Distribution Layers sometimes collapse into a single layer. An important component of the enterprise campus module is the edge distribution submodule, which provides connectivity to the enterprise edge and the service provider edge modules.
Figure 3.4 – Enterprise Campus Module Components
Modern networks contain a server farm or a data center block (depending on the size and complexity of the network environment) inside the enterprise campus module, as depicted in Figure 3.4 above.
The Core, Distribution, and Access Layer blocks form the enterprise campus module’s infrastructure. The Access Layer block consists of workstations, IP telephones, and Layer 2 access switches. The Distribution Layer block is the central location that connects all the components and includes a high degree of routing and switching operations, access lists, QoS techniques and filtering, redundancy, load balancing, and multiple equal cost paths. The Core Layer block provides high-speed connectivity and fast convergence between other components (e.g., server farm block, edge distribution submodule, etc.) using high-end Layer 3 switches. The Core Layer block might also have some QoS or security techniques implemented, but caution must be taken so that these features do not add significant overhead. Remember, traffic must be switched at wire speed in the Core Layer.
The enterprise campus module might also include the network management submodule that interfaces with all the other modules and components of the network design. This is where you will find SNMP management, logging, monitoring, and configuration management using applications and tools such as CiscoWorks or TACACS+/RADIUS services (part of a larger AAA solution).
As mentioned, the server farm or data center block resides in the enterprise campus module. A recent technology implemented in the server farm is the Storage Area Network (SAN). The server farm block includes the following components:
- Database services (Oracle, SQL)
- E-mail or other collaboration systems
- Application services
- File services
- DNS services
Generally, the services mentioned above reside on servers that are attached to different access switches for full redundancy, load balancing, and load sharing. They might also be cross-linked with the enterprise campus backbone switches in order to achieve high availability and high reliability.
The edge distribution submodule has redundant connections to the Core Layer block and to other enterprise edge components that will be presented in the following sections. This submodule is composed of one or multiple Layer 3 switches that connect redundantly to the Core Layer block, Cisco access servers, high-end routers, or firewall solutions. The edge distribution submodule is the aggregation point for all the different links of the enterprise, or the demarcation point between the enterprise campus and the enterprise edge, as can be seen in Figure 3.3.
A few enterprise campus design best practices are summarized below:
- Choose modules that map to different buildings/floors with Distribution and Access Layer functionality.
- Determine the ratio of Access Layer switches to Distribution Layer switches; implement two or more Distribution Layer switches for redundancy and availability, and have at least two uplinks from each Access Layer switch.
- Design a server farm building block.
- Design the network management submodule.
- Locate and implement an edge distribution submodule.
- Design the enterprise campus infrastructure, including the campus backbone, and link all components to the campus backbone redundantly.
Enterprise Edge Module
The enterprise edge module consists of several building blocks. Depending on the particular network, one or more of these components might be of interest (as illustrated in Figure 3.5 below):
- Internet and DMZ
- Remote access and VPN
- Enterprise WAN
Figure 3.5 – Enterprise Edge Components
The following best practices should be taken into consideration when designing the enterprise edge module:
- Isolate permanent links to remote offices first.
- Count the number of connections from the corporate network to the Internet connectivity block.
- Design the remote access/VPN block.
- Create and implement the e-commerce block, if needed.
One of the most interesting and often misunderstood concepts in the enterprise edge module is the DMZ (Demilitarized Zone) component, as shown in Figure 3.6 below:
Figure 3.6 – Example of the DMZ Component
To illustrate this concept, suppose a company connects a router (that also acts as a firewall) to the Internet (outside) through the Serial0/0 interface. The Internet is an untrusted, public network that might be a place from which attacks on the company’s network are generated. However, the router has another interface, FastEthernet0/0, that connects to the company network. This internal network is comprised of workstations, servers, and other devices, and it is considered a trusted network because the company owns it and has full control over it.
Note: According to statistics, from a security perspective, the majority of attacks come from inside the network and can be initiated by angry employees, curious employees, or users who are not technically trained and initiate attacks by mistake. Despite this consideration, for security design purposes, consider the internal network a trusted network.
In addition to devices located on the Internet and on the internal network, other resources might include the following:
- The company web server
- The company e-mail server
- An FTP server that users can access from outside the network to make downloads
All of these devices and services can be organized behind a separate router interface, distinct from the Serial0/0 and FastEthernet0/0 interfaces in the example above, that connects to a separate area called the DMZ. In Figure 3.6, the DMZ is outside of the internal network, allowing people from outside the company to access resources in this area. From a security standpoint, because the DMZ sits between the inside and the outside areas of the network, it is part of the company but does not receive the security protection provided to the inside devices. This allows DMZ devices to be accessed by inside users or by outside users, but outside users cannot connect to the internal network, which enforces the security policies regarding critical and confidential resources.
In a big enterprise edge module, an e-commerce block is often included, especially when electronic commerce is an important area in the organization. Devices such as the following are typically found inside the e-commerce block:
- Web servers
- Application servers
- Intrusion Detection Systems (IDSs)
- Intrusion Prevention Systems (IPSs)
The main objective of IDSs is to notify administrators of a security attack in progress. When an attack occurs, the IDS generates alarms indicating that a specific attack has been initiated on the network. IDSs do not prevent attacks; they only detect them. IPSs, on the other hand, are more powerful than IDSs because, in addition to the alarming functionality, they can prevent attacks.
Internet Connectivity Block
The enterprise edge module will most often include an Internet connectivity block because most companies these days require Internet access. The Internet connectivity block contains the following devices:
- Internet routers
- FTP servers
- HTTP servers
- SMTP servers
- DNS servers
Regarding firewall devices, Cisco implements this functionality in both routers running IOS and dedicated firewall appliances, such as the modern Cisco ASAs that evolved from Cisco PIX firewalls.
The Internet connectivity block might have redundancy capabilities, but this depends on the following factors:
- The available budget for implementing redundancy
- How critical the Internet is for the organization
The importance of the Internet’s functionality for the organization will depend on the level of Internet connectivity available, examples of which are listed below:
- Single router, single link
- Single router, dual links to one ISP
- Single router, dual links to two ISPs
- Dual routers, dual links to one ISP
- Dual routers, dual links to two ISPs
The scenario in which the organization has links to two ISPs is called dual-homed redundancy. This provides a high degree of redundancy because a link or a service provider can be lost without losing Internet connectivity. The most advanced and safest form of dual-homed redundancy is having multiple routers with multiple links to multiple service providers. This connectivity model should be chosen if Internet access is critical to the organization’s functionality.
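A minimal sketch of the dual-homed scenario using BGP is shown below. The AS numbers and the documentation-range IP addresses are purely hypothetical placeholders; a real deployment would also involve prefix filtering and path-selection policy, which are beyond this sketch:

```
router bgp 65001
 ! Peer with two different service providers for redundancy
 neighbor 203.0.113.1 remote-as 64500
 neighbor 198.51.100.1 remote-as 64510
 ! Advertise the organization's public prefix to both ISPs
 network 192.0.2.0 mask 255.255.255.0
```

If either provider link fails, the BGP session to the surviving ISP keeps the organization reachable, which is the essence of dual-homed redundancy.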
VPN/Remote Access Block
The remote access block can contain the following devices:
- VPN concentrators
- Dial-in access concentrators
VPN concentrators and dial-in access concentrators are no longer available as independent devices because their functionality has been integrated and enhanced in the sophisticated Cisco ASA device. In addition to firewalls and VPN/dial-in concentrators, this component also contains attack prevention systems, such as IDSs and IPSs.
Enterprise WAN Block
The WAN is typically part of a network, so it is included in the enterprise edge module. The company’s WAN connections may be of different types and use different technologies that will be covered in other sections of this manual. These include the following:
- Metro Ethernet
- Leased lines
- SONET and SDH
- Frame Relay
The enterprise WAN block connects to the service provider edge module, which is described in the next section.
Service Provider Edge Module
The service provider (SP) edge module (Figure 3.7) has actual connections to ISPs that can offer the following:
- Internet connectivity (primary ISP, secondary ISP)
- WAN services (Frame Relay/ATM)
- PSTN services
Figure 3.7 – Service Provider Edge Module
In a modern network, PSTN technology is often replaced with the Voice over IP (VoIP) approach.
Some of the best practices that should be taken into consideration when designing the SP edge module are as follows:
- Try to get two ISPs for redundancy, or at least two links to a single ISP.
- Select the mobile/remote/dial-up technology.
- For slow WAN links, use Frame Relay or leased lines.
Depending on the amount of remote access required, a network may include several remote modules, each with its characteristic technologies:
- Enterprise branch
  - Site-to-site VPNs
- Enterprise data center
  - High-speed LAN
  - DC management
- Enterprise teleworker
  - Remote access VPNs
The enterprise branch module typically creates site-to-site VPNs that connect branch offices to the headquarters. The enterprise data center is where various network blocks access data. The data center features high-speed LAN capabilities and sophisticated data center management software. Some companies have a large number of telecommuters (i.e., people who work from home). This would necessitate an enterprise teleworker module that features remote access VPN technologies.
The following sections will analyze different solutions that a large enterprise network can use to take advantage of emerging technologies, such as high availability, intelligent network services, security, voice transport, and content networking.
Intelligent Network Services
Intelligent network services are essential support services that are part of the network and enable applications. This involves a rich set of different processes that enable packet forwarding in an IP network. This may include the following:
- Network management tools
- Quality of Service mechanisms that help manage certain parts of the network, delay some applications, and prioritize others
- Security mechanisms that represent essential intelligent network services that provide productivity and support for applications
- High availability
Network management is an intelligent network service that allows the management and monitoring of the server farm, network devices in different network blocks, or WAN connections.
This also involves system administration for the servers, with software tools specific to each operating system provider (e.g., Microsoft-specific) or third-party tools (e.g., Tivoli, HP). Network management also includes logging, usually through a Syslog server implementation, as well as security features such as One Time Password (OTP). OTP represents a two-factor security mechanism (i.e., something you know, such as a password, and something you have, such as a card or a token) that allows for a high-security environment.
Details about designing network management can be found in the dedicated section in Chapter 2.
Security is also an intelligent service and it is vital to the health of the network. This involves features such as the following:
- Authentication services (RADIUS or TACACS+)
- Failover techniques
Network security design principles will be presented in Chapter 8.
Quality of Service
Quality of Service (QoS) involves a wide variety of techniques used, especially in networks that offer multimedia services (voice and/or video), because these services are usually delay-sensitive and require low latency and low jitter. Traffic generated by these applications must be prioritized, which is the role of QoS techniques.
Network availability and network management are two of the most critical technology areas in network design, and these areas impact all the other technologies presented in this manual. The focus of the remainder of this section is high availability network design.
High availability is often a factor taken into consideration when designing end-to-end solutions. It ensures redundancy for network services and for the end-users and is accomplished by ensuring that the network devices are reliable and fault tolerant.
Some level of high availability solutions is designed for any network module proposed. The Access Layer can have multiple ways of connecting access devices and it can have multiple connections to multiple devices in the Distribution Layer, and the Distribution Layer can have some redundancy when connecting to the Core Layer. These aspects are illustrated in Figure 3.3.
Typically, the degree of redundancy and high availability included in a design depends on the following parameters:
- The available budget
- How mission critical a particular service or application is to the business goals of the organization
Many redundancy options can be utilized in different components of modern networks, including the following:
- Workstation-to-router redundancy in the Access Layer block
- Server redundancy in the server farm block
- Route redundancy
- Media redundancy in the Access Layer block
Each of these areas will be covered in detail in the following sections.
The most important topic in the list above is workstation-to-router redundancy because access devices must maintain their default gateway information. As mentioned before, modern networks follow the inverted 20/80 model, in which 80% of the traffic passes through the default gateway and only 20% of the destinations are local. This is why default gateway availability is so critical.
Workstation-to-router redundancy can be accomplished in multiple ways, including the following:
- Proxy ARP on routers
- Explicit configuration
- ICMP Router Discovery Protocol (IRDP)
- Routing Information Protocol (RIP)
- Hot Standby Router Protocol (HSRP)
- Virtual Router Redundancy Protocol (VRRP)
- Gateway Load Balancing Protocol (GLBP)
Proxy ARP involves a workstation that has no default gateway configured but wants to communicate with a remote host. The workstation sends an ARP request for the remote host’s address, and a router that hears this request and knows it can reach that host responds on the host’s behalf with its own MAC address; this is Proxy ARP. The router effectively pretends to be the remote host, so the workstation sends traffic destined for that host to the router.
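On Cisco routers, Proxy ARP is enabled by default on an interface and can be toggled explicitly; a brief sketch (the interface name is an example):

```
interface FastEthernet0/0
 ! Answer ARP requests on behalf of reachable remote hosts (default behavior)
 ip proxy-arp
 ! To disable the behavior instead, use:
 ! no ip proxy-arp
```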
Explicit configuration is the most common way of accomplishing workstation-to-router redundancy, since some operating systems allow the configuration of multiple default gateways. One problem with this approach is the delay before a device decides that one gateway is down and switches to another. Another problem is that not all operating systems support this feature.
An effective solution is one that can be used by every host in the infrastructure and, if possible, is transparent to the hosts (to accomplish this, the configuration is made only on the network devices). Some routers can run IRDP. If both routers and hosts run IRDP, this is another option for hosts to discover an available default gateway dynamically. RIP is another protocol that can be used in this scenario, but with RIP, all the hosts must support the protocol.
The preferred solution is a technology that does not place any burden on the hosts and is completely transparent to them. The hosts need only a single configured default gateway because the entire redundancy configuration is made on the routers. The protocols that accomplish this are generically called First Hop Redundancy Protocols (FHRPs), and they include HSRP, VRRP, and GLBP.
HSRP is a Cisco proprietary protocol that inspired IEEE to create the open standard protocol VRRP. The functionality of the two protocols is almost identical. GLBP, another Cisco invention, is the most recent and the most sophisticated FHRP.
Hot Standby Router Protocol
Figure 3.8 below illustrates a network that has two gateway routers connected to a Layer 2 switch that aggregates the network hosts:
Figure 3.8 – Hot Standby Router Protocol
Router 1 has one potential default gateway address (10.10.10.1) and Router 2 has another potential default gateway address (10.10.10.2). The two routers are configured in the HSRP group, presenting the clients with a virtual default gateway address of 10.10.10.3. This address will be configured as the host’s default gateway address, although it is not assigned to any router interface because it is a virtual address.
Router 1 in this example is the active device, which is actually forwarding traffic for the 10.10.10.3 virtual address. Router 2 is the standby HSRP device. The two routers exchange HSRP messages to each other in order to check on each other’s health status. For instance, if Router 2 no longer hears from Router 1, it realizes that Router 1 is down and it will take over as the active HSRP device.
These devices are transparently providing access for the clients, as they are transparently serving up the virtual default gateway address.
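Based on Figure 3.8, the HSRP configuration on the two routers might look like the following sketch (the interface name and group number are illustrative, and the priority and preemption commands are optional tuning that makes Router 1 the preferred active device):

```
! Router 1 (intended active router)
interface GigabitEthernet0/0
 ip address 10.10.10.1 255.255.255.0
 standby 1 ip 10.10.10.3
 standby 1 priority 110
 standby 1 preempt
!
! Router 2 (standby router)
interface GigabitEthernet0/0
 ip address 10.10.10.2 255.255.255.0
 standby 1 ip 10.10.10.3
```

The hosts simply point to 10.10.10.3 as their default gateway and never need to know which physical router is currently forwarding.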
Virtual Router Redundancy Protocol
VRRP works in a way similar to HSRP. The differences are that the two routers are configured in a VRRP group, one of them is called the master router (instead of the active router) and does all of the forwarding, and the other is called the backup router (instead of the standby router), as shown below in Figure 3.9:
Figure 3.9 – Virtual Router Redundancy Protocol
As was the case for HSRP, the VRRP group presents a virtual IP address to the clients. An interesting aspect of VRRP is that the virtual IP address can be the same as a real address configured on the master router. In this case, the virtual address is configured as 10.10.10.1, identical to the address on the Router 1 interface.
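A minimal VRRP sketch for Figure 3.9 follows (the interface name and group number are illustrative). Because the virtual address reuses Router 1's real interface address, Router 1 becomes the master for the group:

```
! Router 1 (master – owns the virtual address)
interface GigabitEthernet0/0
 ip address 10.10.10.1 255.255.255.0
 vrrp 1 ip 10.10.10.1
!
! Router 2 (backup)
interface GigabitEthernet0/0
 ip address 10.10.10.2 255.255.255.0
 vrrp 1 ip 10.10.10.1
```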
Gateway Load Balancing Protocol
GLBP is the most distinctive of the FHRPs. With GLBP, you gain not only redundancy but also load balancing, and it is much easier to use more than two devices, as illustrated in Figure 3.10 below:
Figure 3.10 – Gateway Load Balancing Protocol
In Figure 3.10, three routers are configured in a GLBP group and share a virtual default gateway address of 10.10.10.4, which is configured on the clients. Router 1 is elected as the Active Virtual Gateway (AVG) and the other devices act as Active Virtual Forwarders (AVFs).
When the hosts send an ARP request for the MAC address of 10.10.10.4, the AVG responds to the ARP request, rotating round-robin through the virtual MAC addresses of the AVF devices. This means that Router 1 can respond to the first ARP request it receives with its own virtual MAC address, respond to the second with the second router's virtual MAC address, and, finally, respond to the third with the third router's virtual MAC address, spreading the traffic across the available AVF devices. Round robin is the simple default balancing approach; GLBP can also be configured with other load balancing techniques (e.g., weighted or host-dependent).
Note: The AVG can also function as an AVF, and it usually does so.
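The AVG's default round-robin ARP behavior described above can be sketched in a few lines of Python (the virtual MAC addresses and host names are illustrative values, not actual GLBP-assigned ones):

```python
from itertools import cycle

# Virtual MAC addresses of the three AVFs (illustrative values)
avf_macs = ["0007.b400.0101", "0007.b400.0102", "0007.b400.0103"]

def make_avg(macs):
    """Return an ARP responder that hands out AVF virtual MACs round-robin,
    mimicking the GLBP AVG's default load-balancing behavior."""
    rotation = cycle(macs)
    def answer_arp(requesting_host):
        # Each ARP request for the virtual IP gets the next AVF's MAC
        return next(rotation)
    return answer_arp

answer_arp = make_avg(avf_macs)
replies = [answer_arp(h) for h in ["host-A", "host-B", "host-C", "host-D"]]
# The fourth request wraps around to the first AVF again
```

Each host caches the MAC it was handed, so different hosts end up sending their traffic through different forwarders.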
Server-based redundancy technologies can be implemented in server farms or data centers. This is often needed in order to ensure high availability for key server functions, such as file or application sharing. One way to solve this problem is to mirror multiple servers so that if one server fails, the network can dynamically fail over to another server. In the case of VoIP, a critical component called Cisco Unified Communications Manager (CUCM, or CallManager) replaces the traditional PBX. Because CallManager is so critical for routing call traffic, it is typically deployed in a cluster with the same idea in mind: if one device fails, another device transparently starts servicing the VoIP clients.
In WAN configurations, configuring redundancy between the campus infrastructures is a best practice. In order to achieve this, you can implement load balancing at the routing protocol level, as illustrated in Figure 3.11 below:
Figure 3.11 – WAN Route Redundancy
This increases availability because in the case of a direct path failure between Site 1 and Site 2, the two sites can still reach each other by going through another location, for example, Site 1 to Site 3 to Site 2, as shown in Figure 3.11.
Although it offers high availability, the full-mesh scenario presented above has a downside: the number of circuits required results in configuration overhead and high cost, which is one of the reasons full-mesh topologies are not implemented very often.
In order to calculate the necessary number of circuits for a full-mesh architecture, you can use the n*(n-1)/2 formula, where n equals the number of sites (nodes). In the example above, there are four sites, so the total number of connections equals 4*3/2=6 circuits.
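The circuit-count formula can be checked with a small Python helper:

```python
def full_mesh_circuits(n: int) -> int:
    """Point-to-point circuits needed to fully mesh n sites: n*(n-1)/2."""
    return n * (n - 1) // 2

# Four sites, as in the example above: 4*3/2 = 6 circuits
four_site_circuits = full_mesh_circuits(4)
```

Note how quickly the count grows: ten sites already require 45 circuits, which illustrates why full mesh rarely scales.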
The specific capabilities of each particular protocol, including high availability and load balancing, will be covered in detail in Chapter 7.
Another type of redundancy in the infrastructure is media redundancy, which involves connecting devices via multiple links or paths. This is useful in case one of the links fails. Media redundancy demands the configuration of the Spanning Tree Protocol (STP) at Layer 2 in order to avoid loops that can bring the network down.
WAN environments may contain floating static routes, meaning static routes that point to a backup path, as illustrated in Figure 3.12 below:
Figure 3.12 – Floating Static Routes
Analyzing Figure 3.12, Site 1 and Site 2 are connected via a direct WAN circuit and via a slower connection that involves more hops as a backup path (Site 3). Best practice is for traffic to take a direct path from Site 1 to Site 2, so the primary path is learned via the EIGRP route, which has an Administrative Distance (AD) of 90. In order to achieve redundancy over the backup path, a static route on Site 1 pointing to Site 3 is created. Next, to make this a “floating” static route, its AD is set higher than the AD of the primary path. In this example, the static route is set to an AD of 91, so it will be less preferred than the EIGRP route. This means the floating static route will not be used unless the EIGRP route goes away.
The floating static route technique is often seen in WAN scenarios, where it provides circuit redundancy.
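On the Site 1 router in Figure 3.12, the floating static route might be configured as in the sketch below (the destination prefix and next-hop address are hypothetical; the trailing 91 sets the route's administrative distance just above EIGRP's 90):

```
! Primary path to the Site 2 prefix is learned via EIGRP (AD 90)
! Backup path toward Site 3, "floated" to AD 91
ip route 10.2.2.0 255.255.255.0 10.3.3.1 91
```

As long as the EIGRP route is present, the static route stays out of the routing table; it is installed only when the primary path disappears.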
Another technology used to achieve media redundancy is EtherChannel. This Layer 2 or Layer 3 logical bundling or channel aggregation technique can be used between switches. The bundled links can appear as one single link between specific devices, as illustrated in Figure 3.13 below:
Figure 3.13 – Example of an EtherChannel
EtherChannels are important when using STP because when all of the links appear as one link, STP will not consider them a possible loop threat and will not shut down any link in the bundle. Therefore, traffic can be dynamically load balanced across the links in an EtherChannel without STP interfering.
Cisco switches support the following implementation options for EtherChannels:
- Port Aggregation Protocol (PAgP) – a Cisco proprietary protocol
- Link Aggregation Control Protocol (LACP) – an open standard protocol
Channel aggregation techniques also can be used in WANs, for example, Multilink PPP (MPPP) technology.
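A minimal EtherChannel sketch bundling two switch ports with LACP is shown below (the interface names and group number are illustrative; `channel-group 1 mode desirable` would negotiate PAgP instead):

```
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
```

STP then sees only the single Port-channel1 logical link rather than two parallel physical links.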
In addition to the critical services presented above, the intelligent network services family also includes networking solutions: network-based applications that function over the existing network architecture. Examples of networking solutions include the following:
- Voice transport (including VoIP and IP Telephony)
- IP video conferencing (more and more popular, over LAN, WAN, or Internet)
- Content networking
- Storage networking
The most important networking solutions will be covered in the following sections.
Voice transport is a network solution that is implemented on top of the existing network infrastructure. When designing voice transport solutions, you must carefully consider the enterprise network already in place, and it is very important that you first implement the data solution. After that, you can integrate voice and data on the same network infrastructure.
VoIP versus IP Telephony
Most people tend to consider VoIP and IP Telephony the same network solution, differing only in name. In fact, there are clear differences between VoIP and IP Telephony, as summarized below:
|VoIP|IP Telephony|
|---|---|
|Uses voice-enabled routers|Uses IP phones|
|Converts analog to IP packets|Uses Cisco CallManager server (when using Cisco infrastructure)|
|Transparent – common phones|Phone converts voice to IP|
|Linked to PBX and VoIP routers|No voice-enabled routers needed, but QoS generally required|
|The main PBX router is not connected to PSTN or another PBX router|If PSTN, use voice-enabled router at enterprise edge to the PSTN|
VoIP uses voice-enabled routers that perform the actual conversion of analog to IP packets. Common phones are used and this makes the process transparent to the end-user. VoIP implies the existence of a link between the PBX routers and the VoIP-compatible routers, but the main PBX router will not actually be connected to the PSTN service or to another PBX router.
IP Telephony, on the other hand, uses IP telephones and Cisco CallManager servers, which control and manage all the signaling and call control. The IP phone converts voice to IP datagrams. Unlike VoIP, you do not need to use voice-enabled routers, but this can be done, for example, at the enterprise edge in order to connect to the PSTN. IP Telephony usually employs QoS techniques to make sure the voice IP datagrams are given priority over other types of traffic (e.g., file transfer, print services, etc.).
IP Telephony Components
IP Telephony is a network solution whose components can be located in several blocks of the campus infrastructure, including the following (as illustrated in Figure 3.14 below):
- Access Layer block
- Network management submodule
- Distribution Layer block
- Server farm block
Figure 3.14 – IP Telephony in the Campus Infrastructure
The IP telephones can be distributed throughout the entire campus infrastructure. The CUCM server is usually located in the server farm block. CUCM is the central location for call control, setup, routing, and management. CUCM servers can support clustering (by adding additional CUCM servers) for increased system capacity, functionality, and fault tolerance.
IP Telephony also uses switches that have inline power capabilities (i.e., Power over Ethernet, or PoE). These switches are generally located within wiring closets to provide centralized power sources for the entire IP Telephony network. IP Telephony switches are normal switches with the additional capability of providing PoE to LAN ports that are connected to IP phones. Usually, these switches implement QoS techniques (e.g., classification or queuing) that manage the different types of traffic and give priority to voice traffic. QoS configuration is usually performed in the Distribution Layer.
In situations that require PSTN connections, you may need to use voice-enabled routers, which can provide services such as compression, access to the PSTN network, packet routing, backup call processing, and other voice services. Voice routers are usually located in the enterprise edge module and can extend IP Telephony functionalities to other company branches, home offices, or the Internet.
IP Telephony design procedures will be covered in detail in Chapter 9.
Content networking, also known as Content Delivery Networking (CDN), is a service used more and more in modern, large enterprise networks, as it offers sophisticated network solutions and applications that accommodate video and voice for online services. Using intranet and Internet broadcasts, content such as training modules can be delivered with different audio and video streaming technologies.
Content networking demands content-aware technologies from a Cisco environment in the campus infrastructure, including content-aware hardware and content-aware software. Content-aware technologies include the following components (as illustrated in Figure 3.15 below):
- Content routing
- Content caching
- Content switching
Figure 3.15 – Content Networking Components
As mentioned, the technology must be content-aware, meaning the infrastructure must be able to move this type of traffic through the enterprise efficiently and optimally, as with IP Television, for example. A simple solution for these advanced services would be to offer more bandwidth, but this is not feasible in modern networks. As a result, network design must become more intelligent, which can be achieved by adding another layer of intelligence between the underlying network infrastructure and the applications that overlay the network.
CDN is characterized as a network solution as opposed to an intelligent service because it is technically a non-essential component for most companies (at least for the moment). However, there is a close connection between CDN and intelligent services because those services enhance content networking. Examples of intelligent services that support and enhance CDN functionality include the following:
- High availability
- QoS techniques
- IP Multicast
- Network management
The first component of CDN, content routing, is the process that redirects a user to the best device in the network, based on a set of well-defined policies with specific rules (e.g., server load) for the different types of content delivered in the network infrastructure. Content routing delivers content as quickly as possible using high availability techniques and fast server responses, as illustrated in Figure 3.16 below:
Figure 3.16 – Example of Content Routing
For example, if a user wants to access training materials on the Internet and take a video class, content routing can have caching servers (e.g., content routers) perform this operation instead of the user. The user sends a request to the DNS server, which responds that the content is stored in a cache on a local server, and the content is then delivered locally to the user. Without content routing, the inefficient alternative is that the DNS server responds with the actual Internet location that hosts the specific service, and the local user must access the content over the Internet.
From a Cisco standpoint, content caching could be delivered by a Cisco Content Engine (CCE) module on a router. This speeds up the delivery of content for end-users because it transparently caches information used on a regular basis, as well as frequently accessed content, so the requests can be fulfilled locally. This method avoids the situation in which the entire infrastructure must be crossed in order to get the content from the Internet. Content caching achieves the following effects:
- Reduces bottlenecks
- Speeds up content delivery
- Enhances productivity
CCE can be installed in the server farm block, in the e-commerce block, or in the Internet connectivity block, depending on the enterprise needs. If the enterprise core site performs much content networking (e.g., video streaming, web streaming, or online training), the CCE modules can also be placed in the Distribution Layer block of the enterprise.
The third component of CDN is content switching, also known as “web switching”. This can be used for content delivery to different network modules and is a sophisticated mechanism for load balancing and for intelligently accelerating content delivery. Content switching gives users a much better web experience by delivering content more quickly and by customizing the content for individual users.
Content switching leverages content routers and helps point the user to the best server, while also providing load balancing techniques to get the desired content as quickly as possible, using as little bandwidth as possible. Content switches can be placed in different locations within the network, including the following:
- Core Layer block
- Enterprise edge module
- E-commerce block
- Internet connectivity block
- Network management submodule
The component used in the network management submodule, created by Cisco to manage all the content control and content distribution, is called the Content Distribution Manager (CDM).
For many years, Cisco recommended a three-layer hierarchical network design model, which included the Access Layer, the Distribution Layer, and the Core Layer. However, to provide for enhanced scalability and flexibility, Cisco later introduced the Cisco enterprise architecture model, which categorizes enterprise networks into six modules. The three layers of the Cisco Service Oriented Network Architecture (SONA) can be found in each of these modules. Specifically, each module can contain its own network infrastructure, services, and applications. This section summarizes the design considerations surrounding the modules that comprise the Cisco enterprise architecture model.
Traditionally, Cisco prescribed a three-layer model for network designers, as follows:
- Access Layer: Typically, wiring closet switches connected to end-user stations.
- Distribution Layer: An aggregation point for wiring closet switches, where routing and packet manipulation occur.
- Core Layer: The network backbone, where high-speed traffic transport is the main priority.
The three-layer hierarchical approach offers limited scalability. For modern enterprise networks, Cisco developed the enterprise architecture model. The functional areas that comprise this model include the following:
- Enterprise campus: The portion of the network design providing performance, scalability, and availability that defines operations within the main campus.
- Enterprise edge: An aggregation point for components at the edge of the network (e.g., Internet and MAN/WAN connectivity) that routes traffic to and from the enterprise campus module.
- WAN and Internet: The portion of the network made available by a service provider (e.g., Frame Relay or ATM).
- Enterprise branch: Remote network locations that benefit from extended network services, such as security.
- Enterprise data center: A consolidation of applications, servers, and storage solutions (similar to a campus data center).
- Enterprise teleworker: A collection of small office/home office (SOHO) locations securely connected to the enterprise edge via an Internet Service Provider (ISP) or Public Switched Telephone Network (PSTN).
When designing the enterprise campus module, four primary areas must be addressed, as follows:
- Access Layer block: Connects end-user devices to the network.
- Distribution Layer block: Aggregates building access switches and performs Layer 3 switching (i.e., routing) functions.
- Core Layer block: Provides high-speed, redundant connectivity between buildings.
- Server farm/data center block: Consolidates application servers, e-mail servers, domain name servers, file servers, and network management applications.
The enterprise edge module connects the enterprise campus module with the WAN and Internet blocks. The four building blocks comprising the enterprise edge module are as follows:
- E-commerce: Contains the servers used to provide an e-commerce presence for an organization, including the following:
- Web servers
- Application servers
- Database servers
- Security servers
- Internet connectivity: Provides Internet-related services, including the following:
- E-mail servers
- Domain Name System (DNS) servers
- Public web servers
- Security servers
- Edge routers
- WAN and site-to-site VPNs (Virtual Private Networks): Interconnects a main office with remote offices over various transport technologies, such as the following:
- Frame Relay
- Remote access and VPN: Provides secure access for remote workers (e.g., telecommuters) or remote offices and includes components such as the following:
- Dial-in access concentrators
- VPN concentrators
- Cisco Adaptive Security Appliances (ASAs)
- Intrusion Detection System (IDS) appliances
The WAN and Internet blocks are sometimes referred to as the service provider module. This module is not designed by the enterprise because it is designed, owned, and operated by an ISP. However, the enterprise network designer can specify the type of connection used to reach the ISP(s). Specifically, the service provider module includes the following types of connectivity:
- Frame Relay
- Point-to-point leased line
- SONET and Synchronous Digital Hierarchy (SDH)
- Cable modem
- Digital Subscriber Line (DSL)
- Wireless bridging
Enterprise locations are supported via the following modules:
- Enterprise branch
- Enterprise data center
- Enterprise teleworker
Layered on top of an enterprise’s network infrastructure are infrastructure services, which enable business applications. Examples of these infrastructure services include the following:
- Network management
- Network security
- Quality of Service
- Network availability, including:
- Workstation-to-router redundancy
- Server redundancy
- Route redundancy
- Media redundancy
- Voice transport
- Content networking