Enterprise LAN and Data Center Design
This chapter will cover the following topics:
- Geographical considerations
- Network applications
- Layer 2 and Layer 3 switching technologies
- Physical cabling
- Enterprise campus design (Access Layer, Distribution Layer, and campus backbone blocks)
- Data center design considerations
- Virtualization considerations
Geographical Considerations
The geographical considerations regarding the enterprise campus network design involve locating the enterprise campus building blocks and the components in those blocks, and determining the distance between them. You can learn more in our Cisco CCNP ENCOR course.
The geographical structure of the campus can be broken down into four types, as follows:
- Intra-building (inside the building)
- Inter-building (between buildings)
- Remote building (less than 100 km away)
- Remote building (greater than 100 km away)
Intra-building Design
An intra-building structure (Figure 4.1) can consist of a single floor or multiple floors in a single building. The goal is to connect all the different components (e.g., servers, workstations, printers, etc.) and give all of them access to network resources depending on their type.
The Access and Distribution Layers are typically located in an intra-building area. Multiple components can be located within a single floor, for example, workstations, servers, printers, and wiring closets using UTP (Cat5 or Cat6) cabling. The wiring closets connect to a central Distribution Layer switch (usually a Layer 3 switch) that is connected to other switches. User workstations are connected either to a wiring closet or directly to an Access Layer switch that is connected to Distribution Layer switches, which are connected to backbone devices usually via fiber links.
Figure 4.1 – Intra-building Campus Design
Inter-building Design
The inter-building network structure (Figure 4.2) links individual buildings in the campus or corporate complex using the Distribution Layer or the Core Layer, depending on the size of the organization. The distance between buildings typically ranges from a few hundred meters to about 2 km.
The need for high-speed connectivity is generally high in these situations because of the devices, servers, and applications shared between the buildings. In addition, the connection between the buildings should provide as high a bandwidth and throughput as possible. Another issue is ensuring very little environmental interference, so the typical medium used in this case is optical fiber (FO). The FO used can be either multi-mode fiber (MM) or single-mode fiber (SM).
Figure 4.2 – Inter-building Campus Design
MM fiber and SM fiber share some common characteristics. FO cabling uses glass or plastic fibers to carry data. FO cables are made of a bundle of threads, each of which can transmit messages modulated onto light waves. FO has greater bandwidth than copper cables, so it can carry more data, and it is less susceptible to interference.
FO cables are also much thinner and lighter than metal wires. In addition, data can be transmitted digitally, which is the natural way in which computer data moves, rather than through analog signaling. The big disadvantages of FO are that the cables are more expensive to install, are often more fragile, and are difficult to split. Despite these disadvantages, FO is becoming increasingly popular for local area networking and for telecom providers’ infrastructures.
Note: In the not-so-distant future, almost all connections will be based on fiber or wireless technologies, as copper-based connections are phased out.
Multi-mode fiber has the following characteristics:
- Specific installation and performance guidelines; it also has specific connectors
- Concurrently transports multiple light waves/modes within the core
- Used for relatively short distances
- Typical diameter is 50 or 62.5 micrometers
- Bandwidth is usually up to 10 Gbps
- Range is 550 meters when using GigabitEthernet
- Used between nodes and between buildings
- More expensive than copper
Single-mode fiber has the following characteristics:
- Specific installation and performance guidelines
- Also called mono-mode fiber
- Carries a single wave (mode) of light, generated by a laser
- Typical diameter of core is 2 to 10 micrometers
- Bandwidth is usually up to 10 Gbps
- Range is up to 100 km when using GigabitEthernet
- Used between nodes and buildings for longer distances than MM fiber
Distant Remote Building Geography
The campus infrastructure can be spread over a metropolitan area or over a larger area (e.g., different parts of a city). If you are dealing with distances within a few miles, you must focus on physical needs. First, you need to determine whether the company owns any of the copper lines; if it does, you can build from there. You might also need to connect an enterprise campus network through a WAN block. If this is the case, you should leverage the existing telecom providers in that specific area. In addition to these technologies, you might also use satellite or various wireless technologies to connect your sites. Figure 4.3 below illustrates this concept:
Figure 4.3 – Distant Remote Building Design
As this chapter focuses on the physical connectivity between the core of the infrastructure and a campus, or a metropolitan area network (MAN), SM or MM fiber will be used in most cases.
As the distance between sites grows, the following trends generally apply:
- Connectivity costs will increase
- Required throughput will decrease
- Importance of availability will decrease
Network Applications
Another important factor when designing campus switching is considering the network applications that will be used. Once the physical and geographical aspects are clear, the network designer should characterize what types of applications will be processed within the network.
The first category of applications that must be identified involves the critical (core) applications. The rest of the services will fall under the optional intelligence category. This information should be well documented and gathered during the design process, which was detailed earlier in this manual.
The network applications (illustrated in Figure 4.4 below) can be divided into the following types:
- Client-to-client applications
- Client-to-distributed server applications
- Client-to-server farm applications
- Client-to-enterprise edge applications
Figure 4.4 – Network Applications
Client-to-client applications can include software such as Microsoft Office or other productivity applications, IP Telephony, file sharing between different workstations, print services, video conferencing, or chat systems. These applications have different requirements for bandwidth, and they may require QoS techniques to be implemented on the network devices.
Over the years, applications have moved away from the traditional client-to-client or client-to-server relationship toward a more distributed computing approach using distributed services and databases. This can be accomplished by using VLANs to place servers and their users in the same logical segment, even though they may be dispersed throughout the entire physical area of the company (e.g., on different floors of the building). In this way, departmental traffic can remain on the same segment while data can still be exchanged over the entire backbone of the campus. An example of client-to-distributed server applications is Microsoft’s .NET Framework.
In large enterprise organizations or environments, where application traffic can cross multiple VLANs or wiring closets within the same building, or span multiple buildings and floors, enterprise-wide applications use a server farm. Examples of this kind of traffic include web applications, e-mail servers (e.g., Microsoft Exchange), and database servers (e.g., Microsoft, Oracle, IBM, etc.).
The fourth type of network application involves the client communicating with the enterprise edge. The enterprise edge module can contain web, e-mail, chat, or other types of servers. The key aspect in this application is that it involves security features such as firewalls, IDSs, and packet filtering.
The common component in all of these application types is the client, as users’ workstations can communicate with other clients, with distributed servers, with services within server farms, or with the enterprise edge. In surveying all these applications, you must determine correctly which of these are critical/core applications and which are optional intelligence services.
Layer 2 Technologies
Layer 2 technologies relate to the OSI Data Link Layer. Today’s distributed enterprise networks, with their multimedia and client/server applications, demand greater bandwidth and a greater degree of control. Over the past decade, almost all organizations have replaced their shared networking technology (i.e., hubs) with switches to create switched networks.
A collision domain (also called a bandwidth domain) comprises the nodes and devices that share the same bandwidth. For instance, everything that is connected to a switch port via a hub is in the same collision domain, which means collisions are always possible on that particular Ethernet segment.
A broadcast domain, on the other hand, represents a collection of devices that can see each other’s broadcast or multicast packets. Nodes that are in the same collision domain are also in the same broadcast domain. For instance, all devices reachable through a single router interface are in one broadcast domain.
Note: By default, broadcasts do not traverse a router interface.
When shared technology is used (e.g., hubs and repeaters), all the devices share the bandwidth of the specific network segment. When switched technologies are used, each device connected to a switch port is in its own collision domain; however, all of the devices are still in the same broadcast domain.
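To make the distinction concrete, here is a minimal Python sketch that counts collision and broadcast domains for a small hypothetical topology; the port counts and interface numbers are illustrative, not taken from the text.

```python
def count_domains(router_interfaces, switch_ports_in_use):
    """One broadcast domain per routed interface (no VLANs assumed);
    one collision domain per switch port in use, regardless of whether
    a single host or a hub full of hosts is attached to that port."""
    broadcast_domains = router_interfaces
    collision_domains = sum(switch_ports_in_use)
    return collision_domains, broadcast_domains

# One router interface feeding two switches with 12 and 8 ports in use.
collisions, broadcasts = count_domains(router_interfaces=1,
                                       switch_ports_in_use=[12, 8])
print(f"Collision domains: {collisions}, broadcast domains: {broadcasts}")
```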
Switched technologies, as opposed to shared technologies, provide many advantages, including the following:
- Greater than 10 Mbps bandwidth
- Used for greater distances because they can connect a matrix of switches
- Support for intelligent services
- High availability with only a small increase in costs
Layer 3 Switching
Historically, LAN switching has typically involved Layer 2 switching at the Access Layer and sometimes at the Distribution Layer. Layer 2 switches forward information based only on the MAC address (the Layer 2 frame address). Layer 3 switching, however, makes forwarding decisions based on the Layer 3 address (e.g., an IP address) in addition to the MAC address.
The following three options exist when designing a switched environment:
- Layer 2 switching throughout the network
- A combination of Layer 2 and Layer 3 switching (this requires a higher degree of planning)
- Layer 3 switching throughout the network
When deciding between Layer 2 and Layer 3 switching (Figure 4.5), the network designer must understand the impact the choice has in the following areas:
- How the policies are implemented
- How intelligent load sharing is accomplished
- The way network failures are dealt with
- Convergence issues
- The cost factor
Figure 4.5 – Layer 2 versus Layer 3 Switching
Note: Using Layer 2 switching, Layer 3 switching, or a combination of the two also depends on the available switching platforms, as not all switches support Layer 3 technologies. Layer 3 switches are also called “multilayer switches” or “routing switches”.
At the heart of switched networking is the concept of VLANs. A VLAN creates a logical broadcast domain by grouping particular nodes, possibly attached to different switches, within a switched environment.
When designing a full Layer 2 environment using VLANs, a router might be used to provide routing between VLANs. This technique is called “router on a stick” because a single router interface, configured as a trunk with subinterfaces, is used to carry all the VLANs.
The advantage of having a combination of Layer 2 and Layer 3 switches at the Distribution Layer or Core Layer, or Layer 3 switches throughout the network, is that a routing process, or router switch module, is built into the switch itself.
When exclusively using Layer 2 switches and VLANs throughout the network, all the policies, access lists, and QoS rules will be managed at the Data Link Layer. The policy capabilities are very limited at the Data Link Layer, but they are greatly enhanced in Layer 3 switches.
Another area in which Layer 2 switches are limited is load sharing across redundant links (multiple paths) throughout the network. This is because Layer 2 switches only know about MAC addresses and cannot perform intelligent load sharing based on, for example, the destination network, as Layer 3 switches running dynamic routing protocols can. Therefore, with Layer 2 switching, the load can be shared only on a per-VLAN basis.
In addition, when using only Layer 2 switches, failures can be isolated only at the VLAN level. In a multilayer environment, on the other hand, failures can be isolated more precisely, to the Access Layer, to the Core Layer, or even to particular network segments.
In a Layer 2 switched environment, only STP offers convergence and loop control. When using Layer 3 switching, however, convergence and loop control can also be handled at the Distribution and Core Layers by routing protocols such as OSPF, EIGRP, and others.
When considering cost, using Layer 2 everywhere is the cheapest solution, but it is also the least flexible and manageable option. Using Layer 3 switches throughout the network is the most expensive option, but it is very powerful and flexible. A compromise is to implement Layer 3 switches first only in the Distribution Layer and then, as the budget allows and the network scales, extend the Layer 3 switches into the Core Layer for full Layer 3 switching at the Distribution and Core Layers.
Physical Cabling
The most popular copper cable type is Unshielded Twisted Pair (UTP). It consists of pairs of unshielded copper wires twisted around each other. Because of its low cost, UTP cabling is used extensively throughout LANs and for telephone connections.
The problem with UTP cabling is that it does not offer as high a bandwidth as fiber, nor does it offer the protection from interference that coaxial cable or fiber can. If the network operates at 100 Mbps or higher, the general recommendation is to use Cat5 or Cat6 UTP cabling throughout the network environment.
UTP cabling is generally used to connect end devices, such as servers, workstations, laptops, printers, or IP telephones that are no more than 100 meters away from the LAN switch or the Access Layer switch. The connection from the Access Layer switch to the multilayer switch usually requires more bandwidth, so it is recommended that you use MM fiber.
If you want to connect two buildings that are less than 10 km from each other, you can use standard SM fiber. When dealing with distances greater than 10 km, you should consider using special categories of SM fiber or other emerging WAN/radio technologies, possibly leveraging the ISP’s infrastructure. The concept of physical cabling is illustrated in Figure 4.6 below:
Figure 4.6 – Physical Cabling
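As a rough illustration of the distance-based media choices described above, the following Python sketch encodes the thresholds from this section (100 m for UTP, roughly 2 km for multi-mode fiber, about 10 km for standard single-mode fiber); these are rules of thumb, not exact optical budget calculations.

```python
def recommend_medium(distance_m):
    """Pick a transmission medium from the distance between endpoints."""
    if distance_m <= 100:
        return "Cat5/Cat6 UTP (end device to Access Layer switch)"
    if distance_m <= 2_000:
        return "Multi-mode fiber (uplinks and short inter-building links)"
    if distance_m <= 10_000:
        return "Standard single-mode fiber (inter-building links)"
    return "Special single-mode fiber, or provider WAN/radio services"

for d in (80, 450, 3_000, 25_000):
    print(f"{d:>6} m -> {recommend_medium(d)}")
```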
Overview of the Enterprise Campus Design
Creating a switched building design is not the same as creating a switched campus design. A building design can contain a few hundred or thousands of network nodes or components, whereas a campus design can comprise multiple buildings and will generally contain some kind of campus backbone that services the buildings. The important thing to analyze in these design types concerns the Layer 2 and Layer 3 switching technologies used at various network layers.
Figure 4.7 – Cisco Enterprise Composite Model
The Cisco enterprise composite model, shown above in Figure 4.7, consists of multiple modules. The Access Layer block combines and aggregates all of the different end-user devices and end-user connections that interact with other parts of the network. The Distribution Layer block acts as an aggregator for the network access devices and consists of multilayer switches connected to the campus backbone.
The campus backbone interconnects with the Distribution Layer block, and its main goal is to communicate with the edge distribution submodule in order to provide access to enterprise edge services (e.g., e-commerce, Internet connectivity, remote access, or WAN). From a geographical standpoint, the Core Layer block is typically located in the main building, and a Distribution Layer block is located in every building connected to the campus backbone.
Another component in the enterprise campus module is the server farm block, which either can be located within every building or can be centrally located in the main building and available for all the buildings in the campus. This block connects the servers to the enterprise network and handles server availability and traffic load balancing.
The edge distribution submodule connects the enterprise edge applications to the network campus. The main consideration for the edge distribution submodule is security: once this submodule is penetrated, the internal network is exposed to the attacker.
The network management submodule’s availability requirements are similar to those of the server farm block, but it does not need as much bandwidth. Network control, verification, troubleshooting, and monitoring logic reside in this submodule.
The paragraphs to follow will analyze some of the requirements of these different components as they apply to the enterprise campus design and Layer 2 and Layer 3 switching.
The Access Layer block connects to the Distribution Layer block (one for every building). The closer a block is located to the Access Layer, the more scalability must be built into it. Scalability should not be an issue at the Core Layer block or the enterprise edge module because hardware purchased for those levels will function for many years without needing major upgrades. However, scalability in the Access Layer block is important from a switching standpoint, especially due to the possible growth of the company (i.e., adding users, customers, or partners). When this happens, you can easily accommodate the new requirements by adding Layer 3 switches at the Distribution Layer.
In the Access Layer block, it is possible to continue to use shared technology if some of the users already use it and the budget does not allow for full switching implementation. Nevertheless, the recommendation is to have a full switching environment for all of the Access Layer areas, which will require high scalability in order to be able to add employees and to help the business expand. From an availability and performance standpoint, at the Access Layer, you should not be concerned about redundancy or high performance per port. The Access Layer devices’ performance is the lowest within the entire network infrastructure, but performance, availability, and cost requirements will grow as you get closer to the backbone area. In addition, the Distribution Layer block can integrate Layer 2 or Layer 3 switching and has medium needs for scalability, high availability, better performance, and cost per port.
One of the biggest Layer 2 design mistakes within the Access and Distribution Layers is to forget about the importance of STP and the mapping between VLANs and STP instances. Another big mistake is cutting costs in the Distribution Layer area by not purchasing modular devices. This will cause scalability problems when the number of end-users increases because the entire distribution design will have to be reconsidered.
The backbone connects all the buildings in the campus and is comprised of a combination of Layer 2 and Layer 3 switching. As mentioned, the Core Layer’s scalability factor is low because backbone hardware should not have to be replaced for a long time. On the other hand, the network core should have a high degree of availability and performance because it integrates all the other network modules and all the network traffic passes through the backbone devices. A failure in the backbone area means a disrupted service at another network layer, whereas a failure in the Access or Distribution Layers does not affect Core Layer functionality.
Another feature of the Core Layer block is the high cost per port. The connections to the Distribution Layer block and the edge distribution submodule often use MM fiber. In the server farm block, you will generally have Layer 3 switching, with the SAN using fiber technology. The server farm block should operate on Layer 3 switching technology and should have medium scalability, but high availability and performance.
The edge distribution submodule uses Layer 3 switching and fiber to connect to the backbone and to the Distribution Layer block. As for the backbone, the edge distribution submodule does not have to be very scalable. The availability, performance, and costs are comparable to the ones in the server farm block. As mentioned earlier, you must pay extra attention to the security features implemented in the edge distribution submodule (e.g., firewalls, IDSs, etc.).
The importance of different features per network components, when considering the switching infrastructure, is summarized below:
Components | Scalability | Availability | Performance | Cost
Access Layer | HIGH | LOW | LOW | LOW
Distribution Layer | HIGH | MEDIUM | MEDIUM | MEDIUM
Backbone | LOW | HIGH | HIGH | HIGH
Server Farm | MEDIUM | HIGH | HIGH | HIGH
Edge Distribution | LOW | HIGH | HIGH | HIGH
Analyzing Application Traffic
One of the first enterprise campus design issues involves analyzing the application traffic as it relates to the switched network design. The traffic patterns usually fall into one of the following scenarios:
- Local within a segment/module/submodule
- Distant (remote) traffic patterns, which traverse different segments or cross submodules or modules in the campus design
Networks were originally designed according to the 80/20 rule, which states that 80% of the traffic is local and 20% is remote. This concept has changed with the evolution of enterprise networking and distributed server networking; in modern campus networks, the ratio is now 20/80, where 20% of the traffic is local and 80% crosses modules and segments. This change occurred as a result of servers no longer sitting in the workgroup areas.
Generally, the application and backbone servers are placed in a server farm area. This puts a much higher load on the backbone because much of the traffic from the client side is going to the servers in the server farm through Core Layer devices. This changes the way you will analyze application traffic.
In order to exemplify the 80/20 rule, consider a workgroup area with various devices (e.g., workstations, laptops, printers, etc.) connected to a basic Layer 2 switch using VLANs. The inter-VLAN routing is accomplished on the routers that also allow access to an e-mail server. According to the 80/20 rule, 80% of the traffic stays within the VLAN, while 20% of the traffic is going to the e-mail server, as illustrated in Figure 4.8 below:
Figure 4.8 – Example of the 80/20 Rule
On the other hand, for an example of the modern 20/80 rule, consider a situation in which there are multiple logical departments using common resources, with applications distributed throughout the organization. This means there are no dedicated servers located within the department, for example, database servers or file servers. All the data is stored in the server farm block. The end-user devices connect to a series of Layer 2 or low-end Layer 3 switches before reaching the Distribution Layer block, where the high-end Layer 3 switches with high availability are located. The data flow finally reaches the server farm block, which consists of modern servers (e.g., e-mail, application, collaboration, or database servers). In this example, the traffic distribution reflects the 20/80 rule, meaning 20% of the network traffic stays local, while 80% of the traffic moves across the Distribution Layer and backbone of the network, as illustrated below in Figure 4.9. This is the reason you want your Distribution Layer and Core Layer links to be highly available and fast, in order to move data across the entire enterprise.
Figure 4.9 – Example of the 20/80 Rule
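To see what the shift from 80/20 to 20/80 means for link sizing, the hypothetical Python sketch below estimates how much traffic crosses the Distribution and Core Layers under each split; the user count and per-user traffic figure are assumed values used only for illustration.

```python
def backbone_load_mbps(users, avg_mbps_per_user, remote_fraction):
    """Traffic that leaves the local segment and crosses the
    Distribution/Core Layers under the given local/remote split."""
    total = users * avg_mbps_per_user
    return total * remote_fraction

users, per_user = 400, 2.0   # assumed values
print("80/20 design:", backbone_load_mbps(users, per_user, 0.20), "Mbps on the backbone")
print("20/80 design:", backbone_load_mbps(users, per_user, 0.80), "Mbps on the backbone")
```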
The diagrams presented above represent a single building, but in a large campus enterprise, you will have multiple buildings connected by the network backbone module. The network backbone is connected to the edge distribution submodule in order to provide external access from the network (e.g., WAN, remote access, e-commerce, or VPN).
Analyzing Multicast Traffic
With the advances in collaboration tools that use the World Wide Web and the Internet, it is very likely that the organization will have to support multicast traffic.
The process of multicasting, as opposed to broadcasting or unicasting, has the advantage of saving bandwidth because it sends a single stream of data to multiple nodes. The multicasting concept is used by modern corporations around the world to deliver data to groups for purposes such as the following:
- Corporate meetings
- Video conferencing
- E-learning solutions
- Webcasting information
- Distributing applications
- Streaming news feeds
- Streaming stock quotes
The multicast data is sent to a multicast group, and users receive the information by joining the specific multicast group using Internet Group Management Protocol (IGMP). Cisco multicast-enabled routers running multicast routing protocols such as Protocol Independent Multicast (PIM) can be used to forward the incoming multicast stream toward the appropriate switch ports.
Cisco switches implement multicasting efficiently using two main protocols: Cisco Group Management Protocol (CGMP) and IGMP snooping. CGMP allows switches to communicate with multicast-enabled routers to figure out whether any users attached to the switches are part of a particular multicast group and whether they are qualified to receive the specific stream of data. IGMP snooping allows the switch to intercept IGMP membership reports (multicast receiver registration messages) and, based on the gathered information, adjust its forwarding table. IGMP snooping works only on Layer 3-aware switches because IGMP is a Layer 3 protocol.
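As a conceptual illustration (not a description of the actual switch implementation), the Python sketch below models the state that IGMP snooping maintains: a table of multicast groups and the switch ports that have registered as receivers, so that multicast frames are forwarded only to those ports plus the multicast router port instead of being flooded to the whole VLAN.

```python
from collections import defaultdict

class IgmpSnoopingTable:
    """Simplified model of the group-to-ports state built by IGMP snooping."""

    def __init__(self):
        self.groups = defaultdict(set)   # multicast group address -> member ports

    def report(self, group, port):       # host joins (IGMP membership report)
        self.groups[group].add(port)

    def leave(self, group, port):        # host leaves the group
        self.groups[group].discard(port)

    def egress_ports(self, group, router_port):
        # Forward only to registered receivers plus the multicast router port.
        return self.groups[group] | {router_port}

table = IgmpSnoopingTable()
table.report("239.1.1.10", port=3)
table.report("239.1.1.10", port=7)
print(table.egress_ports("239.1.1.10", router_port=24))   # ports 3, 7, and router port 24
```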
Note: CGMP is a Cisco-specific protocol that is now considered legacy. IGMP snooping is used more often, on both Cisco and non-Cisco devices.
Understanding whether the organization will use multicast traffic and the way this will be accomplished is another important issue when designing the campus switching infrastructure.
Analyzing Delay-Sensitive Traffic
If multicasting, web streaming, e-commerce, e-learning solutions, or IP Telephony are used, the traffic involved will be delay-sensitive, and QoS techniques might be necessary to ensure that this type of traffic is treated with priority.
In WAN environments (e.g., Frame Relay) using EIGRP, OSPF, or BGP as the routing protocols with the ISP, it is very common to use QoS techniques to shape and control traffic at the IP layer. You can also use QoS at Layer 2. When applying QoS to delay-sensitive traffic at Layer 2, there are four categories of QoS techniques, as follows:
- Tagging and traffic classification
- Congestion control
- Policing and traffic shaping
- Scheduling
Figure 4.10 – Layer 2 QoS Techniques
By examining Figure 4.10 above, you can see that tagging and traffic classification can happen between the end-user nodes, through the Access Layer and up to the Distribution Layer. This is where packets are classified, grouped, and partitioned based on different priority levels or classes of service. This occurs by inspecting the Layer 2 frame headers and determining the priority of the traffic based on the type of traffic (e.g., IP Telephony, Telnet traffic, printer traffic, file services, or other traffic). In this way, the traffic can be tagged and classified, and the Layer 2 frame can be marked according to its priority.
Note: Tagging and traffic classification is also called traffic marking.
The next three techniques – congestion control, policing and traffic shaping, and scheduling – occur in the Distribution Layer block and the edge distribution submodule, primarily on Layer 3 switches. Avoid applying any QoS techniques at the Core Layer because you want as little overhead as possible on the backbone devices so they can successfully achieve their goals, which are fast connectivity, high availability, and reliability.
Congestion control involves the interfaces of the Access Layer switches and the queuing mechanisms configured on them. Queuing techniques are used in order to deal with the congestion of packets coming into and going out of the switch ports. This method ensures that traffic from critical applications will be forwarded properly, especially when using real-time applications (e.g., VoIP), while reducing the amount of jitter and latency as much as possible so the specific application will function in an optimal way, with as little delay as possible.
Policing and shaping help move important traffic to the top of the list and dynamically adjust the priorities of certain types of traffic during periods of congestion. Scheduling is the process of establishing the order in which the congested queues will be served.
QoS techniques can be used to allow Layer 2 switches to assist in the queuing process by scheduling certain processes. Much of the QoS activity happens at Layer 3, but many features can also be taken into consideration at Layer 2, and they can be implemented on higher-end switches to provide support for tagging, traffic classification, congestion control, policing and shaping traffic, and traffic scheduling.
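The following Python sketch ties these techniques together in a simplified way: traffic classes are mapped to 802.1p CoS markings and then served in strict-priority order. The class-to-CoS mapping reflects common practice (e.g., CoS 5 for voice) but is an illustrative policy, not a configuration taken from this text.

```python
import heapq

COS_BY_CLASS = {"voice": 5, "video": 4, "call-signaling": 3, "best-effort": 0}

def classify(traffic_class):
    """Tagging/classification: pick the CoS value written into the 802.1Q tag."""
    return COS_BY_CLASS.get(traffic_class, 0)

def schedule(frames):
    """Scheduling: serve higher-CoS frames first (simple priority queueing)."""
    queue = [(-classify(tc), seq, payload) for seq, (tc, payload) in enumerate(frames)]
    heapq.heapify(queue)
    while queue:
        neg_cos, _, payload = heapq.heappop(queue)
        yield -neg_cos, payload

frames = [("best-effort", "file transfer"), ("voice", "RTP packet"),
          ("video", "conference stream")]
for cos, payload in schedule(frames):
    print(f"CoS {cos}: {payload}")
```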
Designing the Access Layer Block
Whenever a network designer is in the process of designing the campus infrastructure’s Access Layer block, the following important questions must be answered:
- What are the current and future needs for end-users or node ports in the existing wiring closets of that particular building?
- What kind of hardware can the company or the client afford? Can it afford modular devices? This will determine the degree of scalability in the Access Layer, an important factor that will allow the business to grow.
- Is the existing cabling adequate? Do you have Cat5 or Cat6 UTP cabling?
- Can you afford fiber cabling? If you are moving into a new building, you might consider installing FO cabling, even at the Access Layer.
- What are the performance and bandwidth requirements?
- What level of high availability is needed at the Access Layer? Generally, in the Access Layer block, you will not need as much redundancy. A certain degree of high availability might be needed if using modular network devices.
- What are the requirements to support VLAN, VTP, and STP? In a large enterprise campus design, you might not need to use multiple VLANs, so you can go straight to using Layer 3 technologies in the Access Layer to avoid having multiple broadcast domains and to decrease the complexity of the STP.
- What are the Layer 2 traffic patterns for applications?
- What multicasting needs and QoS services are necessary at Layer 2?
Designing the Distribution Layer Block
The Distribution Layer block combines and aggregates the Access Layer block components, and it uses Layer 2 and Layer 3 switching to break up the workgroups or VLANs, isolate the different network segments as failure domains, and reduce broadcast storms. It also acts as a transit module between the Access and the Core Layers.
Some important questions that must be answered before designing the Distribution Layer block include the following:
- Should Layer 2 or Layer 3 switches be used? Cost is a big issue in this decision, and the available budget will dictate the hardware to be used.
- How many total users will you have to support? With a high number of users (>500), Layer 3 switching will be essential in the Distribution Layer.
- What are the high availability needs?
- Are the Distribution Layer switches modular and scalable?
- What types of intelligence services will be used in the Distribution Layer? You must consider different features that will be implemented in the Distribution Layer, such as security, QoS, or multicasting. If any of these features are implemented, Layer 3 switching is mandatory.
- Is the company prepared to manage and configure the Distribution Layer block? Should training or consultancy be added to the project budget to ensure that this particular block will be properly managed?
- Will advanced STP features be implemented? You should think about features such as RSTP, BackboneFast, or UplinkFast when connecting to the backbone block via Layer 2. These kinds of features can be found on almost all high-end modern switches and can help optimize and speed up the STP process. If a complete Layer 3 switching model is used, you do not have to think about STP or additional features.
Designing the Campus Backbone Block
The campus backbone design occurs very early in the overall infrastructure design process. As such, what follows are a few important questions you should ask yourself and your customers when it is time to design the campus backbone block:
- Do you have three or more separate locations (buildings) in the campus that are connected through an enterprise campus infrastructure? If you have only two buildings, you might not need a separate backbone block. A possible solution in this scenario would be to use high-speed fiber connections between the two buildings’ Distribution Layers.
- Based on the present infrastructure, will the solution to the campus backbone be a Layer 2, Layer 2/3, or Layer 3 switching solution? In the case of a large enterprise campus, do you have the budget for a full multilayer backbone solution throughout?
- Is the organization ready for a high performance, multilayer switching environment? Things to consider here are training, personnel, budget, applications, support services, and intelligence services.
- Does the customer want to simplify and lower the number of links between Distribution Layer switches and the server farm block/edge distribution submodule? If so, you could make changes to or augment the present network infrastructure and redesign the campus backbone.
- What are the performance needs? The bandwidth needs for all the applications and services should be analyzed.
- How many high-capacity links/ports are necessary for the campus backbone block?
- What are the high availability/redundancy demands? Multiple aspects should be taken into consideration, such as redundant connections, modules, and hardware platforms.
Data Center Considerations
The data center concept has greatly evolved over the last few years, passing through many phases because of evolving technology, as illustrated in Figure 4.11 below.
Figure 4.11 – Evolution of Data Centers
When they first appeared, data centers were centralized and used mainframes to manage their data. Mainframes were managed using terminals. Mainframes are still used in some modern data centers because of their resiliency, although they can be considered legacy data center components.
Note: Managing the data in a data center refers to both the storing and the processing of data.
The second-generation data centers used a distributed processing model and introduced the client/server architecture. Business applications were installed on servers, which were accessed by clients using their PCs. This brought a cost benefit compared with the mainframe model.
The third-generation data centers are focused on modern technologies such as virtualization, which further reduces costs (i.e., communication equipment performance has increased while costs have decreased). These factors make this approach more efficient than the distributed data center model. Virtualization results in higher utilization of computing and network resources by sharing and allocating them on a temporary basis.
Data Center Components
From an architecture standpoint, modern data centers include virtualization technologies and processing, storage, and networking services. All of these features combined enable the following:
- Flexibility
- Visibility
- Policy enforcement
The three major components of the Cisco data center architecture framework are as follows:
- Virtualization:
- Cisco Nexus 1000V virtual switch for VMware ESX delivers per-virtual-machine visibility and policy control for SAN, LAN, and unified fabric.
- Cisco Unified Computing System (UCS) unifies data center resources into a single system that offers end-to-end optimization for virtualized environments.
- Virtualization of SAN and device contents helps to converge multiple virtual networks.
- All of the features above lead to simplification of the data center architecture and reduce the TCO.
- Unified fabric:
- Unified fabric technologies include Fibre Channel over Ethernet (FCoE) and Internet Small Computer Systems Interface (iSCSI), which usually offer 10 Gbps transfer rates.
- Unified fabric is supported on Cisco Catalyst and Nexus series (iSCSI). Cisco MDS storage series is designed and optimized to support iSCSI.
- Converged network adapters are required for FCoE.
- FCoE is supported on VMware ESX.
- Unified computing:
- Cisco introduced UCS as an innovative next-generation data center platform that converges virtualization, processing, network, and storage into a single system.
- Unified computing allows the virtualization of the network interfaces on servers.
- Unified computing increases productivity with rapid, on-demand provisioning using service profiles.
Figure 4.12 – Topology of Data Centers
In Figure 4.12 above, the top layer of the data center topology includes virtual machines that are hardware-abstracted into software entities running a guest OS on top of a hypervisor (i.e., resource scheduler).
Below this layer are the unified computing resources, which contain the service profiles that map to the identity of the server. The identity of the server contains details such as the following:
- Memory
- CPU
- Network interfaces
- Storage information
- Boot image
The next layer, consolidated connectivity, contains technologies such as 10 GigabitEthernet, FCoE, and Fibre Channel, and all of these are supported on the Cisco Nexus series. FCoE allows native Fibre Channel to be used on 10 Gbps Ethernet networks.
Below this layer is the virtualization layer, which includes technologies such as VLANs and VSANs that provide connectivity for virtualized LANs and SANs by segmenting multiple LANs and SANs on the same physical equipment. The logical LANs and SANs do not communicate with each other.
The bottom layer is formed by the virtualized hardware, which consists of storage pools and virtualized network devices.
Server Considerations
Some very important aspects to consider when deploying servers in a data center include the following:
- Required power
- Rack space needed
- Server security
- Virtualization support
- Server management
The increasing number of servers used necessitates more power, and this has led to the need for energy efficiency in the data center. Rack servers usually consume a great deal of energy, even though they are low cost and provide high performance.
An alternative to standalone servers is blade servers. They provide similar computing power but require less space, power, and cabling. Blade servers are installed in a chassis that allows them to share network connections and power, which reduces the number of cables needed.
Server virtualization is supported on both standalone and blade servers and provides scalability and better utilization of hardware resources, which increases efficiency. Server management is a key factor in server deployment, which is accomplished using different products that offer secure remote management capabilities.
Data Center Facility Spacing Considerations
Facility spacing and other considerations help to size the overall data center and decide where to position the equipment in order to provide scalability. The available space defines the number of racks that can be installed for servers and network equipment. Other important factors to consider are the floor loading parameters.
Estimating the correct size of the data center has great influence on costs, longevity, and flexibility. Several factors must be considered, including the following:
- The number of servers
- The amount of storage equipment
- The amount of network equipment
- The number of employees served by the data center infrastructure
- The space needed for non-infrastructure areas: storage rooms, office space, and other areas
- The weight of the equipment
- Loading (this determines how many devices should be installed)
- Heat dissipation
- Power consumption and power type (including UPS and PDU)
An oversized data center will induce unnecessary costs; on the other hand, an undersized data center will not satisfy computing, storage, and networking requirements and will impact productivity.
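The hypothetical Python sketch below shows how a few of the factors listed above can be combined into a first-pass sizing estimate; every input figure (servers per rack, floor area per rack, non-infrastructure overhead) is an assumption chosen for illustration only.

```python
import math

def size_data_center(servers, servers_per_rack=30, sq_m_per_rack=2.5,
                     non_infrastructure_factor=1.4):
    """Rough rack count and floor space estimate from an assumed server count."""
    racks = math.ceil(servers / servers_per_rack)
    equipment_area = racks * sq_m_per_rack
    # Storage rooms, office space, and other non-infrastructure areas.
    total_area = equipment_area * non_infrastructure_factor
    return racks, equipment_area, total_area

racks, equip_area, total_area = size_data_center(servers=600)
print(f"{racks} racks, ~{equip_area:.0f} m2 of equipment space, "
      f"~{total_area:.0f} m2 total")
```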
When thinking about the data center facility, the following factors must be considered:
- Available space
- Floor load
- Power capacity
- Cooling capacity (required temperature and humidity levels)
- Cabling needs
Physical security is another important consideration because data centers contain equipment that hosts sensitive company data, which must be secured from outsiders. Access to the data center must be well controlled. In addition to physical security, other factors are of great importance, including fire suppression, which will help protect the equipment from disasters.
Data center facilities must be properly designed for future use because they are limited in capacity. They must also provide an infrastructure that can recover network services, data, and applications, as well as provide high availability.
Data center design must be considered early in the building development process, and this must be accomplished with a team of experts in various fields such as networking, power, heating, ventilation, and air conditioning. The team members must work together to ensure the component systems interoperate effectively.
Data Center Power Considerations
The power in the data center facility is used to power servers, storage, network equipment, cooling devices, sensors, and other additional systems. The most power-consuming systems include servers, storage, and cooling systems. The process of determining the power requirements for data center equipment is difficult because of the many variables that must be taken into consideration. In addition, power usage is greatly impacted by the server load.
Various levels of power redundancy can affect the capital and operational expenses, based on the chosen options. Determining the right amount of power redundancy to meet the data center’s requirements requires careful planning.
Estimating necessary power capacity involves collecting the requirements for all the current and future equipment, such as the following:
- Servers
- Storage
- Network devices
- UPS
- Generators
- HVAC
- Lighting
The power system should be designed so it also includes additional components such as PDUs, electrical conduits, and circuit breaker panels. Implementing a redundant system will provide protection for utility power failures, surges, and other electrical problems.
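As a simple illustration of how such an estimate might be assembled, the Python sketch below sums assumed equipment loads, applies a diversity factor (devices rarely draw their nameplate power simultaneously), and adds redundancy headroom; all of the numbers are placeholders, not measured values.

```python
LOADS_WATTS = {        # current and future equipment, illustrative values only
    "servers": 120_000,
    "storage": 25_000,
    "network": 15_000,
    "hvac": 60_000,
    "lighting": 5_000,
}

def required_capacity(loads, diversity=0.8, redundancy_headroom=0.25):
    """Expected draw after diversity, plus headroom for redundancy."""
    expected_draw = sum(loads.values()) * diversity
    return expected_draw * (1 + redundancy_headroom)

capacity_w = required_capacity(LOADS_WATTS)
print(f"Provision roughly {capacity_w / 1000:.0f} kW of UPS/utility capacity")
```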
Some key considerations related to the data center power system include the following:
- Provide physical electrical infrastructure
- Design for redundancy
- Define the overall power capacity
Data Center Cooling Considerations
Based on the type of equipment used, careful heating and cooling calculations must be provided. Blade server deployments allow for more efficient use of space but increase the amount of heat per server.
The increased use of high-density servers must be addressed through careful data center design. Cooling considerations need to be taken into account when sizing server deployments. Some cooling solutions that address increased heat production include the following:
- Increase the space between the racks
- Increase the number of HVAC units
- Increase the airflow between the devices
Data center equipment produces variable amounts of heat depending on the load. Heat decreases the reliability of the devices, so cooling must be used to control temperature and humidity levels. This applies to data center subsystems, racks, and individual devices.
In order to design a proper cooling system, environmental conditions must be measured. Computing power and memory requirements demand more power, which leads to more heat being dissipated. The increase in device density also leads to an increase in the heat level.
The amount of heat can be reduced by designing proper airflow. Sufficient cooling equipment must be available to produce acceptable temperatures within the data center. An efficient technique is arranging the data center racks with an alternating pattern of “hot” and “cold” aisles. The cold aisle should have equipment arranged face to face and the hot aisle should have them arranged back to back. The cold aisle should have perforated floor tiles drawing cold air from the floor into the face of the equipment, while the hot aisle should be isolated in order to prevent hot air mixing with cold air.
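Because virtually all of the electrical power drawn by IT equipment is released as heat, cooling capacity can be estimated directly from the power load. The Python sketch below uses the standard conversions (1 W is about 3.412 BTU/hr, and 12,000 BTU/hr equals one ton of cooling); the IT load and the safety margin are assumptions for illustration.

```python
def cooling_requirement(it_load_watts, margin=0.20):
    """Convert an IT power load into required cooling capacity."""
    btu_per_hour = it_load_watts * 3.412 * (1 + margin)   # watts -> BTU/hr plus margin
    tons_of_cooling = btu_per_hour / 12_000               # 12,000 BTU/hr per ton
    return btu_per_hour, tons_of_cooling

btu, tons = cooling_requirement(it_load_watts=160_000)
print(f"~{btu:,.0f} BTU/hr -> about {tons:.1f} tons of cooling capacity")
```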
Some other cooling techniques that can be used for equipment that does not exhaust heat to the rear include the following:
- Using cabinets with mesh fronts and backs
- Increasing the height of the raised floor
- Spreading out equipment onto unused racks
- Blocking unnecessary air escapes to increase airflow
Data Center Cabling Considerations
A passive infrastructure for the data center is essential for optimal system performance. The physical network cabling between devices determines how these devices communicate with one another and with external systems.
The cabling infrastructure type chosen impacts the physical connectors and the media type of the connectors. This must be compatible with the equipment interfaces. Two options are widely used today: copper and fiber optic cabling.
The advantages of fiber optics are that it is less susceptible to external interference and that it operates over greater distances than copper cabling does. Cabling must remain well organized in order to keep the passive infrastructure easy to maintain. Cabling infrastructure usability and simplicity are influenced by the following:
- Number of connections
- Media selection
- Type of cabling termination organizers
All of these parameters must be considered in the initial design of the data center. A suitable cabling infrastructure design can prevent issues such as the following:
- Difficult troubleshooting
- Downtime
- Improper cooling
Using cable management systems is essential in preventing these issues. These systems consist of integrated channels located above the rack for connectivity. Cabling should be located in the front or rear of the rack for easy access. When data center cabling is deployed, space and device location constraints make cabling infrastructure reconfiguration difficult. Scalable cabling is essential for proper data center operation and lifespan. On the other hand, badly designed cabling will lead to downtime because of expansion requirements that were not planned in the design phase.
Enterprise Data Center Infrastructure
As with any enterprise network, the enterprise data center architecture follows the multilayer approach, and it can be structured in the Core, Distribution/Aggregation, and Access Layers. This is the most common model used, and it supports a variety of devices, including standalone servers, blade servers, and mainframes, as illustrated in Figure 4.13 below:
Figure 4.13 – Enterprise Data Center Infrastructure
In the diagram above, the Access Layer provides physical port density and both Layer 2 and Layer 3 services for server connectivity. The Aggregation Layer connects the Access and Core Layers and provides security (i.e., ACLs, firewalls, or IPSs) and other server farm services, such as caching, content switching, and SSL offloading. Redundancy is an essential factor at the Aggregation and Core Layers, as well as between them, in order to avoid single points of failure.
Data Center Core Layer
The data center’s Core Layer (Figure 4.14) is usually a centralized Layer 3 routing layer to which one or more Aggregation Layer blocks connect. The Core Layer can inject a default route into the data center Aggregation Layer after the data center networks are summarized. If multicast applications are used, the data center’s Core Layer must also support IP Multicast features.
Figure 4.14 – Data Center’s Core Layer
Smaller data centers may use a collapsed Core Layer design that combines the Aggregation Layer and the Core Layer into a single entity. Some of the most important data center Core Layer features include the following:
- Low-latency switching
- 10 GigabitEthernet
- Scalable IP Multicast support
Data Center Aggregation Layer
The role of the data center’s Aggregation Layer (Figure 4.15) is to aggregate Layer 2 and/or Layer 3 links from the Access Layer and to connect upstream links to the data center’s Core Layer. Layer 3 connectivity is usually implemented between the Aggregation and Core Layers.
Figure 4.15 – Data Center’s Aggregation Layer
This is a critical point for security (Layer 4) and application services, including the following:
- Server load balancing
- Firewalls
- IPS services
- SSL offloading
These services are commonly deployed in pairs using Cisco Catalyst 6500 chassis clusters, and their role is to maintain session state for redundancy purposes. By implementing this kind of architecture, the management overhead is reduced because fewer devices must be managed.
The boundary between Layer 2 and Layer 3 can be located at multilayer switches, firewalls, or content switching devices in the Aggregation (Distribution) Layer. If the enterprise needs separate policy and security zones, multiple independent Aggregation Layers can be built in order to support these requirements. This is also the place where first-hop redundancy protocols (FHRPs), such as HSRP or GLBP, are implemented. The Aggregation Layer is usually the place where the STP Root Bridges are positioned in order to help control the loop-free topology and support a larger STP processing load.
The most important data center Aggregation Layer benefits include the following:
- Aggregates traffic from the Access Layer and connects to the Core Layer
- Supports advanced security features
- Supports advanced application features
- Supports STP processing load
- Provides flexibility and scalability
Data Center Access Layer
The main purpose of the data center’s Access Layer (Figure 4.16) is to provide Layer 2 and Layer 3 physical port access for different kinds of servers. This layer consists of high-performance and low-latency switches. Most data centers are built using Layer 2 connectivity, although Layer 3 (routed access) is also available from a design standpoint. The Layer 2-based design uses VLAN trunking upstream, allowing aggregation services to be shared across the same VLAN and across multiple switches. Other benefits of Layer 2 access are support for NIC teaming and server clustering.
Figure 4.16 – Data Center’s Access Layer
Possible physical loops that might be present at Layer 2 are managed by STP. The recommended STP operation modes are Rapid Spanning Tree Protocol (RSTP) and Multiple Spanning Tree Protocol (MSTP), which assure improved scalability and fast convergence.
Using STP can be avoided by implementing routed access (Layer 3 switching). In this case, the hosts must be provided with default gateway information because the Access Layer switch becomes the first-hop router in the network.
The most important benefits of the data center’s Access Layer are as follows:
- Provides port density for server farms
- Provides high-performance, low-latency Layer 2 switching
- Supports single-homed and dual-homed servers
- Supports a mix of oversubscription requirements (a simple oversubscription calculation is sketched below)
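The oversubscription mentioned in the last bullet is simply the ratio of server-facing bandwidth to uplink bandwidth on an Access Layer switch. The Python sketch below computes it for a hypothetical 48-port switch with two 10 Gbps uplinks; the port counts and speeds are illustrative.

```python
def oversubscription(server_ports, server_speed_gbps,
                     uplink_ports, uplink_speed_gbps):
    """Ratio of downstream (server-facing) to upstream (uplink) bandwidth."""
    downstream = server_ports * server_speed_gbps
    upstream = uplink_ports * uplink_speed_gbps
    return downstream / upstream

ratio = oversubscription(server_ports=48, server_speed_gbps=1,
                         uplink_ports=2, uplink_speed_gbps=10)
print(f"Oversubscription ratio: {ratio:.1f}:1")   # 2.4:1 in this example
```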
Virtualization Considerations
Virtualization has become a critical component in most enterprise networks because of modern demands in IT, including increasing efficiency while reducing capital and operational costs. Virtualization is a critical component of the Cisco enterprise network architecture.
Virtualization can represent a variety of technologies, including abstracting the logical components from hardware or networks and implementing them in a virtual environment. Some of the advantages of virtualization include the following:
- Flexibility in managing system resources
- Better use of computing resources
- Consolidating low-performance devices into high-performance devices
- Providing flexible security policies
Some of the drivers behind implementing a virtualized environment are as follows:
- The need to reduce the number of physical devices that perform individual tasks
- The need to reduce operational costs
- The need to increase productivity
- The need for flexible connectivity
- The need to eliminate underutilized hardware
Virtualization can be implemented at both the network and the device level. Network virtualization involves the creation of network partitions that run on the physical infrastructure, with each logical partition acting as an independent network. Network virtualization can include VLANs, VSANs, VPNs, and VRFs (Virtual Routing and Forwarding).
On the other hand, device virtualization allows logical devices to run independently of each other on a single physical machine. Virtual hardware devices are created in software and have the same functionality as real hardware devices. The possibility of combining multiple physical devices into a single logical unit also exists.
The Cisco enterprise network architecture contains multiple forms of network and device virtualization, such as the following:
- Virtual machines
- Virtual switches
- Virtual local area networks
- Virtual private networks
- Virtual storage area networks
- Virtual switching systems
- Virtual routing and forwarding
- Virtual port channels
- Virtual device contexts
Device contexts allow the partitioning of a single physical device into multiple virtual devices called contexts. A context acts as an independent device with its own set of policies. The majority of features implemented on the real device are also functional in a virtual context. Some of the devices in the Cisco portfolio that support virtual contexts include the following:
- Cisco ASA
- Cisco ACE
- Cisco IPS
- Cisco Nexus series
Server virtualization allows the server’s resources to be abstracted in order to offer flexibility and usage optimization in the infrastructure. The result is that data center applications are no longer tied to specific hardware resources, so the applications are unaware of the underlying hardware. Server virtualization solutions are produced by companies such as VMware (ESX), Microsoft (Hyper-V), and Citrix (XenServer).
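As a simple illustration of the consolidation that server virtualization enables, the Python sketch below estimates how many virtual machines an assumed host can carry, bounded by whichever resource runs out first; the host and VM profiles are hypothetical, and real sizing must also account for hypervisor overhead, failover headroom, and I/O.

```python
def vms_per_host(host_cores, host_ram_gb, vm_vcpus, vm_ram_gb,
                 vcpu_per_core=4):
    """Consolidation estimate limited by CPU or memory, whichever is scarcer."""
    by_cpu = (host_cores * vcpu_per_core) // vm_vcpus
    by_ram = host_ram_gb // vm_ram_gb
    return min(by_cpu, by_ram)

# Assumed host: 32 cores, 512 GB RAM; assumed VM: 2 vCPUs, 8 GB RAM.
print(vms_per_host(host_cores=32, host_ram_gb=512, vm_vcpus=2, vm_ram_gb=8))
```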
The network virtualization design process must take into consideration the preservation of high availability, scalability, and security in the data center blocks. Access control issues must be considered in order to ensure legitimate user access and protection from external threats. Proper path isolation ensures that users and devices are mapped to the correct secure set of resources by creating independent logical traffic paths over a shared physical infrastructure.
Summary
An enterprise campus might be composed of multiple buildings that share centrally located campus resources. Enterprise campus design considerations fall under the following categories:
- Network application considerations: Network applications might include the following:
- Peer-to-peer applications (file sharing, instant messaging, IP Telephony, and video conferencing)
- Client/local server applications (applications on servers located close to clients or servers on the same LAN)
- Client/server farm applications (e-mail, file sharing, and database applications)
- Client/enterprise edge server applications (Internet-accessible web and e-commerce applications)
- Environmental considerations: Network environmental considerations vary with the scope of the network, such as the following:
- Intra-building: An intra-building network provides connectivity within a building. The network contains both Access and Distribution Layers. Typical transmission media include twisted pair, fiber optics, and wireless technology.
- Inter-building: An inter-building network provides connectivity between buildings that are within 2 km of each other. Inter-building networks contain the Distribution and campus Core Layers. Fiber optic cabling is typically used as the transmission medium.
- Remote buildings: Buildings separated by more than 2 km might be interconnected by company-owned fiber, a company-owned WAN, or by service provider offerings (e.g., metropolitan-area network [MAN]).
Common transmission media choices include the following:
- Twisted pair copper (up to 100 m, max. 10 Gbps, low cost)
- Multi-mode fiber (up to 2 km, max. 10 Gbps, moderate cost)
- Single-mode fiber (up to 80 km, max. 10 Gbps, high cost)
- Wireless technologies (up to 500 m, max. 54 Mbps, moderate cost)
When selecting infrastructure devices, Layer 2 switches are commonly used for Access Layer devices, whereas multilayer switches are typically found in the Distribution and Core Layers. Selection criteria for switches include the need for QoS, the number of network segments to be supported, required network convergence times, and the cost of the switch.
When designing the enterprise campus, different areas of the campus (i.e., building access, building distribution, campus core, and server farm) require different device characteristics (i.e., Layer 2 versus multilayer technology, scalability, availability, performance, and per port cost).
Access Layer best practices include the following:
- Limit the scope of most VLANs to a wiring closet.
- If you use the Spanning Tree Protocol (STP), select Rapid Per VLAN Spanning Tree Plus (RPVST+) for improved convergence.
- Remove unneeded VLANs from trunks.
- Consider the potential benefits of implementing routing at the Access Layer to achieve, for example, faster convergence times.
Distribution Layer considerations include the following:
- Switches selected for the Distribution Layer block require wire speed performance on all their ports.
- The need for high performance is related to the roles of a Distribution Layer switch: acting as an aggregation point for Access Layer switches and supporting high-speed connectivity to campus Core Layer switches.
- The key roles of a Distribution Layer switch demand redundant connections to the campus Core Layer; design redundancy such that a Distribution Layer switch could perform load balancing to the campus Core Layer.
- Distribution Layer switches should support network services such as high availability, QoS, and policy enforcement.
Campus Core Layer considerations include the following:
- Evaluate whether a campus Core Layer is needed; campus Core Layer switches interconnect Distribution Layer switches, and Cisco recommends that you deploy a campus Core Layer when interconnecting three or more buildings or when interconnecting four or more pairs of Distribution Layer switches.
- Determine the number of high-speed ports required to aggregate the Distribution Layer.
- For high-availability purposes, the campus Core Layer should always include at least two switches, each of which can provide redundancy to the other.
Server farm considerations include the following:
- Determine server placement in the network; for networks with moderate server requirements, common types of servers can be grouped together in a separate server farm block connected to the campus core using multilayer switches.
- All server-to-server traffic should be kept within the server farm block, not propagated to the campus Core Layer.
- For large network designs, consider placing the servers in a separate data center, which could potentially reside in a remote location.
- For security, place servers with similar access policies in the same VLANs and then limit interconnections between servers in different policy domains using ACLs on the server farm’s multilayer switches.
Enterprise data center architecture uses hierarchical design models, much like the campus infrastructure. However, there are some differences in these models. Large networks that contain many servers traditionally consolidate server resources in a data center. However, data center resources do not tend to be used effectively because the supported applications require a variety of operating systems, platforms, and storage solutions.
Evolution factors of modern data centers include the following:
- Using virtual machine software, such as VMware, to remove the requirement that applications running on different operating systems must be located on different servers
- Removing network storage from the individual servers and consolidating the storage in shared storage pools
- Consolidating I/O resources, such that servers have on-demand access to I/O resources
Data centers can leverage the Cisco enterprise data center architecture to host a wide range of legacy and emerging technologies, web applications, blade servers, clustering, service-oriented architecture, and mainframe computing.
An enterprise data center infrastructure design requires sufficient port density and Layer 2/Layer 3 connectivity at the Access Layer. The design must also support security services (i.e., ACLs, firewalls, and IPSs) and server farm services (i.e., content switching and caching). Some of the design best practices for an enterprise data center’s Access, Aggregation, and Core Layers include the following:
- Provide for both Layer 2 and Layer 3 connectivity
- Ensure sufficient port density to meet server farm requirements
- Support both single-attached and dual-attached servers
- Use RSTP or MSTP for loop-free Layer 2 topologies
- Use the data center’s Aggregation Layer to aggregate traffic from the data center’s Access Layer
- Provide for advanced application and security options
Designers commonly use modular chassis (e.g., Cisco Catalyst 6500 or 4500 series switches) in an enterprise Access Layer. Although this design approach does offer high performance and scalability, challenges can emerge in a data center environment. Server density has increased because of the one rack unit (1RU) concept and blade servers, resulting in the following issues:
- Cabling: Each server typically contains three to four connections, making cable management between high-density servers and modular switches more difficult.
- Power: Increased server and switch port density requires additional power to feed a cabinet of equipment.
- Heat: Additional cabling under a raised floor and within a cabinet can restrict the airflow required to cool equipment located in cabinets. In addition, due to higher-density components, additional cooling is required to dissipate the heat generated by switches and servers.