Identify virtual network components. Network virtualization technology has been around for a long time in the computing world, but only recently has it become popular in server and desktop environments, largely because of its ability to scale out very large deployments. Many virtual environments are made up of hundreds or thousands of devices, which raises a large number of related networking concerns.
This chapter covers virtual desktops and servers, virtual network devices, Network as a Service, and on-site versus off-site virtualization. It also covers which devices can be virtualized, how to identify virtual network components, and why and how this is accomplished. We cover network virtualization in our Cisco CCNP ENCOR course.
Virtual Desktops and Servers
Virtualization allows you to consolidate multiple physical devices onto a single physical device that is logically divided into smaller virtual domains (see Figure 7.1 below). In other words, it creates a software environment that emulates the hardware it replaces. The single device that hosts all those virtual servers must have plenty of resources available, in particular the following:
- CPU capacity
- Memory
- Disk space
- Bandwidth
Figure 7.1 – Network Virtualization
Virtualization involves running virtualization software on a single physical device to keep multiple virtual machines separated inside it. The virtualization software allocates a certain amount of disk space, memory, and CPU capacity to each virtual machine (VM) defined inside. If you want to build a new server, you simply carve out a new section of the physical device to create another virtual operating system and allocate the necessary resources, making it act and feel exactly as if it were a physical device (see Figure 7.2 below).
The software that makes this happen is called a virtual machine manager or a hypervisor (literally, a supervisor of supervisors). The hypervisor has the following responsibilities:
- Manages all the virtual systems
- Manages physical hardware resources
- Manages the VM relationships to the hardware components inside the physical server
- Bridges the virtual world to the physical world
- Maintains separation between virtual machines when you don’t want them to communicate with each other
Note: It is very important that developers of hypervisor software include proper security features to restrict visibility and access between VMs, even though the VMs sit on the same physical device.
Figure 7.2 – Virtualization Components and Hypervisor
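As a rough illustration of this carving-up process, here is a toy Python model of a host whose hypervisor allocates CPU, memory, and disk to VMs and refuses to oversubscribe. All names and numbers are invented; real hypervisors expose far richer interfaces than this sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """Physical server whose resources the hypervisor carves up (illustrative model)."""
    cpu_cores: int
    mem_gb: int
    disk_gb: int
    vms: dict = field(default_factory=dict)

    def free(self):
        """Remaining (cpu, memory, disk) after current VM allocations."""
        used = [sum(v[i] for v in self.vms.values()) for i in range(3)]
        return (self.cpu_cores - used[0], self.mem_gb - used[1], self.disk_gb - used[2])

    def create_vm(self, name, cpu, mem, disk):
        """Carve out a new VM, rejecting requests the host cannot satisfy."""
        f_cpu, f_mem, f_disk = self.free()
        if cpu > f_cpu or mem > f_mem or disk > f_disk:
            raise ValueError(f"insufficient resources for {name}")
        self.vms[name] = (cpu, mem, disk)

host = Host(cpu_cores=32, mem_gb=256, disk_gb=4000)
host.create_vm("web01", cpu=4, mem=16, disk=200)
host.create_vm("db01", cpu=8, mem=64, disk=1000)
print(host.free())   # → (20, 176, 2800)
```

The key point the model captures is that every VM's allocation is bounded by what the single physical device actually has.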
There are two types of hypervisors:
- Type 1 – Bare metal machine managers: With this type of virtual machine manager, you purchase a big server and simply load the VM software on the raw hardware. There is no underlying operating system involved and nothing else that you have to think about from an OS perspective. You simply load the hypervisor (e.g., VMware ESXi or Microsoft Hyper-V), which is the actual OS. This hypervisor type is often seen in very large enterprise server environments.
- Type 2 – Hypervisors that run on an existing OS: This type of virtual machine manager runs on top of a Windows, Linux, or macOS host (e.g., VMware Workstation or Oracle VirtualBox) and is often used in desktop environments.
The hypervisor allows you to start multiple virtual machines at once, as well as to network between them by configuring how the different systems communicate. This gives the system administrator a lot of power from both an OS and a networking perspective.
In enterprise environments, virtualization does not usually mean users running virtual systems on top of Windows or Linux hosts; instead, it means a bare metal installation. Because you will usually run tens or hundreds of servers on a single piece of hardware, that device needs a lot of resources, including:
- Multi-core CPU and multi-CPU sockets
- Large memory capacities (usually above 128GB, compared to 2 to 4GB used in desktop environments)
- Massive amounts of storage, internal or network-attached storage (NAS)
These large resource requirements make sense because you are consolidating all the servers into a single physical machine. You used to have a data center that had hundreds of servers (physical devices) plugged in at the same time. Now you have taken them all away and moved them into a single physical device. This server consolidation offers the following benefits:
- Saves a lot of room in the data center
- Increases flexibility on what you can do with the hardware
- Lowers costs on hardware, electricity, cooling, etc., both from a CAPEX (initial investment) and an OPEX (recurring operational and maintenance costs) perspective
Virtualization also affords a number of advantages from a management perspective:
- Fast deployment: You don’t have to buy a new computer, load an operating system, plug it into the network, find a place in the rack for it, and do all the administrative tasks necessary with a physical server. Using virtualization, you can build an OS in a matter of minutes with the VM manager software, which includes an IP address and pre-built software that you might have configured as a template.
- Managing the load across servers: If one particular server is very busy during a certain time of year, you can allocate additional memory and disk space for that period, then shift resources elsewhere as other servers come under heavier use. With a physical server, you would normally have to power off the machine and physically install memory modules; in a virtual environment, you don't have to worry about these time-consuming tasks. If you need more disk space or memory, you can increase the virtual resources with just a few clicks in the hypervisor. These advantages are the main reason virtual servers and networks have become so popular in modern data centers.
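The resource reallocation described above can be sketched as a toy operation on per-VM allocations. This is a simplified model with invented names and figures, not a real hypervisor interface:

```python
# Toy model: per-VM memory allocations on a fixed-capacity host (names hypothetical).
host_mem_gb = 256
vms = {"web01": 32, "report01": 16, "db01": 64}   # memory allocated per VM, in GB

def resize(vm, new_mem_gb):
    """Grow or shrink a VM's memory allocation, refusing to oversubscribe the host."""
    other = sum(m for name, m in vms.items() if name != vm)
    if other + new_mem_gb > host_mem_gb:
        raise ValueError("host has insufficient free memory")
    vms[vm] = new_mem_gb

resize("report01", 96)   # busy season: give the reporting server more memory
resize("web01", 16)      # ...while trimming a quieter server
print(vms["report01"], vms["web01"])   # → 96 16
```

The "few clicks" in the hypervisor amount to exactly this kind of bounded reallocation, with no physical hardware changes.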
Virtual Switches
As with real servers, the virtual machines managed by the hypervisor need to communicate with each other and with the outside world to accomplish different tasks (e.g., an application server communicating with a database server). This leads to the concept of virtualizing networking devices, in addition to virtualizing desktops and servers as detailed in the previous section (see Figure 7.3 below).
Before moving to the virtual world, servers and desktops were connected to networks composed of enterprise switches, firewalls, routers, and other devices that offered necessary functionality and features, including redundancy features. Now that servers and desktops have moved to virtual worlds, network devices also have to migrate to the virtual environment to provide similar functionality. This is an important consideration when making the change from the physical world to the virtual world.
Figure 7.3 – Virtualization of Network Devices
Network virtualization is often almost as important as the actual server virtualization. When migrating from a physical to a virtual network infrastructure, a number of challenges must be taken into consideration:
- Integration with the outside world: how many NICs will the physical hosting machine have?
- How will the cumulative bandwidth from all the servers in the physical world be transposed to the virtual world and be accommodated with a limited number of Ethernet connections (sometimes just one)?
- Will the throughput offered by the physical server be enough to properly serve the virtualized servers running on the system?
- How will network redundancy be built into the virtual environment (multiple network connections into the VMs)?
The considerations presented above become very important in terms of uptime and availability, especially in large data centers that host critical business applications. Because network virtualization eliminates the need for a dedicated connection per server, everything must now be accomplished in software, including assigning IP addresses, VLANs, and other specific configurations. This can be even more difficult to manage because you cannot physically touch the network equipment or trace the cabling to and from the servers; all of these functions are accomplished entirely in the hypervisor software.
By virtualizing the network layer, you not only transfer all the functionality to the virtual world but also gain extra features. Basic physical switches often lack built-in features such as redundancy, load balancing, or QoS. In a virtual environment, these features can be implemented and configured easily because everything is done in software, and the virtualization vendor may add extra tweaks so that certain applications can be given higher priority than others. For example, you can use the hypervisor's integrated load balancing to spread traffic across multiple VM Web servers.
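As an illustration of the load balancing idea, here is a minimal round-robin dispatcher over a hypothetical pool of VM web servers. The VM names are made up, and real hypervisor load balancers also track health and load rather than rotating blindly:

```python
from itertools import cycle

# Hypothetical pool of virtual web servers behind the load balancer.
web_vms = ["web01", "web02", "web03"]
next_vm = cycle(web_vms).__next__

def dispatch(request_id):
    """Round-robin: hand each incoming request to the next VM in the pool."""
    return (request_id, next_vm())

assignments = [dispatch(i) for i in range(6)]
print(assignments)
# → [(0, 'web01'), (1, 'web02'), (2, 'web03'), (3, 'web01'), (4, 'web02'), (5, 'web03')]
```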
Virtualizing network components offers two major advantages over using physical devices:
- Cost savings
- Centralized control
Many virtual systems also have basic integrated security features, such as firewall functionality built right into the virtualization software. Notably, third-party providers are starting to create virtual firewalls and Intrusion Prevention Systems (IPS) that can be loaded into these virtual environments to provide exactly the same security posture in the virtual world as you had in the physical world.
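To illustrate the kind of rule logic such a virtual firewall applies between VMs, here is a minimal first-match packet filter. The addresses, ports, and rules are invented for the example:

```python
import ipaddress

# Minimal first-match rule table (all networks and ports are made up).
RULES = [
    {"src": "10.0.1.0/24", "dst_port": 5432, "action": "allow"},  # app tier -> database
    {"src": "0.0.0.0/0",   "dst_port": 443,  "action": "allow"},  # anyone -> HTTPS
    {"src": "0.0.0.0/0",   "dst_port": None, "action": "deny"},   # default deny
]

def filter_packet(src_ip, dst_port):
    """Return the action of the first rule matching source address and port."""
    for rule in RULES:
        net_ok = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        port_ok = rule["dst_port"] in (None, dst_port)
        if net_ok and port_ok:
            return rule["action"]

print(filter_packet("10.0.1.7", 5432))   # → allow
print(filter_packet("10.0.2.9", 5432))   # → deny
```

The final catch-all rule enforces the separation between VMs that the hypervisor must maintain by default.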
Note: Virtual network devices can be part of the hypervisor system or they can be dedicated virtual machines that are loaded just like any other VM server.
Network as a Service
After virtualizing desktops, servers, and network devices, the next step is moving the entire network infrastructure into the "cloud," where it operates as a Network as a Service (NaaS). If the network becomes too complicated and you lack the expertise to build and maintain it, you can outsource it to another company and consume it as a service with all the required functionality (usually via a subscription); the network is then part of the cloud.
As virtualization software has become more popular, third-party providers have started to offer virtualization inside the cloud, with the customer keeping nothing at their own facility. All the applications, platforms, and the network are moved into the cloud, and all the company's IT functions are virtualized so that everything runs in a completely separate facility. The network and everything associated with managing it becomes invisible to the customer, who simply uses a single link connecting the local facility to the cloud without worrying about any network configuration. Everything runs remotely because the network operates as a service at a third-party facility.
When NaaS is offered in the cloud, any changes that occur within the network are invisible to the customer. The customer has a single connection to the cloud and does not care how the networking works once the information is sent there, as the provider is responsible for all of the virtualization services. This offers great flexibility when the provider wants to move all the servers into a data center with much more capacity and availability: the virtual systems can be picked up and redeployed almost immediately to a new physical location, possibly geographically distant from the original one, transparently and without the customer being affected in any way. Ultimately, the customer is not even interested in such details; the main concern is that the network and its applications keep running as expected.
There might be many reasons why you would want to take your network and move it into the cloud, running it as a service. One situation might be that you have an important application that is used by thousands of people, which requires a lot of resources and bandwidth to operate. Instead of having all the networking and communication resources at your facility, including large network pipes and very expensive connections, you can simply put this into the cloud and have it managed by third-party providers. These service providers already have high-capacity connections to the Internet so you don’t have to spend the money on the bandwidth and maintenance services of these connections.
Complete network virtualization offers another interesting advantage, commonly referred to as "follow the sun" service. Because servers can be relocated relatively quickly, service providers can move workloads between geographical regions to optimize resource utilization and response times: most of the traffic for a given application occurs during local daytime, which falls at different times across the globe.
Another advantage of network virtualization is the ease of expanding and contracting how many resources you use. If your applications are used by millions of people on a particular day or time period (e.g., tax applications), you can easily allocate more bandwidth, disk space, or memory with just a few clicks and suddenly increase the application's capacity. When the busy period has passed and you no longer need all the allocated resources, there is no need to keep paying for them: you can decrease certain parameters (e.g., network throughput or CPU cycles) to a level that better matches what the application is doing, again with just a few clicks.
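The pay-for-what-you-use arithmetic can be illustrated with a toy cost model comparing fixed peak provisioning against scaling up only for a short busy period. The rate and capacity figures are invented for the example:

```python
# Hypothetical price: 10 cents per capacity unit per hour.
RATE_CENTS = 10

def cost_cents(schedule):
    """schedule: list of (capacity_units, hours) segments; returns total cost in cents."""
    return sum(units * hours * RATE_CENTS for units, hours in schedule)

# Provisioning for the peak all month vs. bursting to 40 units for only 3 days.
fixed   = cost_cents([(40, 24 * 30)])
elastic = cost_cents([(10, 24 * 27), (40, 24 * 3)])
print(fixed, elastic)   # → 288000 93600
```

Under these made-up figures, paying for peak capacity only while it is needed costs roughly a third of provisioning for the peak permanently, which is the economic argument behind elastic scaling.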
If a customer uses NaaS and someday decides to move to a different location, the move makes no difference to how the applications perform, because they are hosted and managed by the service provider somewhere in the cloud and can be accessed from anywhere. Running NaaS inside the cloud provides a lot of functionality and can be a perfect fit for certain business applications and services.
On-Site versus Off-Site Virtualization
Virtualization technology offers many choices regarding where you manage and maintain the virtualized environment. You might keep everything on your premises, or you might choose to host it in a different location, off-site.
In an on-site configuration, you own and manage the infrastructure within your premises. You are responsible for building and maintaining it, and if there are any issues associated with the hosting aspect, you are responsible for solving them. There are a number of advantages to hosting the virtualized environment on-site:
- You have control over what happens. If anything needs to be changed or moved, you have complete control over every modification on the hosting devices and connections.
- You also have control over possible resource upgrades on the devices, including memory, disk space, CPU capacity, and bandwidth.
- You have complete security over the entire infrastructure. You can install the equipment in a locked room and limit access to the physical servers, which is something that you usually don’t have available if the virtual environment is hosted in a remote location (off-site).
There are also some disadvantages to the on-site approach:
- It is more costly than hosting the equipment at a third party because you have to purchase the servers, racks, connections, and operating systems, and you have to maintain a controlled environment (power, cooling, and physical security). All of these aspects involve both CAPEX (initial expenses) and OPEX (recurring expenses).
- You need a networking infrastructure that includes enterprise switches, routers, firewalls with redundancy, and security features built in.
- All of the factors above make the infrastructure hard to upgrade. You have to consider how much rack space is available and purchase new equipment, and rapid changes are difficult because you are constrained by the performance limits of the physical devices you own.
In an off-site environment everything is hosted in the cloud. You don’t have to worry about where these particular systems are in the data center as they don’t even exist at your facility. All the applications, servers, and operating systems are somewhere else and you don’t necessarily care where that is. This brings the following main advantages:
- You avoid infrastructure costs: no servers, no cooling, and no other up-front investment.
- The management and maintenance of all the infrastructure is handled by a third-party service provider, so you don’t need a lot of staff to manage the devices and make sure they operate properly.
- The infrastructure can be located anywhere in the world (single hosting location or multiple hosting locations).
- Many service providers offer huge-capacity virtual environments, so if you need more resources (e.g., disk space, memory, or bandwidth), the provider can supply them with minimum effort.
Hosting the infrastructure off-site also has some disadvantages:
- All of the customer data is stored at a different facility, with no physical access to it. In cases where the data is extremely sensitive, having your virtualized environment somewhere in the cloud may not be the best option.
- Off-site hosting has some associated contractual limitations. It usually involves signing a long-term contract with the service provider that offers limited flexibility for that duration. If your environment changes rapidly, you may need to renegotiate some of the contractual terms to avoid being constrained by them.
Virtual PBX
PBX stands for Private Branch Exchange, which is a phone control system. In this system, all of the telephones used internally in different companies are not directly connected to the service provider but are instead connected to a box in a local data center called the PBX. This generally offers more than simple voice communication services, including advanced features such as:
- Voice mail services
- Interactive voice response (IVR), which is a feature that allows users to navigate through a voice menu by pressing various phone buttons
- Ability to create call detail records from the inbound and outbound call information available on the PBX
- Music-on-hold services
One thing that tends to be very common with PBX devices is their reliability. The telephone system is something you can always count on: you can pick up the desk phone at any time and call anyone you want. When a PBX does fail, however, the usual result is that all the phones become unavailable.
For many companies (especially small and medium-sized ones), a PBX requires a lot of upkeep and is difficult to install and maintain, so the logical decision is to host it at a third-party provider. Putting voice communications in the cloud makes perfect sense given the advantages of server and network virtualization. With virtual PBX systems, you simply contract with a third party and connect all the phones to the virtual PBX. You then have no equipment to maintain on-site but keep the same PBX capabilities (except everything is now offered by a virtual PBX hosted in another facility).
Virtualizing the PBX means that the only voice-related infrastructure present on-site will be the actual telephones. When you pick up the phone, it will communicate with the remote PBX over IP and this allows having a minimal infrastructure at your site while at the same time keeping all the required functionality. In order to make this happen, you may need additional network configuration:
- There will be extra bandwidth requirements because all voice communications now traverse the external connection.
- You should carefully analyze Quality of Service requirements because you don’t want other applications that use the same connection influencing the call quality in any way.
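For the bandwidth planning mentioned above, a common back-of-the-envelope figure is the per-call bandwidth of the G.711 codec carried in standard RTP/UDP/IPv4/Ethernet framing, which works out to roughly 87 kbps per direction:

```python
# Back-of-the-envelope bandwidth check for sending calls over the WAN link.
# G.711 produces 64 kbps of audio, packetized every 20 ms.
PAYLOAD_BYTES = 160                 # 64 kbps of G.711 audio per 20 ms packet
HEADERS_BYTES = 12 + 8 + 20 + 18    # RTP + UDP + IPv4 + Ethernet header/FCS
PACKETS_PER_SEC = 50                # one packet every 20 ms

def call_bandwidth_kbps():
    """Bandwidth of one G.711 call, one direction, including headers."""
    return (PAYLOAD_BYTES + HEADERS_BYTES) * 8 * PACKETS_PER_SEC / 1000

print(call_bandwidth_kbps())        # → 87.2 (kbps per call, one direction)
print(call_bandwidth_kbps() * 30)   # rough requirement for 30 concurrent calls
```

Multiplying the per-call figure by the expected number of concurrent calls gives the extra capacity (and QoS-protected bandwidth) the external connection must provide.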
Virtual PBX systems might offer significant cost savings because PBX boxes are expensive, and the more your organization grows, the more capacity those devices have to offer to support all the users and required features. Having this provided by a third party can keep the costs low considering the following aspects:
- You don’t have any kind of management overhead. This includes maintenance and operations of the equipment hosted on-site.
- You don’t have to purchase any hardware in order for the system to operate properly.
- There is no power cost because the entire infrastructure is hosted by a third party.
- As the organization grows or shrinks, it is very easy to adjust how the telephone system works and which features it offers.
Note: Voice communications use dedicated protocols such as SIP (signaling and call control) and RTP (media transport), as described in previous chapters.
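Since SIP is a text-based protocol (RFC 3261), its messages can be inspected with ordinary string handling. Here is a minimal sketch that pulls the method and target URI out of a made-up INVITE request:

```python
# A minimal (invented) SIP INVITE; real requests carry many more headers.
invite = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 10.0.0.5:5060\r\n"
    "From: <sip:alice@example.com>\r\n"
    "To: <sip:bob@example.com>\r\n"
    "\r\n"
)

def parse_request_line(message):
    """Split a SIP request's first line into its method, target URI, and version."""
    method, uri, version = message.split("\r\n", 1)[0].split(" ")
    return method, uri, version

print(parse_request_line(invite))   # → ('INVITE', 'sip:bob@example.com', 'SIP/2.0')
```

The request line is what a virtual PBX examines first to decide how to route a call.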
Summary
Virtualization technology is something that has been around for a long time in the computing world, but only recently has it begun to be popular in server and desktop environments, as it is able to scale out very large environments. Many virtual environments are made up of hundreds or thousands of devices and because of this there are a large number of related networking concerns.
Virtualization allows you to take multiple physical devices and move them to a single physical device that is logically divided into smaller virtual domains. In other words, it allows you to create a software environment that emulates the hardware that used to be there before.
Virtualization involves having a single physical device on top of which you use some virtualization software (hypervisor) that is able to separate virtual machines inside the physical device. The virtualization software will allocate a certain amount of disk space, memory, and CPU capacity to each virtual machine (VM) defined inside.
There are two types of hypervisors:
- Type 1 – Bare metal machine managers
- Type 2 – Hypervisors that run on an existing OS
Before moving to the virtual world, servers and desktops were connected to networks composed of enterprise switches, firewalls, routers, and other devices that offered necessary functionality and features, including redundancy features. Now network devices have to migrate to the virtual environment to provide similar functionality. This is an important consideration when making the change from the physical world to the virtual world.
After virtualizing desktops, servers, and network devices, the next step is moving the entire network infrastructure into the cloud where it will operate as a NaaS (Network as a Service). If things become too complicated within the network and you don’t have the expertise to build and maintain it, you can outsource this process to another company and use it as a service, with all the required functionalities (usually by purchasing a subscription), and the network is now part of the cloud.
Virtualization technology offers many choices regarding where you manage and maintain the virtualized environment. You might keep everything on your premises, or you might choose to host it in a different location, off-site.
For many companies (especially small and medium-sized ones), a PBX requires a lot of upkeep and is difficult to install and maintain, so the logical decision is to host it at a third-party provider. Putting voice communications in the cloud makes perfect sense given the advantages of server and network virtualization. With virtual PBX systems, you simply contract with a third party and connect all the phones to the virtual PBX.
Pass your Network+ exam by using our 101 Labs – CompTIA Network+ book.