Explain the purpose and features of various network appliances. This chapter describes the following devices: load balancers, proxy servers, content filters, and VPN concentrators. We cover network appliances in our Network+ course.
Load Balancers
Load balancers are usually present in large environments in which different servers (e.g., Web, application, and database servers) are accessed by a large number of internal or external users. These servers are grouped into server farms, in which a number of servers together offer a single service, providing both performance and redundancy.
The load is usually shared across all servers in a server farm and load balancer devices are used to distribute the load from the users to multiple servers. This operation is invisible to the end-user, as the load balancer balances the requests from the users to the servers and responds to the users on behalf of the servers. In this operation, the users think they are communicating with a single server.
Using load balancers also provides fault tolerance in situations in which a server from a server farm goes down. The other servers will still be used by the load balancer and the end-users’ service will not be affected. Load balancers usually offer fast convergence because they can quickly detect that a server is down and remove it from the pool, forwarding requests only to the remaining healthy servers.
Based on the servers’ capabilities, load balancers can be configured to use them in different proportions. Some of the common load balancing algorithms include the following:
- Round robin (each request is sent to a different server)
- Weighted round robin (requests can be sent more often to specific servers based on weights assigned to them – unequal load balancing)
- Least connection (requests are sent to servers that are not busy with other connections)
- Weighted least connection (the same as least connection but with weights assigned to servers)
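The four algorithms above can be sketched in a few lines of Python. This is a toy illustration only; the server names, weights, and connection counts are made up for the example:

```python
import itertools

servers = ["web1", "web2", "web3"]
weights = {"web1": 3, "web2": 1, "web3": 1}   # web1 is the most powerful box
active = {"web1": 5, "web2": 2, "web3": 7}    # current open connections

# Round robin: cycle through the farm in order.
rr = itertools.cycle(servers)

# Weighted round robin: repeat each server according to its weight
# (unequal load balancing).
wrr = itertools.cycle([s for s in servers for _ in range(weights[s])])

def least_connection():
    """Pick the server with the fewest active connections."""
    return min(servers, key=lambda s: active[s])

def weighted_least_connection():
    """Like least connection, but normalized by each server's weight."""
    return min(servers, key=lambda s: active[s] / weights[s])

print([next(rr) for _ in range(4)])    # ['web1', 'web2', 'web3', 'web1']
print([next(wrr) for _ in range(5)])   # ['web1', 'web1', 'web1', 'web2', 'web3']
print(least_connection())              # web2 (only 2 open connections)
print(weighted_least_connection())     # web1 (5/3 is the lowest ratio)
```

Note how the weighted variants favor `web1`: with a weight of 3 it receives three requests per cycle and is still considered the least loaded despite having more open connections.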
Load balancers are also capable of TCP offloading, which means that the device sets up a TCP connection to each server once and then reuses it for subsequent requests. The alternative would waste time and resources because the load balancer would have to perform a new TCP handshake with a server every time it had to forward a user request.
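The effect of connection reuse can be shown with a small simulation. The class and counter below are illustrative, not part of any real product; the point is that 100 forwarded requests cost only one handshake:

```python
# Toy simulation of TCP offloading: the load balancer keeps one
# long-lived connection per server and reuses it for every client
# request, instead of performing a TCP handshake each time.

class OffloadingBalancer:
    def __init__(self, servers):
        self.servers = servers
        self.pool = {}           # server -> persistent "connection"
        self.handshakes = 0      # how many TCP handshakes were performed

    def _connect(self, server):
        if server not in self.pool:
            self.handshakes += 1            # three-way handshake happens once
            self.pool[server] = f"conn-to-{server}"
        return self.pool[server]

    def forward(self, request, server):
        conn = self._connect(server)        # reused on every later request
        return f"{request} via {conn}"

lb = OffloadingBalancer(["web1", "web2"])
for i in range(100):
    lb.forward(f"GET /page{i}", "web1")
print(lb.handshakes)   # 1 -- without offloading this would be 100
```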
Load balancers also include dedicated hardware components specialized for different functions, like SSL offloading (i.e., the encryption/decryption process is managed by the load balancers instead of the servers). Another function is caching, which allows the load balancers to answer some user requests faster using the built-in cache instead of requesting the data from the servers. Usually only the most frequent requests are cached because of memory limitations.
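The memory-limited caching behavior described above maps naturally to an LRU (least recently used) cache. The sketch below is a minimal illustration with made-up URLs and a tiny capacity; real devices use far larger caches:

```python
from collections import OrderedDict

# Minimal LRU cache sketch: only the most recently used responses fit
# in the limited memory, so the oldest entry is evicted when full.

class ResponseCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, url):
        if url in self.data:
            self.data.move_to_end(url)     # mark as recently used
            return self.data[url]          # served from cache, no server hit
        return None                        # cache miss: ask the real server

    def put(self, url, response):
        self.data[url] = response
        self.data.move_to_end(url)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = ResponseCache(capacity=2)
cache.put("/index", "<html>home</html>")
cache.put("/about", "<html>about</html>")
cache.get("/index")                            # refresh /index
cache.put("/contact", "<html>contact</html>")  # evicts /about
print(cache.get("/about"))                     # None -- evicted
print(cache.get("/index"))                     # still answered from cache
```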
Load balancers also offer QoS capabilities (e.g., prioritizing certain services over others) and content switching (load balancing at the Application Layer, where requests can be directed to particular servers based on the application or content requested).
Load balancers can be deployed in the following modes:
- Router mode
- Bridge mode
- One-armed or two-armed mode
Router Mode
Router mode is one of the most popular implementation modes. In this mode, the Server Load Balancer (SLB) device routes between the outside subnets (toward the clients) and the inside subnets (toward the servers). In this scenario, which is illustrated in Figure 21.1 below, the service addresses are typically globally routable public IP addresses; the external subnets represent the public network and the internal subnets represent the private network:
Figure 21.1 – Load Balancer Router Mode
The load balancer routes between the public and the private networks, and the servers use the SLB’s inside address as their default gateway. Replies from the servers (i.e., responses to clients) pass back through the SLB, which rewrites the servers’ private IP addresses to the public service address. The end-users therefore do not know there is an SLB device in the path because they never see the real IP addresses of the servers and internal applications. This process is similar to NAT (Network Address Translation). Load balancer router mode is easy to deploy, works with many server subnets, and is the recommended mode for the majority of appliance-based content load balancers.
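The NAT-like rewriting in router mode can be sketched as two simple packet transformations. All addresses below are illustrative documentation/private ranges, not taken from any real deployment:

```python
# Sketch of router-mode address rewriting: clients only ever see the
# public virtual IP (VIP); the SLB rewrites addresses in both directions.

VIP = "203.0.113.10"                       # public service address
REAL_SERVERS = ["10.0.0.11", "10.0.0.12"]  # private farm behind the SLB

def client_to_server(packet, chosen_server):
    """Inbound: the VIP destination is rewritten to a real server address."""
    assert packet["dst"] == VIP
    return {**packet, "dst": chosen_server}

def server_to_client(packet):
    """Outbound: the real server source address is hidden behind the VIP."""
    assert packet["src"] in REAL_SERVERS
    return {**packet, "src": VIP}

inbound = client_to_server({"src": "198.51.100.7", "dst": VIP}, REAL_SERVERS[0])
outbound = server_to_client({"src": REAL_SERVERS[0], "dst": "198.51.100.7"})
print(inbound["dst"])    # 10.0.0.11
print(outbound["src"])   # 203.0.113.10 -- the client never sees 10.0.0.11
```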
Bridge Mode
Load balancer bridge mode, shown in Figure 21.2 below, is also called the inline mode and it works by having the SLB device operate as a transparent firewall. In this situation, you need an upstream router between the clients and the SLB.
Figure 21.2 – Load Balancer Bridge Mode
In this design, the physical servers are in globally routable IP subnets, and the IP addresses on the SLB can be in the same or in different IP subnets. Each of the inside server farms (i.e., the Web server farm, the application server farm, or the database server farm) has to be in a single IP subnet because the SLB directs traffic to a specific physical server by rewriting the destination MAC address associated with the virtual IP (VIP) to that server’s MAC address.
This design method is seen most often with integrated load balancers, such as the Cisco Content Switching Module or the Application Control Engine in a 6500 or 7600 chassis. However, if the physical servers have to be deployed in a redundant configuration, you should be aware of the implications of Spanning Tree Protocol and how it will affect the devices and the backend servers. It is typically easier to configure the SLB device in router mode because troubleshooting STP can become very complicated.
One-Armed/Two-Armed Modes
The one-armed/two-armed modes are pretty popular approaches as well and they involve running the SLB device in an out-of-band fashion, similar to an IDS sensor that analyzes mirrored traffic from a switch. The load balancer is connected to a switch with one or two links, but it is not placed directly in line with the traffic as is the case with the router and bridge modes presented previously. These types of modes are illustrated in Figure 21.3 below:
Figure 21.3 – Load Balancer One-Armed/Two-Armed Modes
In a one-armed topology, the SLB and the physical servers are in the same VLAN (or subnet), while with the two-armed approach, the SLB device routes the traffic to the physical server subnet, which can also be a private subnet with NAT.
Proxy Servers
A proxy server is a server placed in the middle of the communication between a user and a server that offers a certain service. The client (user) makes requests to the proxy server and the proxy server makes requests on behalf of the user to a Web server, for example. After the proxy server receives the response from the Web server, it sends the result back to the client. This process is depicted in Figure 21.4 below:
Figure 21.4 – Proxy Server Operations
Proxy servers offer the following benefits:
- Access control: controls client access to specific services
- Caching: the most frequent requests are cached on the proxy, which can respond without interrogating the Web server
- URL filtering: limits the websites users can access
- Content scanning: the response from the Web server can be scanned for malicious or unauthorized content
Proxy servers can be set up in multiple ways:
- Forward proxy
- Reverse proxy
- Open proxy
With forward proxy, an end-user on the internal network communicates with a proxy server on the internal network. The proxy server then makes a request to public servers on the Internet and then sends the information back to the end-user, as illustrated in Figure 21.5 below:
Figure 21.5 – Forward Proxy
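The forward-proxy flow can be simulated with a few lines of Python. The `fetch_origin()` function below is a placeholder standing in for a real HTTP request, and the client/URL names are illustrative:

```python
# Minimal forward-proxy flow sketch: the client talks only to the
# proxy, which fetches from the origin server on the client's behalf.

def fetch_origin(url):
    """Pretend origin Web server (placeholder for a real HTTP fetch)."""
    return f"content of {url}"

class ForwardProxy:
    def __init__(self):
        self.log = []                      # access-control / audit trail

    def handle(self, client, url):
        self.log.append((client, url))     # the proxy sees every request
        response = fetch_origin(url)       # request made on the client's behalf
        return response                    # relayed back to the client

proxy = ForwardProxy()
print(proxy.handle("alice-pc", "http://example.com/"))
print(proxy.log)   # [('alice-pc', 'http://example.com/')]
```

Because every request passes through `handle()`, this is also the natural place to hook in the access control, caching, URL filtering, and content scanning features listed earlier.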
With reverse proxy, you may have users on the Internet who want to access an internal enterprise Web server. In order to avoid direct access to the Web server (for security reasons), you can install a proxy server between the two endpoints that will forward the client requests to the internal server and then reply to the public users, as illustrated in Figure 21.6 below:
Figure 21.6 – Reverse Proxy
An open proxy is placed on the Internet for anyone to use, as shown in Figure 21.7 below. It is often used for privacy reasons, for example, when a user does not want a public server to know that a request came directly from him. The user communicates with the server via the open proxy, which exchanges messages between the two parties. Users should trust the open proxy before using it because a malicious open proxy can act as a man-in-the-middle (attacker) between the user and the server and insert unwanted data when responding back to the client.
Figure 21.7 – Open Proxy
Content Filters
Content filters are devices that restrict or allow traffic based on the information contained in the packets traversing the network. This is done primarily for security reasons. You might want to control documents that go out of the company and restrict sensitive materials from being leaked to unauthorized destinations. This corporate control ensures the desired confidentiality and privacy level within an organization.
Another reason to use content filtering is preventing users from viewing inappropriate content and this might happen in both enterprise environments (content that is not safe for the organization) and home environments (parental control). From a high-level security standpoint, content filters offer protection in many forms, including anti-virus and anti-malware.
Content filtering can be enabled at multiple points in the network, for example:
- E-mail filtering
- URL filtering
It is very common for large organizations to control the filtering of their e-mails to check for viruses whenever there is an inbound attachment in an e-mail message. E-mail attachments can be very dangerous because they might contain malicious code. Sometimes content filters may be configured to block unsolicited e-mail advertisements (spam) that might contain dangerous files or links. E-mail content filtering might also control phishing attempts coming in via e-mail because this is a very easy way of reaching many people in the organization. Hackers might insert links in e-mails that, when clicked, redirect the users to websites that can contain malicious code.
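A simple attachment-based e-mail filter can be sketched as follows. The blocked extension list is an example policy, not an exhaustive or recommended one:

```python
# Illustrative e-mail content filter: block messages whose attachments
# carry extensions commonly associated with executable/malicious code.

BLOCKED_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat"}

def filter_message(attachments):
    """Return 'blocked' if any attachment has a risky extension."""
    for name in attachments:
        # Look at the final extension, so "invoice.pdf.exe" is caught too.
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext in BLOCKED_EXTENSIONS:
            return "blocked"
    return "delivered"

print(filter_message(["report.pdf", "photo.jpg"]))   # delivered
print(filter_message(["invoice.pdf.exe"]))           # blocked
```

Checking only the final extension matters because attackers commonly disguise executables with double extensions such as `invoice.pdf.exe`.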
Another common way of enabling content filtering is by URL, which limits the website addresses a user can access with his browser. Many organizations use this type of filtering to control what people see on their Web browsers. URL content filtering uses Allow and Block lists, which provide granular control on allowed websites. A very common way of enabling URL filtering is by website category. The content filtering engine sorts the URLs in a few common categories, such as the following:
- Business
- Travel
- Recreation
- Shopping
- Hacking
- Malware
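Category-based URL filtering with Allow and Block lists can be sketched like this. The category map, hostnames, and policy below are all illustrative examples:

```python
# Sketch of category-based URL filtering with Allow/Block lists.

CATEGORIES = {
    "travelsite.example": "travel",
    "shop.example": "shopping",
    "exploits.example": "hacking",
}
ALLOWED_CATEGORIES = {"business", "travel"}
ALLOW_LIST = {"shop.example"}      # explicit exceptions win over category rules
BLOCK_LIST = set()

def check_url(host):
    if host in BLOCK_LIST:                  # explicit Block list first
        return "block"
    if host in ALLOW_LIST:                  # then explicit Allow list
        return "allow"
    category = CATEGORIES.get(host, "uncategorized")
    return "allow" if category in ALLOWED_CATEGORIES else "block"

print(check_url("travelsite.example"))   # allow (travel is permitted)
print(check_url("exploits.example"))     # block (hacking category)
print(check_url("shop.example"))         # allow (explicit Allow-list entry)
```

Checking the explicit lists before the category lookup is what gives administrators the granular control described above: a site in a blocked category can still be whitelisted individually.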
A URL filtering rule might allow users to visit business and travel websites but restrict access to sites that are categorized as hacking or malware. Even though URL filtering is an effective way of controlling enterprise traffic, it should not be the only security mechanism configured, as there are ways to bypass it, such as hiding traffic inside encrypted connections. Some URL filters can decrypt traffic and inspect the content of the packets, but others do not have this capability. Hackers might take advantage of this and send encrypted information into the organization, information that cannot always be scanned by the content filters.
Sometimes, Web browsers and search engines might have content filtering mechanisms embedded that alert users when they are trying to access a possible malicious site. These mechanisms do not need a dedicated content filtering device and are managed by the third-party browser/search engine providers.
VPN Concentrators
Virtual Private Network (VPN) refers to a method of creating a private (encrypted) communication path between two devices, even if this path is over the public Internet. With VPN, if an attacker captures the conversation between two endpoints, he will only see encrypted packets that have no relevance and do not reveal their true content.
VPNs are usually maintained by dedicated devices called VPN concentrators, which have the main purpose of initiating and ending VPN tunnels and doing all the necessary encryption/decryption processes. In small environments, these operations can be accomplished by dedicated software applications. VPN concentrators can come in two forms:
- Hardware appliances
- Software applications
Note: Dedicated devices are used for VPN tunneling on a large scale because encrypting and decrypting traffic is a very CPU-intensive task and dedicated hardware speeds up this process.
The VPN concentrator is usually situated in the enterprise infrastructure and represents one of the two endpoints of a VPN tunnel, as shown in Figure 21.8 below. The other endpoint is located at the remote user who wants to connect to the enterprise’s internal resources via the VPN concentrator. Depending on the connection method and purpose, remote VPN client functionality can be accomplished in two ways:
- Dedicated hardware (VPN router/firewall)
- Dedicated client software (third party or built into the operating system)
Figure 21.8 – Establishing a VPN Tunnel
From the remote end-user’s perspective, the process of connecting to the enterprise’s internal resources is simple. Whenever he wants to connect to the enterprise network, he can use the VPN client software to initiate a secure, encrypted VPN tunnel from his machine to the VPN concentrator. After the tunnel has been successfully built, the user can access internal resources just as if he were on the enterprise’s internal network.
All of the communication between the VPN concentrator and the remote user is completely encrypted. As data goes from the remote user to the internal network, the VPN client software encrypts the data and sends it to the VPN concentrator, which decrypts the information and forwards it to the internal network. As data goes from the internal network to the remote user, the VPN concentrator encrypts it and forwards it to the remote user. At that end, the VPN client software decrypts it and sends it to the user. When the communication is over, the VPN client tears down the tunnel to the concentrator.
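The encrypt/forward/decrypt round trip described above can be illustrated with a toy symmetric cipher. The XOR keystream below is NOT real VPN cryptography (real tunnels use protocols such as IPsec or TLS); it only demonstrates that both endpoints share a key and that a capturing attacker sees only ciphertext:

```python
import hashlib

# Toy keystream cipher for illustration only -- do not use for real security.

def keystream(key, length, nonce):
    """Derive a pseudo-random byte stream from the shared key and a nonce."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, nonce, plaintext):
    ks = keystream(key, len(plaintext), nonce)
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt   # XOR with the same keystream reverses itself

key, nonce = b"shared-tunnel-key", b"nonce-1"
packet = b"GET /intranet/payroll HTTP/1.1"
ciphertext = encrypt(key, nonce, packet)   # what an attacker would capture
print(ciphertext != packet)                # True -- content is hidden
print(decrypt(key, nonce, ciphertext))     # concentrator recovers the data
```

The VPN client plays the role of `encrypt` on the way out, and the concentrator plays the role of `decrypt` on the way in (and vice versa for return traffic), which is exactly the flow in the paragraph above.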
Summary
Load balancers are usually present in large environments in which different servers (e.g., Web, application, database, etc.) are accessed by a large number of internal or external users. These servers are grouped in server farms that contain a number of servers that together offer a single service based on performance and redundancy.
Based on the servers’ capabilities, load balancers can be configured to use them in different proportions. Some of the common load balancing algorithms include the following:
- Round robin (each request is sent to a different server)
- Weighted round robin (requests can be sent more often to specific servers based on weights assigned to them)
- Least connection (requests are sent to servers that are not busy with other connections)
- Weighted least connection (the same as least connection but with weights assigned to servers)
Load balancers can be deployed in the following modes:
- Router mode
- Bridge mode
- One-armed or two-armed mode
A proxy server is a server placed in the middle of the communication between a user and a server that offers a certain service. The client (user) makes requests to the proxy server and the proxy server makes requests on behalf of the user to a Web server, for example. After the proxy server receives the response from the Web server, it sends the result back to the client.
Content filters are devices that restrict or allow traffic based on the information contained in the packets traversing the network. This is done primarily for security reasons. You might want to control documents that go out of the company and restrict sensitive materials from being leaked to unauthorized destinations. This corporate control ensures the desired confidentiality and privacy level within an organization.
Virtual Private Network (VPN) refers to a method of creating a private (encrypted) communication path between two devices, even if this path is over the public Internet. If an attacker captures the conversation between two endpoints, he will only see encrypted packets that have no relevance and do not reveal their true content.
VPNs are usually maintained by dedicated devices called VPN concentrators, which have the main purpose of initiating and ending VPN tunnels and doing all the necessary encryption/decryption processes. In small environments, these operations can be accomplished by dedicated software applications. VPN concentrators can come in two forms:
- Hardware appliances
- Software applications
Read the Cisco load balancer guide.
Use the 101 Labs – CompTIA Network+ guide to prepare for the exam.