CompTIA Security+ Network Security Fundamentals
In this section, we will cover all topics related to networks, from devices (e.g., routers and switches) to network principles (e.g., firewall rules) and network design (e.g., subnetting). You will learn about, and how to implement, common protocols, such as SSL and IPSec, as well as how to identify the default ports of common services, such as HTTP, HTTPS, and FTP. Lastly, we will cover wireless network security and considerations, such as encryption and SSIDs. The core Security+ exam objectives covered in this chapter are as follows:
- Explain the security function and purpose of network devices and technologies
- Apply and implement secure network administration principles
- Distinguish and differentiate network design elements and compounds
- Implement and use common protocols
- Identify commonly used default network ports
- Implement a wireless network in a secure manner
Security Function and Purpose of Network Devices and Technologies
Have you ever wondered why you need a firewall, or what a “proxy server” is? In this section, we will discuss those types of network-level devices and technologies, and include diagrams, examples, and a quiz at the end to help you along your way to Security+ mastery. This section will cover the following topics:
- Load balancers
- Web security gateways
- VPN concentrators
- NIDS and NIPS
- Protocol analyzers
- Spam filters
- Web application firewalls
- URL filtering
- Content filtering
- Content inspection
The Internet has become a necessary place for us to be. Unfortunately, the Internet harbors many potential threats to our computers, our data, and even our personal information. The first line of defense, and thus the first topic we will discuss in this guide, is firewalls.
Firewalls
Firewalls are the devices network administrators are most likely to install to prevent malicious attacks. That’s because firewalls are network-based (or host-based) devices that operate by inspecting incoming or outgoing network traffic, and they can allow or deny that traffic based on a previously determined set of rules. This is a very powerful form of access control.
Firewall rules are the mechanism by which firewalls identify permitted or blocked traffic. Rules can be based on four different traffic components: port, protocol, address, or direction of traffic (inbound or outbound). For example, if you want to block a specific program from connecting over the Internet, such as Telnet, you can create a firewall rule that blocks telnet.exe or port 23, which is what Telnet uses. If you wanted to block remote access to databases, you would create a rule that blocks port 1433, which is a port most commonly used for Microsoft SQL Server connections. (We will cover firewall rules in another section.)
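The rule-matching logic described above can be sketched in a few lines of Python. The rule format here is a simplified, hypothetical one (real firewalls use their own syntax), but the first-match-wins evaluation and the default deny are the essential ideas:

```python
# A simplified, hypothetical firewall ruleset; real firewalls use their own syntax.
# Each rule matches on protocol, port, and direction; the first match wins.
RULES = [
    {"action": "deny",  "protocol": "tcp", "port": 23,   "direction": "outbound"},  # Telnet
    {"action": "deny",  "protocol": "tcp", "port": 1433, "direction": "inbound"},   # MS SQL Server
    {"action": "allow", "protocol": "tcp", "port": 443,  "direction": "outbound"},  # HTTPS
]

def evaluate(protocol, port, direction):
    """Return the action of the first matching rule, or 'deny' if none match."""
    for rule in RULES:
        if (rule["protocol"] == protocol
                and rule["port"] == port
                and rule["direction"] == direction):
            return rule["action"]
    return "deny"  # default deny: traffic matching no rule is blocked

print(evaluate("tcp", 23, "outbound"))   # deny  (Telnet is blocked)
print(evaluate("tcp", 443, "outbound"))  # allow (HTTPS is permitted)
```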
Firewalls can be software- or hardware-based. Often, individual personal computers have software-based firewalls, such as the built-in Windows Firewall, while companies employ hardware-based firewalls for network protection, ease of administration, and single point of entry. Software-based firewalls are commonly called “host-based firewalls”.
Routers
Routers are devices that forward traffic between networks using an internal routing mechanism called a routing table (or some other routing methodology), which is stored on the router, and the packet header, which contains IP address information for the intended recipient. Simply put, a router works by reading the information in the packet header of the data, consulting its routing table to find the recipient (or some other intermediary network), and then sending it on its way. You can think of the packet header as a shipping label for the packet package and the router as a shipping mechanism that reads the label and diverts it to New York, or wherever its ultimate destination happens to be.
As far as network hardware is concerned, a router is a “smart” device that is capable of understanding the packets that pass through it. Some devices, such as hubs, are “dumb” devices that are unaware of the packets traveling through them, forwarding all traffic indiscriminately. Other network devices, such as switches, are “smart” devices as well, although they perform a slightly different function.
Switches
Switches are another type of network component, and they are commonly confused with routers. The difference between switches and routers is that while routers connect different networks, switches connect computers within a network. (Hubs, mentioned previously in the Routers section, are “dumb” versions of switches that simply forward traffic to all hosts connected to them.) A switch can be used to control traffic and forward certain packets to select recipients (rather than to everyone, as in the case of a hub), which is far more useful in the realm of security.
To secure a switch against unauthorized physical access, disable its unused ports in the switch configuration. But be careful! Don’t disable the port you’re using to configure it!
Load Balancers
Often, it is vital for a computer system to be able to handle the traffic of many users accessing services all at once. If a system cannot handle the amount of traffic, it will deny users access, provide unacceptably slow service, or even crash. That is where load balancers come in.
Load balancers distribute traffic across multiple systems to provide redundancy and capacity. This is especially important in the case of web servers that host popular websites, where traffic can spike during massive sales or even attacks. Load balancers are usually found in server clusters where software provides the load balancing service; however, load balancing can be hardware-based as well, as in the form of multilayer switches.
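One of the simplest distribution strategies a load balancer can use is round-robin: each new request goes to the next server in the pool, cycling back to the first. The sketch below illustrates the idea (the server names are hypothetical):

```python
import itertools

# A round-robin balancer: each incoming request is handed to the next
# server in the pool, cycling back to the first. Server names are hypothetical.
class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["web1", "web2", "web3"])
assignments = [lb.next_server() for _ in range(6)]
print(assignments)  # ['web1', 'web2', 'web3', 'web1', 'web2', 'web3']
```

Production load balancers typically add health checks and weighted or least-connections strategies on top of this basic rotation.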
Proxies (Proxy Server)
When browsing the Internet at work, you might find you cannot access some websites. There may be many reasons for this, but one of them (and the most likely) is that your web content is being delivered to you through something called a proxy server.
Many corporations have rules that require content filtering of some sort when browsing the Internet. A proxy, or proxy server, is used mainly as an intermediary to provide web proxy services. What does this mean? When users browse the Internet, they request web pages through the proxy server (since it is an intermediary, or “go-between”, between the user’s computer and the Internet), and the proxy server checks its filtering rules to ensure the user is able to access the resource. Then, if the user has sufficient permissions, the proxy delivers the page or content. This is a method of filtering and access control: if, for example, all social networking sites at your workplace were blocked for security reasons, and you tried to browse to a well-known site, such as Facebook or Twitter, from a work computer behind that proxy server, those websites would come back to you as blocked sites.
Proxy servers have other uses as well, although web proxying is the most common today. Proxy servers may also cache content, so that later requests for the same content by different users can be served locally. In addition, proxy servers can provide anonymity: open proxies allow any user to connect to them over the Internet and thus make requests through them (browse) anonymously.
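The allow/block decision a filtering proxy makes for each request can be sketched as a domain check. The blocklist below is hypothetical; a real proxy would load its policy from configuration:

```python
from urllib.parse import urlparse

# Hypothetical blocklist; a real proxy would load its policy from configuration.
BLOCKED_DOMAINS = {"facebook.com", "twitter.com"}

def is_allowed(url):
    host = urlparse(url).hostname or ""
    # Block the domain itself and any subdomain of it.
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_allowed("https://www.facebook.com/home"))  # False (blocked site)
print(is_allowed("https://example.com/news"))       # True (permitted)
```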
Web Security Gateways
While proxy servers typically filter URLs that users access over the Internet, it is also important to consider the potential security problems inherent in the content being delivered by those websites.
Web security gateways, or secure web gateways, can help prevent malware attacks that originate from websites. You may wonder, “Why would I need a web security gateway? Don’t my anti-virus product and proxy server protect me?” The increasingly apparent problem with relying on only anti-virus products and proxy servers to provide protection against web-based threats is three-fold, as follows:
- Anti-virus products update their definitions on a cycle, usually daily, which does not guarantee protection against real-time threats;
- Trusted websites can be compromised and converted into “drive-by” malware depositories at any time; and
- Some web pages can include dynamic content to bypass website categorization (and therefore filtering) and present users with undesirable content.
Web security gateways work by performing content filtering and real-time content scanning for both inbound and outbound traffic. This means that when users visit websites, create secure connections (SSL), or encounter dynamic content on the Internet, such as user-generated content on a social networking site, users’ computers and data remain protected against potential threats.
VPN Concentrators
When many users need to connect to corporate resources while away from the office, companies may find it useful to employ VPN concentrators, or VPN gateways. These devices allow remote users to create a connection to the corporate office that encrypts all data that flows between the remote user and the office. This link can function regardless of the physical location, and it protects data even as it travels over unsecured public networks. The encrypted links require no special equipment for the remote client. As these connections are virtual, rather than physical, the connections are known as Virtual Private Network (VPN) connections.
VPN concentrators are network appliances that provide multiple secure connection points for remote users. The device is exposed to the Internet through only one port (443) and supports multiple concurrent VPN connections, typically from 25 up to approximately 500 users, depending on the hardware. While connected over the secure VPN connection, users can access their company resources as if they were sitting at their computers at work. The connection is encrypted, so users do not have to worry about sensitive information being exposed in transit over the Internet.
VPN concentrators usually support authentication mechanisms, such as Active Directory, LDAP, and RADIUS, and can integrate with existing user databases.
NIDS and NIPS
It is vital that attacks are detected and, where possible, stopped before they can do damage. That is why many companies use NIDS or NIPS (network intrusion detection systems and network intrusion prevention systems, respectively).
NIDS provide a mechanism for detecting (and only detecting, not preventing) network-based attacks, and alerting administrators to said attacks, by examining network traffic. Such attacks can occur in the form of port scanning, packet flooding, etc. NIDS come in the following two forms:
Anomaly-based NIDS monitor network traffic and detect whether certain traffic patterns fall outside the accepted limits. Signature-based NIDS compare network traffic with known attack “signatures” (i.e., patterns) and analyze it to determine whether there are any recognized attacks occurring. All forms of NIDS log information about anomalous network activity. A few important distinctions are as follows:
- NIPS provide all the same services as NIDS, but they also attempt to stop an attack from happening.
- NIDS and NIPS can come in software-based or hardware-based forms.
- NIDS is not the same as HIDS (host-based intrusion detection systems).
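The signature-based approach described above amounts to pattern matching: compare observed traffic against a library of known attack patterns. The sketch below is illustrative only; real IDS signatures are far more detailed than simple substrings:

```python
# Illustrative signatures only; real IDS rules are far more detailed.
SIGNATURES = {
    "sql_injection": "' OR '1'='1",
    "path_traversal": "../../",
}

def match_signatures(payload):
    """Return the names of all known attack patterns found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(match_signatures("GET /page?id=' OR '1'='1"))  # ['sql_injection']
print(match_signatures("GET /index.html"))           # []
```

An anomaly-based system would instead build a statistical baseline of normal traffic and flag deviations from it.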
Protocol Analyzers
Administrators sometimes find it necessary to intercept traffic en route and review it for potential attacks. Certain devices (or software) called protocol analyzers allow this. Protocol analyzers can be software- or hardware-based, and they intercept, decode, and analyze network packet information, such as IP addresses. Protocol analyzers help detect network intrusions and Internet and network abuse, and log network traffic information for later use. Protocol analyzers are also called packet sniffers.
Spam Filters
E-mail is a critical function of a corporation, and spam e-mails simply waste time and productivity, and may contain malicious threats. Spam filters can help cut down on the amount of junk e-mail and malware infesting company mailboxes.
Spam filters are software-based and they operate by analyzing e-mail messages for specific data that indicates unwanted mail. Such indicators include the following:
- Suspicious subject lines
- Suspicious image content
- Common phrases indicative of advertisements
- E-mail messages originating from blacklisted domains or suspicious senders
- Multiple e-mail messages from the same origin or with the same content, indicating a malware infection
Spam filters can remove an e-mail entirely or replace its contents, depending on their configuration. A careful balance must be struck between aggressive filtering and minimizing false positives (legitimate mail incorrectly flagged as spam).
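A common way to combine indicators like those listed above is score-based filtering: each indicator adds weight to a score, and mail above a threshold is flagged. The weights, phrases, and threshold below are hypothetical; tuning them is exactly the balance between aggressive filtering and false positives:

```python
# Indicator weights, phrases, and the threshold are hypothetical.
SUSPICIOUS_PHRASES = {"act now": 2.0, "free offer": 1.5, "winner": 1.0}
BLACKLISTED_DOMAINS = {"spam.example"}
THRESHOLD = 3.0

def spam_score(sender_domain, subject, body):
    score = 0.0
    if sender_domain in BLACKLISTED_DOMAINS:
        score += 5.0  # blacklisted origin is a strong indicator
    text = (subject + " " + body).lower()
    for phrase, weight in SUSPICIOUS_PHRASES.items():
        if phrase in text:
            score += weight
    return score

def is_spam(sender_domain, subject, body):
    # Raising THRESHOLD filters less aggressively; lowering it
    # risks more false positives.
    return spam_score(sender_domain, subject, body) >= THRESHOLD

print(is_spam("spam.example", "Winner!", "Act now"))       # True
print(is_spam("corp.example", "Meeting notes", "Agenda"))  # False
```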
Web Application Firewalls
Many companies have web-facing applications that can present vulnerabilities if not programmed or configured correctly. Web application firewalls (WAFs) help mitigate this issue. WAFs differ from regular network firewalls in that WAFs monitor network traffic and apply firewall rules to HTTP traffic (where network firewalls control many types of traffic) to help prevent attacks such as cross-site scripting and SQL injection.
URL Filtering
When administrators need to control the websites users visit while on company computers, they can implement URL filtering. URL filtering is either software- or hardware-based. It is usually performed at a proxy server (see the Proxies section).
Content Filtering
Content filtering is a major component of e-mail and web security on any network. When filtering e-mail – specifically using spam filters – content filtering is the most common type. Content filters can use the following e-mail fields and information to identify spam:
- Mail header (Subject)
Web content filtering is used to keep users from viewing inappropriate content or sites, thus improving productivity and computer security. Web content filtering is software-based, usually implemented on proxy servers.
Content Inspection
Similar to content filtering, content inspection examines information and determines its suitability according to preset rules or signatures. However, content inspection can analyze files and attachments and determine whether they are malicious, rather than simply blocking files based on rules such as file type.
Content inspection differs from protocol analyzers (packet sniffers) in that content inspection can inspect a file, whereas packet sniffers only read network packet information. Content inspection can occur in many places on a network, such as the following:
- Web security gateways
- Proxy servers with anti-virus
Secure Network Administration Principles
It is absolutely critical to ensure all your network devices are in place. Next, you must ensure those devices are configured according to set corporate policies. Here is where we study the application of network administration principles. This section will cover the following topics:
- Rule-based management
- Firewall rules
- VLAN management
- Secure router configuration
- Access control lists
- Port security
- Flood guards
- Loop protection
- Implicit Deny
- Prevent network bridging by network separation
- Log analysis
Rule-Based Management
One of the main challenges facing IT management today is the sheer number of computers and their geographic dispersion, as well as the challenge of managing differing types of network traffic. In order to ensure business continuity when attacks happen and maintain standardization across systems, administrators will most likely implement some form of rule-based management.
Rule-based management is simple: it states that network traffic will be subject to rules and will be allowed or denied based on those rules. Rules are either explicit Allow or explicit Deny, and rule-based appliances (such as proxy servers, firewalls, NIDS, and NIPS) should be configured to deny traffic that does not match any Allow rules by default.
Rule-based management therefore creates a whitelisted environment where, if the traffic does not match anything on the list, it is blocked. This is very effective and requires much less overhead and management than blacklisting. In addition, rule-based management expresses security policies in a technical manner and embeds management policies into the technical infrastructure.
Firewall Rules
As described previously in the Firewalls section, firewalls are one example of a rule-based management device. Therefore, firewall rules are an implementation of rule-based management. Firewall rules act as “traffic cops” for network traffic, stating whether certain types of traffic can traverse the firewall, and in which directions. Firewall rules are Allow or Block only. Firewall rules can be defined by the port, the protocol, or the program used in the network communication. For example, firewalls can be configured with rules that block FTP traffic.
The default rule on a firewall should be to “Deny All” traffic (unless explicitly allowed within an Allow rule). An implicit Deny is typically the last rule included in any firewall configuration. As with all rule-based management implementations, firewall rules are also an expression of management and security policies within the IT infrastructure.
VLAN Management
First, we must discuss the concept of a VLAN. A VLAN is a virtual local area network, or a network of computers that communicate as if they were all in the same location, regardless of the computers’ physical or geographical placement. VLAN membership is managed through software.
Secure Router Configuration
In any enterprise, network and security hardware must be configured properly to ensure that network communications are secure. One of the most important components in the network is the router configuration. Many routers can be configured using the router’s administration page, which is usually accessed by a private IP address, such as 192.168.1.1. Most routers have a default administration password that must be changed for security purposes; default router usernames and passwords are available freely on the Internet, and all a person must know to find this information is the brand and model of the router.
Routers can also be configured using scripts. It is usually recommended to configure routers offline, where active changes will not affect the network configuration (and possibly the security configuration). However, configuring a router while it is on the network and active can provide the benefit of watching changes take effect immediately. In order to ensure the router is hardened, that is, open and vulnerable only to the minimum degree necessary, ensure all unnecessary services are turned off (disabled).
Access Control Lists
Sometimes it is necessary to provide an additional mechanism by which administrators can control traffic on a network. Utilizing access control lists (ACLs) on routers can provide administrators with this functionality. ACLs can control which MAC addresses (the physical/hardware address of the network card) are able to transmit or receive network traffic through a particular network segment by filtering the MAC address at the router.
An example of a MAC address for a network card is 02-50-F3-CE-83-01.
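The MAC-filtering check an ACL performs can be sketched as a simple membership test against a whitelist (the addresses below are illustrative):

```python
# Hypothetical whitelist of permitted hardware addresses.
ALLOWED_MACS = {"02-50-F3-CE-83-01", "02-50-F3-CE-83-02"}

def permit_frame(source_mac):
    # Normalize case so 02-50-f3-... and 02-50-F3-... compare equal.
    return source_mac.upper() in ALLOWED_MACS

print(permit_frame("02-50-f3-ce-83-01"))  # True  (on the list)
print(permit_frame("02-50-F3-CE-83-99"))  # False (filtered out)
```

Note that MAC addresses can be spoofed, so MAC filtering is a supplementary control rather than a strong one.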
Port Security
Port security refers to two different types of security: logical ports, or network ports, and the physical security of ports on network devices, such as routers and switches. It can mean ensuring firewalls are sufficiently locked down as well as ensuring physical ports on devices are secured.
Access points and wireless access points can be vulnerable if undesirable individuals or attackers wish to connect to the network. 802.1X is a security and authentication mechanism that ensures only authorized individuals connect to access points and other network devices. It is based on the Extensible Authentication Protocol (EAP), an authentication framework used mainly in wireless networks. EAP supports multiple authentication methods, and 802.1X specifically defines the encapsulation of EAP (authentication) traffic over LANs (EAPOL) or over wireless networks.
Flood Guards
A certain type of attack called a SYN flood takes advantage of the connection process between a client computer and a server (called a 3-way handshake) to deny connections to legitimate users. Flood guards help prevent this type of attack.
In discussing this topic, we must first discuss the TCP 3-way handshake. When a client computer wants to connect to a server, it first attempts to send a special packet called a SYN (“synchronize”) packet to create the connection. This is the first step of the three steps in a 3-way handshake. The second step requires the server to send a SYN-ACK packet, which “acknowledges” (ACK) the connection request and reciprocates the connection initiation (SYN). The last step of the 3-way handshake is the client computer’s response after receipt of the SYN-ACK from the server, where the client sends a final ACK packet to acknowledge the connection request. The client and the server are then connected at that point.
SYN floods occur when the first and third steps of the process (the client computer’s responsibility) are abused. In a SYN flood attack, the first step of the 3-way handshake becomes a flood of SYN requests to the server; the server then attempts to answer each request with a SYN-ACK packet, which opens multiple pending connections on the server. Normally, the client computer would send the final ACK packet to cement the connection between client and server, but in a SYN flood, the attacker does not send any ACK packets to the server, forcing the server to keep all the fraudulent connections open, and thus denying any new connections to legitimate users. A SYN flood is a type of Denial of Service (DoS) attack.
To protect against SYN floods, administrators can implement flood guards, one of which is SYN cookies. SYN cookies behave like Internet cookies, preserving a small amount of information about a connection request for later use. When a client, legitimate or malicious, sends a SYN request to a server, the server sends the SYN-ACK to the client, as in the normal connection process. The server then drops the SYN portion, which is the open connection request, until it receives an ACK from the client computer, indicating a legitimate request. (Remember, SYN floods occur when malicious attackers refuse to send the final ACKs to the server, creating a flood of open connections that deny service to legitimate users). When the server receives an ACK from the client, the server can rebuild the original connection request (SYN) from the SYN cookie and create the connection. If the ACKs are never received, there is no real damage and no SYN flood, because the SYN request was dropped, effectively freeing up server resources for legitimate users.
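The SYN-cookie idea, encoding connection details instead of storing them, can be modeled in a few lines. This is a simplified illustration: real TCP SYN cookies pack the cookie into the sequence-number field of the SYN-ACK, while this toy version just derives a value from the client's address and a server secret:

```python
import hashlib

# Toy model: the server derives a cookie from the connection details and a
# secret, sends it in the SYN-ACK, and keeps no per-connection state until
# a matching ACK arrives. Real TCP encodes the cookie in the sequence number.
SECRET = b"server-secret"

def make_cookie(client_ip, client_port):
    data = f"{client_ip}:{client_port}".encode()
    return int(hashlib.sha256(SECRET + data).hexdigest()[:8], 16)

def server_syn_ack(client_ip, client_port):
    # Step 2 of the handshake: answer the SYN without storing anything.
    return make_cookie(client_ip, client_port)

def server_accept_ack(client_ip, client_port, ack_cookie):
    # Step 3: only now, if the echoed cookie checks out, is the
    # connection state actually created.
    return ack_cookie == make_cookie(client_ip, client_port)

cookie = server_syn_ack("198.51.100.7", 40000)
print(server_accept_ack("198.51.100.7", 40000, cookie))      # True: legitimate client
print(server_accept_ack("198.51.100.7", 40000, cookie + 1))  # False: bogus ACK
```

Because the server holds no state for half-open connections, a flood of unanswered SYNs consumes no server resources.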
Loop Protection
When configuring networks, it is important to consider possible network disruption due to end-user behavior: if a user (or a malicious person) plugs a network cable into two different ports on a switch, it can lead to network problems. Therefore, it is important to implement loop protection. Loop protection means a protocol such as SLPP (Simple Loop Prevention Protocol) is in place, preventing network traffic that originates from the same source from looping back on itself. This type of network traffic (Layer 2 of the OSI model) does not have a TTL (Time-to-Live) value, so it will loop forever, and it is thus necessary to prevent this from occurring.
Implicit Deny
Specific to Cisco routers, ACLs are lists of statements that control how packets are filtered based on the originating IP address of the packet. After all the configured rules have been processed, an implicit Deny applies at the end of the list, meaning that any traffic not matched by the preceding rules is denied on the network. Thus, implicit Deny ensures security by permitting only a whitelist of explicitly allowed traffic on the network.
Another permutation of the implicit Deny rule is the Deny any any rule. It, too, comes at the end of an ACL, following all explicitly defined rules. The Deny any any rule is applied to Extended ACLs, which filter traffic by not only the sender’s IP address but also the destination IP address. Therefore, the rule must contain the double “any any” in the command.
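First-match processing with an implicit deny at the end can be sketched as follows. The rule format is illustrative, not actual Cisco ACL syntax:

```python
import ipaddress

# Illustrative extended-ACL rules: (action, source network, destination IP).
ACL = [
    ("permit", "10.0.0.0/8",  "192.168.1.10"),  # intranet -> web server
    ("deny",   "10.0.5.0/24", "any"),           # block one subnet entirely
]

def acl_decision(src_ip, dst_ip):
    for action, src_net, dst in ACL:
        src_ok = ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)
        dst_ok = dst == "any" or dst == dst_ip
        if src_ok and dst_ok:
            return action  # first match wins
    return "deny"  # the implicit Deny any any at the end of every ACL

print(acl_decision("10.0.0.5", "192.168.1.10"))   # permit (explicit rule)
print(acl_decision("172.16.0.9", "192.168.1.10")) # deny   (implicit deny)
```

Because the first match wins, rule order matters: a broad permit placed before a narrow deny will shadow it.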
Prevent Network Bridging by Network Separation
It is very common for users to connect to their workplace intranet and have their network and Internet traffic filtered. However, sometimes users want to get around filtering restrictions and access sites that are blocked by corporate policies by attaching a wireless broadband card or dongle to their computer to enable a second network connection and access blocked sites. When users employ this method of subverting security controls, they may inadvertently compromise their intranet by bridging two networks (their intranet and the Internet). This may allow attacks to enter from the Internet through the unsecured, unmanaged connection between the two networks formed at the user’s computer.
Log Analysis
An administrator’s main duty is to ensure all systems are up and running and are in compliance with policies. One valuable way to help with this daunting task is to perform log analysis. Log analysis, when performed on firewall logs, simply refers to checking the logs for suspicious activity, such as port scanning (looking for open, and thus vulnerable, ports) and packet flooding, which can cause a DoS attack. There are many types of attacks performed on computers to attempt to compromise them, and firewall logs are a good first place to look for evidence of these attacks.
Network Design Elements and Compounds
As a modern security administrator, it is critical to understand the differences, advantages, and disadvantages inherent in technologies implemented in a corporate network. This section will discuss very important technologies, such as the DMZ, NAT, virtualization, and cloud computing, and will cover the following topics:
- Remote Access
- Cloud Computing
DMZ
Quite possibly, the most important security concern is at the network perimeter (the locus of most attacks), where the company intranet meets the wild frontier of the Internet. A popular and highly effective mitigating method is to establish a DMZ (demilitarized zone), a buffer zone between the safe interior network (the corporate intranet) and the enemy (the Internet).
This terminology comes from the DMZ between North and South Korea, where a specified zone exists between the two hostile nations in which security is tightly controlled and fighting is not allowed. In the computer world, the DMZ is a set of servers on a separate network, specifically designed to provide a security buffer between the corporate intranet and the Internet, the source of many threats. The DMZ, however, exposes external services to remote users who need to connect via the Internet, such as VPN users, but because it is on a separate network, it is secured from the internal intranet. Therefore, remote users must authenticate to servers in the DMZ before they are granted access to internal resources. E-mail and web servers are most commonly placed in the DMZ and are therefore vulnerable to attacks.
It is important to remember that the DMZ is not special in any way, except for the fact that it is isolated on a separate network and is usually separated from the intranet by a firewall with limited connectivity to internal hosts. Computers within the DMZ (typically servers) usually have the ability to communicate with each other unhindered.
Subnetting
Having just learned that the DMZ is contained on a separate network, we will now see how a network is actually segmented to provide this separation and isolation. The process of logically dividing a network is called subnetting. The subnetting process may seem daunting at first, but it’s simply the logical process of breaking up a large network into multiple smaller networks.
In order to understand how subnets work, we must first understand how an IP address is constructed and how subnets act on the different parts of the IP address. An IP address is a 32-bit number that is composed of two parts: the network address and the host address. IP addresses are most commonly written in the form x.x.x.x, where each x is a number from 0 to 255, and represents 8 bits, or one octet, of the IP address. The first, or leftmost, numbers of the address make up the network address and the rest of the numbers make up the host address.
There are conventions for determining how many of these bits comprise the network address by default, but the default size of a network does not necessarily suit the needs of the environment in which a network is to be deployed. Subnets work by borrowing a certain number of bits from the host portion of the IP address and adding them to the network portion of the address.
To see how a network might be divided, we will use the following as an example: the private 10.0.0.0/8 network. The /8 at the end of the network address indicates how many of the IP address bits are in use for the network portion of the address. In this case, our network starts out with 8 bits (one octet) dedicated to the network portion and 24 bits (the three remaining octets) dedicated to the host portion. This is one extremely large network, with address space for 2^24 - 2, or 16,777,214, hosts. The reason for the -2 is that the first and last addresses of any network or subnet are reserved for the network and broadcast addresses, respectively, and are unavailable for hosts. It is unlikely that any one segment of our network will need over 16 million hosts. We can easily divide this network into a number of smaller networks with subnetting.
Subnetting introduces a new concept to the IP address – the network mask. Instead of relying on the default designation for the size of our 10.0.0.0/8 network (8 bits), we can make a /12, /16, /17, or any other number up to /30 network (up to 30 bits). The subnet mask is a 32-bit number with a format similar to an IP address that decides how many bits are in use for the network portion of our address.
For our first example, I will choose a 16-bit subnet mask, so that our 10.0.0.0/8 network will be divided into a number of /16 networks. The subnet mask expresses the number of bits that are used for the network portion of our address. With the first 16 bits in use, that value, written in dotted decimal, is 255.255.0.0.
Understanding how we arrive at the 255.255.0.0 value requires an understanding of binary math. Each bit of an 8-bit octet is either a 0 or a 1 and represents a power of 2. The rightmost bit of an octet represents 1. The next bit represents 2, the next 4, then 8, 16, 32, 64, and finally 128. The sum of the values of the bits set in the subnet mask is what we write when noting the subnet mask in dotted decimal notation. 255.255.0.0 is the same as /16.
Each octet of the subnet mask is treated as a separate number, so the range of possible values, if any combination of bits were allowed, would be from 0 to 255. Since we can only borrow bits starting from the leftmost bit of an octet, the possible values of any octet of the subnet mask are 0 (borrowing no bits), 128 (borrowing one, the leftmost, bit), 192 (borrowing 2 bits), then 224, 240, 248, 252, 254, and 255.
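These octet values follow directly from summing the leftmost bit values, which can be verified in a couple of lines:

```python
# Each subnet-mask octet value comes from summing the leftmost n bit values.
bit_values = [128, 64, 32, 16, 8, 4, 2, 1]
octet_values = [sum(bit_values[:n]) for n in range(9)]
print(octet_values)  # [0, 128, 192, 224, 240, 248, 252, 254, 255]
```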
In this case, we borrowed 8 bits, resulting in a complete octet of additional possible networks. 10.0.0.0 is still a network, but now 10.1.0.0 is not a host on the 10.0.0.0/8 network, it is a newly available network of its own. The same is true for 10.2.0.0, 10.3.0.0, and so on, all the way up to 10.255.0.0. By borrowing 8 bits from the host portion of the IP address, we have created 256 new subnets (although you should avoid using the first, 10.0.0.0, and the last, 10.255.0.0, except in special cases).
The formula to determine how many usable subnets are created is to count the number of borrowed bits in the subnet mask. Raise 2 to the power of however many bits are borrowed, in this case 8, and subtract 2 for the subnets we should avoid using. In this case, 2^8 = 256, and 256 - 2 = 254 usable subnets.
To determine how many hosts are available on each subnet, we simply count the bits of the IP address that are not part of the subnet mask. In this case, with a /16 subnet mask, we have split our IP address completely in half: 16 bits belong to the subnet portion of the address and 16 bits belong to the host portion. Once we have that count, in this case 16, we raise 2 to the power of the number of bits in the host portion and subtract 2. So 2^16 - 2, or 65,534, hosts are available on each subnetwork.
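Both formulas can be checked in a few lines, using the chapter's convention of subtracting 2 in each case:

```python
# Splitting 10.0.0.0/8 into /16 subnets, subtracting 2 in both formulas
# as this chapter does (first/last subnet, network/broadcast address).
borrowed_bits = 16 - 8   # the /8 network widened to a /16 mask
host_bits = 32 - 16      # bits remaining for the host portion

usable_subnets = 2 ** borrowed_bits - 2
hosts_per_subnet = 2 ** host_bits - 2

print(usable_subnets)    # 254
print(hosts_per_subnet)  # 65534
```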
We can borrow any number of bits, from 2, resulting in two usable networks (2^2 – 2 = 2), to all but 2 of the host bits (a /30 subnet mask, or 255.255.255.252). This means that an individual network will not always be divided right along the octet lines as it was in our last example.
As another example, we will take the private network 192.168.0.0/16 and subnet it. We want to create at least six additional usable subnets. We can take our formula for determining the number of usable subnets given a subnet mask and run it in reverse to determine the minimum number of bits we need to borrow to create a certain number of usable subnets. Borrowing 1 bit would give us zero usable subnets. Borrowing 2 bits yields two usable subnets. Borrowing 3 bits (2^3 – 2) gives us six usable subnets. We must borrow 3 bits from the host portion of our 192.168.0.0 network, giving us a 19-bit mask (/19, or 255.255.224.0) instead of the 16-bit mask we started with (/16, or 255.255.0.0).
The network ranges created when we subnet 192.168.0.0 with a 19-bit mask are not as simple to determine as the 10.0.0.0/8 network with /16 subnets. To determine where the subnets start and end, we will need to make use of binary math again.
The 19-bit mask is 255.255.224.0 or, in binary, 11111111.11111111.11100000.00000000. This means that in order to determine where the networks start and end, we have to increment the least significant bit that is part of the subnet mask. That bit falls in the third octet (11100000) of the subnet mask. The values of the bits in any given octet decline (from left to right) as the powers of 2 decline. The leftmost bit is 2^7, or 128, then 2^6, or 64, then 32, 16, 8, 4, 2, and 1 for the rightmost bit. In this case, the least significant bit of the mask is the third from the left, or the 2^5 bit, with a value of 32.
The first (generally unusable) subnet is 192.168.0.0/19, with usable hosts 192.168.0.1 to 192.168.31.254 (the first and last addresses in each subnet are reserved as the network and broadcast addresses and are unusable). The next subnet begins where the least significant bit of our subnet mask is incremented. Because that bit has a value of 32 in the third octet, our next subnet is 192.168.32.0/19. The hosts in the 192.168.32.0/19 subnet are 192.168.32.1 to 192.168.63.254. The subnet after this one starts when the least significant bit of the subnet mask increments again. The remaining networks, all with a 19-bit subnet mask, are 192.168.64.0, 192.168.96.0, 192.168.128.0, 192.168.160.0, 192.168.192.0, and the generally unusable 192.168.224.0 network.
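Python's standard ipaddress module performs exactly this binary math, so it can be used to double-check the subnets worked out above:

```python
# Enumerate the /19 subnets of 192.168.0.0/16 using the standard library.
import ipaddress

network = ipaddress.ip_network("192.168.0.0/16")
subnets = list(network.subnets(new_prefix=19))

for subnet in subnets:
    print(subnet)
# 192.168.0.0/19, 192.168.32.0/19, 192.168.64.0/19, ... 192.168.224.0/19
```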
In order to grasp subnetting fully, it is advisable that you perform some subnetting tasks on your own. For instance, determine the number of usable hosts that would exist in each of the 192.168.0.0/19 subnets we just created. Attempt to break apart a network such as the 10.0.0.0/8 private network we started with into subnets using a different length subnet mask, such as the /12, /16, /17, and /30-bit masks mentioned earlier.
For any given network, you should be able to calculate the number of usable subnets that will be created with a given subnet mask using the formula 2^x – 2, where x is the number of bits borrowed from the host portion. For any given subnet mask, you should be able to calculate the number of usable hosts per subnet using the formula 2^x – 2, where x is the number of bits remaining in the host portion of the IP address. For any given network address and subnet mask, you should be able to calculate the network addresses of the subnets by determining the value of the least significant bit of the subnet mask and incrementing the network address by it.
Subnetting is one of the most effective ways of dividing a network for organizational, traffic control, and security purposes.
Sometimes, administrators may find it necessary to separate certain departments or segments of their offices or companies. This can be done easily using VLANs (virtual local area networks), which are used for the logical segmentation of a shared physical network. Traffic from one VLAN cannot reach a host on another VLAN without passing through a router. This means policies can be set at the router that controls communication between VLANs as effectively as communication between the local network and the DMZ, or the outside world and the local network.
In most organizations, the internal/private network will have many more nodes and IP addresses than the number of public IP addresses assigned to the organization. Administrators will want to allow nodes on the private network access to the Internet without exposing the network information unnecessarily. Network address translation (NAT) works by taking internal or private IP addresses and mapping them to different external or public addresses.
Though NAT can be implemented on a one-to-one basis (Static NAT), it is much more common to map multiple internal addresses to a few external addresses. This type of NAT is known by many names, including NAT overload, NAT with port address translation (PAT), many-to-one NAT, and network address port translation (NAPT).
NAT is implemented on the router that connects the internal and external networks. The router tracks the state of outgoing connections using the source and destination ports, and sends the replies to the correct internal IP without ever divulging the details of the internal addressing.
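The state tracking a NAT-overload router performs can be illustrated with a toy translation table. This is a conceptual sketch only, not a real NAT implementation; the addresses and starting port are hypothetical (203.0.113.1 is a reserved documentation address):

```python
# Toy sketch of a NAT overload (PAT) translation table: each outbound flow
# is mapped to a unique public port, and replies are matched back to the
# original private address and port.

translation_table = {}    # public_port -> (private_ip, private_port)
next_public_port = 30000  # hypothetical starting port for translations

def translate_outbound(private_ip, private_port):
    global next_public_port
    public_port = next_public_port
    next_public_port += 1
    translation_table[public_port] = (private_ip, private_port)
    return ("203.0.113.1", public_port)   # the router's single public IP

def translate_inbound(public_port):
    # Replies to unknown ports simply match nothing and are dropped --
    # the security benefit of this kind of NAT.
    return translation_table.get(public_port)

src = translate_outbound("10.0.0.5", 51000)
print(src)                       # ('203.0.113.1', 30000)
print(translate_inbound(30000))  # ('10.0.0.5', 51000)
print(translate_inbound(40000))  # None -- unsolicited traffic is dropped
```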
NAT overload has the added benefit that it cannot pass inbound traffic for which a destination has not been specifically designated on the router. This means that a target node on an internal network will not see incoming traffic that has not been specifically allowed, greatly reducing the internal network’s exposure to Internet threats.
Services on a private network often must be made accessible to users on the Internet at large. Inbound connections from networks that are not part of the LAN fall under the umbrella of remote access. When implementing remote access, a number of technologies are available to provide each of the components of the Authentication, Authorization, and Accounting (AAA) model.
The Authentication process is concerned with verifying the identity of the user or process initiating a connection. This commonly takes the form of a username and password exchange. Authorization is the assignment of rights and permissions based on the identity established by the authentication phase. Accounting is concerned with logging the activity on a given system, such as session length or activity, requests that are not authorized, or failed connections.
The remote access protocol most commonly employed for dial-in remote access is the Remote Authentication Dial-In User Service (RADIUS). Other remote access AAA technologies include TACACS+, Microsoft RRAS, and VPNs.
Dial-in Remote Access
An organization may be configured so that users can dial in to the network and be assigned network access based on their authentication to a RADIUS server. Users would dial in to a router at the organization and, using PPP, send their username and password. That router would check with the RADIUS server to ensure that the credentials were valid (Authentication), and to determine which ACLs to apply (Authorization). Users would then be able to communicate over the newly established link according to the ACLs that have been applied. During the entire session, the router is able to monitor and log according to the administrator’s specifications (Accounting).
Remote Access with VPN
If an organization would prefer to use Virtual Private Network (VPN) tunneling, the process follows the same AAA steps. The remote user initiates a VPN connection to the firewall/router/VPN device. The VPN device first authenticates the user, and then assigns the user permissions that have been specified by the administrator, and finally tracks and records whatever data is necessary for the duration of the session.
When it comes to remote access, regardless of the technology used to establish a connection, the AAA model applies.
Network telephony is the passing of digitally encoded voice data over a data network, or from a data network to a voice network. Common applications of telephony include Voice over Internet Protocol (VoIP) and video-conferencing.
Network Access Control (NAC) is a security approach that encompasses host authentication and policy enforcement to reduce the risk of rogue agents gaining access to a private network. This may take the form of MAC address filtering, where only specific devices are allowed to pass data to the network. The network may require authentication before allowing traffic, or the NAC may specify other requirements, such as up-to-date anti-virus software or operating system patches for security. Whatever the policy requirements, NAC only authorizes devices according to their compliance.
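The MAC-filtering form of NAC can be reduced to a simple allowlist check. The sketch below is purely illustrative (the MAC addresses are made up), and real NAC products enforce far more, such as authentication and patch-level checks:

```python
# Minimal sketch of MAC address filtering: only devices on an
# administrator-maintained allowlist may pass traffic.

ALLOWED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}  # hypothetical list

def admit(mac_address: str) -> bool:
    """Return True if the device's MAC address is on the allowlist."""
    return mac_address.lower() in ALLOWED_MACS

print(admit("00:1A:2B:3C:4D:5E"))  # True  -- known device
print(admit("de:ad:be:ef:00:01"))  # False -- rogue device denied
```

Note that MAC filtering alone is weak, since MAC addresses can be observed and spoofed, which is why NAC policies usually layer on stronger checks.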
Virtualization is the practice of creating a logical environment for computing resources that breaks the one-to-one link between physical computers and operating systems, using a product such as VMware or Hyper-V. In a virtual environment, a single physical server can host multiple logical servers.
When utilizing virtual servers, it is important that each virtual server follow the same information security guidelines as a traditional server (e.g., software firewall, operating system updates, and disabling unnecessary services). Virtual computers have the same security requirements as physical servers, and any safeguards must be implemented on each virtual server.
It may be preferable to host data and processing away from the local network. When information or services are hosted remotely on the Internet, they are said to reside in the cloud. Cloud computing is a term for Internet-hosted computing services. There are a number of flavors of cloud computing, whether hosting applications (SaaS – Software as a Service), raw computing resources (IaaS – Infrastructure as a Service), or an OS with specialized software (PaaS – Platform as a Service).
Imagine a database for your organization, hosted offsite on a virtual server. Your organization can still access this database. Your users can still request that the database perform the same tasks that it would if it were located on the LAN. Though the functionality of the database has not changed much, the physical location has. Because your database is now hosted in the cloud, you do not need to be concerned with the hardware resources necessary to host the database. This is an example of PaaS.
Now imagine you have a custom program that is extremely processor intensive and that will provide results at the end of the calculation. Furthermore, this program needs to be run only infrequently. Rather than acquiring a large amount of processing power and running your program locally, it may be possible to utilize cloud computing IaaS to get temporary access to the processing power you need and receive the results.
You are very likely already familiar with the concepts described by SaaS, for example, an e-mail service hosted at a certain website. Though we still have access to our e-mail, we no longer have access to the servers that process the e-mail. The e-mail service may use many computers or a single virtual machine, but in SaaS cloud computing, the local user is concerned only with the delivery of the service, in this case, e-mail access.
To sum up, cloud computing comes in many flavors. Services that were traditionally tied to a local server can simply be moved to a server or virtual server in the cloud. Raw processing power and storage can be accessible on an as-needed basis. Applications can be run in the cloud, returning the necessary output directly to the end-user.
Cloud computing can allow the same investment to be utilized much more efficiently through dynamic provisioning of computing resources. It can also allow higher uptime by spreading the costs associated with hardware and infrastructure redundancy, putting high availability within reach of even small organizations.
A major drawback of cloud computing is the loss of the physical control over your data. The 1s and 0s no longer reside where you are guaranteed to have physical access to them. This loss of control means you must rely on a third party to ensure the integrity of your data. If the services you are receiving become unavailable, so does any data stored on those services.
Just as it is necessary to understand network design elements and their functions, understanding network protocols is absolutely critical to maintaining a secure network environment. In this section, we will look at some common protocols, what they do, and what they are used for. This section will cover the following topics:
- IPSec
- SNMP
- SSH
- DNS
- TLS, SSL
- TCP/IP
- FTPS
- HTTPS
- SFTP
- SCP
- ICMP
- IPv4 versus IPv6
When you send important or sensitive information over a network, you want to ensure it is secure, not only on the computers that hold the data (encryption) but also while the data is en route over the network. One method of ensuring security is by encrypting the data stream using IPSec.
IPSec, or Internet Protocol Security, is a set of protocols that secures IP communications at the packet level. IPSec is closely related to VPN, in that it is by far the most popular technology for implementing VPNs on IP networks.
The IPSec standard uses three major protocols to accomplish information security:
- Authentication Headers (AHs) ensure information in IP packets is not altered and perform source authentication, but provide no encryption.
- Encapsulating Security Payloads (ESPs) provide data confidentiality through encryption and source authentication, and ensure data is not altered in transit.
- Security Associations (SAs) bundle the algorithms and data IPSec relies on to accomplish its goals, including key exchange, storage of keys, and encryption protocols.
AHs serve to authenticate communications only, not secure them from prying eyes. Because ESP can provide the same level of authentication while also allowing encryption, it is a much more robust IPSec security option.
IPSec encryption via ESPs functions at a low level, either by encrypting the contents of an IP packet and leaving the source and destination unencrypted (ESP – Transport Mode), or by encrypting the entire IP packet and sending it as the data portion of a new IP packet (ESP – Tunnel Mode).
IPSec supports the use of any one of multiple encryption algorithms as defined in RFC 4835, including Advanced Encryption Standard (AES) and the successor to Data Encryption Standard (DES): Triple Data Encryption Algorithm (3DES).
The SAs define the parameters IPSec is to use in securing communications. SAs are concerned with unidirectional data security; therefore, connections are usually secured by a pair of security associations. IPSec relies on the Internet Security Association and Key Management Protocol (ISAKMP) to set up SAs, exchange keys, and manage the resulting SAs. ISAKMP does not specify any particular protocol for key exchange; key exchange is often accomplished by use of a pre-shared key, or by an implementation of Internet Key Exchange (IKE and IKEv2) or another key exchange algorithm. Each SA will define the encryption algorithms and keys to be used for a single connection.
Because IPSec functions without regard to the content of the IP packets, it can be used to secure communication between hosts and networks for any protocol that can be routed over IP. IPSec can support authentication-only, encryption-only, or authentication with encryption. It is a powerful tool for securing IP communications.
Simple Network Management Protocol (SNMP) is a protocol for collecting and sending information regarding network-connected devices. SNMP uses UDP ports 161 and 162. SNMP does not specify specific information to be collected, but it does specify the format in which information is to be sent. The primary components of an SNMP system are managed devices, agents, and a network management system.
An agent records information for a managed device in a Management Information Base (MIB), which can be queried by the network management system. Agents listen on port 161 for requests for information and respond from port 161 to whichever port made the query. Agents may also send updates that are not directly requested by the manager (known as traps) from any available port to port 162.
The Simple Network Management Protocol is just that, simple. Information is collected in a management information base by an agent and is sent in clear text when requested by a network management system.
Secure Shell (SSH) uses public-key, or asymmetric, encryption to secure communication between two hosts. By default, SSH runs on TCP port 22. The most common use for SSH is as a secure means of gaining access to a remote shell session. This makes SSH an excellent alternative to Telnet, which is not an inherently secure method of gaining access to a shell session.
Because of weaknesses in an early version of SSH, it is advisable to use only SSH-2 when communication security is important. Other uses of SSH include SFTP and SCP.
One of the protocols that greatly aids the Internet’s ease of use is the Domain Name System (DNS). DNS is the protocol that allows you to type a website name into your browser and be sent to a specific computer at a specific IP address. This protocol is responsible for converting hostnames into IP addresses, among other uses.
DNS is also responsible for mapping domain names with certain services and the addresses of the hosts responsible for providing those services. A DNS record does not only consist of an IP address and a hostname but also a record type. The following is a partial list of all the types of DNS records:
- A records – map hostnames to IP addresses
- MX records – map a domain name to a Message Transfer Agent (MTA) for the domain
- CNAME records – map hostnames to other hostnames
- AAAA records – map hostnames to IPv6 addresses
- PTR records – return a hostname and are commonly used for reverse DNS (mapping IP addresses to specific names)
DNS records also include a class entry (almost always IN, for Internet) and a Time-to-Live (TTL), which is the amount of time, in seconds, the record should be considered valid by a server that caches DNS records without refreshing. A single complete DNS record is composed of a name, class (IN), type, data, and TTL. DNS operates over UDP port 53.
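The five record fields just described can be modeled very simply. A hypothetical A record (the name, address, and TTL below are illustrative values only) might look like this:

```python
# Model the fields of a single DNS record: name, class, type, data, TTL.
from collections import namedtuple

DNSRecord = namedtuple("DNSRecord", ["name", "rclass", "rtype", "data", "ttl"])

# A hypothetical A record mapping a hostname to an IPv4 address,
# valid in caches for 3600 seconds (one hour).
record = DNSRecord(name="www.example.com", rclass="IN", rtype="A",
                   data="93.184.216.34", ttl=3600)

print(record.rtype, record.data)  # A 93.184.216.34
```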
Secure Socket Layer (SSL) and Transport Layer Security (TLS) are protocols that can be used to encrypt and secure connection-oriented protocols, such as HTTP, SMTP, and FTP. TLS is the successor to the SSL protocol.
The secure communication between the client and the server is stateful, meaning it is brought up and secured, and then lasts only as long as the communication session does. This is accomplished by means of a secure handshake, wherein the client and the server agree on which encryption type to use; the client receives the server’s authentication information, which includes the server’s public encryption key; the client encrypts a random number with the server’s public key and responds; and the server and the client use the shared random number to generate any additional key material and complete the handshake.
SSL was first released as version 2.0, with version 1 never released. A number of security vulnerabilities led to SSL 3.0 being developed as a replacement. TLS was designed to improve on both the features and the security of SSL. SSL has been almost completely replaced by TLS, but TLS allows for downgrading of TLS connections to SSL 3.0 in cases where a client does not support TLS.
Many applications that communicate over TLS use alternate port numbers and are identified with an alternate acronym. HTTP, when protected by TLS or SSL, as it very often is, is referred to as HTTPS (HTTP Secure) and most commonly uses port 443.
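In practice, applications rely on a TLS library rather than implementing the handshake themselves. As a hedged sketch (no network connection is made here), Python's standard ssl module shows the relevant knobs: the default context verifies certificates and checks hostnames, and the minimum protocol version can be raised to refuse old SSL/TLS versions:

```python
# Configure a TLS client context with certificate verification enabled
# and legacy protocol versions (SSL 3.0, early TLS) refused.
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

A socket wrapped with this context (via `context.wrap_socket`) would then perform the handshake described above automatically.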
The Transmission Control Protocol and Internet Protocol model (TCP/IP) is the most widely used communications protocol suite in the world. The Internet is based on TCP/IP, as is every network that uses IP or TCP. There can be some confusion when referring to TCP/IP, because TCP/IP refers to the model followed by networks and applications, not just the IP protocol and the TCP protocol.
In the past, it was not uncommon for different vendors to have their own networking protocol. Apple computers would run on their own network and communicate using AppleTalk, Windows and Unix computers might be on a TCP/IP network, and Novell might be on an IPX/SPX network. Now, with the ubiquity of the Internet, almost all “networked computers” are running on IP.
The TCP/IP model is a standardized, open, network communications model that any vendor can employ to make systems that can be interconnected.
The “layers” of the TCP/IP model include the Physical and Link/Data-link/Network interface Layers, often combined into one Network Interface Layer. This layer is concerned with the physical properties of the communications medium and the methods of encoding information to be sent across the physical medium.
The next higher layer is the Network/Internetwork/Internet Layer. This layer is concerned with logical IP addressing and routing. It allows for hierarchical routing, so that individual hosts do not need to know how to reach each other individual host, but can rely on an intermediary device to reach their destination. IP is the only protocol contemplated at this layer in the TCP/IP model.
The Transport Layer is concerned with the establishment of communication between two end hosts, and the passing of data between them, regardless of the physical or logical links between the hosts. This means that whether the two devices are in the same office or on opposite sides of the world, the Transport Layer will send the same data and expect the same response. This layer introduces the concept of port numbers, which can be used to track connections and route data to the correct application at the next level up. The Transport Layer protocols in the TCP/IP model are TCP (for connection-oriented communication) and UDP (for connectionless or unordered data).
The top layer of the TCP/IP model is the Application Layer. This layer is concerned with providing services to applications running on computers. This is the layer where user interaction occurs. Protocols that exist at the Application Layer are far too numerous to list, but they include HTTP, FTP, DNS, TFTP, SMTP, HTTPS, and many, many others.
FTP-Secure (FTPS) is simply FTP utilizing a TLS/SSL connection for security. It is not the same protocol as Secure FTP (SFTP). Because FTP is a connection-oriented protocol, it can take advantage of TLS to secure both the initial TCP handshake and the data passed between the client and the server for the duration of the connection. An FTPS connection can encrypt the command channel (to protect username and password authentication data), the data channel (to protect the contents of the files being transferred), or both.
In much the same way TLS can be used with HTTP to form HTTPS, TLS can be used with FTP to form FTPS. FTPS operates over TCP and UDP port 989 for data and port 990 for control information.
HTTP-Secure (HTTPS) is HTTP utilizing a TLS/SSL connection for security. This is one of the most common uses for TLS. Using an HTTPS connection prevents eavesdroppers from viewing the information being sent or received during an HTTP session, while ensuring the identity of the server at the far end of the HTTP session.
Because of the many security improvements over HTTP, HTTPS is an excellent solution whenever security is required in a web session. As with any other TLS-supported application, a number of encryption algorithms can be used, and the strongest available encryption will be chosen at connection establishment. HTTPS operates over TCP port 443.
Secure FTP (SFTP) is not the FTP protocol modified to make use of a different layer, but an entirely new protocol for transferring and managing files. The SFTP client assumes it is operating over a connection that is secured by another technology. Many implementations of SFTP utilize SSH tunneling to establish the connection, and then pass the SFTP data through the secure tunnel. SFTP runs over TCP port 22 (SSH) by default.
Secure Copy (SCP) is a protocol for securely transferring files between hosts on a network. It relies on SSH to establish a connection between the source and the destination hosts, and then transfers the file as specified. It only supports file transfer, not the additional file management operation available in SFTP. SCP runs over TCP port 22 (SSH) by default.
Internet Control Message Protocol (ICMP) messages are not usually seen by an end-user. They are messages that are sent back to the source of an IP packet, usually to indicate errors in the flow of traffic from one point to another, such as being unable to route to the final host or exceeding the TTL of a packet. One common use of ICMP is the echo exchange: a host sends an echo request packet to a remote host, and the remote host responds with an echo reply packet.
A few utilities that a network administrator might use that make use of ICMP messages are ping, pathping, and tracert. ICMP echo packets are commonly known as ping packets.
IPv4 versus IPv6
So far, when discussing IP addressing, it was assumed that we were using Internet Protocol version 4 (IPv4). IPv4 has a 32-bit address and is represented in the format x.x.x.x, where each x represents one byte, or octet. Instead of a 32-bit address, Internet Protocol version 6 (IPv6) uses a 128-bit address written in hexadecimal notation. Hexadecimal notation uses the numbers 0 to 9 and the letters a to f so that each digit can represent one of sixteen possible values, or 4 bits. An IPv6 address is written in eight groups separated by colons, with four hexadecimal digits (16 bits) in each group.
A typical IPv4 address might be 220.127.116.11, whereas a typical IPv6 address might be 126d:eadb:eef3:0000:0000:0000:0147:12c7. To make the presentation of IPv6 addresses easier, a number of conventions have been adopted. One is that leading 0s in any 16-bit group may be omitted. Our example IPv6 address could just as accurately be written 126d:eadb:eef3:0:0:0:147:12c7. Another convention is that, a maximum of one time per address, consecutive colons can replace any number of consecutive all-zero groups. Thus our sample address could also be written 126d:eadb:eef3::147:12c7.
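Both shortening conventions are implemented by Python's standard ipaddress module, which can be used to check a hand-compressed address (the sample address below is an illustrative eight-group value):

```python
# .compressed applies both IPv6 shortening rules; .exploded restores the
# full eight four-digit groups.
import ipaddress

addr = ipaddress.ip_address("126d:eadb:eef3:0000:0000:0000:0147:12c7")
print(addr.compressed)  # 126d:eadb:eef3::147:12c7
print(addr.exploded)    # 126d:eadb:eef3:0000:0000:0000:0147:12c7
```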
In IPv6, all networks hold 2^64 hosts by default. IPv6 provides for many more addresses. Where the total address space of IPv4 is 2^32, or about 4.3 billion at most, IPv6’s address space is 2^128, or approximately 3.4 x 10^38. Because of the immense number of devices connected to the Internet, even considering address space-saving technologies such as NAT with PAT, the number of individually addressed devices will soon exceed the available addresses in the IPv4 address space. This paucity of available addresses is the primary impetus for the move to IPv6.
With the greatly increased address space of IPv6, NAT will no longer be needed. This means that internal addresses will be visible to the outside world. Because NAT will no longer be deployed at the network boundary, it is important that firewalls allow incoming connections only as specified by the administrator. Allowing inbound connections only as necessary, instead of allowing inbound connections by default, replicates the greatest security strength of NAT.
The IPv6 protocol suite includes built-in support for IPSec. Though not every connection over IPv6 will be fully encrypted, the support for encryption and authentication will be there at the IP level. Right now, we think of IPSec primarily in conjunction with VPN connections. This will still be the primary use of IPSec in IPv6 for a time after the immediate rollout. IPv6 support for IPSec does not mandate the use of IPSec. The integrated support for IPSec will, however, mean that support will be available for easier implementation in the future, giving even more opportunity to secure communications.
IPv6 makes a number of other changes in areas such as ICMP flow control messaging, multicasting to designated device types, and dynamic address configuration. With all these changes, you can think of IPv6 as a more robust delivery system for all the traffic that we currently send over IPv4.
In IPv4, the default size of a network before subnetting varies depending on the class of the network. This has changed in IPv6, as the IPv6 address is divided into two parts: a 64-bit network address and a 64-bit host address.
Commonly Used Default Network Ports
When configuring firewalls, it is important to understand which ports are necessary for your network to function, and which are not, in order to maintain a network infrastructure that is protected against threats. It may be useful to commit to memory the network port numbers of some of the more common networking protocols. Each networking protocol uses either the connection-oriented Transmission Control Protocol (TCP) or the connectionless User Datagram Protocol (UDP). This section will cover the default ports of the following protocols: FTP, SFTP, FTPS, TFTP, Telnet, HTTP, HTTPS, SCP, SSH, and NetBIOS.
The following table lists the port numbers and port types of selected common protocols:
| Protocol | Full Name | TCP/UDP | Port # | Notes |
| --- | --- | --- | --- | --- |
| FTP | File Transfer Protocol | TCP | 20/21 | 20 for data, 21 for control |
| SFTP | Secure FTP | TCP | 22 | Runs over SSH, port 22 |
| FTPS | FTP Secure | TCP/UDP | 989/990 | FTP with TLS, 989 for data, 990 for control |
| TFTP | Trivial File Transfer Protocol | UDP | 69 | Basic file transfer |
| Telnet | Telnet | TCP | 23 | Unencrypted shell sessions and text |
| HTTP | Hypertext Transfer Protocol | TCP/UDP | 80 | Unsecured web pages |
| HTTPS | Hypertext Transfer Protocol Secure | TCP | 443 | HTTP with TLS |
| SCP | Secure Copy | TCP | 22 | Runs over SSH, port 22 |
| SSH | Secure Shell | TCP | 22 | Encrypted shell sessions; SFTP and SCP run over SSH |
| NetBIOS | Network Basic Input/Output System | TCP/UDP | 137, 138, 139 | 137 for name, 138 for datagram, 139 for session service |
File Transfer Protocol (FTP) uses two ports to transfer and manage files. TCP port 20 carries the actual file data and TCP port 21 carries the control data. FTP is commonly used to transfer large files or upload web pages to a domain.
Secure FTP (SFTP) is generally tunneled through SSH. The default TCP port for SSH is 22; thus, the default port for SFTP is TCP 22.
FTP Secure (FTPS) functions the same as FTP, with separate data and control channels. Much like HTTPS is HTTP-secured by TLS/SSL, FTPS is FTP-secured by TLS/SSL and uses TCP ports 989 and 990. TCP port 989 is for data and port 990 is for control.
Trivial File Transfer Protocol (TFTP) is a basic file transfer protocol that uses UDP instead of TCP. It operates on UDP port 69.
Telnet is a service for sending and receiving unencrypted text. It operates over TCP port 23.
Hypertext Transfer Protocol (HTTP) sends web pages without security. It operates over TCP and UDP on port 80.
HTTP Secure (HTTPS) is a variant of HTTP that makes use of SSL/TLS to secure the session. HTTPS operates over TCP port 443.
Secure Copy (SCP) is a basic file copying protocol that generally runs in an SSH tunnel. As with any protocol tunneled through SSH, it uses TCP port 22.
Secure Shell (SSH) allows secure communication between two hosts, as well as tunneling for other applications, such as SFTP and SCP. It operates over TCP port 22.
The Network Basic Input/Output System (NetBIOS) is a set of protocols that allows for name resolution, connectionless communication, and session establishment. The NetBIOS name service runs over UDP port 137. The NetBIOS datagram service, for connectionless communication, runs over UDP port 138. The NetBIOS session service runs over TCP port 139.
Implementing Wireless Networks in a Secure Manner (Access Points)
It is important to consider the security of wireless networks as much as, if not more than, that of wired networks. This is due to the nature of wireless traffic and how it is transmitted, which makes it inherently less secure. This section deals with security protocols, encryption, authentication, and other components of keeping your wireless network secure. This section will cover the following topics:
- MAC filtering
- SSID broadcast
Securing your wireless networks is always important. WPA (Wi-Fi Protected Access) provides a level of security above that of WEP (Wired Equivalent Privacy) but is not as secure as WPA2. There are two parts to wireless network security: authentication and encryption. The password a user types in is a component of one of the authentication types. WPA offers two authentication modes: PSK (Pre-Shared Key) and Enterprise. WPA-PSK (also called WPA-Personal) is self-explanatory: the wireless access point (WAP) is configured with a passphrase of 8 to 63 ASCII characters (or exactly 64 hexadecimal characters). You then give that passphrase to every person you want to allow to connect to the WAP; hence, a pre-shared key.
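The pre-shared key itself is not the raw passphrase: 802.11i derives a 256-bit Pairwise Master Key (PMK) from the passphrase and the SSID using PBKDF2-HMAC-SHA1 with 4096 iterations and the SSID as the salt. A minimal sketch of that derivation using Python's standard library (the passphrase and SSID values below come from the published 802.11i test vectors, not from a real network):

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA/WPA2 Pairwise Master Key from a passphrase.

    802.11i specifies PBKDF2-HMAC-SHA1 with the SSID as salt,
    4096 iterations, and a 32-byte (256-bit) output.
    """
    if not 8 <= len(passphrase) <= 63:
        raise ValueError("WPA passphrase must be 8-63 ASCII characters")
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode("ascii"), ssid.encode("ascii"), 4096, 32
    )
```

Because the SSID is the salt, the same passphrase produces a different PMK on networks with different SSIDs, which is one reason attackers precompute tables only for common SSID names.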
WPA-Enterprise authentication is more secure in that it uses certificates or a username and password pair to authenticate each user, rather than a pre-shared password, which can be stolen, copied, or read over a user’s shoulder as it is typed. WPA-Enterprise requires a RADIUS authentication server in order to facilitate 802.1X authentication, and it uses EAP (Extensible Authentication Protocol), a framework that supports several authentication methods.
Encryption of wireless traffic takes place between the client’s computer and the wireless access point, where the data is flowing wirelessly and must be protected from being captured. WPA typically uses TKIP (Temporal Key Integrity Protocol), which was superseded by the AES-based CCMP in WPA2.
All of the different types of authentication methods available for WPA are made available for WPA2, an enhanced and more secure version of WPA. WPA2 supports both PSK (Personal) and Enterprise authentication modes. However, the main difference is that encryption is provided using AES-based CCMP, rather than TKIP, to provide more security. WPA2 is the recommended standard for both home and enterprise users.
When securing wireless networks, the least secure standard one can use (short of leaving the network unsecured entirely) is WEP (Wired Equivalent Privacy), which is very weak, is easily compromised, and has been cracked in less than five minutes. WEP keys are 10 or 26 hexadecimal characters (40-bit or 104-bit keys, marketed as 64-bit and 128-bit WEP once the 24-bit IV is counted). WEP is insecure due to its misuse of the popular RC4 stream cipher and has been succeeded by much more secure standards, such as WPA and WPA2.
EAP (Extensible Authentication Protocol) is an authentication framework used to authenticate wireless clients. EAP supports several authentication methods; the most common in wireless deployments are as follows:
- PEAP (encapsulates EAP traffic using TLS to address EAP’s security shortcomings), including the PEAP-MSCHAPv2 and PEAP-TLS variants
- LEAP (Cisco’s lightweight variant)
While EAP serves its purpose well as an authentication framework, it was originally designed on the assumption that it would run over a physically protected communication channel, an assumption that does not hold for wireless links. PEAP (Protected Extensible Authentication Protocol) addresses this by encapsulating EAP traffic in a TLS tunnel, which is itself encrypted and authenticated.
PEAP-MSCHAPv2 (Microsoft Challenge Handshake Authentication Protocol version 2) is the most common type of PEAP and is usually what people mean when speaking of PEAP. It is one of the most widely used and supported variants because it provides mutual (two-way) authentication between client and server.
PEAP-TLS (PEAP-EAP-TLS) requires client certificates, and thus a PKI infrastructure, to operate. The client certificate can be stored on a smart card, adding a second authentication factor and further increasing security.
LEAP (Lightweight Extensible Authentication Protocol) is a Cisco-developed wireless authentication protocol that provides dynamic WEP keys, improving security by issuing a new key each time the user reauthenticates to the WAP. LEAP can also use TKIP for encryption in place of dynamic WEP keys. However, LEAP is known to be easily cracked because of its reliance on MS-CHAP (Microsoft Challenge Handshake Authentication Protocol), whose challenge/response exchanges are vulnerable to offline dictionary attacks.
Each network card has its own MAC address, which identifies the network card itself and therefore the physical address of the computer. Implementing MAC filtering on a network device, such as a switch or access point, can help restrict network access to a list of approved devices. Remember, it is the devices, not the users, that are authenticated, so the MAC address of every approved device must be added to the filter list.
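The device-based nature of MAC filtering can be sketched as a simple allowlist check. A minimal illustration (the MAC addresses and device labels below are made-up examples, not real hardware):

```python
# Minimal sketch of MAC filtering: the allowlist names devices, not users.
# The addresses below are illustrative examples.
ALLOWED_MACS = {
    "00:1a:2b:3c:4d:5e",  # example: office laptop
    "00:1a:2b:3c:4d:5f",  # example: network printer
}

def normalize_mac(mac: str) -> str:
    """Canonicalize separator and case so comparisons are reliable."""
    return mac.replace("-", ":").lower()

def is_allowed(mac: str) -> bool:
    """Return True if the device's MAC address is on the allowlist."""
    return normalize_mac(mac) in ALLOWED_MACS
```

Note the design caveat: because MAC addresses are sent in the clear and are trivially spoofed, MAC filtering is a deterrent layer, not a substitute for WPA2 authentication.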
When you configure your WAP, it will begin its SSID broadcast. The SSID (service set identifier) broadcast is the name of the wireless network and the identifier for the corresponding network configuration. In order to connect to a wireless network, you must either:
- Be able to pick up the SSID on your wireless antenna; or
- Know the wireless SSID and type it in.
The second option above is necessary when the SSID broadcast has been disabled, hiding the network name. Hiding the SSID can deter casual discovery of the network, for example by wardriving (driving around scanning for wireless networks). This matters because network names often reveal a location and where the signal is strongest, for example “Break Room” or “Production Area 1”.
TKIP (Temporal Key Integrity Protocol) is an encryption protocol used specifically with WPA wireless networks. It was introduced after WEP was proven to be a weak standard, with its misuse of the RC4 stream cipher. It can be readily deployed to most equipment that currently supports WEP.
TKIP also uses the RC4 cipher, but it cryptographically mixes the secret root key with the initialization vector (IV), rather than using WEP’s simplistic construction, in which the IV is simply prepended to the key. That predictable per-packet key structure is what makes WEP straightforward to crack, and it made an improvement, in the form of WPA and TKIP, necessary.
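The structural difference can be illustrated in a few lines. The sketch below contrasts WEP's real per-packet key construction (IV concatenated with the root key) with a hash-based stand-in for key mixing; the mixing function is a deliberately simplified illustration, not the actual two-phase TKIP algorithm, and the root key is an example value:

```python
import hashlib

ROOT_KEY = bytes.fromhex("0123456789abcdef0123456789abcdef")  # example key

def wep_per_packet_key(iv: bytes) -> bytes:
    # WEP simply prepends the 24-bit IV to the root key, so every
    # per-packet RC4 key shares a known, fixed suffix -- the structural
    # flaw exploited by the classic related-key attacks on WEP.
    return iv + ROOT_KEY

def mixed_per_packet_key(iv: bytes) -> bytes:
    # Simplified stand-in for TKIP's two-phase key mixing (NOT the real
    # TKIP algorithm): hashing decorrelates the per-packet key from both
    # the IV and the root key, so no fixed portion is shared.
    return hashlib.sha256(iv + ROOT_KEY).digest()[:16]
```

With WEP, two packets differing only in IV produce per-packet keys that are identical after the first 3 bytes; with proper mixing, the keys bear no visible relationship.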
WPA uses TKIP for encryption and assumes that the physical security between two nodes is assured. However, when situations arise where physical security is NOT a certainty and the link could possibly be compromised, TKIP is not secure enough. CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code Protocol, or CCM Mode Protocol) improves upon TKIP by encapsulating traffic and encrypting it using AES 128-bit encryption, a very strong encryption type. CCMP is required for WPA2 wireless networks.