A wide range of network threats targets the CIA triad (confidentiality, integrity, and availability). Bear in mind that network security is part of the enterprise’s risk management responsibilities within the overall business policy mechanisms. Every company has to determine acceptable levels of risk and vulnerability based on the value of its corporate assets. Enterprises should also define the risk probability and a reasonable expectation of quantifiable loss in case of a security compromise.
This aspect of risk management is called risk assessment, and this is the main driving force behind organizations’ written security policies. Network designers and engineers play a key role in developing these security policies; however, this does not extend to the security implementation phase (this will be the role of another team). You will learn far more about network security in courses such as the Cisco CyberOps Associate.
When recognizing attacks and identifying countermeasures for those attacks, a network engineer should consider and plan for worst-case situations, because modern networks are large and susceptible to many security threats. The applications and systems in these organizations are often very complex, which makes them difficult to analyze, especially when the company uses Web applications and services.
FIG 21.1 – High-level security components
Referencing Figure 21.1 above, you should be able to guarantee users and customers the following three important system characteristics:
- Confidentiality
- Integrity
- Availability
These three attributes are the core of the enterprise security policy. Confidentiality ensures that only authorized users, applications, or services can access sensitive data. Integrity ensures that data is not changed by unauthorized users or services. Finally, availability ensures uninterrupted access to systems, data, and computing resources.
Threats to Confidentiality, Integrity, and Availability – The CIA Triad
A network engineer must understand the real threats to the network infrastructure (for example, through a risk assessment or a business impact analysis) before he can offer security consultancy services. We will analyze different categories of threats to confidentiality, integrity, and availability, including:
- Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks
- Spoofing (masquerading)
- Telnet attacks
- Password cracking programs
- Viruses
- Trojans and worms
These threats must be analyzed in the context of the network areas they affect and considering the exact system component they target.
Denial of Service Attacks
The main purpose of a Denial of Service (DoS) attack is to render a machine or a network resource unavailable to its intended users. In this particular type of attack, the attacker does not try to gain access to a resource; rather, he tries to cause a loss of access for legitimate users or services. The resources can include:
- The entire enterprise network
- The CPU of a network device or server
- The memory of a network device or server
- The disk of a network device or server
A DoS attack results in the resource being overloaded (e.g., in terms of disk space, bandwidth, memory, buffer overflow, or queue overflow), and this will cause the resource to become unavailable for usage. This can vary from blocking access to a particular resource to actually crashing a network device or server. There are many types of DoS attacks, such as ICMP attacks and TCP flooding.
An advanced form of DoS attack is Distributed Denial of Service (DDoS), which works by manipulating a large number of systems to launch an attack on a target over the Internet or over an enterprise network. To deploy a DDoS attack, hackers usually break into weakly secured hosts (e.g., using common security holes in the operating systems or applications used) and compromise the systems by installing malicious code, which gives the attacker full access to the victims’ resources. After many systems are compromised, they can be used to launch a massive simultaneous attack on a target that will be overwhelmed by a very large number of illegitimate requests. Figure 21.2 below illustrates the difference between a DoS attack and a DDoS attack:
FIG 21.2 – DoS attack vs. DDoS attack
Spoofing and Man-in-the-Middle Attacks
A spoofing (or masquerading) attack is the process in which a single host or entity falsely assumes (spoofs) the identity of another host. A common spoofing attack is the man-in-the-middle (MITM) attack, which works by convincing two different hosts (the sender and the receiver) that the computer in the middle is actually the other host (see Figure 21.3 below). One way this is accomplished is DNS spoofing, where a hacker compromises a DNS server and explicitly changes the name resolution entries.
Another type of masquerading attack is ARP spoofing, where the ARP cache is altered so that the Layer 3-to-Layer 2 (IP-to-MAC) address mapping entries are changed in order to redirect traffic through the attacker’s machine. This type of attack is usually carried out within a Local Area Network.
FIG 21.3 – Man-in-the-Middle attack
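As an illustration of how this looks on the wire, the short sketch below watches ARP replies and flags cases where an IP address suddenly maps to a different MAC address, which is the typical signature of ARP cache poisoning. This is a minimal example, assuming the third-party Scapy library and packet-capture privileges; it is not a production detection tool.

```python
# Minimal ARP-spoofing watcher (illustrative only); assumes Scapy is installed
# and the script runs with packet-capture privileges on a lab network.
from scapy.all import ARP, sniff

ip_to_mac = {}  # last known IP-to-MAC mapping seen on the LAN

def inspect_arp(packet):
    if packet.haslayer(ARP) and packet[ARP].op == 2:  # op 2 = ARP reply ("is-at")
        ip, mac = packet[ARP].psrc, packet[ARP].hwsrc
        if ip in ip_to_mac and ip_to_mac[ip] != mac:
            # The same IP is suddenly claimed by a different MAC address,
            # a classic sign of ARP cache poisoning.
            print(f"Possible ARP spoofing: {ip} moved from {ip_to_mac[ip]} to {mac}")
        ip_to_mac[ip] = mac

# Watch ARP traffic on the local segment (stop after 100 packets in this demo).
sniff(filter="arp", prn=inspect_arp, store=False, count=100)
```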
Telnet Attacks
Programs like Telnet and FTP employ user-based authentication but the credentials are sent in clear text (unencrypted) over the wire. These credentials can be captured by attackers using network monitoring tools, and they can be used to gain unauthorized access to network devices.
Other related threats in this area are generated using old unsecured protocols like rlogin, rcp, or rsh that allow access to different systems. These unsecured protocols should be replaced by protocols like SSH or SFTP.
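As a small illustration of the recommended replacement, the sketch below uses the third-party Paramiko library to manage a device over SSH instead of Telnet. The IP address, credentials, and command are placeholders, not values from the text.

```python
# A minimal sketch of SSH-based management as a Telnet replacement,
# assuming the Paramiko library is installed; all values are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only; verify host keys in production
client.connect("192.0.2.10", username="admin", password="S3cure!Pass", timeout=10)

# Credentials and commands now travel inside an encrypted channel,
# unlike Telnet or FTP, where they cross the wire in clear text.
stdin, stdout, stderr = client.exec_command("show version")
print(stdout.read().decode())
client.close()
```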
Password Cracking
Password cracking software is very easy to find nowadays, and it can be used to compromise password security in different applications or services. Such software works by recovering passwords that were protected with weak hashing or encryption algorithms (e.g., DES), typically through brute-force or dictionary techniques.
A way to prevent password cracking from happening is to enforce the company’s security policy by:
- Using strong encryption algorithms (e.g., AES)
- Choosing complex passwords (a combination of letters, numbers, and special characters)
- Periodically changing passwords
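To illustrate the first two recommendations in the list above, the sketch below stores a password as a salted, iterated hash (PBKDF2 from the Python standard library, shown here as one strong alternative to weak legacy algorithms) and verifies login attempts with a constant-time comparison. The iteration count and passwords are illustrative.

```python
# A small sketch of salted, iterated password hashing with the Python
# standard library (PBKDF2-HMAC-SHA256); parameters here are illustrative.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                      # unique salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("C0mpl3x&Passw0rd!")
print(verify_password("C0mpl3x&Passw0rd!", salt, stored))  # True
print(verify_password("guess", salt, stored))              # False
```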
Viruses
A virus is a generic term for any type of program that attaches itself to individual files on a target system. After the virus appends its code to a victim’s file, that file is infected and changed, and it can go on to infect other files through a process called replication.
The replication process can spread across hard disks, and it can infect the entire operating system. After a virus is linked to an executable file, it will infect other files every time the host file is executed. There are three major types of viruses, depending on where they act:
- MBR (Master Boot Record) viruses
- Boot sector viruses
- File viruses
MBR and boot sector viruses affect the boot sector on the physical disk and render the operating system unable to boot. File viruses represent the most common type of viruses, and they affect different types of files.
Another way to categorize viruses is based on their behavior, of which there are two types:
- Stealth viruses
- Polymorphic viruses
Stealth viruses use different techniques to hide the fact that a change to the disk drive was made. Polymorphic viruses are difficult to identify because they can mutate, meaning they can change their size, and they can avoid detection by virus scanners. When using these virus detection programs, the recommendation is to make sure that they are updated as often as possible so they are capable of scanning for new forms of viruses.
Trojans and Worms
Trojan programs are basically unauthorized code contained in legitimate programs, performing functions that are hidden from the user. Worms are another type of illegitimate software that can be attached to e-mails, and once they are executed, they can propagate themselves within the file system and perform unauthorized functions, like redirecting user traffic to certain websites.
Social Engineering Attacks
Social engineering attacks are difficult to identify because they are not electronically detectable. They function via direct human interaction in which an attacker (assuming a different identity) convinces an employee to disclose confidential information. That information can be used by the attacker to gain access to the network.
Some of the forms of social engineering attacks include the following:
- Attacker pretends to be from tech support and asks for user authentication information for verification purposes
- Attacker pretends to be a high-level manager and asks for user confidential information
- Attacker pretends to have obtained authorization from the employee’s manager and asks for confidential information
- Tailgating: an attacker, seeking entry to a restricted area secured by unattended, electronic access control (e.g., by an RFID card), simply walks in behind a person who has legitimate access
- Baiting: the attacker leaves a malware-infected floppy disk, CD-ROM, or USB flash drive in a location sure to be found (e.g., bathroom, elevator, sidewalk, or parking lot), gives it a legitimate-looking and curiosity-piquing label, and simply waits for the victim to use the device
- Phishing: the attacker sends an e-mail that appears to come from a legitimate business (e.g., a bank or credit card company) requesting verification of information (the e-mail usually contains a link to a fraudulent Web page that seems legitimate and has a form requesting everything from a home address to an ATM card’s PIN)
- Phone phishing: phishing over the phone
Some of the most important actions that can be taken against social engineering attacks are:
- Clear enterprise security policies
- User training
Buffer Overflow Attacks
A buffer overflow attack takes advantage of an application’s vulnerability. Applications have storage areas in their memory called buffers, and if you try to store more information than a buffer can hold, data may be “spilled” into adjacent memory areas that should not be accessed. An attacker can take advantage of this behavior by writing malicious code into specific memory areas so that it gets executed.
Discovering a possible buffer overflow does not automatically mean it is an exploitable vulnerability. Careful analysis is required, as overflowing a buffer often simply makes the application crash. An attacker determined to make use of a buffer overflow weakness must figure out exactly how much and what type of data should be injected. If the buffer overflow can be triggered in a repeated and predictable way, the attacker can take over the system.
Developers can prevent buffer overflow attacks by performing proper input validation and imposing restrictions on what data can be entered and where. It is very important for developers to allocate significant time for testing, because attackers also have plenty of time to search for weaknesses in applications.
Another way to protect against buffer overflow attacks is to perform regular system patches as new vulnerabilities are discovered on a regular basis.
Packet Sniffing Attacks
Packets that are captured (sniffed) on a network link may provide a lot of useful information for an attacker, whether it’s in a wired or wireless environment. This type of attack is still an option for attackers because many users still send unencrypted confidential traffic through the network. Some of the things an attacker can do with captured packets include the following:
- Rebuild them to see:
- Mail exchange
- Websites accessed by the users
- Layer 3 information (IP addresses, routing protocols, etc.)
- Layer 2 information (MAC addresses)
- Device and topology discovery
As mentioned in previous chapters, the open source application Wireshark is the most commonly used packet capturing tool. When using wireless networks, it is very easy to gather packets because this can be done without physically accessing the infrastructure.
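For a sense of what an eavesdropper sees, the minimal sketch below prints the Layer 2 and Layer 3 header fields of captured packets, the same kind of information listed above. It assumes the third-party Scapy library and capture privileges, and should only be run on a test network you own.

```python
# Minimal packet-capture sketch (a script alternative to a GUI tool like
# Wireshark); assumes Scapy is installed and capture privileges are available.
from scapy.all import Ether, IP, sniff

def show_headers(packet):
    if packet.haslayer(Ether) and packet.haslayer(IP):
        # Print source/destination MAC and IP addresses plus the IP protocol number
        print(f"{packet[Ether].src} -> {packet[Ether].dst}  "
              f"{packet[IP].src} -> {packet[IP].dst}  proto={packet[IP].proto}")

sniff(prn=show_headers, store=False, count=20)
```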
To prevent attackers from reading private information even if they do capture packets, you should encrypt your communications. On wired networks, it is recommended that you use HTTPS or VPN tunnels, and in wireless environments, you should use WPA2 encryption.
FTP Bounce Attack
The FTP bounce attack abuses an FTP server to send traffic to a third device on the network. It takes advantage of the FTP PORT command, through which the client tells the server which IP address and port to open the data connection to.
The attack works by the client instructing the FTP server to deliver data to a third-party receiver instead of the client’s own machine. This vulnerability is rarely exploitable in modern environments, as current FTP servers are aware of it and prevent this behavior.
Wi-Fi Vulnerabilities
Wireless networks can be more vulnerable to attacks than wired networks because of their open structure, as potential attackers may compromise the network without actually getting access into the enterprise premises. Some of the most common Wi-Fi threats are:
- War driving and war chalking
- WEP and WPA cracking
- Rogue access points
- Wireless evil twins
War Driving and War Chalking
War driving is the process of finding available wireless access points within a certain geographical area by driving around and listening for signals. This combines Wi-Fi monitoring and GPS positioning, as every access point location is logged. The logs may contain the following information:
- Access point name
- Location
- GPS coordinates
- Type of encryption
- Signal strength
This method gathers a lot of information in a short period of time, and the results can be centralized in dedicated free software tools such as:
- Kismet
- InSSIDer
After the attacker has the GPS coordinates of the access points, he can centralize these into various applications that provide a graphical view of the area, thus visualizing everything on a map. War driving involves using automated tools that scan and log wireless access point locations. Because no human interaction is usually required, war driving has evolved into the following variations:
- War biking – scanning while riding a bike
- War flying – flying a remote-controlled airplane equipped with the necessary scanning device
War chalking is a legacy technique of physically marking an access point location by drawing different symbols on the sidewalk and walls in that specific area. A series of codes were developed to describe the Wi-Fi network’s characteristics. Some of these codes are depicted in Figure 21.4 below:
FIG 21.4 – War chalking symbols
WEP and WPA Cracking
Even though wireless networks are very popular, one of the biggest challenges is protecting the data that flows through the air and that is often publicly accessible. Possible attackers might capture data that flows through a wireless network so they can analyze it and try to decrypt the information.
With WEP encryption, the decryption of the data can be achieved using initialization vectors (IVs). IVs are small portions of data associated with the packets that help create a key that changes all the time. A static key combined with an IV can generate a unique per-packet key as long as the IV value changes every time data is sent. Initialization vectors are passed along in clear text with the encrypted data, because the authorized receiving station needs the IV (the only part of the key material it does not already know) to decrypt the data. As a result, if an attacker manages to capture a significant amount of traffic, he can reverse the encryption process.
One of the issues with WEP is that the key size is small. It was originally limited to 64 bits and then increased to 128 bits. The 64 bits included a 40-bit key and a 24-bit initialization vector. Another major issue with WEP is that it offers no key management, so everybody uses the same key to encrypt and decrypt data. Yet another issue is the small size of the initialization vectors (24 bits), which may have to be reused, so this cannot be considered true randomization. Some of the IV values may provide weaker encryption than others, so some manufacturers do not use all the available IV values.
A technique often used by attackers is injecting frames to intentionally duplicate IV values, which makes the decryption process easier.
WPA is a more advanced encryption protocol and is preferred over WEP. The major advantage of WPA is that it offers an enhanced cryptographic algorithm that constantly changes keys during a session’s lifetime. Modern networks often use WPA2, which comes in two forms, with WPA3 emerging as its successor:
- WPA2-Personal – used in private networks, based on pre-shared keys (static key)
- WPA2-Enterprise – used in enterprise networks, based on 802.1X (keys are constantly changing)
- WPA3 – announced in early 2018, it is the third iteration of WPA and offers several enhancements over WPA2
However, WPA also has weak points that allow attackers to decrypt the data and expose the network. WPA2-Personal is vulnerable to a series of attacks, including:
- Brute-force – attacker tries every character combination to guess the passkey
- Dictionary attacks – attacker tries every word in a common dictionary to guess the passkey
The recommendation when using WPA with pre-shared keys is to make the key as long and as complex as possible, using a lot of non-intuitive character combinations.
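The back-of-the-envelope arithmetic below shows why length and character variety matter against brute-force guessing; the assumed guess rate is purely illustrative.

```python
# Keyspace arithmetic for brute-force attacks; the guess rate is an assumption.
GUESSES_PER_SECOND = 1_000_000_000  # assumed attacker capability

def years_to_exhaust(charset_size: int, length: int) -> float:
    keyspace = charset_size ** length            # total number of possible keys
    return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"8 lowercase letters : {years_to_exhaust(26, 8):.6f} years")
print(f"12 mixed characters : {years_to_exhaust(94, 12):.0f} years")
```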
Rogue Access Points
Rogue access points can be a major concern, especially in large environments. They are third-party access points, usually installed by users on the network without authorization. This creates a vulnerability, as anyone in range of the rogue access point may gain access to the network.
To mitigate such problems, network administrators should schedule periodic surveys of the infrastructure by walking around the campus and trying to detect signals from third-party APs. You might also consider using 802.1X to force the users to authenticate against an authentication server, regardless of the connection type.
Rogue access points might even be created by enabling the Wi-Fi sharing functionality on a user’s smartphone or PDA.
Wireless Evil Twins
A wireless evil twin is an external access point configured to look and behave just like a trusted access point (same SSID and same security settings) so that users connect to the “evil” AP by mistake. Usually, the evil AP transmits a stronger signal to increase the chances that users connect to it instead of the trusted AP, overpowering the trusted AP even though it may be located elsewhere.
To prevent issues arising from evil twin attacks, you should implement an additional layer of encryption inside the wireless network using HTTPS or communicate through a VPN tunnel so that the encrypted data is safe, even if you connect to a non-trusted AP.
Network Device Vulnerabilities
An important vulnerable area in the network infrastructure, considering the attacks presented above, is made up of network devices. The targeted devices can be part of any network module and layer, including access level devices, distribution devices, or core equipment. Even though network devices (e.g., routers, switches, or other appliances) have embedded security features, you need to make sure that they are secured from intruders.
The first step is controlling physical access. Critical equipment should be placed in locked rooms that can be accessed only by authorized users, preferably via multiple authentication factors. You also want to make sure that the network administrators follow security guidelines to avoid human errors. Next, harden the network devices, just like you would harden hosts or servers, by applying the following techniques:
- Enable only the necessary services
- Use authenticated routing protocols
- Use one-time password configurations
- Provide management access to the device only through secured protocols, like SSH
- Make sure that the device’s operating system is always patched and updated to protect against the latest known vulnerabilities
Network Infrastructure Vulnerabilities
Network infrastructure vulnerabilities are present at every level in the enterprise architecture, and the attacks aimed to exploit these vulnerabilities can be categorized as follows:
- Reconnaissance attacks
- DoS and DDoS attacks
- Traffic attacks
Reconnaissance is a military term that implies scoping the targets before initiating the actual attack. The reconnaissance attack is aimed at the perimeter defense of the network, including the WAN network or edge modules. It involves activities like scanning the topology using techniques such as:
- ICMP scanning
- SNMP scanning
- TCP/UDP port scanning
- Application scanning
The scanning procedure can use simple tools, like ping or Telnet, but it can also involve using complex tools that can scan the network perimeter for vulnerabilities. The reconnaissance attack’s purpose is to find the network’s weaknesses and then apply the most efficient type of attack.
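The sketch below shows how simple a basic TCP connect scan (the kind of reconnaissance described above) can be, using only the Python standard library. The target address and port list are placeholders, and such scans should only be run against hosts you are authorized to test.

```python
# Minimal TCP connect scan for illustration; target and ports are placeholders.
import socket

TARGET = "192.0.2.50"
PORTS = [22, 23, 25, 53, 80, 443]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex() returns 0 when the TCP handshake succeeds (port open)
        if s.connect_ex((TARGET, port)) == 0:
            print(f"Port {port} open on {TARGET}")
```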
As a countermeasure to these reconnaissance attacks, you can use network access control, including hardware and software firewall products, and you can harden the devices to make sure they are using only specific ports, specific connections, and specific services.
DoS and DDoS attacks are meant to compromise the connectivity and availability to or from the network and can be categorized into different types:
- Flooding the network with poisoned packets
- Spoofing network traffic
- Exploiting application bugs
Countermeasures that help protect against DoS attacks mainly include using firewall products and ensuring that the network operating systems are updated regularly and include the latest patches. Some firewall devices have a very useful feature called TCP Intercept that can be used to prevent SYN flooding attacks, which are used against websites, e-mail servers, or other services. TCP Intercept intercepts and validates TCP connection requests before they arrive at the server. You can also use QoS mechanisms to filter certain types of traffic.
Because DoS attacks affect the performance of network devices and servers, many large organizations oversize their resources in order to have additional bandwidth, backup connections, and redundancy. When DoS attacks occur, these oversized resources can compensate for the negative effects without critically affecting internal services. The downside of this approach is the sheer cost.
Application Vulnerabilities
Applications and individual host machines are often the ultimate target of the attacker or the malicious user. Generally, they want to get permission to read sensitive data, write changes to the hard drive, or compromise data confidentiality and integrity.
Attackers try to exploit bugs in operating systems (on servers, hosts, and network devices) and abuse vulnerabilities in various applications to gain access to the system. Some applications are very vulnerable, mostly because they were not properly tested and were launched without advanced security features in mind.
After gaining basic access to a system, attackers will use a tactic called privilege escalation that will provide them with system administrator privileges by exploiting vulnerabilities in certain programs and machines. After they gain administrator access, they can either attack the entire system or read/write sensitive and valuable information.
Countermeasures against application and host vulnerabilities include using secure and tested programs and applications. This can be enforced by having applications digitally signed and making sure that you use quality components from different vendors. Hosts can be hardened using a variety of techniques, including ensuring that the machine is locked down and that only the appropriate services and applications are used. Firewall and virus detection techniques should also be used and should be updated often.
Another useful countermeasure is to minimize exposure to outside networks, including the Internet, even though many attacks can come from inside the organization. As organizations get larger, increased attention must be given to human factors and to inside threats. Network administrators, network designers, and end-users should be carefully trained in using the security policies implemented in the company.
Threat Mitigation
Every organization, regardless of size, should have some form of written security policies and procedures, along with a plan to enforce those guidelines and a disaster and recovery plan.
FIG 21.5 – Security policy methodology
Referencing Figure 21.5 above, when initially developing a security policy, the recommended methodology consists of the following steps:
- Risk assessment
- Determine and develop the policy
- Implement the policy
- Monitor and test security
- Re-assess and re-evaluate
Risk assessment involves determining what the network threats are, making sure that the entire network is documented, and identifying current vulnerabilities and the countermeasures that are already in place. The second step is determining and developing a security policy, which should be based on a wide variety of documents, depending on the organization. When developing the policy, you should take into account the company’s strategy, the decision-makers in the organization and their obligations, the value of the company’s assets, and the prioritization of the security rules.
After the policy is developed, it should be implemented from both a hardware and a software standpoint. The next step is to monitor and test the security plan and re-evaluate it in order to make changes that will improve the policy.
Security policy documentation can be different for each organization and can be based on different international standards. Some common general written documents include:
- Organizational security policy
- Acceptable use policy
- Access control policy
- Incident handling
- Disaster recovery plan
- Personnel policies and procedures
The organizational security policy is a general document that is signed by the management of the organization and that contains high-level considerations like the objectives, the scope of the security policy, risk management aspects, the company’s security principles, planning processes (including information classification), and encryption types used in the company.
The acceptable use policy and the personnel policies and procedures detail the way in which individual users and administrators use their access privileges. The access control policy involves password and documentation control policies, and incident handling describes the way a possible threat is handled to mitigate a breach in the organization’s security. The disaster recovery plan is another document that should be included in the organizational security policies, and it should detail the procedures that will be followed in case of a total disaster, including applying backup scenarios.
When documenting the security policy, the components may be divided into the major security mechanisms that will be applied in the organization, including:
- Physical security
- Authentication
- Authorization
- Confidentiality
- Data integrity
- Management and reporting
Physical security is often ignored when documenting the security policy. This implies physically securing the data center and the wiring closets; restricting access to the network devices, the LAN cabling, and the WAN/PSTN connection points; and even securing access to endpoint devices, like workstations and printers.
Authentication implies making sure that the individual users who are accessing particular objects on the network are actually who they claim to be. Authentication is used to determine the identity of the subject, and authorization is used to limit access to network objects based on that identity. Confidentiality and data integrity define the encryption mechanisms to be used, like IPSec, digital signatures, or physical biometric user access. Management and reporting involve auditing the network from a security standpoint, logging information, and auditing user and administrator actions. This can be supported using Host Intrusion Detection Systems (HIDS), which ensure that the network servers can detect attacks and protect themselves against those attacks.
Security Threats and Risks
Efficient security mechanisms must be able to successfully address organizational threats and mitigate risks. One characteristic of really successful security is its transparency to the end-user. The security manager should take care of ensuring the balance between strict security policies and productivity and collaboration. If the security rules are too tight, the users’ experience may be affected and the employees might not be able to fulfill their tasks easily. On the other hand, if the security rules are too permissive, the users’ experience may be improved but the network is more vulnerable. You should create a secure environment for the organization by preventing attacks, but you should also be careful that these features have as little effect on the end-users’ productivity as possible.
A network security implementation has to mitigate multiple factors, and it should be able to:
- Block outside malicious users from gaining access to the network
- Only allow system, hardware, and application access to authorized users
- Prevent attacks from being sourced internally
- Support different levels of user access using an access control policy
- Safeguard the data from being changed, modified, or stolen
As detailed previously, network threats can be categorized into the following types:
- Reconnaissance
- Unauthorized access
- Denial of Service (DoS)
Reconnaissance is the precursor of a more structured and advanced threat. Many worms, viruses, and Trojan horse attacks usually follow some type of reconnaissance attack. Reconnaissance can also be accomplished through social engineering techniques, by gathering information using the human factor. There are several tools that can be used for reconnaissance, including port scanning tools and packet sniffers. The goal for reconnaissance is to gather as much information as possible about the target host and network. The information gathered in the reconnaissance phase will be used to initiate an attack based on the most appropriate attack technique.
The reconnaissance process provides useful information to gain unauthorized access, with the goal of attacking or exploiting a system or host. Unauthorized access might relate to operating systems, physical access, or any service that allows for privilege escalation in a system. The final goal is reading or modifying confidential data.
Another main type of threat is DoS, and this is basically the process of overwhelming the resources of different servers or systems to prevent them from answering legitimate users’ requests. The affected resources can include memory, CPU, bandwidth, or any other resource that can bring down (crash) the server or the service. A DoS attack denies service using well-known protocols, like ICMP, ARP, or TCP, but attackers can also perform a more structured and distributed DoS attack using several systems and overwhelming an entire network by sending a very large number of invalid flows.
Vulnerabilities are basically measurements of the probability of being negatively influenced by a threat (i.e., reconnaissance attack, unauthorized access, or DoS attack). Vulnerabilities are often measured as a function of risk and this might include:
- Risk to the confidentiality of the data
- Risk to the integrity of the data
- Risk to the authenticity of systems and users
- Risk to the availability of networking devices
The level of security risks (vulnerability to threats) must be assessed to protect network resources, procedures, and policies. System availability involves uninterrupted access to network-enabled devices and computing resources to minimize business disruptions and productivity loss. Data integrity involves making sure that data is not modified in transit (data that leaves the sender node must be identical to the data that enters the receiver node). Data confidentiality should ensure that only legitimate users see sensitive information; it is used to prevent data theft and damage to the organization.
The risk assessment process involves identifying all possible targets within the organization and placing a quantitative value on them based on their importance in the business process. Targets include:
- Any kind of network infrastructure device (switches, routers, security appliances, wireless access points, or wireless controllers)
- Network services (DNS, ICMP, or DHCP)
- Endpoint devices, especially management stations that perform in-band or out-of-band management
- Network bandwidth (can be overwhelmed by DoS attacks)
System Security Lifecycle
Security is one of the main responsibilities of a design professional, and this includes a solid knowledge of organizational security policies and procedures. The security policy is a key element to securing network services, offering the necessary level of security, and enhancing network availability, confidentiality, integrity, and authenticity.
FIG 21.6 – Network security system lifecycle
Referencing Figure 21.6 above, the security policy is a small part of a larger network security system lifecycle that is driven by an assessment of the business’s needs and by a comprehensive risk analysis. A risk assessment may also need to be performed, using penetration testing and vulnerability scanning tools.
The security policy contains written documents that include:
- Guidelines
- Processes
- Standards
- Acceptable use policies
- Architecture and infrastructure elements used (IPSec, 802.1X, etc.)
- Granular areas of security policy, like Internet use policy or access control policy
The written security policy leads to the security system, which can include the following elements:
- UTM (firewall, IPS, IDS, anti-virus) devices
- IDS (Intrusion Detection Systems) and IPS (Intrusion Prevention Systems)
- 802.1X port-based authentication
- Device hardening
- Virtual private networking
These system elements are chosen based on a set of guidelines and best practices. The entire process leads to defining the organizational security operations, which involves the actual integration and deployment of the incident response procedures, the monitoring process, compliance with different standards, and implementation of security services (IPS, proxy authentication, zone-based firewalls, etc.).
The entire diagram presented in Figure 21.6 above is an iterative process, and after the security operations are put into place, the process can step back and the business needs can be reassessed, leading to changes being made to the security policy. The network security system lifecycle is an ongoing framework and all of its components should be periodically revised and updated.
Security Policy and Procedures
The security policy is the main component of the security system lifecycle and is defined per RFC 2196 as a formal statement of the rules and guidelines that must be followed by the organization’s users, contractors, and temporary employees, as well as anybody who has access to the company’s data and informational assets. It is an overall general framework for the organizational security implementation, and it should contain the different areas of the organization documented using a modular approach.
One way of approaching the security policy is to examine the modular network design of the organization and develop a separate policy for each different module or a single policy that will include all the modules. The modular approach is also recommended when performing risk and threat assessment.
The security policy also creates a security baseline that will allow future gap analysis to be performed in order to detect new vulnerabilities and countermeasures. The most important aspects covered by the written security policy and procedures are:
- Identifying the company’s assets
- Determining how the organization’s assets are used
- Defining communication roles and responsibilities
- Describing existing tools and processes
- Defining the security incident handling process
A steering committee will review and eventually publish the security policy after all of the component documents are finalized. Figure 21.7 below illustrates the five-step process that defines the security policy methodology:
FIG 21.7 – Security policy methodology
The first step is to identify and classify the organization’s assets and assign them a quantitative value based on the impact of their loss. The next step is to determine the threats to those assets, because threats only matter if they can affect specific assets within the company. One company may assign higher priority to physical security than to other security aspects (like protecting against reconnaissance attacks).
Next, a risk and vulnerability assessment is performed to determine the probability of the threats occurring. The next step is performed after the security policy is published, and it involves implementing cost-effective mitigation to protect the organization. This defines the actual tools, techniques, and applications that will mitigate the threats to which the company is vulnerable. The last step, which is often skipped by many organizations, involves periodically reviewing and documenting the developed security policy.
Many organizations have templates for developing their security policy and some of the common components include the following:
- The acceptable use policy – This is a general end-user document that defines the roles, responsibilities, and processes allowed regarding software and hardware equipment. For example, certain file-sharing applications or instant messaging programs can be forbidden.
- Network access control policy – This policy contains general access control principles and can relate to things like password requirements, password storage, or data classification.
- Security management policy – This policy summarizes the organization’s security mechanisms and defines ways to manage the security infrastructure with appropriate tools (e.g., NAC appliances).
- Incident handling and response policy – This document should describe the policies and procedures by which security incidents are handled. It can even include emergency-type scenarios like disaster recovery plans or business continuity procedures.
- VPN policy – This dedicated policy covers the virtual private networking technologies used and various security aspects that concern them. Different policies may be applied for teleworkers, remote access users, or site-to-site VPN users.
- Patch management policy – This policy should cover the procedures for patching and keeping the existing systems up to date.
- Physical security policy – This policy involves physical security aspects like access control (badges, biometrics, and facility security).
- Training and awareness – Ongoing training and awareness campaigns must sustain the organization’s security policy and this is especially applicable to new employees.
There are two driving factors behind the security policy:
- The business’s needs and goals
- Risk assessment
Network security requires a comprehensive risk management and risk assessment approach that will help lower the risks to acceptable levels for the organization. These acceptable levels will vary from organization to organization. The risk assessment process should lead to the implementation of the components included in the security policy. Risk assessment should also be accompanied by a cost-benefit analysis, which will analyze the financial implications of the mitigation (control) that will be put in place to protect specific assets. For example, money should not be spent on protecting certain assets against threats that are not likely to occur.
The risk assessment process involves three components:
- Severity
- Probability
- Control
These three components should explain what assets will be secured, their monetary value, and the actual loss that would result if one of those resources were to be affected. The severity and the probability aspects refer to the probability and impact of a certain attack on the organization. The control aspect defines how the policy will be used to control and minimize the risks.
The three components can be used to develop a risk index (RI), which uses the following formula:
RI = (severity factor * probability factor) / control factor
where:
- The severity factor represents the quantitative loss of a compromised asset
- The probability factor is a mathematical value of the risk actually occurring
- The control factor is the ability to control and manage that risk
For example, the severity factor may have a range of 1 to 5, the probability factor may have a range of 1 to 3, and the control factor may also have a range of 1 to 3. Looking at a particular example, if the severity factor for a DoS attack on an e-mail server lasting two hours has a value of 3, the probability factor has a value of 2, and the control factor has a value of 1, then the calculated RI has a value of 6 (3 * 2 / 1 = 6). This calculation should be applied to different areas of the network and should take into account different types of threats.
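The same calculation can be expressed as a small helper function, applied here to the e-mail server example above:

```python
# The risk index formula from the text; the factor ranges given above are
# 1-5 for severity, 1-3 for probability, and 1-3 for control.
def risk_index(severity: int, probability: int, control: int) -> float:
    """RI = (severity * probability) / control"""
    return (severity * probability) / control

# DoS attack on an e-mail server lasting two hours
print(risk_index(severity=3, probability=2, control=1))  # 6.0
```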
Another characteristic of risk assessment is that it is an ongoing process that will undergo continuous change due to new technologies emerging. The security policy must be updated to reflect these infrastructure changes. There are four steps to the continuous security lifecycle, as illustrated in Figure 21.8 below:
- Secure
- Monitor
- Test
- Improve
FIG 21.8 – Risk assessment security lifecycle
Securing implies using authentication and identification techniques, access control lists, packet inspection, firewall techniques, IDS and IPS technologies, VPNs, or encryption. The next step is monitoring the processes using SNMP or SDEE. Ongoing vulnerability testing should be provided, along with penetration testing and security auditing, to ensure the functionality of each process. The last step is an iterative process that helps improve different areas. Improving these areas will be based on data analysis, reports, summaries, and intelligent network design.
Trust and Identity Management
Trust and identity management is a critical aspect of developing secure network systems. It states who can access the network, what systems can access the network, when and where the network can be accessed, and how the access can occur. It also attempts to isolate infected machines and keep them off the network by enforcing access control, forcing them to update their signature databases and applications before they are admitted.
Trust and identity management has three components:
- Trust
- Identity
- Access control
Trust is the relationship between two or more network entities, for example, a workstation and a firewall appliance. The trust concept will determine the security policy decisions. If a trust relationship exists, communication is allowed between the entities. The trust relationship and the level of privilege can be affected by different postures (e.g., an outdated virus signature database or an unpatched system). Devices can be grouped into domains of trust that can have different levels of segmentation.
The identity aspect determines who can access the network, including users, devices, or other organizations. The authentication of identity is based on three attributes that make the connection to access control:
- Something that the subject knows (password or PIN)
- Something that the subject has (token or smartcard)
- Something that the subject is (biometrics like fingerprint, voice, or facial recognition)
Domains of trust can be implemented within a Microsoft Active Directory deployment, in large organizations, and across the Internet. Certificates play an important role in proving user identity and the right to access information and services.
Access controls in enterprise organizations typically rely on AAA (Authentication, Authorization, and Accounting) services. AAA solutions can use an intermediate authenticator device (e.g., router, switch, or firewall) that leverages back-end services such as RADIUS or TACACS+ servers. Authentication establishes user or system identity and access to network resources, while authorization services define what users can access. The accounting part provides an audit trail that can be used for billing services (e.g., recording the duration of a user connection to a particular service). Most modern network devices can act as authenticators and can pass user authentication requests to RADIUS/TACACS+ servers.
Secure Connectivity
Secure connectivity is another component that works closely with the trust and identity management concept described above. This implies using secure technologies to connect endpoints. Examples in this regard include:
- Using IPSec inside the organization and over the insecure Internet
- Using SSH to replace insecure technologies like Telnet for console access
- Using SSL/TLS (HTTPS) secure connectivity when using Web browsers
- Using solutions from service providers, like MPLS VPNs (Multi Protocol Label Switching Virtual Private Networks)
Threat Defense Best Practices
Some of the best practices for protecting the network infrastructure through trust and identity include the following:
- Using AAA services with RADIUS/TACACS+ servers
- Using 802.1X
- Logging using syslog to create comprehensive reports
- Using SSH instead of Telnet to avoid any management traffic crossing the network in clear text
- Using secure versions of management protocols, like SNMPv3 (authenticates the client and the server), NTPv3, and SFTP
- Hardening all network devices by making sure that unnecessary services are disabled
- Using authentication between devices that are running dynamic routing protocols
- Using access control lists to restrict management access, only allowing certain hosts to access the network devices
- Using IPSec as an internal (encrypting management or other sensitive traffic) or external (VPN) solution
- Using NAC (Network Admission Control) solutions, ensuring that network clients and servers are patched and updated in an automated and centralized fashion with the newest anti-virus, anti-spam, and anti-spyware mitigation tools
User Authentication
This section describes user authentication techniques, including PKI, Kerberos, AAA, 802.1X, two-factor authentication, and single sign-on.
User Authentication – General Concepts
User authentication is a fundamental component of security policies across the network. From a user’s perspective, this can be accomplished in multiple ways:
- Username and password
- Token generators
- Fingerprint readers
- A combination of multiple factors
Even though this process may seem simple from a user’s perspective, it becomes complicated behind the scenes as one or multiple authentication protocols have to be used to achieve this purpose. In addition, you need to ensure that authentication to a remote device is performed in a secure manner that will not provide anyone with the ability to discover the user’s credentials.
One common way to secure authentication communication is using a hash. This is a complex cryptographic function that achieves one-way translation of credentials (password) to something called a message digest. The digest is a summary of the input information, and it can be obtained using a series of algorithms:
- MD5 (Message Digest algorithm 5)
- SHA (Secure Hash Algorithm)
If a password is put into the MD5 or SHA cryptographic function, the output is a fixed-length string of hexadecimal characters (the digest). As mentioned, this is a one-way function, so you cannot recover the original password from the hash string. This is very useful when dealing with passwords because they don’t need to be sent in plain text to the other side; you can simply hash the password and send the output. The other side (server) already has the user credentials in its database, so it can create a hash of the locally stored password and compare it to the received hash. If the hashes match, the user is authenticated. This process is depicted in Figure 21.9 below:
FIG 21.9 – Authentication using hash algorithms
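A minimal sketch of the comparison shown in Figure 21.9, using SHA-256 from the Python standard library. Real protocols add salts and challenge values to prevent replay; this only illustrates the one-way digest comparison.

```python
# Client sends a digest instead of the clear-text password; the server
# recomputes the digest from its stored copy and compares the two.
import hashlib
import hmac

stored_password = "MyS3cretPass"                       # known to the server

# Client side: hash the typed password and send only the digest
sent_digest = hashlib.sha256("MyS3cretPass".encode()).hexdigest()

# Server side: hash the stored password and compare the two digests
local_digest = hashlib.sha256(stored_password.encode()).hexdigest()
print("Authenticated" if hmac.compare_digest(sent_digest, local_digest) else "Rejected")
```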
PKI
The Public Key Infrastructure (PKI) is a complex authentication technique that functions by using digital certificates. These certificates provide a certain level of trust to the communication, as they authenticate the sender. PKI uses the concept of a Certificate Authority (CA), the central entity that issues certificates to users. The CA confirms the identity of every user, and every user in the organization trusts the CA.
Some situations might not require a central certification authority, and one alternative to this approach is using a “web of trust.” With this approach, users sign certificates for people they know and those people sign certificates for other people they know. In this way a web of trust is built, and users may accept messages from other users just because they have a common trusted friend.
Most of the modern operating systems have an integrated component that helps manage certificates and keys but you can also use third-party solutions.
PKIs are built to manage certificates, including the issuing, assigning, and verification process. A PKI generally works using one of two encryption types:
- Symmetric encryption
- Asymmetric encryption
Symmetric encryption involves using the same key for both data encryption and decryption. However, most of the time, a PKI uses an asymmetric encryption type, which works by providing users with a public key they use to encrypt data sent out on the network. The encrypted data will be decrypted with a different key.
The public and the private keys are created at the same time in order for them to be cryptographically related to each other. When building the keys, you can use a certain level of randomization to ensure increased security. After the key pair is available, you can provide the public key to all users so they can encrypt data that can be decrypted only by using the associated private key, which is known only by the key pair generator entity.
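The sketch below demonstrates this idea with the third-party cryptography package (assumed to be installed): anyone holding the public key can encrypt, but only the holder of the matching private key can decrypt.

```python
# Asymmetric (public/private key) encryption sketch using the "cryptography"
# package; key size and message are illustrative.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()      # distributed freely to users

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"confidential payload", oaep)   # anyone can do this
plaintext = private_key.decrypt(ciphertext, oaep)                # only the key pair owner can
print(plaintext)  # b'confidential payload'
```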
To increase the level of protection, a certificate is usually valid for a limited period of time. The CA can then revoke it and issue the user a fresh certificate.
Building a PKI involves a lot of planning, and it should be done considering the many factors and teams in the organization. A large network may involve using a central CA and a series of subordinate CAs assigned to different regions to properly manage user and machine certificates.
Kerberos
Kerberos is a network authentication protocol that allows a user to enter his credentials once and receive access to all necessary network resources, without the need to re-authenticate for each one of them. Kerberos offers advanced cryptographic functions using a mutual authentication between the client and the server to protect against man-in-the-middle attacks.
Kerberos, which was developed by MIT (Massachusetts Institute of Technology), has been in use since the 1980s and is covered in RFC 4120. Microsoft has been using Kerberos as its authentication method since Windows 2000, so it is often associated with Windows environments but it is also compatible with other operating systems and devices.
Kerberos has three main components:
- KDC (Key Distribution Center) – The KDC checks the valid login credentials and vouches for the user’s identity. It operates on TCP and UDP port 88.
- Authentication Service – This is the component that actually performs the authentication on the network.
- Ticket Granting Service – Kerberos works with internal tickets, and this service is the component that manages tickets and provides user access on the different network components.
Next, we will analyze an example in which a user wants to access an application server. After the user decides to log into the network, he will need to talk with the Authentication Service by sending a login request. During this process, the date and time on the local computer are encrypted using a key derived from the user’s password hash; the password hash itself is never sent over the network. For this reason, everyone in the network should use NTP to synchronize their clocks.
The Authentication Service receives the information sent by the user and tries to decrypt the information with the hash of the credentials it has. After it is successfully decrypted, it sends a Ticket Granting Ticket (TGT) back to the user, which includes:
- Client name
- IP address
- Timestamp
- TGT validity period (default 10 hours)
The TGT is encrypted with the KDC secret key so it cannot be decrypted by anyone else. The client will also receive a Ticket Granting Service (TGS) session key, used for communication between the client and the TGS. The TGS session key is encrypted with the user’s password hash, so the client will be able to decrypt it.
After the client has a ticket that allows access to resources, he sends the TGT and the name of the application server he wants to access to the TGS to request access to that specific server. This request is time stamped with the client’s ID and encrypted with the TGS session key (to avoid request spoofing). The TGS will send a response back to the client with the following information:
- A session key to use with the application server (this is also encrypted with the TGS session key)
- A service ticket containing user information and service session key (encrypted with the application server’s secret key)
When the client receives this information, he cannot decrypt the service ticket, so he passes it on to the application server. The client will also send a time-stamped authenticator, encrypted with the service session key. After the application server receives this information, it decrypts the data to confirm its integrity. The server might send a final message check back to the user to make sure there is no man-in-the-middle; this is an optional step that is most often deployed in high-security environments. Finally, the client receives access to the application server’s resources.
Although the Kerberos authentication process appears complex, it all happens transparently to the user and ensures increased security in accessing network resources.
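The heavily simplified model below mirrors the exchanges described above, using symmetric Fernet tokens from the third-party cryptography package as stand-ins for the Kerberos keys. It is a conceptual sketch only; it omits timestamps, authenticators, ticket lifetimes, and the real Kerberos message formats.

```python
# Conceptual Kerberos-style ticket flow using Fernet symmetric tokens.
from cryptography.fernet import Fernet

# Long-term secrets (normally derived from passwords / stored on the KDC)
user_key = Fernet(Fernet.generate_key())        # stands in for the user's password-derived key
kdc_key = Fernet(Fernet.generate_key())         # known only to the KDC/TGS
app_server_key = Fernet(Fernet.generate_key())  # known to the application server and the KDC

# 1. Authentication Service: issue a TGT (readable only by the KDC) plus a
#    TGS session key wrapped with the user's key.
tgs_session_secret = Fernet.generate_key()
tgt = kdc_key.encrypt(b"client=alice;" + tgs_session_secret)
tgs_key_for_user = user_key.encrypt(tgs_session_secret)

# 2. Client decrypts its copy of the TGS session key and presents the TGT to
#    the TGS, asking for access to the application server.
tgs_session = Fernet(user_key.decrypt(tgs_key_for_user))
assert kdc_key.decrypt(tgt).startswith(b"client=alice")   # TGS validates the TGT

# 3. TGS issues a service ticket (readable only by the application server)
#    plus a service session key wrapped with the TGS session key.
service_session_secret = Fernet.generate_key()
service_ticket = app_server_key.encrypt(b"client=alice;" + service_session_secret)
service_key_for_user = tgs_session.encrypt(service_session_secret)

# 4. Client forwards the opaque service ticket; the application server decrypts
#    it with its own secret key and grants access.
service_session = Fernet(tgs_session.decrypt(service_key_for_user))
print(app_server_key.decrypt(service_ticket))  # b'client=alice;<session key>'
```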
AAA
Logging in to network resources may not be a consistent process, as different devices use different authentication techniques. Access control in enterprise organizations typically relies on Authentication, Authorization, and Accounting (AAA) services. AAA offers the following services:
- Verifies user identity and credentials (authentication)
- Provides access to network resources (authorization)
- Logs user access (accounting)
The general concept behind AAA is centralizing all these actions under a single system and making it easier for the user to authenticate via a single username and password. AAA solutions can use an intermediate authenticator device (e.g., router, switch, or firewall) that can leverage some back-end services, like various RADIUS or TACACS+ servers.
RADIUS and TACACS+ are often called authentication servers/services and the way they function is as follows:
- The user sends an authentication request to an authenticator device (router, switch, etc.)
- The authenticator device passes the request to a RADIUS or TACACS+ server
- The RADIUS/TACACS+ server responds to the authenticator device
- The authentication device allows or blocks user access based on the response it received from the authentication server
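As a sketch of steps 2 through 4 from the authenticator’s point of view, the example below follows the documented usage of the third-party pyrad package. The server address, shared secret, and credentials are placeholders, and a local RADIUS attribute dictionary file is assumed to exist.

```python
# RADIUS authentication sketch using pyrad; all values are placeholders and a
# local RADIUS "dictionary" file is assumed.
import pyrad.packet
from pyrad.client import Client
from pyrad.dictionary import Dictionary

srv = Client(server="192.0.2.20", secret=b"shared-secret", dict=Dictionary("dictionary"))

req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest, User_Name="alice")
req["User-Password"] = req.PwCrypt("password123")   # password obfuscated per the RADIUS spec

reply = srv.SendPacket(req)                          # forward the request and wait for the server
if reply.code == pyrad.packet.AccessAccept:          # authenticator permits or blocks based on the reply
    print("access granted")
else:
    print("access denied")
```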
RADIUS stands for Remote Authentication Dial-In User Service and was first defined in RFC 2058, but the current RFC is 2865. RADIUS uses UDP port 1812 and works by receiving user credentials and verifying them against a central database.
TACACS stands for Terminal Access Controller Access Control System and is a remote authentication protocol defined in RFC 1492. It has been updated through the years to the following protocols:
- XTACACS (Extended TACACS) – A Cisco proprietary version that provides additional support for accounting and auditing
- TACACS+ – The latest Cisco proprietary version that includes more authentication requests and response codes but is not backward compatible with previous versions (this is the version used in current network environments)
Some of the most important differences between RADIUS and TACACS+ include the following:
- RADIUS functions over UDP, while TACACS+ uses TCP
- RADIUS encrypts only the password during transmission, while TACACS+ encrypts the entire packet body
- RADIUS combines authentication and authorization, while TACACS+ separates authentication, authorization, and accounting
- RADIUS is an open standard, while TACACS+ is Cisco proprietary
We will revisit AAA in more detail later.
802.1X and EAP
Another common security issue that you have to deal with in wireless environments is managing unauthorized access. In wireless networks, there are no physical boundaries, so attackers can gain access from outside the physical security perimeter. They can introduce rogue APs or soft APs on laptops or handheld devices that can breach security policies. Because wireless signals are not easily controlled or contained, this could create security issues for the network.
MAC address filtering can be used to allow only certain devices to associate with the access points, but it cannot prevent MAC address spoofing and it does not scale well when dealing with a large number of wireless clients. The most effective solution to this problem is 802.1X port-based authentication. This is an authentication standard for both wired and wireless LANs that allows individual users and devices to authenticate using the Extensible Authentication Protocol (EAP) and an authentication server (RADIUS or TACACS+).
FIG 21.10 – 802.1X functionality
Referencing Figure 21.10 above, 802.1X works by authenticating the user before receiving access to the network, and this involves three components:
- Supplicant (client)
- Authenticator (access point or switch)
- Authentication server (RADIUS or TACACS+)
The client workstation runs software known as a supplicant, which can be the built-in Windows client or a third-party supplicant. The supplicant requests access to network services and uses EAP to communicate with the access point (or LAN switch), which acts as the authenticator. The authenticator then verifies the client's credentials against an authentication server (e.g., RADIUS).
EAP supports multiple authentication methods; the five most commonly encountered are:
- EAP-TLS
- PEAP
- EAP-TTLS
- LEAP
- EAP-FAST
EAP-TLS
EAP-Transport Layer Security (EAP-TLS) is a commonly used EAP method in wireless environments that requires a certificate to be installed on both the supplicant and the authentication server. The key pairs must first be generated and then signed by a local or remote CA server. The exchange used by EAP-TLS is similar to SSL/TLS encryption, in which the user's certificate is exchanged through an encrypted tunnel. EAP-TLS is one of the most secure authentication methods, but it is also expensive and difficult to implement.
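The mutual certificate authentication that EAP-TLS relies on can be illustrated with Python's standard ssl module. The sketch below shows a plain TLS client configured for mutual authentication over TCP; real EAP-TLS carries the same TLS handshake inside EAP frames, and the certificate file names and server address used here are assumptions.

# Mutual (two-way) certificate authentication, the mechanism EAP-TLS is
# built on. This is a plain TLS client over TCP, not EAP over a wireless
# link; the file names and the server address are assumed examples.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("ca.pem")              # CA that signed the server certificate
context.load_cert_chain("client.pem", "client.key")  # the supplicant's own certificate and key

with socket.create_connection(("radius.example.com", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="radius.example.com") as tls:
        # Both sides have now proven their identities with certificates
        print("Negotiated:", tls.version(), tls.cipher())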
PEAP
Protected Extensible Authentication Protocol (PEAP) requires only a server-side certificate, which is used to create the encrypted tunnel. The authentication process takes place inside that tunnel. PEAP was jointly developed by Cisco, Microsoft, and RSA, so it is heavily used in Microsoft Windows environments. PEAP uses the Microsoft Challenge Handshake Authentication Protocol version 2 (MS-CHAPv2) or Generic Token Card (GTC) to authenticate the user inside the encrypted tunnel.
EAP-TTLS
EAP-Tunneled Transport Layer Security (EAP-TTLS) is similar to PEAP, as it uses a TLS tunnel to protect less secure authentication mechanisms. These can include protocols such as PAP (Password Authentication Protocol), CHAP, MS-CHAPv2, or EAP-MD5. EAP-TTLS is not widely used in enterprise networks; it is found mainly in legacy environments that rely on older authentication systems (e.g., Windows NT).
LEAP
Lightweight EAP (LEAP) was created by Cisco as a proprietary solution for its equipment and systems. It is still supported by a variety of operating systems, such as Windows and Linux, but it is no longer considered secure because a series of vulnerabilities affecting it has been identified.
EAP-FAST
EAP-Flexible Authentication via Secure Tunneling (EAP-FAST) is also a Cisco-developed EAP type, designed to address the weaknesses of LEAP. When using EAP-FAST, server certificates are optional, so it offers a lower-cost implementation than a full PEAP or EAP-TTLS deployment. EAP-FAST uses a Protected Access Credential (PAC) to establish the TLS tunnel that protects the credential exchange. The PAC is basically a strong shared secret key that is unique to every client.
Note: The most commonly used EAP solutions are PEAP and EAP-FAST for small business networks and EAP-TLS for large enterprise solutions.
PAP and CHAP
PAP and CHAP are authentication protocols used mostly on point-to-point links. PAP stands for Password Authentication Protocol and is a legacy protocol that sends credentials in clear text. Although it is sometimes preferred because of its simplicity, it is not a reliable protocol from a security perspective.
PAP works basically the same way as the normal login procedure. The client authenticates by sending a username and a password to the server, which the server compares to its secrets database. This technique is vulnerable to eavesdroppers, who may try to obtain the password by listening in on the serial line, as well as to repeated trial-and-error attacks.
CHAP stands for Challenge Handshake Authentication Protocol and is an evolution of PAP; it is a more secure authentication protocol that relies on hashing rather than sending the password itself. There is also a Microsoft-modified version, MS-CHAP, used in Windows environments. CHAP functions using a three-way handshake:
- The server sends a challenge message.
- The client responds with a hash computed from the challenge and the shared secret.
- The server computes the same hash using its stored copy of the secret and grants access if the values match.
Note: CHAP uses password hashes instead of actual passwords to increase security.
Even if the client is successfully authenticated, the CHAP protocol may repeat the authentication process periodically during the connection without the user knowing this is happening. With this procedure, the server ensures that the client hasn’t been replaced by an intruder.
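The handshake above maps directly to the hash computation defined for CHAP in RFC 1994, where the response is the MD5 hash of the identifier, the shared secret, and the challenge. The peer roles and the sample secret in this Python sketch are assumed for illustration.

# CHAP challenge-response (RFC 1994): the password never crosses the link;
# only an MD5 hash of (identifier + shared secret + challenge) is sent.
import hashlib
import os

shared_secret = b"pppSecret"   # known to both peers (assumed example value)

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# 1. The server sends a challenge message (an identifier and a random value)
identifier, challenge = 1, os.urandom(16)

# 2. The client responds with the hash
client_hash = chap_response(identifier, shared_secret, challenge)

# 3. The server computes the same hash from its stored secret and compares
server_hash = chap_response(identifier, shared_secret, challenge)
print("Access granted" if client_hash == server_hash else "Access denied")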
Multi-Factor Authentication
Multi-factor authentication is a technique that uses more than one authentication method to increase security. The most commonly used factors are:
- Something you know – username and password
- Something you have – token, smart card, etc.
- Something you are – biometrics (fingerprint or retinal scanners)
These solutions are usually used in high-security environments and can be expensive, as they may involve providing hardware tokens to every user and installing biometric equipment in the organization. If the requirements are not very high, you can also use software tokens that can be installed on mobile devices (e.g., smartphones).
Many enterprise ID cards also have smart card functionality that can integrate with authentication devices. Validating the smart card ID and entering a password at the same time could be a secure way of accessing network resources.
Other types of tokens include:
- USB token – stores a certificate and must be inserted into a USB port to be validated
- Hardware pseudo-random authentication code generators
- Mobile phones that can receive a unique code via SMS from the authentication server
FIG 21.11 – Security token
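A common implementation of the "something you have" factor is a software token that generates time-based one-time passwords (TOTP, defined in RFC 6238). The sketch below uses only the Python standard library; the Base32 secret is an assumed example value shared between the token and the authentication server.

# Time-based one-time password (RFC 6238), as generated by software tokens.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval             # 30-second time step
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The Base32 secret is shared between the token and the server (assumed value)
print(totp("JBSWY3DPEHPK3PXP"))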
Single Sign-On
As the network gets larger, you may find that users must authenticate multiple times to gain access to multiple resources. Single sign-on overcomes this issue by allowing users to access all the resources they need after authenticating once with a single set of credentials.
Single sign-on can be accomplished using multiple methods:
- Kerberos (integrated in Microsoft platforms) – Windows domain login provides access to all resources
- Third-party solutions
Single sign-on is particularly useful when working with cloud applications located on the Internet, typically delivered as software as a service (SaaS), because the user can log in once and then access all the applications in the cloud. A straightforward example of single sign-on in the cloud is Google: after signing into a Google account, the user can access the mail service, a calendar, Google documents, and every other service on the Google platform.
VPN
A virtual private network (VPN) is a data network that uses a public telecommunications infrastructure, maintaining privacy through the use of a tunneling protocol and security procedures. The main purpose of a VPN is to give companies the same capabilities as private leased lines at a much lower cost through the use of the shared public infrastructure.
Some of the most important features that need to be incorporated into a virtual private network include:
- Security
- Reduced costs
- Reliability
- Scalability
- Network management
- Policy management
The main reason that companies use secure VPNs is to inexpensively transmit sensitive information over the Internet without the risk of the information being compromised. Everything that goes over a secure VPN is encrypted to such a level that even if someone captured a copy of the traffic, they could not read the content. Using a secure VPN ensures that an attacker cannot alter the contents of a company’s transmissions. Secure VPNs are particularly valuable for remote access, where a user is connected to the Internet at a location not controlled by the network administrator, such as from a hotel room, airport kiosk, or home.
Companies that use VPN technologies do so because they want to ensure that their data moves over a set of paths with specified properties, controlled by a single ISP or a trusted confederation of ISPs. This allows customers to use their own private IP addressing schemes and possibly handle their own routing. The customer trusts that the paths will be maintained according to an agreement and that untrusted parties (such as an attacker) cannot change the paths of any part of the VPN or insert traffic into it. Typically, Internet Protocol Security (IPSec) is used to protect data flows over VPNs.
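The core idea of a secure VPN, encrypting traffic before it crosses the untrusted network so that a captured copy cannot be read or silently altered, can be sketched as follows. This is a conceptual illustration only, not the real IPSec/ESP encapsulation; it assumes the third-party cryptography package, and the key negotiation (normally handled by IKE) is omitted.

# Conceptual VPN tunnel: encrypt and authenticate the payload before it
# crosses the public network. This illustrates the principle only; it is
# not the real IPSec/ESP encapsulation, and the key would normally be
# negotiated via IKE rather than generated locally.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

tunnel_key = Fernet.generate_key()
tunnel = Fernet(tunnel_key)

# Site A: protect the inner packet before it leaves the private network
inner_packet = b"GET /payroll HTTP/1.1\r\nHost: intranet.example.com\r\n\r\n"
protected = tunnel.encrypt(inner_packet)
print("On the wire:", protected[:40], b"...")   # an eavesdropper sees only ciphertext

# Site B: decrypt (and implicitly verify integrity) at the far end
assert tunnel.decrypt(protected) == inner_packet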
VPNs can be broadly classified as follows:
- Site-to-site VPNs – Permanent connections are established between different sites of the same company. Traffic is encrypted between these locations and the end-users see the other sites as directly connected. Figure 21.12 below shows a typical site-to-site VPN connection:
FIG 21.12 – Site-to-site VPN
- Dynamic Multipoint Virtual Private Network (DMVPN) – This is a dynamic tunneling form of VPN supported on Cisco IOS routers and some other vendors' equipment. It is designed to allow multiple hub-and-spoke VPN connections with no extra configuration required to add additional spokes; tunnels are built dynamically without the need for the network administrator to become involved.
- Remote access VPNs – These are dynamic secure connections between small sites or mobile workers and the company headquarters. Figure 21.13 below shows a typical remote access VPN connection:
FIG 21.13 – Remote access VPN
End of Chapter Questions
Please visit www.howtonetwork.com/ccnasimplified to take the free Chapter 21 exam.