Network Fundamentals
This chapter covers the fundamental concepts and terminology that relate to networking, and it is the foundation for the remaining topics covered in the manual. The term network describes the way in which PCs or network devices are connected. For example, a data network is one that allows computers to exchange data. Furthermore, an internetwork is a collection of networks that functions as a single entity, whereas internetworking is a concept that relates to the products and technologies that are involved in the design, implementation, and administration of internetworks.
A basic understanding of internetworking will help you master the many other concepts and topics faster. This chapter will cover the following topics:
The OSI Model
The Open Systems Interconnection (OSI) reference model (Figure 1.1) is a seven-layer model used in networking. The model specifies, layer by layer, how information from an application on a network device (e.g., computer, router, etc.) moves across a physical medium from the source to the destination, and how it is then delivered to the software application on the destination device. In other words, the OSI model defines the network functions required for sending data and divides them into seven categories.
Figure 1.1 – The OSI Model
OSI was developed by the International Organization for Standardization (ISO) in 1984. The OSI mechanism involves the following two concepts:
- The OSI reference model, which has seven layers
- OSI protocols that map to each of the seven layers
The seven layers of the OSI model, starting from the top, are as follows:
Layer 7 | Application |
Layer 6 | Presentation |
Layer 5 | Session |
Layer 4 | Transport |
Layer 3 | Network |
Layer 2 | Data Link |
Layer 1 | Physical |
The upper three layers are concerned with application issues, such as user interfacing and data formatting. The lower four layers relate to transport issues, such as data transmission and the physical characteristics of the network.
It is essential to understand the OSI reference model from a design standpoint because of its modular architecture. The OSI model divides the specific tasks involved in moving information from one networking device to another into seven smaller, more manageable groups of tasks/actions. The overall goals of the OSI model are to enhance interoperability and functionality between different applications and vendors, as well as to make it easier for network administrators to focus on the design of particular layers of the model. For example, applications can be designed without having to worry about the lower OSI layers, because the application can trust that, once the data is handed down, the lower layers will process it and send it over the wire successfully.
The OSI model is a key concept in the networking industry and it plays an important role in the design phase of a network using a modular (layered) approach.
Note: The OSI model represents the actions required to send data, but it does not specify how these actions are carried out. However, the OSI model does provide a framework for the communication protocols to be used between devices, where different protocols implement functions at various layers of the model.
Protocols
A protocol is a set of rules. Network devices need to agree on a set of rules in order to communicate, and they must use the same protocol to understand each other. A wide variety of network protocols exists at different OSI layers. For example, at the lower OSI layers, LAN and WAN protocols are used, while routed and routing protocols are found at Layer 3.
Protocols can be organized in protocol suites or stacks. TCP/IP is the most common network protocol suite, named after the two protocols in the stack. The TCP/IP suite can be found in almost all modern networks, and it is the core feature for the Internet and within organizations’ networks. Other examples of protocol suites are AppleTalk and Novell NetWare.
The OSI layers and their associated protocols are described in the following sections, beginning with the highest layer of the model.
Application Layer
The Application Layer (Layer 7) is where the end-user interacts directly with an application. For example, when a user has information to transmit (e.g., data request, pictures, document file, etc.), the application layer interacts directly with any software application that communicates with the internetwork.
Depending on the information the user wants to send over the network, a specific protocol is used, such as the following:
- The SMTP protocol is used to send an e-mail message, while POP3 is used to retrieve it
- The FTP protocol is used to transmit a file over the network
- The Telnet protocol is used to control a remote device
Presentation Layer
The Presentation Layer (Layer 6) ensures that the data is understandable by the end system. In other words, the data must be converted and formatted in such a way that the system recognizes it and knows how to handle the content, so that information sent from one host can be interpreted properly by the destination host. This includes the translation and conversion required for formatting, data structure, coding, compression schemes for video and audio (e.g., MPEG, AVI, JPEG, GIF, and TIF files), encryption schemes, and character representation formats (e.g., ASCII to Unicode). In sum, if the packets from the Application Layer are sent unformatted, the Presentation Layer translates them and then passes them on to the Session Layer (Layer 5).
Session Layer
From a technical standpoint, communication between systems consists of service requests and service responses exchanged between applications located on different networking devices. The Session Layer (Layer 5) establishes, manages, and terminates these communication sessions and connects the lower layers with the Presentation and Application Layers.
Transport Layer
The Transport Layer (Layer 4) accepts data from the Session Layer and breaks it up into transportable segments. This layer is responsible for the information reaching the destination device error-free and in the proper order (i.e., sequence of packets); it is also responsible for the following:
- Reliability
- Transmission error checking
- Error correction
- Data retransmission
- Flow control
- Sequencing
- Data multiplexing
From a technical standpoint, all of these features are implemented by establishing a virtual circuit between the sender and the receiver devices. The Transport Layer initiates, maintains, and terminates these virtual circuits and uses segments as the protocol data unit. Segments are defined sets of data that include control information and are sent between the Transport Layers of the endpoints.
The two main Transport Layer protocols used on the Internet are as follows:
- TCP (Transmission Control Protocol): a connection-oriented protocol
- UDP (User Datagram Protocol): an unreliable, low-overhead, connectionless protocol
Connection-oriented protocols establish a logical connection and use sequence numbers to ensure that all data is received at the destination. Connectionless protocols only send the data and rely on the upper-layer protocols to handle error detection and to correct possible problems.
Network Layer
The Network Layer (Layer 3) is responsible for knowing the internetwork path (routing) from the sender device to the receiver device. It is also responsible for the logical addressing schemes (e.g., IP, IPX, and AppleTalk) that assign logical addresses to the network hosts on both sides of the communication path.
The Network Layer sends datagrams (or packets), which contain a defined set of data that includes addressing and control information and is routed between the source and destination devices. If a datagram needs to be sent across a network that can handle only a certain amount of data at a time, the datagram can be fragmented into multiple packets and reassembled by the receiving device. If no fragmentation occurs, a datagram is sent as a single packet. It is important to note that a datagram is a logical unit of data, while a packet is the physical form in which that data travels through the network.
In addition to logical addressing schemes, the Network Layer is also responsible for router selection and packet forwarding, using the following types of protocols:
- Routed protocols (IP, IPX/SPX, AppleTalk, and DECnet)
- Routing protocols (RIP, EIGRP, OSPF, IS-IS, and BGP)
Routed protocols define the rules and processes for encapsulating the data packets that are ultimately routed over the internetwork, whereas routing protocols determine the paths that those routed protocol packets (Layer 3 data units) take across the internetwork, from one router to another, using particular routing algorithms.
Data Link Layer
The Data Link Layer (Layer 2) defines the format of the data that is transmitted across the physical network. This layer has two sublayers: the LLC (Logical Link Control) Sublayer and the MAC (Media Access Control) Sublayer (Figure 1.2). LLC deals with the Network Layer while MAC has access to the Physical Layer (Layer 1).
Figure 1.2 – Data Link Sublayers
The LLC Sublayer (IEEE 802.2) allows multiple network Layer 3 protocols to communicate over the same physical link by allowing those protocols to be specified in the LLC fields.
The MAC Sublayer (IEEE 802.3) specifies the physical MAC address that identifies a device on a network. Each frame sent over the wire contains a destination MAC address field, and only the device whose MAC address matches that field processes the frame. A source MAC address field is also included in the frame.
The Data Link Layer is responsible for reliable transmission of data across a physical network link, using specifications that provide different network and protocol characteristics, which includes physical addressing, different network topologies, error notifications, frame (Layer 2 data units) sequences, and frame flow control.
Layer 2 is concerned with a specific addressing structure, namely physical addressing, as opposed to the Layer 3 logical addressing scheme. Physical addressing generally comes in the form of MAC addresses that are burned onto a computer network interface card (NIC) or on the interfaces of network devices.
Physical Layer
The Physical Layer (Layer 1) lies at the bottom of the OSI protocol stack and it represents the actual physical medium on which the information is travelling between network devices. As mentioned, Layer 1 interconnects with the Data Link Layer through the MAC Sublayer, which controls the sending of the physical signals that encode 0 and 1 bits, or binary digits (e.g., electrical signals over a copper link).
The following protocols operate at the Physical Layer:
- Local Area Network (LAN) protocols (Ethernet, IEEE 802.3, 100Base-T, Token Ring/IEEE 802.5, and FDDI)
- Wide Area Network (WAN) protocols (EIA/TIA-232, EIA/TIA-449, V.35, and EIA-530)
Layer 1 defines physical media procedures, electrical and mechanical aspects, encoding, and modulation (voltage) on the line (i.e., the electrical signal is either a 0 or a 1, or is in a transition state), as well as activating, maintaining, and deactivating the actual physical link between multiple systems on LAN or WAN networks.
Encapsulation
In both LANs and WANs, packet transmission can be analyzed using the seven-layer OSI model. When data is transmitted by the source toward a specific destination, it passes through the Application, Presentation, and Session Layers, and the protocol data unit arrives at the Transport Layer (Layer 4). At this layer, a Transport Layer header is placed in front of the data: 20 bytes for the reliable, connection-oriented protocol (TCP) or 8 bytes for the unreliable, connectionless protocol (UDP). The data and the Layer 4 header, which together form a segment, are passed down to Layer 3, as illustrated in Figure 1.3 below:
Figure 1.3 – Packet Encapsulation
The Network Layer places its Layer 3 header in front of the received segment and this group becomes a packet (or a datagram). The Layer 3 header contains important fields, such as the logical address (IP address) of both the source and the destination device. The newly formed packet is then passed down to Layer 2. The Data Link Layer creates a new data unit, called a frame, by adding the Layer 2 frame header and trailer. The frame is then passed down to the Physical Layer, which converts the information into 0 and 1 bits that are sent over the physical media using electrical signals (i.e., on a copper link). Finally, the data is sent over the wire using a wide variety of methods, such as Ethernet or Token Ring.
The headers and trailers are a specific form of control information that allows the data to go through the network properly. Thus, the data at each layer is encapsulated in the information appropriate for the specific layer, including addressing and error checking.
A Protocol Data Unit (PDU) is a grouping of data used to exchange information at a particular OSI layer. The Layer 1 to Layer 4 PDU types, signifying the group of data and the specific headers and trailers, are summarized as follows:
Layer | PDU Name |
Layer 1 | Bit |
Layer 2 | Frame |
Layer 3 | Packet (Datagram) |
Layer 4 | Segment |
The overall size of the information increases as the data travels down through the layers (from Layer 4 to Layer 1). The destination device receives the data, and this additional information is analyzed and then removed as the data passes up through the higher layers, to the Application Layer, where the data is unwrapped (or decapsulated).
In addition to the Layer 3 logical addressing fields in the header, an addressing structure is also applied in the Layer 2 header (i.e., the MAC address). Every network interface has a physical address burned onto it, which is placed in a dedicated field in the Data Link Layer header. These Layer 2 addresses change at every routed hop as the data travels from the source PC, through a switch, to a router, through another switch, and finally to the destination PC, because each router strips the received frame's Layer 2 header and re-encapsulates the packet in a new frame for the next link. The original source and destination IP addresses, however, do not change while the packet transits the network, because the Layer 3 header is carried intact end to end. When the traffic stays within the same LAN, it passes only through switches, which examine the Layer 2 header containing the MAC addresses but do not rewrite it.
Because different protocols are available at each layer (e.g., IP packets are different from IPX packets), proper network operation requires that both the source and the receiver endpoints communicate using the same protocol.
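To make the encapsulation flow more concrete, the following Python sketch wraps a piece of application data with a transport, network, and data-link header in turn, mirroring the segment, packet, and frame progression described above. The header contents, sizes, and addresses shown are simplified placeholders for illustration, not real protocol formats.

```python
# Illustrative encapsulation sketch: each layer prepends its own control
# information (and the Data Link Layer also appends a trailer) around the
# payload received from the layer above. Header contents are placeholders.

def transport_encapsulate(data, src_port, dst_port):
    l4_header = f"L4|sport={src_port}|dport={dst_port}|".encode()
    return l4_header + data                        # segment

def network_encapsulate(segment, src_ip, dst_ip):
    l3_header = f"L3|src={src_ip}|dst={dst_ip}|".encode()
    return l3_header + segment                     # packet (datagram)

def datalink_encapsulate(packet, src_mac, dst_mac):
    l2_header = f"L2|src={src_mac}|dst={dst_mac}|".encode()
    l2_trailer = b"|FCS"                           # frame check sequence placeholder
    return l2_header + packet + l2_trailer         # frame

data = b"GET /index.html"                          # Application Layer data
segment = transport_encapsulate(data, 49200, 80)
packet = network_encapsulate(segment, "192.168.10.1", "10.0.0.5")
frame = datalink_encapsulate(packet, "AAAA.AAAA.AAAA", "BBBB.BBBB.BBBB")

# The PDU grows at each layer as headers (and the trailer) are added.
for name, pdu in (("segment", segment), ("packet", packet), ("frame", frame)):
    print(f"{name}: {len(pdu)} bytes")
```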
Networking Devices
When it comes to networking technology, it is important to understand the different products that Cisco offers for different solutions, especially when designing LAN and WAN solutions. The three most common network devices in use today are routers, switches, and hubs, which are shown below in Figure 1.4:
Figure 1.4 – Network Devices
When describing various network devices, the following terminology is used:
- Domain: A specific part of a network.
- Bandwidth: The amount of data that can be carried on a link in a given time period.
- Unicast data: Data sent to one device.
- Multicast data: Data sent to a group of devices.
- Broadcast data: Data sent to all devices.
- Collision domain: Includes all devices that share the same bandwidth; collision domains are separated by switches.
- Broadcast domain: Includes all devices that receive broadcast messages; broadcast domains are separated by routers.
Note: The concepts of unicast, multicast, and broadcast transmission apply at both Layer 2 and Layer 3, so they can refer to either MAC addresses or IP addresses.
Hubs
Hubs are network devices that operate at Layer 1 and connect multiple devices, which are all on the same LAN. Hubs became necessary when the need to connect more than two devices first arose, because a cable can connect only two endpoints.
Unlike switches, hubs do not have any intelligence and they do not process packets in any way. Their main function is to send all the data received on one port out to all the other ports, so devices receive all the packets that traverse a specific network, even if the packets are not addressed to them. For this reason, hubs are also called multiport repeaters. This behavior is depicted below in Figure 1.5, where a packet sent by PC 1 to PC 3 is broadcast by the hub out of all ports, forcing the workstations that do not need the packet (i.e., PC 2 and PC 4) to discard it.
Figure 1.5 – Hub Operations
Note: Devices connected to the hub are in the same collision domain and the same broadcast domain.
Switches
Using hubs in medium- and large-sized networks is not efficient. In order to improve performance, especially from a bandwidth and security standpoint, LANs are divided into multiple smaller LANs, called collision domains, which are interconnected by a LAN switch. When using switches, only the destination device in a communication flow receives the data sent by the source device; however, multiple conversations between devices connected to a switch can happen simultaneously.
Switches have some intelligence, unlike hubs, because they send data out of a port only if the data needs to reach that particular segment. Switching intelligence is based on a MAC table kept in the switch's memory. The MAC table contains MAC address-to-port mappings and is populated as devices send traffic: whenever a frame arrives, the switch learns the source MAC address (Layer 2 address) and the port it arrived on. Frames with destinations that are not yet in the table are flooded out to all other ports. This process continues until the MAC table contains entries for all the devices in the network. When a switch must forward a frame whose destination MAC address is in the MAC table, it forwards that frame only to the specific port for which it is meant.
Figure 1.6 below exemplifies this process. In the diagram on the left, PC 1 sends a frame to PC 3, but the switch does not know the port to which PC 3 is connected so it floods that frame out to all ports. At the same time, it records the source port and MAC address of that specific frame (Port 1, with a MAC address of PC 1). In the diagram on the right, PC 3 responds and sends a frame back to PC 1, but the switch does not have to flood that frame out to all ports because it now knows the port associated with PC 1, which is Port 1. At the same time, it also records the port-MAC association for PC 3, so if PC 1 sends a future frame to PC 3, the switch will forward it only to Port 3 because it now knows where PC 3 is connected.
Figure 1.6 – Switch Operations
Devices connected to the same switch port are in the same collision domain, while devices connected to different ports are in different collision domains. The most important feature of a switch is that it separates collision domains. On the other hand, all devices connected to a switch are in the same broadcast domain. A special scenario occurs when the destination Layer 2 field contains a multicast or broadcast address; in those cases, the switch forwards the frame out of multiple ports. In addition, a special category of switches, called Layer 3 switches (or routing switches), have full Layer 3 capabilities, including routing.
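The flood-or-forward behavior described above can be summarized in a short sketch. This is a conceptual illustration only (real switches implement this logic in ASIC hardware), and the port numbers and MAC addresses are invented for the example; multicast destinations are simplified to the same flooding case as unknown unicasts.

```python
# Conceptual sketch of switch forwarding: learn the source MAC address,
# flood broadcast/unknown destinations, and forward known unicast frames
# out of a single port.

BROADCAST = "FFFF.FFFF.FFFF"

class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = list(range(1, num_ports + 1))
        self.mac_table = {}                        # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn/refresh the source
        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            # Flood out of every port except the one the frame arrived on
            # (multicast destinations are treated the same way here).
            return [p for p in self.ports if p != in_port]
        out_port = self.mac_table[dst_mac]
        # Filter frames whose destination is on the receiving port itself
        return [] if out_port == in_port else [out_port]

sw = LearningSwitch(num_ports=4)
print(sw.receive(1, "AAAA.AAAA.AAAA", "CCCC.CCCC.CCCC"))   # unknown -> flood: [2, 3, 4]
print(sw.receive(3, "CCCC.CCCC.CCCC", "AAAA.AAAA.AAAA"))   # known   -> [1]
```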
Routers
The most intelligent devices in a network are called routers. Routers are Layer 3 devices that use Layer 3 addresses and allow devices on different LANs to communicate with each other. By default, they do not forward any information between devices connected to different ports.
Figure 1.7 – Router Operations
Figure 1.7 above illustrates how a router operates. First, it reads the source and destination IP addresses in the packets and then it keeps track of which devices connect to which ports, and which devices need to communicate with devices on other ports. A router separates broadcast domains, so devices connected to different ports are located in different broadcast domains. The process of moving a packet across different broadcast domains is called routing, which works by implementing different routing protocols on the router.
Routers block multicast and broadcast packets by default. This is a significant difference between a router and a switch, and it helps control the bandwidth utilization on a network. In addition, devices connected to the same router port are in the same collision and broadcast domains, but devices connected to different router ports are in different collision and broadcast domains. Routing operations are discussed further in Chapter 7.
Network Types
Networks are classified into two major categories – LANs and WANs – based on the devices and areas in which they interconnect.
Local Area Networks
A LAN is a localized computerized network used to communicate between host systems, generally for sharing information (e.g., documents, audio files, video files, e-mail, and chat messages) and using a wide variety of productivity tools.
LANs have limited reach, as their name implies, spanning an area of less than a few hundred meters, so they can only connect devices in the same building or campus. LANs usually belong to the organizations in which they are deployed. The different LAN technologies available include the following:
- Ethernet (10 Mbps)
- FastEthernet (100 Mbps)
- GigabitEthernet (1 Gbps)
- Wireless LAN (up to 600 Mbps under the 802.11n specification)
All the network devices in a LAN use a common logical addressing scheme and share the same network address. For example, in the 192.168.10.0 network, devices have logical addresses such as 192.168.10.1, 192.168.10.2, and so on. IP addressing will be discussed later in this chapter.
Figure 1.8 – Local Area Network Components
Generally, network devices such as workstations, IP telephones, printers, plotters, laptops, servers, and PDA devices connect to an Access Layer switch via either a wired or a wireless network, as shown above in Figure 1.8. The Access Layer switch may have a higher speed link to a router, which may connect to other routers or have an outbound Internet connection. Anything behind the router is part of the WAN, so the router serves as an edge device between a LAN and a WAN.
Wide Area Networks
Understanding how to implement and design WANs is an important step toward becoming a networking professional, a position in which you will find various ways to connect different systems, whether you are working in a small campus area, a large campus network, a metropolitan area, or a global network. As mentioned, a WAN connects multiple LANs or multiple WANs (e.g., the Internet is a large WAN, or a network of networks). A WAN is usually located over a broad geographical area and belongs to an Internet Service Provider (ISP) that might charge a fee for using its WAN services. Because of its size, a typical WAN is slower than a LAN.
Figure 1.9 – Example of a Wide Area Network
As shown in Figure 1.9 above, the ISP serves as a network that covers a specific area and interconnects different local networks, such as between a home office and a branch office of the same company, or a branch office and a headquarters of different companies. WANs use a wide variety of protocols and topologies to accomplish this interconnecting of different LAN areas, which will be covered in detail in Chapter 5.
LAN connections to the ISP can take many forms, depending on the technology in use, such as the following:
- Packet-switched networks (Frame Relay), where the ISP creates permanent virtual circuits and switched virtual circuits that carry data between subscriber sites
- Circuit-switched networks (ISDN), where the ISP creates a physical path reserved for the duration of the connection between two sites
- T1/E1 lines
- Leased lines, using PPP or HDLC protocols
- Dial-up connections
- Cable, using cable television networks to deliver data
- DSL, utilizing traditional copper telephone lines to deliver data
WANs and LANs use specific routing protocols that are configured based on topology and other criteria. The various routing protocols will be covered in detail in subsequent chapters.
TCP/IP
The TCP/IP protocol suite (Figure 1.10) is a modern adaptation of the OSI model and contains the following five layers:
- Application Layer
- Transport Layer
- Internet Layer
- Data Link Layer
- Physical Layer
In some documentation, the Data Link and Physical Layers are grouped together as the Network Access Layer or the Network Interface Layer.
Figure 1.10 – TCP/IP Model
TCP/IP Application Layer
The Application Layer in the TCP/IP model covers the functionality of the Session, Presentation, and Application Layers in the OSI reference model. Various protocols can be used in this layer, including the following:
- SMTP and POP3, used to provide e-mail services
- HTTP, a World Wide Web browser content delivery protocol
- FTP, used in file transfers
- DNS, used in domain name translation
- SNMP, a network management protocol
- DHCP, used to assign IP addresses to network devices automatically
- Telnet, used to manage and control network devices
TCP/IP Transport Layer
Both the TCP/IP Transport Layer and the Internet Layer are considerably different compared to the corresponding OSI layers. The Transport Layer is based on the following two protocols:
- Transmission Control Protocol (TCP): This provides connection-oriented transmission, meaning delivery of the data is reliable: the endpoints establish a synchronized connection before sending the data, and every data segment is acknowledged by the receiving host. File Transfer Protocol (FTP) is an example of a protocol that uses TCP.
- User Datagram Protocol (UDP): This provides an unreliable, connectionless transmission between hosts. Unlike TCP, UDP does not ensure that the segments arriving at a destination are valid and in the proper order, leaving integrity verification and error correction to the Application Layer. On the other hand, UDP has a smaller overhead than TCP because the UDP header is much smaller. Trivial File Transfer Protocol (TFTP) is an example of a protocol that uses UDP.
The TCP and UDP protocol data units are segments. Each segment contains a number of fields that carry different information about the data, as shown below in Figure 1.11.
Figure 1.11 – TCP and UDP Segment Fields
The UDP fields are as follows:
Field | Size | Description |
Source Port Number | 16 bits | Identifies the application used by the sender |
Destination Port Number | 16 bits | Identifies the application used by the receiver |
Length | 16 bits | The size of the header and the data |
Checksum | 16 bits | The checksum of the header and the data, used to verify integrity of the segment |
Data | Variable | Application Layer data |
The TCP fields are as follows:
Field | Size | Description |
Source Port Number | 16 bits | Identifies the application used by the sender |
Destination Port Number | 16 bits | Identifies the application used by the receiver |
Sequence Number | 32 bits | Verifies the correct order of received segments |
Acknowledgement Number | 32 bits | Verifies the correct order of received segments |
Header Length | 4 bits | The size of the header |
Reserved | 6 bits | Unused field |
Code Bits | 6 bits | Indicates the segment type |
Window Size | 16 bits | The number of bytes received before sending an acknowledgement |
Checksum | 16 bits | The checksum of the header and the data, used to verify integrity of the segment |
Urgent Pointer | 16 bits | Marks the end of urgent data |
Option | 0 to 32 bits | Defines the maximum TCP segment size |
Data | Variable | Application Layer data |
The TCP header is larger than the UDP header because of all the extra fields needed to ensure a reliable connection.
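As a rough illustration of the size difference, the following sketch packs minimal UDP and TCP headers with Python's struct module. The port numbers, sequence and acknowledgement values are arbitrary, TCP options are omitted, and the checksums are left at zero, so this shows only the field layout and header lengths, not valid on-the-wire segments.

```python
import struct

# Minimal UDP header: source port, destination port, length, checksum = 8 bytes
udp_header = struct.pack("!HHHH", 49200, 69, 8, 0)

# Minimal TCP header (no options): source port, destination port, sequence
# number, acknowledgement number, data offset + code bits, window size,
# checksum, urgent pointer = 20 bytes. 0x5010 = header length of 5 words
# with the ACK bit set.
tcp_header = struct.pack("!HHIIHHHH", 49200, 80, 5, 6, 0x5010, 3, 0, 0)

print(len(udp_header))   # 8  -> low overhead, connectionless
print(len(tcp_header))   # 20 -> extra fields for reliable, ordered delivery
```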
Port numbers can take values up to 65535. Most of the common applications are assigned well-known port numbers between 1 and 1023 (port number 0 is reserved). Port numbers 1024 through 49151 are registered port numbers, while port numbers 49152 through 65535 define dynamic port numbers (automatically assigned by network devices). Port numbers are used to distinguish between applications running on the same device. Examples of well-known port numbers include the following:
- HTTP: TCP port 80
- FTP: TCP port 20 (data) and 21 (control)
- TFTP: UDP port 69
- POP3: TCP port 110
- SMTP: TCP port 25
- DNS: TCP and UDP port 53
- SNMP: UDP port 161
- Telnet: TCP port 23
When a TCP connection is established, it follows a process called a three-way handshake. This process uses the SYN and ACK flags carried in the Code Bits field of the TCP segment, together with the Sequence and Acknowledgement Number fields. Figure 1.12 below illustrates the three-way handshake process:
Figure 1.12 – TCP Operation (Three-way Handshake)
Referring to the figure above, Host A tries to establish a TCP connection with Host B. Host A sends a segment with the SYN bit set, letting the other device know it wants to synchronize. The segment includes the initial sequence number of 5 that Host A is using. Host B accepts the segment to establish a session and sends back a segment with the SYN bit set. Host B also sends the ACK bit to acknowledge that it has received the initial segment sent by Host A. The acknowledgement number represents the next segment it expects to receive, which is 6 (this is also called an expectational acknowledgment). The new segment includes the initial sequence number of Host B, which is 14. Host A replies with an ACK segment that contains a sequence of 6, because this is what Host B is expecting, and acknowledgement number 15, informing Host B that it can send the next segment. This concludes the TCP session’s establishment phase.
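The sequence and acknowledgement number bookkeeping above can be traced with a small sketch. This is only a trace of the numbers used in the example (initial sequence numbers 5 and 14), not a working TCP implementation.

```python
# Trace of the three-way handshake bookkeeping from the example above.
host_a_isn = 5      # Host A's initial sequence number
host_b_isn = 14     # Host B's initial sequence number

handshake = [
    ("Host A", "SYN",      host_a_isn,     None),              # seq=5
    ("Host B", "SYN, ACK", host_b_isn,     host_a_isn + 1),    # seq=14, ack=6
    ("Host A", "ACK",      host_a_isn + 1, host_b_isn + 1),    # seq=6,  ack=15
]

for sender, flags, seq, ack in handshake:
    ack_text = f", ack={ack}" if ack is not None else ""
    print(f"{sender}: {flags} seq={seq}{ack_text}")
```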
The window size informs the remote host about the number of bytes a device will accept before it must send an acknowledgement. However, the window sizes may not match on the two endpoints. Host A has a window size of 2 and Host B has a window size of 3. When Host A sends data, it can send 3 bytes before waiting for an acknowledgement, whereas Host B can send only 2 bytes before receiving an ACK.
Note: The window size specifies the number of bytes (octets) a device will accept, not the number of segments.
After all the data is sent between the two hosts, the session can be closed. To accomplish this, Host A sends a segment with the FIN bit set, letting Host B know it wants to end the TCP session. The segment includes the sequence number Host A is using at that specific moment, which is 341. Host B acknowledges the request and sends the ACK bit with the acknowledgement number 342 to confirm it has received segment 341. The segment also includes the current sequence number of Host B, which is 125. Host B then sends a new segment with the FIN bit set, announcing that the application it is running also requests closing the session. In the last step before the session is closed, Host A sends an ACK segment with acknowledgement number 126 to confirm it received segment 125 from Host B.
TCP/IP Internet Layer
The Internet Layer in the TCP/IP model corresponds to OSI Layer 3 (Network Layer) and includes the following protocols:
- Internet Protocol (IP): This connectionless protocol offers best-effort delivery of packets in the network, relying on Transport Layer protocols such as TCP to ensure a reliable connection. IP addresses are assigned to each network device or interface in the network. In addition, the IP protocol comes in two flavors: IPv4 and IPv6 (which will be covered later in this manual).
- Internet Control Message Protocol (ICMP): This protocol sends messages and error reports through the network. The most common application that relies on ICMP is Ping, which sends an ICMP echo message to the destination and expects an ICMP echo reply back to ensure that the destination can be reached and to give information about the delay between the two endpoints.
Referring back to IP, an IPv4 packet contains the following fields, as depicted below in Figure 1.13:
Figure 1.13 – IPv4 Packet Fields
Field | Size | Description |
Version | 4 bits | Identifies the IP version (IPv4 in this case) |
Header Length | 4 bits | Size of the header |
Type of Service (ToS) | 8 bits | QoS marking, specifies how the packet should be handled within the network |
Total Length | 16 bits | The size (in octets) of the header and data |
Identification | 16 bits | Used when the packet is fragmented |
Flags | 3 bits | Used when the packet is fragmented |
Fragment Offset | 13 bits | Used when the packet is fragmented |
Time to Live (TTL) | 8 bits | Protection against endless loops, decremented by 1 on every router the packet passes through |
Protocol | 8 bits | Identifies the Layer 4 protocol (TCP, UDP) |
Header Checksum | 16 bits | The checksum of the header, used to verify its integrity |
Source IP Address | 32 bits | Source logical IP address |
Destination IP Address | 32 bits | Destination logical IP address |
IP Options and Padding | Variable | Used for debugging |
Data | Variable | Transport Layer data |
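To show how some of these fields sit in the 20-byte header, the sketch below builds and unpacks a sample IPv4 header with Python's struct module. The addresses, TTL, and protocol values are chosen arbitrarily for illustration, and the Header Checksum is left uncomputed.

```python
import socket
import struct

# A sample 20-byte IPv4 header built for illustration: version 4, header
# length of 5 words, total length 40, TTL 64, protocol 6 (TCP),
# 192.168.10.1 -> 10.0.0.5. The Header Checksum is left at 0 here.
raw = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,                       # Version + Header Length
    0,                                  # Type of Service
    40,                                 # Total Length
    0,                                  # Identification
    0,                                  # Flags + Fragment Offset
    64,                                 # Time to Live
    6,                                  # Protocol (6 = TCP)
    0,                                  # Header Checksum (not computed)
    socket.inet_aton("192.168.10.1"),   # Source IP Address
    socket.inet_aton("10.0.0.5"),       # Destination IP Address
)

ver_ihl, tos, total_len, ident, frag, ttl, proto, csum, src, dst = \
    struct.unpack("!BBHHHBBH4s4s", raw)
print("version:", ver_ihl >> 4, "| header length:", (ver_ihl & 0x0F) * 4, "bytes")
print("ttl:", ttl, "| protocol:", proto)
print("src:", socket.inet_ntoa(src), "-> dst:", socket.inet_ntoa(dst))
```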
An IPv6 packet contains the following fields, as depicted below in Figure 1.14:
Figure 1.14 – IPv6 Packet Fields
Field | Size | Description |
Version | 4 bits | Identifies the IP version (IPv6 in this case) |
Traffic Class | 8 bits | Similar to the ToS byte in the IPv4 header (QoS marking functionality) |
Flow Label | 20 bits | Used to identify and classify packet flows |
Payload Length | 16 bits | The size of the packet payload |
Next Header | 8 bits | Similar to the Protocol field in the IPv4 header, defines the type of traffic contained within the payload and which header to expect |
Hop Limit | 8 bits | Similar to the TTL field in the IPv4 header, protects against endless loops |
Source IP Address | 128 bits | Source logical IPv6 address |
Destination IP Address | 128 bits | Destination logical IPv6 address |
Data | Variable | Transport Layer data |
TCP/IP Network Access Layer
The Network Access Layer is comprised of the Data Link Layer and the Physical Layer, and it has the same functionality as in the OSI reference model. A common protocol used at the Data Link Layer is Address Resolution Protocol (ARP), which requests the MAC address of a host with a known IP address. Once the MAC address is known, it is used as the destination address in the frames sent in that specific direction.
Layer 2 Technologies
Layer 2 Addressing
Layer 2 addresses are also called MAC addresses, physical addresses, or burned-in addresses (BIA). These are assigned to network cards or device interfaces when they are manufactured.
MAC addresses (Figure 1.15) are 48 bits long. The first 24 bits comprise the Organizational Unique Identifier (OUI), a code that identifies the vendor of the device. Within the first octet, the least significant bit indicates whether the address is a unicast MAC address (bit value of 0) or a multicast address (bit value of 1), and the next bit indicates whether the address is universally (bit value of 0) or locally (bit value of 1) assigned. The last 24 bits form a unique value assigned to a specific interface, allowing each network interface to be identified uniquely via its MAC address.
Figure 1.15 – MAC Address Structure
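The OUI and the two flag bits can be inspected programmatically. The following sketch assumes the canonical interpretation of the first octet described above; the example MAC address is arbitrary.

```python
# Inspect the structure of a MAC address: the OUI (first 3 bytes), the
# unicast/multicast bit and the universal/local bit in the first octet,
# and the vendor-assigned interface identifier (last 3 bytes).

mac = "01:00:5E:12:34:56"                     # arbitrary example (a multicast address)
octets = [int(part, 16) for part in mac.split(":")]

oui = octets[:3]
interface_id = octets[3:]
is_multicast = bool(octets[0] & 0b00000001)   # least significant bit of octet 0
is_local = bool(octets[0] & 0b00000010)       # next bit: locally assigned if 1

print("OUI:", "-".join(f"{o:02X}" for o in oui))
print("interface identifier:", "-".join(f"{o:02X}" for o in interface_id))
print("multicast" if is_multicast else "unicast",
      "/ locally assigned" if is_local else "/ universally assigned")
```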
Switching
Switches are network devices that separate collision domains and process data at high rates because the switching function is implemented in hardware using Application Specific Integrated Circuits (ASICs). Networks are segmented by switches in order to provide more bandwidth per user by reducing the number of devices that share the same bandwidth. In addition, switches forward traffic only on the interfaces that need to receive it; for known unicast traffic, they forward the frame to a single port rather than to all ports.
When a frame enters an interface, the switch adds the source MAC address and the source port to its bridging table and then examines the destination MAC. If this is a broadcast, multicast, or unknown unicast frame, the switch floods the frame to all ports, except for the source port. If the source and the destination addresses are on the same interface, the frame is discarded. However, if the destination address is known (i.e., the switch has a valid entry in the bridging table), the switch forwards the frame to the corresponding interface.
The switching operation can be summarized by Figure 1.16 below:
Figure 1.16 – Switching Operation
When the switch is first turned on, the bridging table contains no entries. The bridging table (also called the switching table, the MAC address table, or the CAM [Content Addressable Memory] table) is an internal data structure that records MAC address-to-interface pairs as the switch receives frames from devices. Switches learn source MAC addresses in order to send data to the appropriate destination segments.
In addition to flooding unknown unicast frames, switches also flood two other frame types: broadcast and multicast. Various multimedia applications generate multicast or broadcast traffic that propagates throughout a switched network (i.e., broadcast domain).
When a switch learns a source MAC address, it records the time of entry. Every time the switch receives a frame from that source, it updates the timestamp. If a switch does not hear from that source before a predefined aging time expires, that entry is removed from the bridging table. The default aging time in Cisco Access Layer switches is 5 minutes. This behavior is exemplified in the MAC address table shown below, where the sender workstation has the AAAA.AAAA.AAAA MAC address:
Reference Time | Action | Port | MAC Address | Age (sec.) |
00:00 | Host A sends frame #1 | Fa0/1 | AAAA.AAAA.AAAA | 0 |
00:30 | Age increases | Fa0/1 | AAAA.AAAA.AAAA | 30 |
01:15 | Host A sends frame #2 | Fa0/1 | AAAA.AAAA.AAAA | 0 |
06:14 | Age increases | Fa0/1 | AAAA.AAAA.AAAA | 299 |
06:16 | Entry aged out (deleted) | – | – | – |
06:30 | Host A sends frame #3 | Fa0/1 | AAAA.AAAA.AAAA | 0 |
06:45 | Age increases | Fa0/1 | AAAA.AAAA.AAAA | 15 |
MAC address table entries are removed when the aging time expires because switches have a finite amount of memory, which limits the number of addresses they can remember in the bridging table. If the MAC address table is full and the switch receives a frame from an unknown source, the switch floods that frame to all ports until an opening in the table allows it to learn about the station. Entries become available whenever the aging timer expires for an address. The aging timer helps to limit flooding by remembering the most active stations in the network. If the total number of network devices is lower than the bridging table capacity, the aging timer can be increased, which causes the switch to remember stations longer and reduces flooding.
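The aging behavior in the table above can be modeled in a few lines. This is an illustration of the timestamp logic only, using the 300-second (5-minute) default; it is not how the CAM hardware is actually implemented.

```python
# Illustration of MAC address table aging: every frame from a source
# refreshes its timestamp, and entries not refreshed within the aging
# time are removed.

AGING_TIME = 300    # seconds (the 5-minute default mentioned above)
mac_table = {}      # MAC address -> (port, last_seen)

def learn(mac, port, now):
    mac_table[mac] = (port, now)          # new entry or refreshed timestamp

def expire(now):
    for mac in [m for m, (_, seen) in mac_table.items() if now - seen > AGING_TIME]:
        del mac_table[mac]                # entry aged out

learn("AAAA.AAAA.AAAA", "Fa0/1", now=0)     # Host A sends frame #1
learn("AAAA.AAAA.AAAA", "Fa0/1", now=75)    # frame #2 resets the age to 0
expire(now=376)                             # 301 seconds of silence -> removed
print(mac_table)                            # {}
```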
Note: The process of flooding new unknown frames when the MAC address table is full is a potential security risk because an attacker could take advantage of this behavior and overwhelm the bridging table. If this happens, all the ports (including the attacker port) will receive all the new received frames, even if they are not destined for them.
Spanning Tree Protocol
The Spanning Tree Protocol (STP), defined by IEEE 802.1D, is a loop-prevention protocol that allows switches to communicate with each other in order to discover physical loops in a network. If a loop is found, the STP specifies an algorithm that switches can use to create a loop-free logical topology. This algorithm creates a tree structure of loop-free leaves and branches that spans across the Layer 2 topology.
Loops occur most often as a result of multiple connections between switches, which provides redundancy, as shown below in Figure 1.17.
Figure 1.17 – Layer 2 Loop Scenario
Referring to the figure above, if none of the switches run STP, the following process takes place: Host A sends a frame to the broadcast MAC address (FF-FF-FF-FF-FF-FF) and the frame arrives at both Switch 1 and Switch 2. When Switch 1 receives the frame on its Fa0/1 interface, it will flood the frame to the Fa0/2 port, where the frame will reach Host B and the Switch 2 Fa0/2 interface. Switch 2 will then flood the frame to its Fa0/1 port and Switch 1 will receive the same frame it transmitted. By following the same set of rules, Switch 1 will re-transmit the frame to its Fa0/2 interface, resulting in a broadcast loop. A broadcast loop can also occur in the opposite direction (the frame received by Switch 2 Fa0/1 will be flooded to the Fa0/2 interface, which will be received by Switch 1).
Bridging loops are more dangerous than routing loops because, as mentioned before, a Layer 3 packet contains a special field called TTL (Time to Live) that decrements as it passes through Layer 3 devices. In a routing loop, the TTL field will reach 0 and the packet will be discarded. A Layer 2 frame that is looping will stop only when a switch interface is shut down. The negative effects of Layer 2 loops grow as the network complexity (i.e., the number of switches) grows, because as the frame is flooded out to multiple switch ports, the total number of frames multiplies at an exponential rate.
Broadcast storms also have a major negative impact on the network hosts, because the broadcasts must be processed by the CPU in all devices on the segment. In Figure 1.17, both Host A and Host B will try to process all the frames they receive. This will eventually deplete their resources unless the frames are removed from the network.
STP calculations are based on the following two concepts:
- Bridge ID
- Path Cost
A Bridge ID (BID) is an 8-byte field composed of two subfields: the high-order Bridge Priority (2 bytes) and the low-order MAC address (6 bytes). The MAC address is expressed in hexadecimal format, while the Bridge Priority is a 2-byte decimal value with values from 0 to 65535 and a default value of 32768.
Switches use the concept of cost to evaluate how close they are to other switches. The original 802.1D standard defined the cost as 1000 Mbps divided by the bandwidth of the link in Mbps. For example, a 10 Mbps link was assigned a cost of 100 and a FastEthernet link had a cost of 10. Lower STP costs are better. However, as higher-bandwidth connections gained popularity, a new problem emerged: cost is stored as an integer value only, and assigning a cost of 1 to all links of 1 Gbps and above would reduce the accuracy of the STP cost calculations. As a solution to this problem, the IEEE decided to modify the cost values on a non-linear scale, as illustrated below:
Bandwidth | STP Cost |
10 Mbps | 100 |
45 Mbps | 39 |
100 Mbps | 19 |
622 Mbps | 6 |
1 Gbps | 4 |
10 Gbps | 2 |
These values were carefully chosen to allow the old and new schemes to interoperate for the link speeds in common use today.
To create a loop-free logical topology, STP uses a four-step decision process, as follows:
- Lowest Root BID
- Lowest Path Cost to Root Bridge
- Lowest Sender BID
- Lowest Port ID
Switches exchange STP information using special frames called Bridge Protocol Data Units (BPDUs). Switches evaluate all the BPDUs received on a port and store the best BPDU seen on every port. Every BPDU received on a port is checked against the four-step sequence to see whether it is more attractive than the existing BPDU saved for that port.
When a switch first becomes active, all of its ports send BPDUs every 2 seconds. If a port hears a BPDU from another switch that is more attractive than the BPDU it has been sending, the port stops sending BPDUs. If the more attractive BPDU stops arriving for a period of 20 seconds (by default), the local port will resume sending its own BPDUs.
The two types of BPDUs are as follows:
- Configuration BPDUs, which are sent by the Root Bridge and flow across active paths
- Topology Change Notification (TCN) BPDUs, which are sent to announce a topology change
The initial STP convergence process is accomplished in the following three steps:
- Root Bridge election
- Root Ports election
- Designated Ports election
When a network is powered on, all the switches announce their own BPDUs. After they analyze the received BPDUs, a single Root Bridge is elected. All switches except the Root Bridge calculate a set of Root Ports and Designated Ports to build a loop-free topology. After the network converges, BPDUs flow from the Root Bridge to every segment in the network. Additional changes in the network are handled using TCN BPDUs.
The first step in the convergence process is electing a Root Bridge. The switches do this by analyzing the received BPDUs and looking for the switch with the lowest BID, as shown below in Figure 1.18:
Figure 1.18 – STP Convergence
Referring to the figure above, Switch 1 has the lowest BID of 32768.AA.AA.AA.AA.AA.AA and will be elected as the Root Bridge because it has the lowest MAC address, considering they all have the same Bridge Priority (i.e., the default of 32768).
The switches learn about Switch 1’s election as the Root Bridge by exchanging BPDUs at a default interval of 2 seconds. BPDUs contain a series of fields, among which include the following:
- Root BID – identifies the Root Bridge
- Root Path Cost – information about the distance to the Root Bridge
- Sender BID – identifies the bridge that sent the specific BPDU
- Port ID – identifies the port on the sending bridge that placed the BPDU on the link
Only the Root BID and Sender BID fields are considered in the Root Bridge election process. When a switch first boots, it places its own BID in both the Root BID and the Sender BID fields. For example, suppose Switch 2 boots first and starts sending BPDUs announcing itself as the Root Bridge every 2 seconds. After some time, Switch 3 boots and announces itself as the Root Bridge. When Switch 2 receives these BPDUs, it discards them because its own BID has a lower value. As soon as Switch 3 receives a BPDU generated by Switch 2, it starts sending BPDUs that list Switch 2 as the Root BID (instead of itself) and Switch 3 as the Sender BID. The two switches now agree that Switch 2 is the Root Bridge. Switch 1 boots a few minutes later, and it initially assumes that it is the Root Bridge and starts advertising this fact in the BPDUs it generates. As soon as these BPDUs arrive at Switch 2 and Switch 3, these two switches give up the Root Bridge position in favor of Switch 1. All three switches are now sending BPDUs that announce Switch 1 as the Root Bridge.
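Because the Root Bridge election reduces to picking the lowest BID, it can be modeled as a comparison of (Bridge Priority, MAC address) pairs. The sketch below uses the bridge IDs from Figure 1.18 and shows only the comparison logic, not the BPDU exchange itself.

```python
# Root Bridge election: the lowest Bridge ID wins. A BID compares first on
# Bridge Priority and then on MAC address, which tuple ordering models directly.

bridges = {
    "Switch 1": (32768, "AA.AA.AA.AA.AA.AA"),
    "Switch 2": (32768, "BB.BB.BB.BB.BB.BB"),
    "Switch 3": (32768, "CC.CC.CC.CC.CC.CC"),
}

root = min(bridges, key=lambda name: bridges[name])
print("Root Bridge:", root)    # Switch 1 (lowest MAC address at equal priority)
```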
The next step is electing the Root Ports. A Root Port on a switch is the port that is closest to the Root Bridge. Every switch except the Root Bridge must elect one Root Port. As mentioned before, switches use the concept of cost to determine how close they are to other switches. The Root Path Cost is the cumulative cost of all links on the path to the Root Bridge.
When Switch 1 sends BPDUs, they contain a Root Path Cost of 0. As Switch 2 receives them, it adds the Path Cost of its Fa0/1 interface (a value of 19 for a FastEthernet link) to the Root Path Cost value. Switch 2 then advertises the newly calculated Root Path Cost of 19 in the BPDUs it generates on its Fa0/2 interface. When Switch 3 receives the BPDUs from Switch 2, it increases the Root Path Cost by adding 19, the cost of its Fa0/2 interface, for a total of 38. At the same time, Switch 3 also receives BPDUs directly from the Root Bridge on Fa0/1. These arrive with a Root Path Cost of 0, and Switch 3 increases the cost to 19 because Fa0/1 is a FastEthernet interface. At this point, Switch 3 must select a single Root Port based on the two different BPDUs it received: one with a Root Path Cost of 38 from Switch 2 and the other with a Root Path Cost of 19 from Switch 1. The lowest cost wins; thus, Fa0/1 becomes the Root Port, and Switch 3 begins advertising a Root Path Cost of 19 to downstream switches. Switch 2 goes through the same set of calculations and elects its Fa0/1 interface as the Root Port. The Root Port selection on Switch 3 is based on the lowest Root Path Cost received in the BPDUs, as illustrated below:
BPDUs Received on the Port | Root Path Cost |
Fa0/1 (winner) | 19 |
Fa0/2 | 38 |
Note: The Path Cost is a value assigned to each port and it is added to BPDUs received on that port in order to calculate the Root Path Cost. The Root Path Cost represents the cumulative cost to the Root Bridge and it is calculated by adding the receiving port’s Path Cost to the value contained in the BPDU.
The next step in the STP convergence process is electing Designated Ports. Each segment in a Layer 2 topology has one Designated Port. This port sends and receives traffic to and from that segment and the Root Bridge. Only one port handles traffic for each link, guaranteeing a loop-free topology. The bridge that contains the Designated Port for a certain segment is considered the Designated Switch on that segment.
Analyzing the link between Switch 1 and Switch 2, Switch 1 Fa0/1 has a Root Path Cost of 0 (being the Root Bridge) and Switch 2 Fa0/1 has a Root Path Cost of 19. Switch 1 Fa0/1 becomes the Designated Port for that link because of its lower Root Path Cost. A similar election takes place for the link between Switch 1 and Switch 3. Switch 1 Fa0/2 has a Root Path Cost of 0 and Switch 3 Fa0/1 has a Root Path Cost of 19, so Switch 1 Fa0/2 becomes the Designated Port.
Note: Every active port on the Root Bridge becomes a Designated Port.
When considering the link between Switch 2 and Switch 3, both Switch 2 Fa0/2 and Switch 3 Fa0/2 ports have a Root Path Cost of 19, resulting in a tie. To break the tie and declare a winner, STP uses the four-step decision process described below:
- Lowest Root BID: All three bridges are in agreement that Switch 1 is the Root Bridge; advance to the next step.
- Lowest Root Path Cost: Both Switch 2 and Switch 3 have a cost of 19; advance to the next step.
- Lowest Sender BID: Switch 2’s BID (32768.BB.BB.BB.BB.BB.BB) is lower than Switch 3’s BID (32768.CC.CC.CC.CC.CC.CC), so Switch 2 Fa0/2 becomes the Designated Port and Switch 3 Fa0/2 is considered a non-Designated Port; end of the decision process.
- Lowest Port ID: N/A.
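The four-step decision process is effectively an ordered comparison in which the lowest value wins at each step. The sketch below encodes the Switch 2/Switch 3 tie from the example as comparable tuples; the Port ID values are invented placeholders, since the tie is already broken at the Sender BID step.

```python
# STP tie-break as an ordered comparison: (Root BID, Root Path Cost,
# Sender BID, Port ID). The BPDU with the lowest tuple wins.

root_bid = (32768, "AA.AA.AA.AA.AA.AA")        # all switches agree on Switch 1

bpdus_on_segment = {
    "Switch 2 Fa0/2": (root_bid, 19, (32768, "BB.BB.BB.BB.BB.BB"), 2),
    "Switch 3 Fa0/2": (root_bid, 19, (32768, "CC.CC.CC.CC.CC.CC"), 2),
}

designated = min(bpdus_on_segment, key=lambda port: bpdus_on_segment[port])
print("Designated Port on the segment:", designated)    # Switch 2 Fa0/2
```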
In a loop-free topology, Root and Designated Ports forward traffic and non-Designated Ports block traffic. The five STP states are listed below:
State | Purpose |
Blocking | Receives BPDUs only |
Listening | Builds “active” topology |
Learning | Builds bridging table |
Forwarding | Sends/receives user data |
Disabled | Administratively down |
- After initialization, the port starts in the Blocking state, where it listens for BPDUs. The port transitions into the Listening state after the booting process, when it believes it is the Root Bridge, or after not receiving BPDUs for a certain period of time.
- In the Listening state, no user data passes through the port; it is just sending and receiving BPDUs in order to determine the Layer 2 topology. This is the phase in which the election of the Root Bridge, Root Ports, and Designated Ports occur.
- Ports that remain Designated or Root Ports after 15 seconds progress to the Learning state, and during another 15-second period, the bridge builds its MAC address table but does not forward user data.
- After the 15-second period, the port enters the Forwarding state, in which it sends and receives data frames.
- The Disabled state means the port is administratively shut down.
The STP process is controlled by the three timers listed below:
Timer | Purpose | Default Value |
Hello Time | Time between sending of BPDUs by the Root Bridge | 2 seconds |
Forward Delay | Duration of the Listening and Learning states | 15 seconds |
Max Age | Duration for which the BPDU is stored | 20 seconds |
A modern variation of STP is Rapid STP (RSTP), defined by IEEE 802.1w. The main advantage of RSTP is its ability to achieve fast convergence (i.e., neighboring switches can communicate with each other and determine the state of the links in much less time). RSTP ports have the following roles:
- Root
- Designated
- Alternate
- Backup
- Disabled
RSTP port states are also different: the Blocking, Listening, and Disabled states are merged into a single Discarding state. Although some important differences exist between RSTP and STP, they are compatible and can work together in the same network.
Virtual LANs
Virtual LANs (VLANs) define broadcast domains in a Layer 2 network. A VLAN is an administratively defined subset of switch ports that are in the same broadcast domain, the area through which a broadcast frame propagates in a network.
As mentioned before, routers separate broadcast domains, preventing broadcasts from propagating through router interfaces. Layer 2 switches, on the other hand, create broadcast domains through their configuration. By defining broadcast domains on the switch, you determine the set of ports to which a received broadcast frame is forwarded.
Broadcast domains cannot be observed by analyzing the physical topology of the network because a VLAN is a logical concept based on the configuration of switches. Another way of thinking about VLANs is as virtual switches defined within one physical switch. Each new virtual switch defined creates a new broadcast domain (VLAN). Since traffic from one VLAN cannot pass directly to another VLAN within a switch, a router must be used to route packets between VLANs. Moreover, ports can be grouped into different VLANs on a single switch or on multiple interconnected switches, but broadcast frames sent by a device in one VLAN will reach only the devices in that specific VLAN.
VLANs represent a group of devices that participate in the same Layer 2 domain and can communicate without needing to pass through a router, meaning they share the same broadcast domain. Best design practices suggest a one-to-one relationship between VLANs and IP subnets. Devices in a single VLAN are typically also in the same IP subnet.
Figure 1.19 – Virtual LANs
Figure 1.19 above presents two VLANs, each associated with an IP subnet. VLAN 10 contains Router 1, Host A, and Router 2 configured on Switch 1 and Switch 3 and is allocated the 10.10.10.0/24 IP subnet. VLAN 20 contains Host B, Host C, and Host D configured on Switch 2 and Switch 3 and is allocated the 10.10.20.0/24 IP subnet.
Because vendors originally took individual approaches to creating VLANs, multi-vendor VLANs must be handled carefully to avoid interoperability issues. For example, Cisco developed the proprietary ISL protocol, which operates by adding a new 26-byte header, plus a new trailer, encapsulating the original frame, as shown in Figure 1.20 below. To solve these incompatibility problems, the IEEE developed 802.1Q, a vendor-independent method of creating interoperable VLANs.
Figure 1.20 – ISL Marking Method
802.1Q is often referred to as frame tagging because it inserts a 32-bit header, called a tag, into the original frame, after the Source Address field, without modifying the other fields. The first 2 bytes of the tag hold the registered EtherType value 0x8100, indicating that the frame contains an 802.1Q header. The next 3 bits represent the 802.1p User Priority field, which is used as the Class of Service (CoS) bits in Quality of Service (QoS) techniques. The next subfield is a 1-bit Canonical Format Indicator, followed by the 12-bit VLAN ID, which allows a total of 4,096 VLANs when using 802.1Q. The 802.1Q marking method is illustrated in Figure 1.21 below:
Figure 1.21 – 802.1Q Marking Method
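The 4-byte tag layout can be illustrated by packing the fields manually, as in the sketch below. The priority and VLAN ID values are arbitrary examples; a real frame would also need its Frame Check Sequence recomputed after the tag is inserted.

```python
import struct

# Build a 4-byte 802.1Q tag: a 16-bit EtherType of 0x8100, then 3 bits of
# 802.1p priority, 1 Canonical Format Indicator bit, and a 12-bit VLAN ID.

def dot1q_tag(priority, cfi, vlan_id):
    tci = (priority << 13) | (cfi << 12) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", 0x8100, tci)

tag = dot1q_tag(priority=5, cfi=0, vlan_id=10)
print(tag.hex())    # 8100a00a -> the tag inserted after the Source Address field
```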
A port that carries data from multiple VLANs is called a trunk. It can use either the ISL or the 802.1Q protocols. A special concept in 802.1Q is the native VLAN. This is a particular type of VLAN in which frames are not tagged. The native VLAN’s purpose is to allow a switch to use 802.1Q trunking (i.e., multiple VLANs on a single link) on an interface; however, if the other device does not support trunking, the traffic for the native VLAN can still be sent over the link. Cisco uses VLAN 1 as its default native VLAN.
Among the reasons for using VLANs, the most important include the following:
- Network security
- Broadcast distribution
- Bandwidth utilization
An important benefit of using VLANs is network security. By creating VLANs within switched network devices, a logical level of protection is created. This can be useful, for example, in situations in which a group of hosts must not receive data destined for another group of hosts (e.g., departments in a large company, as depicted in Figure 1.22 below).
Figure 1.22 – Departmental VLAN Segmentation
VLANs can mitigate situations in which broadcasts represent a problem in a network. Creating additional VLANs and attaching fewer devices to each isolates broadcasts within smaller areas. The effectiveness of this action depends on the source of the broadcast. If broadcast frames come from a localized server, that server might need to be isolated in another domain. If broadcasts come from workstations, creating multiple domains helps reduce the number of broadcasts in each domain.
In Figure 1.22 above, each department’s VLAN has 100 Mbps of bandwidth shared between the workstations in that specific department, creating a standalone broadcast domain. Users attached to the same network segment share the bandwidth of that particular segment. As the number of users attached to the segment grows, the average bandwidth available to each user decreases, which degrades the performance of their applications. Therefore, implementing VLANs can offer more bandwidth to users.
Layer 3 Technologies
Network Layer Addresses
Although each network interface has a unique MAC address, this does not specify the location of a specific device or to what network it is attached, meaning a router cannot determine the best path to that device. In order to solve this problem, Layer 3 addressing is used.
Network addresses are logical addresses assigned when a device is placed in the network and changed when the device is moved. Network layer addresses have a hierarchical structure comprised of two parts: the network address and the host address. Logical addresses can be assigned manually by the administrator or dynamically via a dedicated protocol, such as Dynamic Host Configuration Protocol (DHCP). All the devices in a network have the same network portion of the address and different host identifiers.
This addressing structure is illustrated in Figure 1.23 below, both for IPv4 and for IPv6. The IPv4 and IPv6 address structures will be covered in detail in Chapter 6.
Figure 1.23 – Network Addressing Structure
Routers analyze the network portion of an IP address and compare it with the entries in their routing tables. If a match is found, the packet is sent out of the appropriate interface. If the destination network is directly connected, routers also examine the host portion of the address in order to deliver the packet to the appropriate device. The router uses Address Resolution Protocol (ARP) to determine the MAC address of the device with a specific IP address and encapsulates the packet in a frame header that contains that specific MAC address before sending it on the wire.
IPv4 Addressing
IPv4 addresses are 32-bit numbers represented as strings of 0s and 1s. As mentioned before, the Layer 3 header contains a Source IP Address field and a Destination IP Address field. Each field is 32 bits in length.
For a more intuitive representation of IPv4 addresses, the 32 bits can be divided into four octets (1 octet, or byte, = 8 bits) separated by dots, which is called dotted-decimal notation. The octets can be converted into decimal numbers by standard base-2 to base-10 translation.
For example, consider the following 32-bit string:
11000000101010001000000010101001
Dividing it into 4 octets results in the following binary representation:
11000000.10101000.10000000.10101001
This translates into an easy-to-read decimal representation:
192.168.128.169
The maximum value of an octet is when all the bits are equal to 1. The equivalent decimal value is 255.
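This conversion can be cross-checked with a short Python snippet that splits the 32-bit string into octets and translates each one from base 2 to base 10:

bits = "11000000101010001000000010101001"

# Split the 32-bit string into four 8-bit octets and convert each to decimal
octets = [str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8)]
print(".".join(octets))   # prints 192.168.128.169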
Note: The base-10 representation is easy for humans to understand, but computers internally compute IPv4 addresses as strings of 0s and 1s. As a result, there is no processing or storage advantage offered by the simplified representation.
IPv4 addresses are categorized into five classes. Classes A, B, and C are used for addressing devices, Class D is for multicast groups, and Class E is reserved for experimental use. The first bits of the address define which class it belongs to, as illustrated below. Knowing the class of an IPv4 address helps determine which part of the address represents the network and which part represents the host bits.
Class | Leading Bits | Size of Network Portion | Size of Host Portion | Number of Networks | Addresses per Network | Start Address | End Address |
A | 0 | 8 bits | 24 bits | 128 | 16,777,216 | 0.0.0.0 | 127.255.255.255 |
B | 10 | 16 bits | 16 bits | 16,384 | 65,536 | 128.0.0.0 | 191.255.255.255 |
C | 110 | 24 bits | 8 bits | 2,097,152 | 256 | 192.0.0.0 | 223.255.255.255 |
D | 1110 | – | – | – | – | 224.0.0.0 | 239.255.255.255 |
E | 1111 | – | – | – | – | 240.0.0.0 | 255.255.255.255 |
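As a quick illustration of how the leading bits in the table above map to address classes, the following sketch (the function name is only for illustration) classifies an address by the decimal value of its first octet:

def ipv4_class(first_octet):
    # Leading bit 0 -> Class A, 10 -> B, 110 -> C, 1110 -> D, 1111 -> E
    if first_octet < 128:
        return "A"
    if first_octet < 192:
        return "B"
    if first_octet < 224:
        return "C"
    if first_octet < 240:
        return "D (multicast)"
    return "E (experimental)"

print(ipv4_class(192))   # 192.168.128.169 is a Class C address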
IPv4 addresses can be classified into the following categories:
- Public addresses, used for external communication
- Private addresses, which are reserved and used only internally within a company
Private address ranges, as defined by RFC 1918, include the following:
- 10.0.0.0 to 10.255.255.255
- 172.16.0.0 to 172.31.255.255
- 192.168.0.0 to 192.168.255.255
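Python’s standard ipaddress module can be used as a quick, illustrative check of whether an address falls inside these reserved ranges:

import ipaddress

for addr in ("10.1.2.3", "172.20.0.5", "192.168.128.169", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
# The first three fall inside the RFC 1918 ranges above; 8.8.8.8 is public.
# Note that is_private also covers other reserved blocks (e.g., loopback and link-local).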
When reserving full classes of addresses (i.e., classful addressing) for certain networks, certain limitations appear because of the large number of addresses per network and because of the limited IPv4 address space. For this reason, the concept of subnets (i.e., classless addressing) was introduced in RFC 950.
Classless addressing allows Class A, B, and C addresses to be divided into smaller networks called subnets, resulting in a larger number of possible networks, each with fewer host addresses. The subnets are created by borrowing bits from the host portion and using them as subnet bits.
An important aspect in IPv4 addressing is separating the network and the host part of the addressing string. This is accomplished by using a subnet mask, also represented as a 32-bit number. The subnet mask starts with a continuous string of bits with the value of 1 and ends with a string of 0s. The number of bits with the value of 1 represents the number of bits in the IP address that must be considered in order to calculate the network address. A subnet mask bit of 0 indicates that the corresponding bit in the IPv4 address is a host bit. Using the same example as above and a 255.255.255.0 mask results in the following situation:
With a string of 24 bits of 1 in the subnet mask, consider only the first 24 bits in the IP address as the network portion, resulting in a network address of 192.168.128.0 with a subnet mask of 255.255.255.0. The last 8 bits in the IP address, called the host portion of the IP address, can be assigned to network devices. Having 8 free bits, you can assign IP addresses to 2^8 hosts, meaning a total of 256 host addresses in the 192.168.128.0 network space. Every machine in a particular LAN will have the same network address and subnet mask; however, the host portion of the IP address will be different.
When using classless addressing, a subnet mask indicates which bits have been borrowed from the host field. Using subnet masks creates a three-level hierarchy: network, subnet, and host. Another way to represent the subnet mask is by using a prefix or a slash-notation (/) to indicate how many network bits the address contains. For example, 192.168.10.0/24 means the first 24 bits of the 192.168.10.0 address are network bits. This corresponds to a 255.255.255.0 subnet mask.
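The same network/host split can be sketched with the standard ipaddress module, using the address from the earlier example and a /24 prefix:

import ipaddress

# 192.168.128.169 with a 255.255.255.0 (/24) mask
iface = ipaddress.ip_interface("192.168.128.169/24")

print(iface.network)                 # 192.168.128.0/24
print(iface.netmask)                 # 255.255.255.0
print(iface.network.num_addresses)   # 256 addresses in the subnet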
IPv6 Addressing
The limited number of IPv4 addresses and the continuous increase in the number of addressable network devices all over the world have accelerated the implementation of IP version 6. IPv6 addresses have a different structure than IPv4 addresses do. They are 128 bits long, which means a much larger pool of addresses is available. The notation of IPv6 addresses is also different: while an IPv4 address is written in dotted-decimal format, an IPv6 address is written in hexadecimal format (i.e., eight 16-bit groups separated by colons), for example:
2001:43aa:0000:0000:11b4:0031:0000:c110.
Considering the complex format of IPv6 addresses, the following rules were developed to shorten them:
- One or more successive 16-bit groups that consist of all 0s can be omitted and represented by two colons (::); this can be done only once in an address
- If a 16-bit group begins with one or more 0s, the leading 0s can be omitted.
For the IPv6 example above (2001:43aa:0000:0000:11b4:0031:0000:c110), the shortened representations are as follows:
- 2001:43aa::11b4:0031:0000:c110
- 2001:43aa::11b4:0031:0:c110
- 2001:43aa::11b4:31:0:c110
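These shortening rules are applied automatically by Python’s ipaddress module, which can be used to cross-check the fully compressed form:

import ipaddress

addr = ipaddress.ip_address("2001:43aa:0000:0000:11b4:0031:0000:c110")

print(addr.compressed)   # 2001:43aa::11b4:31:0:c110
print(addr.exploded)     # 2001:43aa:0000:0000:11b4:0031:0000:c110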
Several types of IPv6 addresses are required for various applications, as listed below. Compared to the IPv4 address types (i.e., unicast, multicast, and broadcast), IPv6 is different in that special multicast addresses are used instead of broadcast addressing, and it includes a new address type called anycast.
Address Type | Range | Description |
Aggregatable Global Unicast | 2000::/3 | Public addresses, host-to-host communications; equivalent to IPv4 unicast |
Multicast | FF00::/8 | One-to-many and many-to-many communication; equivalent to IPv4 multicast |
Anycast | Same as Unicast | Interfaces on a group of devices can be assigned the same anycast address; the device closest to the source will respond; applications include load balancing, traffic optimization for a particular service, and redundancy |
Link-local Unicast | FE80::/10 | Connected-link communications; assigned to all device interfaces and used only for local-link traffic |
Solicited-node Multicast | FF02::1:FF00:0/104 | Neighbor solicitation |
IP Routing
Routers are devices that operate at OSI Layer 3 and their responsibility is to determine the best path a packet can take to a specific destination. After the best path has been chosen, the packet is encapsulated with a new frame and the router places the packet on the interface that has a link to the next hop in that path.
The process of choosing the best path is called routing and the process of sending the packet to the correct interface is called switching. Although routers are the most popular devices that make routing decisions, other network devices can have routing functionality, such as Layer 3 switches or security appliances.
A router is responsible for sending the packet the correct way, no matter what is happening above the Network Layer. However, a router is concerned with what is happening at the Physical and Data Link Layers because it might receive data over one media type and have to send it out over a different media type. This happens by decapsulating the received packet up to the Network Layer and then encapsulating it with the header specific to the outgoing media type.
Figure 1.24 below illustrates this process. Router A receives the packet over an Ethernet connection, re-encapsulates it with a Frame Relay header, and sends it to Router B, which processes the packet in the reverse order by stripping the Frame Relay header and encapsulating it in the Ethernet format before sending the packet to the receiver endpoint. Note that the routers are concerned with only the last three OSI layers.
Figure 1.24 – Routing across Different Physical Media
Routers look at the packet’s destination address to determine where the packet is going so they can select the best route to get the packet there. In order to calculate the best path, routers must know which interface should be used to reach the packet’s destination network. Routers learn about networks either by being physically connected to them or by learning information from other routers or from a network administrator. The process of learning about networks from other routers’ advertisements is called dynamic routing, and different routing protocols can be used to achieve this (this process will be covered in more detail in subsequent chapters). The process by which a network administrator manually defines routing rules on the device is called static routing. Finally, the routes to which a router is physically connected are known as directly connected routes.
Routers keep the best path to destinations learned via direct connections, static routing, or dynamic routing in internal data structures called routing tables. A routing table contains a list of networks the router has learned about and information about how to reach them.
As mentioned before, dynamic routing is the process by which a router exchanges routing information and learns about remote networks from other routers. Different routing protocols can accomplish this task, including the following:
- Routing Information Protocol (RIP)
- Enhanced Interior Gateway Routing Protocol (EIGRP)
- Open Shortest Path First (OSPF)
- Intermediate System to Intermediate System (IS-IS)
- Border Gateway Protocol (BGP)
The most important information a routing table contains includes the following items:
- How the route was learned (i.e., static, dynamic, or directly connected)
- The address of the neighbor router from which the network was learned
- The interface through which the network can be reached
- The route metric, which is a measurement that gives routers information about how far or how preferred a network is (the exact meaning of the metric value depends on the routing protocol used)
Figure 1.25 – Routing Tables
Figure 1.25 above illustrates a scenario with two routers that use hop count as the metric. The topology contains three networks known by both routers. Hop count represents the number of routers that a packet is sent through to reach a specific destination. Router A has two directly connected networks, 10.10.10.0 and 192.168.10.0; thus, the metric to each of them is 0. Router A knows about the 10.10.20.0 network from Router B, so the metric for this network is 1, because a packet sent by Router A must traverse Router B to reach the 10.10.20.0 network. Router B has two directly connected networks, 10.10.20.0 and 192.168.10.0, and one remote network learned from Router A, 10.10.10.0, with a metric of 1.
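As a minimal sketch of how such a lookup works, the routing decision can be modeled in a few lines of Python. The table entries below mirror Router A’s networks from Figure 1.25 (assuming /24 masks), and the helper function is illustrative only; real routers use optimized longest-prefix-match structures.

import ipaddress

# Router A's routing table: (network, metric, how it was learned)
routes = [
    (ipaddress.ip_network("10.10.10.0/24"), 0, "directly connected"),
    (ipaddress.ip_network("192.168.10.0/24"), 0, "directly connected"),
    (ipaddress.ip_network("10.10.20.0/24"), 1, "learned from Router B"),
]

def lookup(destination):
    # Return the most specific (longest-prefix) route that matches the destination
    dest = ipaddress.ip_address(destination)
    matches = [r for r in routes if dest in r[0]]
    return max(matches, key=lambda r: r[0].prefixlen, default=None)

print(lookup("10.10.20.37"))   # matches 10.10.20.0/24 with a metric of 1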
Summary
The OSI model is a layered mechanism that describes how information from an application on a network device moves from the source to the destination using a physical medium, and then interacts with the software application on that specific network device. The OSI model is comprised of the following seven layers:
Layer 7: Application | Provides network services to user applications. Enables program-to-program communication and determines whether sufficient resources exist for communication. Examples are e-mail gateways (SMTP), TFTP, FTP, and SNMP (Simple Network Management Protocol). |
Layer 6: Presentation | Presents information to the Application Layer. Compression, data conversion, encryption, and standard formatting occur here. Contains data formats such as JPEG, MPEG, MIDI, and TIFF. |
Layer 5: Session | Establishes and maintains communication sessions between applications (dialogue control). Sessions can be simplex (one direction only), half-duplex (one direction at a time), or full-duplex (both directions simultaneously). The Session Layer keeps data from different applications separate. Protocols include NFS, SQL, X Window, RPC, ASP, and NetBIOS names. |
Layer 4: Transport | Responsible for the end-to-end integrity of data transmissions and establishes a logical connection between sending and receiving hosts via virtual circuits. Windowing works at this level to control how much information is transferred before an acknowledgement is required. Data is segmented and reassembled at this layer. Port numbers are used to keep track of different conversations crossing the network at the same time. Supports TCP, UDP, SPX, and NBP. Error recovery (retransmission of lost or corrupted segments) also occurs here. |
Layer 3: Network | Routes data from one node to another and determines the best path to take. Routers operate at this level. Network addresses are used here for routing packets. Routing tables, subnetting, and control of network congestion occur here. Routed and routing protocols reside here, regardless of which protocol they run over. Examples include IP, IPX, AppleTalk, ARP, RIP, and IGRP. |
Layer 2: Data Link | Sometimes referred to as the LAN layer. Responsible for the transmission of data from one node to another over the physical medium. Error detection occurs here. Packets are translated into frames here and hardware (MAC) addresses are added. Bridges and switches operate at this layer. Contains the LLC and MAC sublayers. |
Layer 1: Physical | Puts data onto the wire and includes Physical Layer specifications, such as connectors, voltage levels, physical data rates, and DTE/DCE interfaces. Some common implementations include Ethernet/IEEE 802.3, Fast Ethernet, and Token Ring/IEEE 802.5. |
Protocols are sets of rules. Network devices need to agree on a set of rules in order to communicate, and they must use the same protocol to understand each other. A wide variety of network protocols exists at different OSI layers. For example, at the lower OSI layers, LAN and WAN protocols are used. Going up the reference model, routed and routing protocols are found at Layer 3. Each layer and its associated protocols are described below.
A Protocol Data Unit (PDU) is a grouping of data used to exchange information at a particular OSI layer. The Layer 1 to Layer 4 PDU types, signifying the group of data and the specific headers and trailers, are summarized below:
Layer | PDU name |
Layer 1 | Bit |
Layer 2 | Frame |
Layer 3 | Packet (Datagram) |
Layer 4 | Segment |
Networks can be classified into the following categories based on the devices and areas they interconnect:
- A Local Area Network (LAN) is a localized computerized network used to communicate between host systems, generally for sharing information (e.g., documents, audio files, video files, e-mail, or chat messages) and using a wide variety of productivity tools.
- A Wide Area Network (WAN) usually spans a broad geographical area and typically belongs to an Internet Service Provider, which might charge a fee for using its WAN services.
The TCP/IP protocol suite is a modern adaptation of the OSI model and contains the following five layers:
- Application
- Transport
- Internet
- Data Link
- Physical
Layer 2 addresses are also called Media Access Control (MAC) addresses, physical addresses, or burned-in addresses (BIA). These are assigned to network cards or device interfaces when they are manufactured.
Although each network interface has a unique MAC address, this does not specify the location of a specific device or to what network it is attached, meaning a router cannot determine the best path to that device. In order to solve this problem, Layer 3 addressing is used.
IPv4 addresses are 32-bit numbers that are represented as strings of 0s and 1s. IPv6 addresses are 128 bits long, which means a larger pool of IPv6 addresses is available. The notation of IPv6 addresses is also different: while an IPv4 address is written in dotted-decimal format, an IPv6 address is written in hexadecimal format (i.e., eight 16-bit groups separated by colons), for example:
2001:43aa:0000:0000:11b4:0031:0000:c110.
The Spanning-Tree Protocol (STP), defined by IEEE 802.1D, is a loop-prevention protocol that allows switches to communicate with each other in order to discover physical loops in a network. Switches go through the following three steps for their STP convergence:
- Elect one Root Bridge
- Elect one Root Port per non-Root Bridge
- Elect one Designated Port per segment
All STP decisions are based on a predetermined sequence, as follows:
- Lowest Root BID
- Lowest Path Cost to Root Bridge
- Lowest Sender BID
- Lowest Port ID
Virtual LANs (VLANs) define broadcast domains in a Layer 2 network. They represent an administratively defined subnet of switch ports that are in the same broadcast domain, the area in which a broadcast frame propagates through a network.
VLANs represent a group of devices that participate in the same Layer 2 domain and can communicate without needing to pass through a router, meaning they share the same broadcast domain. Best design practices suggest a one-to-one relationship between VLANs and IP subnets. Devices in a single VLAN are typically also in the same IP subnet.
IP routing is the process of forwarding a packet based on the destination IP address. Routers keep the best path to destinations learned via direct connections, static routing, or dynamic routing in internal data structures called routing tables. A routing table contains a list of networks the router has learned about and information about how to reach them.
The most important information a routing table contains includes the following items:
- How the route was learned (i.e., static, dynamic, or directly connected)
- The address of the neighbor router from which the network was learned
- The interface through which the network can be reached
- The route metric, which is a measurement that gives routers information about how far or how preferred a network is (the exact meaning of the metric value depends on the routing protocol used)