Compare and contrast different LAN technologies. This chapter covers in detail different LAN types and properties, including CSMA/CD, CSMA/CA, broadcast and collision domains, bonding, speed, and distance. We cover Ethernet in detail in our Cisco CCNA course online.
Ethernet Standards
Ethernet is a widely deployed technology and is a reference in the networking world. It has many associated standards depending on the type of media used (i.e., copper, fiber, or wireless) and speed (10Mbps, 100Mbps, 1Gbps, or 10Gbps). Older Ethernet standards were based on coaxial cable, but this is not the case in modern networks. The most important Ethernet standards are as follows:
- 10BaseT
- 100BaseT
- 1000BaseT
- 100BaseTX
- 100BaseFX
- 1000BaseX
- 10GBaseSR
- 10GBaseLR
- 10GBaseER
- 10GBaseSW
- 10GBaseLW
- 10GBaseEW
- 10GBaseT
You can see a pattern when looking at these standards’ naming conventions. The first part defines the maximum speed at which the standard operates, as follows:
- 10 = 10Mbps
- 100 = 100Mbps
- 1000 = 1Gbps
- 10G = 10Gbps
“Base” stands for baseband, which means the cable uses a single frequency to send data from one end to the other. In contrast, broadband technology uses many frequencies to send data, as it operates over a shared medium (e.g., TV cables). The last part of the naming convention defines the type of media used by the standard, which can be copper (T comes from twisted-pair) or fiber optics (the other abbreviations).
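As an illustration of this naming convention, the short Python sketch below splits a standard name into its three parts. The function name, the regular expression, and the media mapping are our own illustrative assumptions, not anything defined by IEEE:

```python
import re

# Speed prefixes used by the standards listed above.
SPEEDS = {"10": "10Mbps", "100": "100Mbps", "1000": "1Gbps", "10G": "10Gbps"}

def parse_standard(name: str):
    """Split e.g. '10GBaseSR' into (speed, signaling, media suffix, media kind)."""
    m = re.fullmatch(r"(10G|1000|100|10)(Base)(\w+)", name)
    if not m:
        raise ValueError(f"not an Ethernet standard name: {name}")
    speed, signaling, media = m.groups()
    # T/TX suffixes indicate twisted-pair copper; the rest are fiber variants.
    media_kind = "copper (twisted-pair)" if media.startswith("T") else "fiber"
    return SPEEDS[speed], signaling.lower() + "band", media, media_kind

print(parse_standard("10GBaseSR"))   # ('10Gbps', 'baseband', 'SR', 'fiber')
print(parse_standard("1000BaseT"))   # ('1Gbps', 'baseband', 'T', 'copper (twisted-pair)')
```

The alternation `10G|1000|100|10` is ordered longest-prefix-first so that `10GBaseSR` is not mis-read as `10` + `GBaseSR`.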
10BaseT
10BaseT is one of the earlier twisted-pair standards of Ethernet and it offers a maximum bandwidth of 10Mbps. The T represents twisted-pair (i.e., it’s a copper-based standard). This standard uses only two pairs out of the four available on a standard UTP cable (one pair for Transmit and another for Receive). When 10BaseT was first introduced, the standard cable used was a Category 3 cable that operated at a maximum distance of 100 m.
100BaseT
As 10Mbps was not enough for network users’ growing needs, the 100BaseT standard was developed, also known as FastEthernet. The higher speeds of 100BaseT also involve higher requirements from a cable perspective. This standard uses a Category 5 or higher twisted-pair copper cable and it operates at a maximum distance of 100 m. This standard also uses only two pairs out of the four available on a standard UTP cable (one pair for Transmit and another for Receive).
100BaseFX
100BaseFX is the equivalent of 100BaseT, with the only difference being that it operates over fiber links instead of copper cables. Pairs of optical fibers are used, one of them for Transmit purposes and the other for Receive purposes, thus achieving a full-duplex functionality. Two types of fiber links support this standard:
- Multi-mode fiber: operates at a maximum distance of 400 m (half-duplex) or 2 km (full-duplex)
- Single-mode fiber: operates at distances greater than 2 km
1000BaseT
The next natural step was the increase from 100Mbps to 1Gbps, thus the 1000BaseT standard. This also operates over Category 5 cable but is usually used with Cat 5e or Cat 6 cables. Unlike the 100BaseT standard, 1000BaseT uses all four wire pairs in the UTP cable.
1000BaseX
1000BaseX is the fiber optic correspondent of 1000BaseT and it comes in many variations, depending on the type of fiber used:
- 1000BaseSX: short wavelength laser (up to 550 m)
- 1000BaseLX: long wavelength laser (over 5 km)
10GBase Standards
The most important 10Gbps standards used in data center networking are:
- 10GBaseSR
- 10GBaseLR
- 10GBaseER
The 10GBaseSR (Short Range) standard operates using short-range communication. It initially operated up to 80 m but, using proper multi-mode fiber, it can go up to 300 m. It is generally used inside a single room or inside a single data center.
10GBaseLR (Long Range) operates over longer ranges and for this reason it uses single-mode fiber instead of multi-mode fiber. This standard usually operates up to 25 km and uses lasers to ensure that a high-power light signal can travel over such a long distance.
10GBaseER (Extended Range) operates over even greater distances using single-mode fiber. It can travel up to 40 km.
10G over WAN
Service providers might utilize the following 10G Ethernet standards that operate over WAN:
- 10GBaseSW
- 10GBaseLW
- 10GBaseEW
These standards correspond to the 10GBaseSR/LR/ER standards (which are usually used within an enterprise data center), operating at short, long, and extended ranges, respectively. They integrate a 10Gbps Ethernet connection with a SONET/SDH WAN, use the same types of fibers and connectors, and operate over the same distances as the SR/LR/ER standards.
10GBaseT
10GBaseT is a special standard that allows 10Gbps transmission over copper cables (twisted-pair) in situations in which you may not want to use optical fibers. This standard uses the following media types:
- Category 6 cable: operates up to 55 m
- Category 6a cable: operates up to 100 m
CSMA/CD
CSMA/CD and CSMA/CA are common acronyms in the networking world. The CS in CSMA stands for Carrier Sense and this means that the device that is communicating on the network is listening to determine whether some other station is transmitting on the medium. If that is the case, it will not transmit over the already existing signal. The MA in CSMA stands for Multiple Access and this means that there is more than one device on the network that might be trying to communicate at the same time.
The CD in CSMA/CD stands for Collision Detection. An Ethernet collision happens when two stations send a signal on the wire at the same time and the signals collide so no one on the network can understand any of the signals.
When a station wants to send a frame, it follows an algorithm, as depicted in Figure 19.1 below. This is the standard way that Ethernet works.
Figure 19.1 – CSMA/CD
After the frame is assembled the station will listen on the wire for any signal that might be transmitted by other stations. If it detects such a signal, it will not send the packet but will instead wait for a time before trying again. If no signal is present, it will transmit the first part of the frame and will wait to see if a collision happens. If it does, all stations involved in that collision will back off and wait for a random amount of time before retransmitting the signal. If no collision is detected, the station sends the next part of the frame and checks again for collisions. This process continues until the frame is fully transmitted.
Note: The random retransmission timer allows the stations to retry sending the frame at different times. If they used the same timer, they would end up retransmitting the frame at the same time and the collision would happen again.
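The listen-transmit-backoff loop described above can be sketched as a toy, slot-based simulation. Everything here (the slot model, the two station names, the backoff cap) is an illustrative assumption, not part of the Ethernet standard:

```python
import random

random.seed(1)  # fixed seed so the toy run is repeatable

def simulate(stations=("A", "B"), slots=50):
    """Toy slot-based CSMA/CD: each station wants to send one frame.
    Returns {station: slot in which its frame got through}."""
    backoff = {s: 0 for s in stations}    # slots each station must still wait
    attempts = {s: 0 for s in stations}   # collisions seen by each station
    done = {}
    for t in range(slots):
        # carrier sense: stations whose backoff expired contend this slot
        ready = [s for s in stations if s not in done and backoff[s] == 0]
        for s in stations:                # everyone else counts down
            if backoff[s] > 0:
                backoff[s] -= 1
        if len(ready) == 1:               # sole sender: no collision
            done[ready[0]] = t
        elif len(ready) > 1:              # collision detected: all back off
            for s in ready:
                attempts[s] += 1
                # binary exponential backoff, capped at 2^10 slots
                backoff[s] = random.randrange(2 ** min(attempts[s], 10))
        if len(done) == len(stations):
            break
    return done

print(simulate())
```

Note how the random backoff is what eventually breaks the tie: after a collision, each station draws a different wait with high probability, exactly as the Note above explains.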
The scenario described above happens when all of the stations on the segment are able to hear each other, that is, in half-duplex environments (hubs). Half-duplex environments allow stations to transmit or receive at a given moment but not both at the same time. CSMA/CD is no longer used on a wide scale because modern networks use switches instead of hubs. When using switches, you can use full-duplex transmissions, which means that you can both send and receive at the same time.
CSMA/CA
With CSMA/CA, the CA stands for Collision Avoidance, which is a different concept than Collision Detection in that the stations are trying to avoid any kind of collision instead of simply detecting it. This is commonly used on wireless networks because a wireless device cannot hear whether communication is occurring between other stations at a specific moment (thus, it cannot use CD).
In such an environment, the plan is to avoid any collisions before sending data on the network and this is commonly implemented as RTS/CTS (request to send/clear to send). This is usually managed by the central point of the wireless network – the access point (AP). Before stations can send traffic, the AP must first grant them permission to do so, confirming that there is no other traffic occurring at that time so collisions can be avoided. Because the station must wait for the clear signal, the AP can be sure that only a single station is sending traffic at a specific time. This decision process is depicted in Figure 19.2 below:
Figure 19.2 – CSMA/CA
In Figure 19.2, you can see that after the frame is assembled the station must wait for confirmation that the channel is clear and only then can it transmit the data. If stations do not receive the CTS approval from the AP, they must wait a random amount of time before sending the RTS request again. This technique also solves the problem in which a station can see the AP but cannot see other stations from other sides of the network that are accessible only through the AP (i.e., the AP can reach every station, as it is the central point).
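The grant-then-send handshake can be modeled in a few lines of Python. The class and method names are invented for illustration; real 802.11 RTS/CTS also involves timers and duration fields not shown here:

```python
# Toy model of RTS/CTS: the AP grants the channel to one station at a time.

class AccessPoint:
    def __init__(self):
        self.channel_owner = None          # station currently cleared to send

    def request_to_send(self, station):
        """RTS handler: returns True (CTS) only if the channel is free."""
        if self.channel_owner is None:
            self.channel_owner = station
            return True                    # clear to send
        return False                       # busy: station backs off and retries

    def done(self, station):
        """Station finished its frame; release the channel."""
        if self.channel_owner == station:
            self.channel_owner = None

ap = AccessPoint()
assert ap.request_to_send("laptop") is True   # first RTS wins the channel
assert ap.request_to_send("phone") is False   # second RTS must wait
ap.done("laptop")
assert ap.request_to_send("phone") is True    # channel is free again
```

Because every grant flows through the AP, two stations that cannot hear each other still cannot collide, which is the hidden-node problem the text describes.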
Collision and Broadcast Domains
As described earlier in this book, one of the main drawbacks of network hubs is that when there is a collision on the wire, the damaged frame is sent to all connected devices. One of the advantages of modern switches is that each switch port is its own collision domain. If a collision does occur on a port (possible only when that port operates in half-duplex mode), the damaged frame does not pass through the switch to the other ports.
Switches do not separate broadcast domains; routers do. If a switch receives a frame with a Broadcast destination address, it must forward the frame out of all ports apart from the port on which the frame was received. Figure 19.3 below represents a small network using switches/bridges and a router to show how collision domains are separated:
Figure 19.3 – Collision Domains
Devices connected to a switch port are in the same collision domain, but devices connected to different ports are in different collision domains. This is the most important feature of a switch: it separates collision domains. On the other hand, all devices connected to a switch are in the same broadcast domain, which is illustrated in Figure 19.4 below:
Figure 19.4 – Broadcast Domains
Routers block Multicast and Broadcast packets by default. This is a significant difference between a router and a switch and it helps control bandwidth utilization on a network. Devices connected to the same router port are in the same collision and broadcast domains, but devices connected to different router ports are in different collision and broadcast domains.
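The domain rules in this section can be condensed into a small counting helper. This is a rough rule-of-thumb sketch of our own: it assumes hosts hang directly off each port and that the listed devices are not interconnected (links joining two devices share a single collision domain, which this helper does not model):

```python
# Rules from the text: each switch or router port is its own collision
# domain, a hub and everything on it form one collision domain, and only
# routers separate broadcast domains.

def count_domains(devices):
    """devices: list of (kind, port_count) tuples; returns (collision, broadcast)."""
    collision = 0
    broadcast = 0
    for kind, ports in devices:
        if kind == "hub":
            collision += 1           # the whole hub is one collision domain
        elif kind == "switch":
            collision += ports       # one collision domain per switch port
        elif kind == "router":
            collision += ports
            broadcast += ports       # one broadcast domain per router port
    return collision, max(broadcast, 1)   # no router: one broadcast domain

# a 4-port hub, a 4-port switch, and a 2-port router, hosts on every port:
print(count_domains([("hub", 4), ("switch", 4), ("router", 2)]))  # → (7, 2)
```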
Interface Bonding
Interface bonding is a generic term for the following:
- Link aggregation
- Link teaming
- NIC teaming
- Port channeling
- EtherChannels
The idea behind this technique is taking multiple interfaces and bundling them to increase the amount of throughput between two devices and add some redundancy. Link aggregation can use any type of interfaces (e.g., 100Mbps, 1Gbps, and 10Gbps), but all the interfaces within a bundle must have the same capacity. Such connections are used in scenarios in which you need redundancy and/or you have a high amount of traffic traveling between two network devices, usually in data center environments at the Core or Distribution Layer. The most common use for link aggregation is connecting two switches, as shown in Figure 19.5 below:
Figure 19.5 – Link Aggregation
In addition to increasing throughput, interface bonding offers another important benefit: redundancy. If one link in the bundle fails, the remaining links keep forwarding traffic and the logical link (the bundle) remains active. The logical link goes down only when all of the physical links in the bundle have failed.
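A common way a bundle spreads traffic across its members is to hash each flow and pick a surviving link, which also gives the failover behavior just described. The sketch below is an assumption-laden illustration: CRC32 and the MAC-pair flow key are our choices, not any vendor's actual load-balancing algorithm:

```python
import zlib

def pick_link(src_mac: str, dst_mac: str, links: list) -> str:
    """Hash the flow's addresses over the member links that are still up."""
    up = [l for l in links if l["up"]]
    if not up:
        raise RuntimeError("bundle down: every member link has failed")
    key = (src_mac + dst_mac).encode()
    return up[zlib.crc32(key) % len(up)]["name"]

bundle = [{"name": "Gi0/1", "up": True}, {"name": "Gi0/2", "up": True}]
flow = ("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb")
print(pick_link(*flow, bundle))   # same flow always hashes to the same link
bundle[0]["up"] = False           # a member fails...
print(pick_link(*flow, bundle))   # ...and the flow moves to a survivor: Gi0/2
```

Hashing per flow (rather than spraying frames round-robin) keeps each conversation on one link, which preserves frame ordering.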
Note: Devices on both ends of the connection must be configured for link aggregation; otherwise, the logical link bundle will not function.
Link aggregation is a very popular technology and is typically deployed between the Distribution Layer and the Core Layer, or between Core Layer devices, where increased availability and scalability are needed. However, port aggregation is usually disabled on interfaces facing end-users.
Two commonly used link aggregation protocols are:
- Link Aggregation Control Protocol (LACP): an open standard protocol
- Port Aggregation Protocol (PAgP): a Cisco proprietary protocol
Link Aggregation Control Protocol Overview
LACP is part of the IEEE 802.3ad specification for creating a logical link from multiple physical links. Because LACP and PAgP are incompatible, both ends of the link need to run either LACP or PAgP in order to automate the formation of EtherChannel groups.
As is the case with PAgP, when configuring LACP EtherChannels, all LAN ports must be the same speed and must all be configured as either Layer 2 or Layer 3 LAN ports. If a link within a port channel fails, traffic previously carried over the failed link is switched over to the remaining links within the port channel. Additionally, when you change the number of active bundled ports in a port channel, traffic patterns will reflect the rebalanced state of the port channel.
LACP supports the automatic creation of port channels by exchanging LACP packets between ports. It learns the capabilities of port groups dynamically and informs the other ports. Once LACP identifies correctly matched Ethernet links, it facilitates grouping the links into a GigabitEthernet port channel. Unlike PAgP, where ports are required to have the same speed and duplex settings, LACP mandates that ports be full-duplex only, as half-duplex is not supported. Half-duplex ports in an LACP EtherChannel are placed into a suspended state.
By default, all inbound Broadcast and Multicast packets on one link in a port channel are blocked from returning on any other link of the port channel. LACP packets are sent to the IEEE 802.3 Slow Protocols Multicast group address 01-80-C2-00-00-02, and LACP frames are encoded with the EtherType value 0x8809. The screenshot below illustrates these fields in an Ethernet frame:
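The two frame fields called out above (the Slow Protocols multicast destination and EtherType 0x8809) can be packed with a few lines of Python. The helper name and the source MAC are illustrative:

```python
import struct

LACP_DST = bytes.fromhex("0180c2000002")   # IEEE Slow Protocols multicast MAC
SLOW_PROTOCOLS_ETHERTYPE = 0x8809          # EtherType carried by LACP frames

def lacp_header(src_mac: bytes) -> bytes:
    """Destination MAC + source MAC + 2-byte EtherType, network byte order."""
    return LACP_DST + src_mac + struct.pack("!H", SLOW_PROTOCOLS_ETHERTYPE)

hdr = lacp_header(bytes.fromhex("001122334455"))
print(hdr.hex())   # 0180c20000020011223344558809
```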
Figure 19.6 – Ethernet Frame (LACP)
LACP Port Modes
Once an LACP mode has been configured, it can be changed only if a single interface has been assigned to the specified channel group. LACP supports two modes: active and passive.
LACP Active Mode
LACP active mode places a switch port into an active negotiating state in which the switch port initiates negotiations with remote ports by sending LACP packets. Active mode is the LACP equivalent of PAgP desirable mode. In other words, in this mode, the switch port actively attempts to establish an EtherChannel with another switch that is also running LACP.
LACP Passive Mode
When a switch port is configured in passive mode, it will negotiate an LACP channel only if it receives another LACP packet. In passive mode, the port responds to LACP packets that the interface receives but does not start LACP packet negotiation. This setting minimizes the transmission of LACP packets. In this mode, the port channel group attaches the interface to the EtherChannel bundle. This mode is similar to the auto mode that is used with PAgP.
Table 19.1 below shows the different LACP combinations and the result of their use in establishing an EtherChannel between two switches:
Table 19.1 – LACP Modes
| Switch 1 LACP Mode | Switch 2 LACP Mode | EtherChannel Result |
|--------------------|--------------------|---------------------|
| Passive | Passive | No EtherChannel Formed |
| Passive | Active | EtherChannel Formed |
| Active | Active | EtherChannel Formed |
| Active | Passive | EtherChannel Formed |
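Table 19.1 reduces to a one-line rule, sketched below in Python: the channel forms unless both ends are passive, because at least one side must initiate the negotiation. (The PAgP combinations in Table 19.2 follow the same shape, with desirable/auto in place of active/passive.)

```python
def lacp_channel_forms(mode1: str, mode2: str) -> bool:
    """True if an EtherChannel forms for the given LACP mode pair."""
    assert mode1 in ("active", "passive") and mode2 in ("active", "passive")
    # at least one side must actively send LACP packets
    return "active" in (mode1, mode2)

assert lacp_channel_forms("passive", "passive") is False   # nobody initiates
assert lacp_channel_forms("passive", "active") is True
assert lacp_channel_forms("active", "active") is True
```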
Port Aggregation Protocol
PAgP is a Cisco proprietary link aggregation protocol that enables the automatic creation of EtherChannels. By default, PAgP packets are sent between EtherChannel-capable ports to negotiate the forming of an EtherChannel. These packets are sent to the destination Multicast MAC address 01-00-0C-CC-CC-CC. The screenshot below shows the fields contained within a PAgP frame as seen on the wire:
Figure 19.7 – PAgP Frame
PAgP Port Modes
PAgP supports different port modes that determine whether an EtherChannel will be formed between two PAgP-capable switches. Before we delve into the two PAgP port modes, one particular mode deserves special attention. The “on” mode is sometimes incorrectly referenced as a PAgP mode; however, it is not a PAgP port mode.
The on mode forces a port to be placed into a channel unconditionally. The channel will be created only if another switch port is connected and is configured in the on mode. When this mode is enabled, there is no negotiation of the channel performed by the local EtherChannel protocol. In other words, this effectively disables EtherChannel negotiation and forces the port to the channel. It is important to remember that switch interfaces that are configured in the on mode do not exchange PAgP packets. Switch EtherChannels using PAgP may be configured to operate in one of two modes: auto or desirable.
Auto Mode
Auto mode is a PAgP mode that will negotiate with another PAgP port only if the port receives a PAgP packet. When this mode is enabled, the port(s) will never initiate PAgP communication but instead will listen passively for any received PAgP packets before creating an EtherChannel with the neighboring switch.
Desirable Mode
Desirable mode is a PAgP mode that causes the port to initiate PAgP negotiation for a channel with another PAgP port. In other words, in this mode, the port actively attempts to establish an EtherChannel with another switch running PAgP.
Table 19.2 below shows the different PAgP combinations and the result of their use in establishing an EtherChannel:
Table 19.2 – PAgP Modes
| Switch 1 PAgP Mode | Switch 2 PAgP Mode | EtherChannel Result |
|--------------------|--------------------|---------------------|
| Auto | Auto | No EtherChannel Formed |
| Auto | Desirable | EtherChannel Formed |
| Desirable | Auto | EtherChannel Formed |
| Desirable | Desirable | EtherChannel Formed |
Summary
Ethernet is a widely deployed technology and is a reference in the networking world. It has many standards associated with it, depending on the type of media used (i.e., copper, fiber, or wireless) and speed (e.g., 10Mbps, 100Mbps, 1Gbps, or 10Gbps). Older Ethernet standards were based on coaxial cables but this is not the case in modern networks. The most important Ethernet standards are as follows:
- 10BaseT
- 100BaseT
- 1000BaseT
- 100BaseTX
- 100BaseFX
- 1000BaseX
- 10GBaseSR
- 10GBaseLR
- 10GBaseER
- 10GBaseSW
- 10GBaseLW
- 10GBaseEW
- 10GBaseT
CSMA/CD and CSMA/CA are common acronyms in the networking world. The CS in CSMA stands for Carrier Sense and this means that the device that will be communicating on the network is listening to determine whether some other station is transmitting on the medium. If that is the case, it will not transmit over the already existing signal. The MA in CSMA stands for Multiple Access and this means that there is more than one device on the network that might be trying to communicate at the same time.
The CD in CSMA/CD stands for Collision Detection. An Ethernet collision happens when two stations send a signal on the wire at the same time and the signals collide so no one on the network can understand any of the signals.
With CSMA/CA, the CA stands for Collision Avoidance, which is a different concept than Collision Detection in that the stations are now trying to avoid any kind of collision instead of simply detecting it. This is commonly used on wireless networks because a wireless device cannot hear whether communication is occurring between other stations at a specific moment (thus, it cannot use CD).
Switches do not separate broadcast domains, routers do. If a switch receives a frame with a Broadcast destination address, then it must forward it out of all ports, apart from the port the frame was received on. A router is required to separate broadcast domains.
Configure Ethernet in our 101 Labs – CompTIA Network+ book.