Much the same as with routing protocols, we tend to configure Wide Area Network (WAN) connections and then forget about them. We usually only need to address them if there is a connection issue or the need to add a remote office connection.
For the CCNA exam, Cisco expects you to understand the common WAN protocols available. Although some have been removed from the syllabus, I’ve left them here because I suspect that Cisco still expects you to understand them (and you certainly will need to at work).
WAN Technologies
Wide Area Networks (WANs) are used to connect Local Area Networks (LANs) together. Because of the long distances involved, you will normally have to use a third-party company known as a service provider/telephone company (telco) to provide this service.
When I started working at Cisco Systems in the UK in 2002, the leading WAN connection types supported were ISDN, T1, and Frame Relay. Of course, technology has since improved, and now you have a range of options open to you depending on your required bandwidth, security requirements, budget, and location. Even entry-level Cisco routers for branch offices now feature WAN support for multimode VDSL2/ADSL2/2+, multimode G.SHDSL, ISDN, xDSL, Ethernet, 3G and 4G, and fiber.
Common WAN Networking Terms
You will hear many terms when discussing WAN technologies, such as the following:
Customer Premises Equipment
CPE is any equipment owned and maintained internally and located on your premises. If the CPE breaks, it will be your responsibility to resolve the problem as the network engineer.
Data Terminal Equipment
This is normally the interface on your side of the WAN link that connects to the telco’s network. The DTE interface uses clocking signals generated by the DCE interface to synchronize traffic. Your network router will almost always be the DTE side of the network.
Data Communications Equipment
DCE interfaces provide connections to the service provider’s network. Here traffic is forwarded, data is synchronized, and clocking signals are provided. When practicing networking with routers at home, you will have to configure your own DCE interface because one end of the cable will be DTE and the other end will be DCE. On the DCE end of the cable, you simply add the clock rate command with a speed in bits per second.
You can normally tell which end of the cable is DCE because it will have the letters DCE stamped on it (see Chapter 1). A DCE interface is normally defined by the cable and not the actual interface. You can check which type of cable you have attached with the show controllers serial x command, where x is the number of the interface. If you have a DCE cable attached to the interface, you need to add the clock rate # command:
RouterA#config t
RouterA(config)#interface Serial0/0
RouterA(config-if)#clock rate 56000 – Sets the speed to 56,000 bps
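As a point of reference, here is the kind of output show controllers produces; this sketch is based on an older 2500-series router, and the hardware details and cable-type line will vary by platform:
RouterA#show controllers serial 0
HD unit 0, idb = 0x121C04, driver structure at 0x127078
buffer size 1524 HD unit 0, V.35 DCE cable, clockrate 56000
The important part is the cable type: if the line reads DCE, the clock rate command belongs on this interface; if it reads DTE, the other end of the link provides the clocking.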
Point of Demarcation
The point of demarcation, or demarc for short, is where the CPE meets the local loop, normally inside a switching closet in the communications room. The demarc is usually installed by the service provider as the termination of a digital service line, such as a T1, T3, E1, or E3.
Local Loop
This is the cabling and connectivity that extends from the demarc to the nearest local telco switch or exchange. A common source of confusion is that the local loop includes only the interface, trunk, or line card of the telco device connected to the other end of the circuit. Figure 5.1 below illustrates the local loop and the telco CO, which we will cover next:
FIG 5.1 – The local loop
Telco Central Office
The telco CO is the main point of presence for the telco’s WAN service to the end-user. This is also referred to as simply the central office.
Channel Service Unit/Data Service Unit
DCE equipment also includes a device referred to as a channel service unit/data service unit (CSU/DSU). This device converts your LAN data format into one compatible with your telco’s requirements. Although you might think that a CSU/DSU works in the same way as a modem, they do entirely different things: a CSU/DSU converts digital signals from a router for a leased line (the local loop), while a modem converts digital signals from a router for a phone line (the local loop).
FIG 5.2 – Common WAN terms
Modems
A modem converts digital signals to analog and back again. It MODulates data over frequencies outbound and DEModulates the signal received.
FIG 5.3 – A modem converts digital signals to analog signals
The correct name for a modem is modulator/demodulator. The purpose of a modem is to convert a digital signal to an analog signal for use across Plain Old Telephone Service (POTS) lines. It is used for small bandwidth requirements or more commonly as a backup solution. Note that, as shown in Figure 5.3 above, the modem will terminate an analog line (local loop) coming into your network.
WAN Connection Types
There are three main connection types for WANs:
- Leased lines
- Circuit switching
- Packet switching
Leased Line
A leased line is a dedicated connection between your site and another site. It can also be referred to as a point-to-point link. The link is not shared with any other company and is available 24 hours a day. Leased lines can be very expensive, depending on bandwidth and distance, but they do eliminate some of the security and traffic engineering problems associated with connections to your remote site over the Internet or with shared connections.
FIG 5.4 – A point-to-point leased line
Leased lines are usually created for point-to-point connections. They typically result in a high-quality connection but they offer limited flexibility.
Circuit Switching
Just like for a telephone call, for a circuit-switched connection to take place, a dedicated temporary connection has to be made between the end-devices. When the session is no longer required between the two end devices, the connection is normally torn down. Circuit switching can be very cost-effective but the speeds are slow.
All packets traveling along the WAN take the same path in a circuit-switched connection. Integrated Services Digital Network (ISDN) is an example of a circuit-switched network.
FIG 5.5 – A circuit-switched network
Packet Switching
In packet-switched connections, users may share a connection with other networks. The cost is generally less for users since the telco can make more efficient use of their bandwidth. End-to-end connectivity in packet-switched networks is known as having virtual circuits (VCs). Common examples of packet-switched networks are Frame Relay, ATM, and X.25.
FIG 5.6 – A packet-switched network
With a packet-switched connection, you have no choice in which path your data takes. Typically, the service provider’s policy will allow for an optimal path, which is decided depending on how much traffic is saturating their connections. When your data arrives at the other end, it is reassembled and put into the correct order.
Packet switching is very efficient but can be complex to configure, especially for large networks spanning multiple locations.
Point-to-Point Protocols
There are several protocols you can use when connecting over a WAN. Some are compulsory with a particular service, while others you can choose between. When you pay for a leased line for a point-to-point connection, you will normally choose HDLC or PPP.
High-Level Data Link Control
HDLC is a layer 2 protocol used for WAN connectivity. It is based primarily on IBM’s Synchronous Data Link Control (SDLC) protocol. HDLC uses keepalives to monitor connectivity with the remote end-device.
The DCE side of the connection sends the DTE side a keepalive packet containing a sequence number. The DTE side echoes this sequence number back to the DCE, proving connectivity. If three consecutive packets are not received, the link is declared down.
You can monitor the keepalives on an HDLC link with the debug serial interface command. You can test this command on any lab where you are using Serial interfaces.
Although HDLC is a widely used protocol, Cisco has created its own proprietary version, so if you are connecting a Cisco device to a non-Cisco device, you will not be able to use it. Configuring it on an interface is very straightforward. Remember, though, that it is on by default on Cisco Serial interfaces, so you don’t need to configure the encapsulation.
Router#config t
Router(config)#interface Serial0/0
Router(config-if)#encapsulation hdlc – Sets the encapsulation type
Router(config-if)#ip address 192.168.1.1 255.255.255.0
Router(config-if)#no shutdown
Router(config-if)#^Z
Router#
You can check your interface protocol settings (and many other interface settings) by typing show interface serial 0/0. You would normally never need to set the encapsulation type to HDLC on a Cisco router since it is the default.
Router#show interface Serial0/0
Serial0/0 is up, line protocol is up
Hardware is HD64570
Internet address is 192.168.1.1/24
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation HDLC, Loopback not set – Encapsulation setting
Keepalive set (10 sec)
FIG 5.7 – The same encapsulation type must be on each side of the connection
HDLC uses 10-second keepalives to verify the integrity of the connection. The DTE and DCE ends each maintain a set of sequence numbers, which you can watch increment with the debug serial interface command. Three missed keepalives will cause the link to be deactivated.
R1#debug serial interface
Serial network interface debugging is on
*Mar 1 00:21:52.727: Serial0/0: HDLC myseq 0, mineseen 0, yourseen 0, line up
*Mar 1 00:22:02.727: Serial0/0: HDLC myseq 1, mineseen 1*, yourseen 4, line up
*Mar 1 00:22:12.727: Serial0/0: HDLC myseq 2, mineseen 2*, yourseen 5, line up
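In this output, myseq is the local keepalive sequence number, yourseen is the last sequence number received from the far end, and mineseen is the local sequence number most recently echoed back by the peer. If myseq gets three ahead of mineseen, keepalives are being missed and the line protocol is brought down.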
For lower-level, per-packet output, you can also use the debug serial packet command.
Another important command to know and use is show interface serial 0/0 (or whatever your interface number is). The output below is truncated:
R1#show interface s0/0
Serial0/0 is up, line protocol is up
Hardware is GT96K Serial
Internet address is 192.168.1.1/24
MTU 1500 bytes, BW 1544 Kbit/sec, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation HDLC, Loopback not set
Keepalive set (10 sec)
0 carrier transitions
DCD=up DSR=up DTR=up RTS=up CTS=up
It’s worth checking the Cisco documentation for the meaning of the fields, many of which have been discussed throughout this manual; however, you can see above that the interface and line protocol are up, the IP address, subnet, and encapsulation type are correct, and DCD through CTS are all up.
Point-to-Point Protocol
PPP is very popular for use over dedicated and circuit-switched links, as well as when you are connecting to non-Cisco equipment. PPP is specified in RFC 1661. You would have to use PPP if you were connecting your Cisco router to a non-Cisco router over a Serial line.
PPP is popular because it is vendor-neutral and it can work over many different connection types, including synchronous (clocks on both sides agree), asynchronous (clocks differ), ISDN, Digital Subscriber Line (DSL), and High Speed Serial Interface (HSSI) links. In addition, PPP has built-in error detection and data compression, and it supports authentication with CHAP and PAP, as well as network-layer address negotiation.
PPP is made up of two main components—NCP and LCP:
- Network Control Protocol (NCP) – a family of independent protocols that encapsulate network layer protocols, such as IP
- Link Control Protocol (LCP) – negotiates, sets up, and tears down control options for the data link connection to the WAN
PPP Authentication
PPP offers optional authentication and has two ways of authenticating the calling router—PAP and CHAP:
- Password Authentication Protocol (PAP) – This protocol uses a two-way handshake, allowing the remote host to authenticate itself. The password is sent in clear text so it can easily be captured and read.
- Challenge Handshake Authentication Protocol (CHAP) – This protocol uses a three-way handshake and never sends the password over the link in clear text. Instead, a hash value computed with the MD5 algorithm from the password and a random challenge is sent. Only a host that knows the same shared password can produce a matching hash, so the password itself cannot be captured from the wire.
FIG 5.8 – CHAP uses a three-way handshake
On the calling router, a hostname and username/password must be added. Of course, encapsulation must be set to PPP and authentication type to CHAP. CHAP will continue to carry out authentication on the line after the connection is established using the three-way handshake.
On the called or authenticating router, a hostname must be configured, along with a username/password entry for each router that will be calling. AAA security can also be used with PPP, but this is outside the CCNA syllabus.
LCP Configuration Options
Cisco routers offer several configuration options to use with some of the features LCP offers, as shown in Table 5-1 below:
Table 5-1: LCP Options
Feature | Operation | Protocol | Command
Authentication | Requires a password, performs challenge handshake | PAP, CHAP | ppp authentication pap, ppp authentication chap
Compression | Compresses data at the source and decompresses data at the destination | Stacker, Predictor | ppp compress stacker, ppp compress predictor
Error Detection | Monitors dropped data, avoids frame looping | Quality, Magic Number | ppp quality [number 1-100]
Multilink | Performs multiple-link load balancing | Multilink Protocol | ppp multilink
Mini-lab – Configuring PPP
PPP can easily be configured by changing the encapsulation type from HDLC to PPP. Optionally, you can add many other features, including authentication, compression, link quality, and a raft of ISDN options. For the CCNA exam, you should be comfortable configuring PPP with CHAP authentication.
We will do a full lab at the end of this chapter, but here is a brief demonstration of the configuration commands you need to enable PPP with CHAP. You can optionally add a second method of authentication to be used if the first method fails, so CHAP and then PAP, or vice versa.
FIG 5.9 – Mini-lab: Configuring PPP
R1(config)#username R2 password howtonetwork
R1(config)#int s0/0
R1(config-if)#ip add 192.168.1.1 255.255.255.0
R1(config-if)#encapsulation ppp
R1(config-if)#ppp authentication chap pap
R1(config-if)#no shut
R1(config-if)#end
NOTE: The ppp authentication chap pap command may not work on some IOS versions, so stick to chap only if this is the case. If you can move from one form of authentication to another, it’s referred to as PPP fallback.
R2(config)#int s0/0
R2(config-if)#ip add 192.168.1.2 255.255.255.0
R2(config-if)#clock rate 64000
R2(config-if)#no shut
R2(config-if)#encap ppp
R2(config-if)#ppp authentication chap pap
R2(config-if)#exi
R2(config)#username R1 password howtonetwork
R2(config)#exit
R2#sh int s0/0
Serial0/0 is up, line protocol is up
Hardware is GT96K Serial
Internet address is 192.168.1.2/24
MTU 1500 bytes, BW 1544 Kbit/sec, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open
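You can also watch the CHAP handshake itself with the debug ppp authentication command. On a working link you should see output along the following lines (timestamps and lengths will differ in your lab):
R1#debug ppp authentication
PPP authentication debugging is on
*Mar 1 00:31:12.435: Se0/0 CHAP: O CHALLENGE id 1 len 23 from "R1"
*Mar 1 00:31:12.447: Se0/0 CHAP: I RESPONSE id 1 len 23 from "R2"
*Mar 1 00:31:12.451: Se0/0 CHAP: O SUCCESS id 1 len 4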
[END OF MINI-LAB]
There are other ways to configure CHAP and PAP authentication; however, this is the easiest.
PPPoE
The Point-to-Point Protocol over Ethernet is a network protocol that encapsulates PPP frames inside Ethernet frames. It is often used in the context of DSL connections as a solution for tunneling packets over the DSL connection to the ISP’s IP network. This solution allows authentication, encryption, and traffic compression, which makes it a very attractive option from an ISP’s point of view.
The PPP session authenticates the user based on a username and password via the PAP or CHAP protocols. PPPoE has two distinct stages:
- PPPoE Discovery
- PPP Session
The PPPoE Discovery stage allows the MAC addresses of the endpoints to be known before the PPP control packets are exchanged so a connection can be established over Ethernet. After the MAC addresses of the two peers are known and the session has been established, the Session stage will start.
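Configuring a PPPoE client on a Cisco router is outside the scope of this chapter, but a minimal sketch helps tie the two stages to the configuration. The interface numbers, username, and password below are placeholders:
R1(config)#interface FastEthernet0/0
R1(config-if)#no ip address
R1(config-if)#pppoe enable
R1(config-if)#pppoe-client dial-pool-number 1
R1(config-if)#interface Dialer1
R1(config-if)#ip address negotiated
R1(config-if)#ip mtu 1492
R1(config-if)#encapsulation ppp
R1(config-if)#dialer pool 1
R1(config-if)#ppp chap hostname user@isp.example
R1(config-if)#ppp chap password MyPassword
The ip mtu 1492 command is worth noting: the PPPoE header consumes 8 bytes of the 1,500-byte Ethernet payload, so the IP MTU must be reduced to match.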
Metro Ethernet
Metro Ethernet is a technology that uses Carrier Ethernet technology in MANs (Metropolitan Area Networks) to offer connectivity services. Common Metro Ethernet attributes include:
- Cost-effective connectivity
- Reliable connection
- Increased scalability
- Flexible bandwidth management
Metro Ethernet can connect LANs to a WAN or to the Internet. Multisite organizations can use this technology to connect their branches to an intranet or to the Internet.
A typical Metro Ethernet system has a star network or a mesh network topology, with individual nodes connected through fiber-optic media. Using Ethernet in a MAN environment is relatively inexpensive compared with pure SDH (Synchronous Digital Hierarchy) or MPLS (Multi-Protocol Label Switching) systems of similar bandwidth.
Ethernet on the MAN can be used with the following technologies:
- Simple Ethernet (cheapest)
- Ethernet over SDH
- Ethernet over MPLS (most reliable)
- Ethernet over DWDM (dense wavelength division multiplexing)
MPLS
Multi-Protocol Label Switching provides a mechanism for forwarding packets for any network protocol. It was originally developed in the late 1990s to provide faster packet forwarding for IP routers. Since then, its capabilities have expanded massively, for example, to support service creation (VPNs), traffic engineering, network convergence, and increased resiliency.
MPLS is now the de facto standard for many carrier and service provider networks and its deployment scenarios are continuing to grow.
Traditional IP networks are connectionless, meaning that when a packet is received, the router determines the next hop using the destination IP address on the packet together with information from its own forwarding table. The router’s forwarding tables contain information on the network topology, obtained via an IP routing protocol (such as OSPF, IS-IS, BGP, or RIP) or static configuration, which keeps that information synchronized with changes in the network.
MPLS similarly uses IP addresses, either IPv4 or IPv6, to identify endpoints and intermediate switches and routers. This makes MPLS networks IP-compatible and easily integrated with traditional IP networks. However, unlike traditional IP, MPLS flows are connection-oriented and packets are routed along preconfigured Label Switched Paths (LSPs).
MPLS works by tagging the traffic (packets) with an identifier (a label) to distinguish the LSPs. When a packet is received, the router uses this label (and sometimes also the link over which it was received) to identify the LSP. It then looks up the LSP in its own forwarding table to determine the best link over which to forward the packet and the label to use on this next hop.
A different label is used for each hop, and it is chosen by the router or switch performing the forwarding operation. This allows the use of very fast and simple forwarding engines, which are often implemented in hardware.
Ingress routers at the edge of the MPLS network classify each packet potentially using a range of attributes, not just the packet’s destination address, to determine which LSP to use. Inside the network, the MPLS routers use only the LSP labels to forward the packet to the egress router.
FIG 5.10 – MPLS VPN topology
In Figure 5.10 above, the top-left CE (Customer Edge) Router advertises Customer A Site 1 routes to the left-side PE (Provider Edge) Router, which injects the routes into the MPLS network, assigning each of them an MPLS label. When the packet arrives at the first P (Provider) Router, the label is switched to a locally assigned label for that specific route and then the packet is forwarded to the next P Router. At this point the procedure is exactly the same: the label is switched and then forwarded to the outbound (right-side) PE Router. The PE Router strips the label from the packet and forwards the pure IP packet to the top-right CE Router.
This way, routes from Site 1 are advertised to Site 2 and the switching in the ISP network is achieved based on the MPLS label, thus accomplishing the process faster than standard IP routing. Each customer’s traffic is also tagged with specific RD (Route Distinguisher) values associated with that specific customer, so the end-to-end layer 3 path is often referred to as an MPLS VPN. Customers can even advertise prefixes from overlapping ranges to the ISP MPLS cloud. They will be treated differently, however, because of the associated RD value.
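MPLS configuration is beyond the CCNA syllabus, but for context, enabling basic label switching on a provider router takes very little. A minimal sketch, assuming CEF is enabled and an IGP such as OSPF is already running:
P1(config)#ip cef
P1(config)#mpls label protocol ldp
P1(config)#interface GigabitEthernet0/0
P1(config-if)#mpls ip
The mpls ip command enables label switching on the interface, and LDP (Label Distribution Protocol) then advertises the locally assigned labels to neighboring routers.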
VSAT
A very small aperture terminal (VSAT) is a small telecommunication earth station that transmits and receives real-time data via satellite. The VSAT transmits signals to orbital satellites, which relay the information to other systems in other locations around the globe.
The CPE (Customer Premises Equipment) for VSAT users is generally a box that acts as an interface between the local network and the external antenna or satellite dish transceiver. The antenna transmits data to the satellite, which relays it to earth stations at other locations. The satellite acts as a hub for the system and receives signals from each earth station in a star topology.
VSAT data rates are typically between 56 Kbps and a few Mbps. VSATs are generally used to transmit:
- Narrowband data, such as point of sale (credit card transactions)
- Broadband data, for the provision of Internet satellite access in remote areas requiring voice or video (or both)
VSATs are also used for maritime communications that need to be mobile, as well as on-the-move communications (using phased array antennas).
FIG 5.11 – VSAT connections
Cellular Networks
Cellular networks work via signals that carry voice, text, and digital data. These signals are transmitted via radio waves from one device to another. The data is transmitted through a global network of transmitters and receivers.
The cellular design in these kinds of networks involves dividing the overall structure into a multitude of overlapping geographical areas called cells. These cells overlap to ensure continuous transmission for roaming users. Each cell is served by a base station, which functions as a hub for the specific area. RF (radio frequency) signals are transmitted by an individual phone and received by the base station. The base station transmits the signal to another base station or directly to the receiving phone.
The entire cellular system is controlled by a mobile switching center (MSC), which coordinates the actions of all base stations, providing overall control and acting as a switch and connection to external networks. As such, it has a variety of communication links into it that include fiber-optic links as well as some microwave links and some copper wire cables. The MSC might contain many backups and duplicate circuits to ensure that it does not fail.
When a mobile phone is turned on, it needs to be able to communicate with the cellular telecommunications network. Even if a call is not made instantly, the network needs to be able to communicate with the mobile phone to know where it is. In this way, the network can route any calls through the relevant base station, as the network would soon be overloaded if the notification of an incoming call had to be sent via several base stations.
There are a variety of tasks that need to be undertaken when a phone is turned on. This can be seen in the few seconds it takes before the phone is ready for use after turning it on. Part of this process is the software start-up for the phone, but a majority of it involves the registration process with the cellular network. There are several aspects to the registration: first, it must make contact with the base station; and second, the mobile phone has to be registered to allow it to have access to and use the network.
In order to make contact with the base station, the mobile phone uses a paging or control channel. The name of this channel and the exact way in which it works will vary from one cellular standard to the next, but it is a channel that the mobile phone can access to indicate its presence. The message sent is often called the attach message. After this has been achieved, the mobile phone needs to register with the cellular network and to be accepted into it.
It is necessary to have a register or database of users allowed to register with a given network. With mobile phones often being able to access all the channels available in a country, methods of ensuring that the mobile phone registers with the correct network and that its account is valid are required. Additionally, it is required for billing purposes. To achieve this, an entity in the network often known as the Authentication Center (AuC) is used. The network and the mobile phone communicate, with numbers giving the identity of the subscriber. Next, the user’s information is checked to provide authentication and encryption parameters that verify the user’s identity and ensure the confidentiality of each call, protecting users and network operators from fraud.
Once accepted into the network, two further registers are normally required—the Home Location Register (HLR) and the Visitor’s Location Register (VLR). These two registers are required to keep track of the mobile phone so that the network knows where it is at any time and that calls can be routed to the correct base station or general area of the network. These registers are used to store the last known location of the mobile phone. Thus, at registration, the register is updated and then periodically the mobile phone updates its position.
When the mobile phone is switched off, it sends a detach message. This informs the network that it is switching off and enables the network to update the last known position of the mobile phone.
Based on their capabilities in terms of data transmissions, cellular networks are classified as follows:
- 2G networks (GSM and CDMA) – limited data rate
- HSPA+ (High Speed Packet Access) – download rates up to 84 Mbps and upload rates of up to 22 Mbps
- LTE (Long Term Evolution) – download rates up to 300 Mbps and upload rates up to 75 Mbps
- 4G networks – transmission rates that exceed 100 Mbps
T1/E1
T1/E1 are specifications for telecommunications standards. They work using time-division multiplexing (TDM), carrying multiple channels over a single physical line by serving each channel in a different time slot.
The T1 standard offers a data rate of 1.544 Mbps and it contains 24 digital channels. The E1 standard is similar to the T1 standard, except that it offers a data rate of 2.048 Mbps and it can serve up to 32 channels.
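The T1 figure follows directly from the channel math: 24 channels × 64 Kbps = 1,536 Kbps, plus 8 Kbps of framing overhead, gives 1.544 Mbps. Likewise, E1 is 32 channels × 64 Kbps = 2.048 Mbps.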
T1 and E1 connections are specific to different geographical regions. T1 is used in North America, Japan, and South Korea, while E1 is used in Europe.
T3 and E3 standards offer higher bandwidth than the T1 and E1 standards. T3 connections offer around 44 Mbps, while E3 connections offer a total line rate of around 34 Mbps.
ISDN
Integrated Services Digital Network is a set of standards for digital transmission over ordinary telephone copper wire as well as over other media. ISDN is not used very much nowadays, as it has been replaced by technologies like DSL and cable modems, which will be described in the next few sections.
ISDN offers two levels of service:
- BRI (Basic Rate Interface)
- PRI (Primary Rate Interface)
Both rates include a number of B-channels and D-channels. The difference between the two types of channels is that B-channels are used to carry data, voice, and other services, while D-channels are used for control and signaling.
BRI is the ISDN service most people use to connect to the Internet. An ISDN BRI connection supports two 64 Kbps B-channels and one 16 Kbps D-channel over a standard phone line; thus, a BRI user can have up to 128 Kbps service. BRI is often called 2B+D, referring to its two B-channels and one D-channel. The D-channel on a BRI line can even support low-speed (9.6 Kbps) X.25 data; however, this is not a very popular application in the United States.
ISDN PRI service is used primarily by large organizations with intensive communication needs. An ISDN PRI connection supports twenty-three 64 Kbps B-channels and one 64 Kbps D-channel (or 23B+D) over a high-speed DS1 (or T1) circuit. The European PRI configuration is slightly different, supporting 30B+D. These services are illustrated in Figure 5.12 below:
FIG 5.12 – ISDN, BRI, and PRI
ISDN in concept is the integration of analog or voice data together with digital data over the same network. Although ISDN integrates these services on a medium designed for analog transmission, broadband ISDN (BISDN) is intended to extend the integration of both services throughout the rest of the end-to-end path using fiber-optic and radio media. Broadband ISDN encompasses Frame Relay service for high-speed data that can be sent in large bursts, the Fiber Distributed-Data Interface (FDDI), and the Synchronous Optical Network (SONET).
DSL
Digital Subscriber Line is a type of high-speed Internet access. DSL Internet access is delivered across the telephone network backbone. Not all phone companies offer DSL service in every residential area, so even if phone service is available, it does not necessarily mean that DSL will also be available.
Compared with a dial-up connection, where a modem is used to connect to the Internet over the phone line, DSL is always on. There is no need to dial in or disconnect. DSL is generally much faster than a dial-up connection, which is limited to 56 Kbps.
DSL speed and bandwidth are usually somewhat lower than cable, which is available wherever cable TV service is available; however, cable Internet access is a shared medium. What this means is that if cable is available in your area, all users who are connected to the cable hub share a fixed amount of bandwidth. The more devices that are connected, the less bandwidth each user gets. With DSL, each user has a dedicated circuit and doesn’t share bandwidth on that circuit with any other users.
Even though with DSL you don’t have to share access with other users, the closer your home is located to a telephone company’s central office switch, the better. This is a physical building where the local switching equipment is located. Distance to the switch is a determining factor in whether or not DSL service is available and what speed will be available.
DSL service introduces interference on your phone line. It is necessary to eliminate this noise using a filter supplied by your DSL provider, as demonstrated in Figure 5.13 below:
FIG 5.13 – Home connection to the DSL circuit using a filter
The most common DSL types are detailed in the following sections.
ADSL
The variation called Asymmetric Digital Subscriber Line is the form of DSL that is most familiar to home and small business users. ADSL is called asymmetric because most of its two-way or duplex bandwidth is devoted to the downstream direction, sending data to the user. Only a small portion of bandwidth is available for upstream or user-interaction messages. However, most Internet and especially graphics- or multimedia-intensive web data need lots of downstream bandwidth, but user requests and responses are small and require little upstream bandwidth.
Using ADSL, up to 6.1 Mbps of data can be sent downstream and up to 640 Kbps upstream. The high downstream bandwidth means that your telephone line will be able to bring motion video, audio, and 3-D images to your computer or hooked-in TV set. In addition, a small portion of the downstream bandwidth can be devoted to voice rather than data, and you can hold phone conversations without requiring a separate line.
HDSL
The earliest variation of DSL to be widely used was High bit-rate DSL, which is used for wideband digital transmission within a corporate site and between the telephone company and a customer. The main characteristic of HDSL is that it is symmetrical: an equal amount of bandwidth is available in both directions. For this reason, the maximum data rate is lower than for ADSL. HDSL can carry as much on a single wire of twisted-pair as can be carried on a T1 line in North America or an E1 line in Europe (2,320 Kbps).
IDSL
ISDN DSL is somewhat of a misnomer because it’s really closer to ISDN data rates and service at 128 Kbps than to the much higher rates of DSL.
RADSL
Rate-Adaptive DSL is an ADSL technology in which software is able to determine the rate at which signals can be transmitted on a given customer phone line and adjust the delivery rate accordingly. Westell’s FlexCap2 system uses RADSL to deliver data from 640 Kbps to 2.2 Mbps downstream and from 272 Kbps to 1.088 Mbps upstream over an existing line.
VDSL
Very high data rate DSL provides much higher data rates over relatively short distances (between 51 and 55 Mbps over lines up to 1,000 feet or 300 meters in length).
Cable
The cable TV network can be used to connect a local computer or network to the Internet, competing directly with DSL technology.
This type of network often uses both fiber optics and coaxial cables. The connection between the cable TV company and the distribution points is made using fiber optics, while the connection between the distribution points and the users’ homes is made using coaxial cables. Each distribution node typically serves between 500 and 2,000 clients. In the coaxial network, amplifiers can be used to regenerate the signal and extend the maximum length of the coaxial network. As a result, cable networks do not suffer from the same problems that DSL networks do in terms of electromagnetic interference and cable length issues.
The most common system used by cable TV companies to offer Internet access is DOCSIS (Data Over Cable Services Interface Specification). The latest DOCSIS version is 3.1 and it allows some interesting features, including channel bonding.
The coaxial cable used by cable TV allows the transmission of several channels using different frequencies. Typically, each channel is 6 MHz wide and a whole channel is used for downstream transmissions, with a maximum transfer rate of 42.88 Mbps. For upstream transmissions, a 6.4 MHz channel is used in DOCSIS 3.0, which offers a maximum transfer rate of 30.72 Mbps.
DOCSIS 1.0 uses time-division multiple access (TDMA), while DOCSIS 2.0 and 3.0 also allow the use of code-division multiple access (CDMA). DOCSIS 3.0 also allows the use of more than one channel at the same time, a feature called channel bonding. This increases the transfer rates; for example, if four channels are used for downstream transmissions, the maximum bandwidth can reach 171.52 Mbps.
The actual transfer rate achieved using cable TV networks is related to the number of users connected to the optical node at the same time, as this system is based on the fact that not all users will be accessing the Internet at the same time.
Multilink PPP
Multilink PPP (MLPPP), a variant of PPP, allows you to bundle multiple physical point-to-point links together to act as one logical link, as demonstrated in Figure 5.14 below:
FIG 5.14 – Multilink PPP
The link bundle is configured between two nodes and provides load balancing over two or more links. An example of this is a company bundling two leased lines in order to aggregate the link speed. MLPPP fragments frames, which are then sent over each link (with headers) at the same time. If there are three links, then each frame is fragmented into three parts, and so on. A header on each fragment allows reassembly at the other end.
There is some level of redundancy inasmuch as if one of the links fails, the others will pick up the load. Routing protocols treat the MLPPP link as a single adjacency.
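A minimal MLPPP configuration sketch is shown below; interface numbers and addressing are placeholders, and the far-end router mirrors the same bundle configuration. Note that the IP address lives on the logical Multilink interface rather than on the physical Serial interfaces:
R1(config)#interface Multilink1
R1(config-if)#ip address 192.168.1.1 255.255.255.0
R1(config-if)#ppp multilink
R1(config-if)#ppp multilink group 1
R1(config-if)#interface Serial0/0
R1(config-if)#no ip address
R1(config-if)#encapsulation ppp
R1(config-if)#ppp multilink
R1(config-if)#ppp multilink group 1
R1(config-if)#interface Serial0/1
R1(config-if)#no ip address
R1(config-if)#encapsulation ppp
R1(config-if)#ppp multilink
R1(config-if)#ppp multilink group 1
You can check the state of the bundle and its member links with the show ppp multilink command.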
Generic Routing Encapsulation
GRE was developed by Cisco as a means of encapsulating a large number of network protocols inside a virtual point-to-point link over IP. An example of this is where Site A and Site B both use private IP addressing and want to route packets over the Internet. GRE will allow them to encapsulate the non-routable IP traffic inside a GRE packet and then route it. GRE can also support multicast traffic.
Figure 5.15 below is a screenshot of a GRE packet capture that shows the concept of a packet wrapped inside another packet:
FIG 5.15 – GRE packet capture
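Configuring a GRE tunnel takes only a few commands. Below is a minimal sketch for Site A; the addresses are placeholders, and Site B mirrors the configuration with the source and destination reversed:
SiteA(config)#interface Tunnel0
SiteA(config-if)#ip address 172.16.1.1 255.255.255.252
SiteA(config-if)#tunnel source 203.0.113.1
SiteA(config-if)#tunnel destination 198.51.100.1
GRE is the default tunnel mode on Cisco routers, so tunnel mode gre ip does not need to be entered explicitly. Once the tunnel is up, the private networks at each site can be routed across it as if it were a point-to-point link.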
Cloud Computing
Cloud computing has moved from a novel idea to a core internetworking concept. Any company that has not already moved to the cloud is certainly contemplating it. All of us already use the cloud when we access social media, e-mail, and streaming services such as YouTube and Netflix.
I strongly recommend that you make Cloud Computing certification a top priority if you want to remain employable. Consider CompTIA Cloud certifications, and then Amazon or Google.
Cloud computing is defined in the NIST 800-145 document as follows:
“…a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
The characteristics of cloud computing defined in the NIST document above are:
- On-demand self-service (create and remove as needed)
- Broad network access (accessible from tablets, phones, PCs, etc.)
- Resource pooling (can be used by others)
- Rapid elasticity (can easily scale [e.g., add more RAM with a click of a button])
- Measured service (provision is monitored, usually for billing purposes)
There are three models of cloud computing defined in the NIST document:
- Software as a Service (SaaS) – cloud-hosted software/application licensed on a subscription basis, such as Dropbox.
- Platform as a Service (PaaS) – platform that allows customers to run and develop applications as needed.
- Infrastructure as a Service (IaaS) – moving servers, storage, and networking functions to the cloud.
Figure 5.16 below demonstrates the three models:
FIG 5.16 – The three models of cloud computing
Cloud deployment models defined in the NIST document include private and public, as well as community and hybrid, which will not be discussed here.
A private cloud is for the exclusive use of the organization that owns it. It can be operated by the organization or by a third party. A public cloud is provisioned to be used by the general public, such as Amazon Web Services (AWS). The fact that it is open to public access does not mean that everyone can see or use your provisioned service, but rather that it is available for anybody to use.
Impact of Cloud Resources on Enterprise Networks
To understand how cloud computing will affect enterprise networks, a good example would be a bank with several branch offices. The standard model is for each branch to store and access data locally, as illustrated in Figure 5.17 below:
FIG 5.17 – Each branch stores and accesses data locally
If the bank wanted to move to a cloud computing model, it could use one of the following options:
Scenario 1: Self-hosted Private Cloud
The HQ hosts its own datacenter. All file storage is moved to a central location. All branch offices must connect to the HQ over the WAN. This is illustrated in Figure 5.18 below:
FIG 5.18 – Self-hosted private cloud
Scenario 2: Third-party-hosted Private Cloud
This is still a private cloud but it is hosted by a third party. The link to the private cloud could be over the Internet; however, this may cause issues such as security, capacity, and QoS. Another connection option is a private WAN such as MPLS, though this could also cause issues if the bank changed their service provider.
Scenario 3: Intercloud Exchange
The bank connects to the intercloud exchange, which in turn has connections to several cloud providers. This is illustrated in Figure 5.19 below:
FIG 5.19 – Intercloud exchange
In Figure 5.20 below, more complexity and possible connectivity issues have been added to the third-party cloud option.
FIG 5.20 – Complex cloud connection
Virtualization
Although cloud computing is not necessary in order to run virtualization, it is useful in the context of understanding pooled resources.
Examples of virtualization software are VMware and VirtualBox. Normally, PCs and laptops come with an operating system pre-installed that has access to all the available hardware (e.g., CPU, RAM, hard disk, etc.). If users wanted to use another operating system, they would have to buy another PC or partition their hard drive to dual boot. Both options have several drawbacks.
With virtualization, users can install one or several operating systems on a host. The only limit is the resources on their device (see Figure 5.21 below).
FIG 5.21 – Virtualization
Virtualization uses the terms guest and host. The guest is also referred to as the virtual machine, whereas the host is the physical machine. Users can use one physical machine to run several virtual machines (VMs) that share hardware resources.
High-end devices do not necessarily operate in the same way as a home PC when virtualization software is run. Enterprise software such as VMware ESXi can be installed directly onto a server without first installing an OS (such as Windows Server).
The software that allows a host to be split into virtual machines is known as a hypervisor, which creates, runs, and manages VMs on a host. Examples for personal use include VMware Workstation, VirtualBox, and Parallels Desktop for Mac. Enterprise versions include VMware ESXi, Microsoft Hyper-V, and Citrix XenServer.
Virtual Network Functions and Services
Virtual network functions allow devices such as firewalls, routers, and switches to be run in software instead of hardware, which has always been the tradition. Cisco has a few such solutions on the market already:
- Virtual switch (e.g., Nexus 1000V)
- Virtual router (e.g., Cisco CSR 1000V)
- Virtual firewall (e.g., Cisco ASAv)
Virtual services allow users to access various services when they access a virtual machine over the Internet, which means that IP addressing has been allocated and probably NAT as well. DNS will also be working in the background, allowing users to connect to devices using FQDNs (fully qualified domain names).
Virtual Desktops and Servers
Virtualization allows you to take multiple physical devices and move them to a single physical device that is logically divided into smaller virtual domains (see Figure 5.22 below). In other words, it allows you to create a software environment that emulates the hardware that used to be there before. The single device that will host all those virtual servers will have many resources available, in particular the following:
- CPU capacity
- Memory
- Disk space
- Bandwidth
FIG 5.22 – Network virtualization
Virtualization involves having a single physical device on top of which you use some virtualization software that is able to separate virtual machines inside the physical device. The virtualization software will allocate a certain amount of disk space, memory, and CPU capacity to each virtual machine (VM) defined inside. If you wanted to build a new server, you would just carve out a new section of the physical device to create another virtual operating system and allocate the necessary resources, making it act and feel exactly as if it were a physical device (see Figure 5.23 below).
The software that makes this happen is called a virtual machine manager or a hypervisor (which is more than a supervisor). The hypervisor has the following responsibilities:
- Manages all the virtual systems
- Manages physical hardware resources
- Manages the VM relationships to the hardware components inside the physical server
- Bridges the virtual world to the physical world
- Maintains separation between virtual machines when you don’t want them to communicate with each other
Note: It is very important that developers of the hypervisor software make sure that it has proper security features in place to restrict visibility and access between all VMs, even though they are sitting on the same physical device.
FIG 5.23 – Virtualization components and hypervisor
There are two types of hypervisors:
- Type 1 – Bare metal machine managers: With this type of virtual machine manager, you purchase a big server and simply load the VM software on the raw hardware. There is no underlying operating system involved and nothing else that you have to think about from an OS perspective. You simply load the hypervisor (e.g., VMware ESXi or Microsoft Hyper-V), which is the actual OS. This hypervisor type is often seen in very large enterprise server environments.
- Type 2 – Hypervisors that run on existing OS: This type of virtual machine manager runs on top of Windows/Linux/Mac OS hosts and it is often used in desktop environments.
The hypervisor allows you to start all the virtual machines at one time, as well as to network between them by configuring how different systems can communicate across the network. This offers the system administrator a lot of power from both an OS and a networking perspective.
Regarding enterprise environments, it’s not about users running their virtual systems and servers on Windows or Linux platforms; instead, it’s about a bare metal installation. Because you will usually run tens or hundreds of servers on a single piece of hardware, that device needs to have a lot of resources allocated to it, including:
- Multi-core CPU and multi-CPU sockets
- Large memory capacities (usually above 128GB, compared to 2 to 4GB used in desktop environments)
- Massive amounts of storage, internal or network-attached storage (NAS)
These large resource requirements make sense because you are consolidating all the servers into a single physical machine. You used to have a data center that had hundreds of servers (physical devices) plugged in at the same time. Now you have taken them all away and moved them into a single physical device. This server consolidation offers the following benefits:
- Saves a lot of room in the data center
- Increases flexibility on what you can do with the hardware
- Lowers costs on hardware, electricity, cooling, etc., both from a CAPEX (initial investment) and an OPEX (recurring operational and maintenance costs) perspective
Virtualization also affords a number of advantages from a management perspective:
- Fast deployment: You don’t have to buy a new computer, load an operating system, plug it into the network, find a place in the rack for it, and do all the administrative tasks necessary with a physical server. Using virtualization, you can build an OS in a matter of minutes with the VM manager software, which includes an IP address and pre-built software that you might have configured as a template.
- Managing the load across servers: If one particular server is very busy during a particular time of year, you can allocate additional memory and disk space during that time. As other servers become utilized more often, you can allocate the resources in other directions. Unlike using a physical server, where you would normally have to turn off the machine, open it up, and physically install memory chips to upgrade it, in a virtual environment, you don’t have to worry about these time-consuming tasks. If you need more disk space or memory, you can increase the virtual resources with just a few clicks from the hypervisor. Virtualization offers many advantages and this is the main reason virtual servers and networks have become so popular in modern data centers.
Virtual Switches
As with real servers, the virtual machines managed by the hypervisor need to communicate with each other and with the outside world to accomplish different tasks (e.g., an application server communicating with a database server). This leads to the concept of virtualizing networking devices, in addition to virtualizing desktops and servers as detailed in the previous section (see Figure 5.24 below).
Before moving to the virtual world, servers and desktops were connected to networks composed of enterprise switches, firewalls, routers, and other devices that offered necessary functionality and features, including redundancy features. Now that servers and desktops have moved to virtual worlds, network devices also have to migrate to the virtual environment to provide similar functionality. This is an important consideration when making the change from the physical world to the virtual world.
FIG 5.24 – Virtualization of Network Devices
Network virtualization is often almost as important as the actual server virtualization. When migrating from a physical to a virtual network infrastructure, a number of challenges must be taken into consideration:
- Integration with the outside world: how many NICs will the physical hosting machine have?
- How will the cumulative bandwidth from all the servers in the physical world be transposed to the virtual world and be accommodated with a limited number of Ethernet connections (sometimes just one)?
- Will the throughput offered by the physical server be enough to properly serve the virtualized servers running on the system?
- How will network redundancy be built into the virtual environment (multiple network connections into the VMs)?
The considerations presented above become very important in terms of uptime and availability, especially in large data centers that host critical business applications. Considering that network virtualization eliminates the need for a dedicated connection per server, everything should now be accomplished in software, including assigning IP addresses, VLANs, and other specific configuration. This can become even more difficult to manage because you cannot physically touch the network equipment or trace the cabling to and from the servers. All of these functionalities are fully accomplished using hypervisor software.
By virtualizing the network layer, you not only transfer all the functionality to the virtual world but also obtain extra features. Basic physical switches don’t have built-in functionality like redundancy, load balancing, or QoS. These features can be easily implemented and configured in a virtual environment because everything is done in software, and the virtual system manufacturer might implement extra tweaks so you can manage certain applications to perform at a higher priority than others. For example, you can use integrated load balancing hypervisor functionality to balance the traffic between multiple VM Web servers.
Virtualizing network components offers two major advantages over using physical devices:
- Cost savings
- Centralized control
Many virtual systems also have some basic integrated security features, perhaps some firewall functionalities built right into the virtualization software. An important note is that third-party providers are starting to create virtual firewalls and Intrusion Prevention Systems (IPS) that can be loaded into these virtual environments to provide exactly the same security posture in the virtual world as you had in the physical world.
Note: Virtual network devices can be part of the hypervisor system or they can be dedicated virtual machines that can be loaded just like any other VM server.
Network as a Service
After virtualizing desktops, servers, and network devices, the next step is moving the entire network infrastructure into the “cloud,” where it operates as a Network as a Service (NaaS). If things become too complicated within the network and you don’t have the expertise to build and maintain it, you can outsource this process to another company and consume it as a service with all the required functionality (usually by purchasing a subscription); the network is then part of the cloud.
As virtualization software has become more popular, third-party providers have started to offer virtualization inside the cloud, with the customer not having anything at their facility. This implies that all the applications, platforms, and the network are moved into the cloud and all the IT functions of the company are virtualized so that everything is running in a completely separate facility. The network and everything associated with the management of the network then becomes invisible to the customer, who simply uses a single link that connects the local facility to the cloud without worrying about any network configuration aspects. In this case, everything is done separately because the network is running as a service at a third-party facility.
When offering NaaS in the cloud, any changes that occur within the network are invisible to the customer. The customer has a single connection to the cloud and does not care how the networking aspect works once the information is sent to the cloud, as the ISP is responsible for all of the virtualization services. This offers great flexibility in situations in which the ISP wants to take all the servers and move them into a data center that has much more capacity and availability. This is simply done by picking up the virtual system and deploying it almost immediately to a new physical location that may be geographically dispersed from the initial one, transparently and without the customer being affected in any way. Ultimately, the customer is not even interested in such details, as the main concern is that the service provided by the network and its applications are running as expected.
There might be many reasons why you would want to take your network and move it into the cloud, running it as a service. One situation might be that you have an important application that is used by thousands of people, which requires a lot of resources and bandwidth to operate. Instead of having all the networking and communication resources at your facility, including large network pipes and very expensive connections, you can simply put this into the cloud and have it managed by third-party providers. These service providers already have high-capacity connections to the Internet so you don’t have to spend the money on the bandwidth and maintenance services of these connections.
Complete network virtualization offers another interesting advantage, which is commonly referred to as a “follow the sun” service. This is a concept that is based on the fact that servers can be relocated relatively quickly, and based on their geographical region, service providers can optimize resource utilization and response times (most of the traffic for certain applications is done during the day, which happens at different intervals across the globe).
Another advantage of network virtualization is the ease of expanding and contracting how many resources you are using. If your applications are used by millions of people on a particular day or time period (e.g., tax applications), you can easily allocate more bandwidth, disk space, or memory with just a few clicks and suddenly increase the application’s capacity. When the busy period has passed and you don’t need all the allocated resources, there is no need to pay for them, so you can decrease certain parameters (e.g., network throughput or CPU cycles) to a level that is more reasonable for what the application is doing, again with just a few clicks.
If a customer uses NaaS and someday decides to move to a different location, it makes absolutely no difference how the applications will perform because they are hosted and managed by the service provider somewhere inside the cloud, which can be accessed from anywhere. Running NaaS inside the cloud provides a lot of functionality, which can be a perfect fit for certain business applications and services.
On-Site versus Off-Site Virtualization
Virtualization technology offers many choices regarding where you manage and maintain the virtualized environment. You might have everything on your premises or you might choose to install them in a different location, off-site.
In an on-site configuration, you own and manage the infrastructure within your premises. You are responsible for building and maintaining it, and if there are any issues associated with the hosting aspect, you are responsible for solving them. There are a number of advantages to hosting the virtualized environment on-site:
- You have control over what happens. If anything needs to be changed or moved, you have complete control over every modification on the hosting devices and connections.
- You also have control over possible resource upgrades on the devices, including memory, disk space, CPU capacity, and bandwidth.
- You have complete security over the entire infrastructure. You can install the equipment in a locked room and limit access to the physical servers, which is something that you usually don’t have available if the virtual environment is hosted in a remote location (off-site).
There are also some disadvantages to the on-site approach:
- It is more costly than hosting the equipment with a third party because you have to purchase the servers, racks, connections, and operating systems, and you have to provide a controlled environment (power, cooling, and physical space). All of these involve both CAPEX (initial expenses) and OPEX (recurring expenses) that you need to think about.
- You need a networking infrastructure that includes enterprise-grade switches, routers, and firewalls, with redundancy and security features built in.
- All of the factors above make the infrastructure hard to upgrade. You have to consider how much room is available in the racks and purchase new equipment, and rapid changes are difficult because you are dealing with a number of physical devices that offer limited performance.
In an off-site environment, everything is hosted in the cloud. You don't have to worry about where the systems sit in the data center because they don't exist at your facility; all the applications, servers, and operating systems are somewhere else, and you don't necessarily care where. This brings the following main advantages:
- You don’t have any kind of infrastructure costs: no servers, no cooling, or anything that requires an initial investment.
- The management and maintenance of the entire infrastructure are handled by a third-party service provider, so you don't need a lot of staff to manage the devices and keep them operating properly.
- The infrastructure can be located anywhere in the world (single hosting location or multiple hosting locations).
- Many service providers offer huge-capacity virtual environments, so if you need more resources (e.g., disk space, memory, or bandwidth), the ISP can provide them with minimal effort.
Hosting the infrastructure off-site also has some disadvantages:
- All of the customer data is stored at a different facility, with no physical access to it. In cases where the data is extremely sensitive, having your virtualized environment somewhere in the cloud may not be the best option.
- Off-site hosting has some associated contractual limitations. It usually involves signing a long-term contract with the service provider, which offers limited flexibility for that duration. If your environment changes rapidly, you may need to renegotiate some of the contractual terms to avoid being locked into those limitations.
End of Chapter Questions
Please visit https://www.howtonetwork.com/ccnasimplified to take the free Chapter 5 exam.
Chapter 5 Labs
Lab 1: WAN Lab – Point-to-Point Protocol
The physical topology is shown in Figure 5.25 below:
FIG 5.25 – PPP lab
Lab Exercise
Your task is to configure the network in Figure 5.25 to allow full connectivity using PPP in a WAN. Please feel free to try the lab without following the Lab Walk-through section.
Text in Courier New font indicates commands that can be entered on the router.
Purpose
Not all networks run the default encapsulation of HDLC. Many companies use PPP, especially for ISDN connections. PPP is popular because, unlike HDLC, it supports authentication features such as PAP and CHAP.
Lab Objectives
- Use the IP addressing scheme depicted in Figure 5.25. Router A needs to have a clock rate on interface Serial 0/0: set this to 64000.
- Set Telnet access for the router to use the local login permissions for username banbury and the password ccna.
- Configure the enable password to be cisco.
- Configure PPP on the Serial interface to provide connectivity to the neighbor.
- Enable CHAP authentication.
- Configure a default route to allow full IP connectivity.
- Finally, test that the PPP link is up and working by sending a ping across the link.
Lab Walk-Through
- To set the IP addresses on an interface, you will need to do the following:
Router#config t
Router(config)#hostname RouterA
RouterA(config)#
RouterA(config)#interface Serial0/0
RouterA(config-if)#ip address 192.168.1.1 255.255.255.252
RouterA(config-if)#clock rate 64000
RouterA(config-if)#no shutdown
RouterA(config-if)#interface Loopback0
RouterA(config-if)#ip address 172.16.1.1 255.255.0.0
RouterA(config-if)#interface Loopback1
RouterA(config-if)#ip address 172.20.1.1 255.255.0.0
RouterA(config-if)#^Z
RouterA#
Router B:
Router#config t
Router(config)#hostname RouterB
RouterB(config)#
RouterB(config)#interface Serial0/0
RouterB(config-if)#ip address 192.168.1.2 255.255.255.252
RouterB(config-if)#no shutdown
RouterB(config-if)#interface Loopback0
RouterB(config-if)#ip address 172.30.1.1 255.255.0.0
RouterB(config-if)#interface Loopback1
RouterB(config-if)#ip address 172.31.1.1 255.255.0.0
RouterB(config-if)#^Z
RouterB#
To set the clock rate on a Serial interface (DCE connection only), you need to use the clock rate # command on the Serial interface, where # indicates the speed:
RouterA(config-if)#clock rate 64000
Ping across the Serial link now.
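At this point, both Serial interfaces are still running the default HDLC encapsulation, so a simple ping verifies basic connectivity before PPP is configured. A quick check from Router A:

RouterA#ping 192.168.1.2 – Should succeed with the default HDLC encapsulation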
- To set PPP CHAP authentication, you need to configure a username and password on each router. The username must exactly match the hostname of the remote (calling) router, including case, and the CHAP password must be identical on both routers:
RouterA(config)#username RouterB password cisco
Router B:
RouterB(config)#username RouterA password cisco
- To set the enable password (stored as an encrypted enable secret), do the following:
RouterA(config)#enable secret cisco
Router B:
RouterB(config)#enable secret cisco
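The lab objectives also call for Telnet access using the local login database with the username banbury and the password ccna. A minimal sketch using the standard vty line commands (shown on Router A; repeat the same commands on Router B):

RouterA(config)#username banbury password ccna
RouterA(config)#line vty 0 4
RouterA(config-line)#login local – Authenticate Telnet users against the local username database
RouterA(config-line)#exit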
You now need to configure PPP as the WAN link for this lab.
- To enable PPP, you will need to do the following:
RouterA(config)#interface Serial0/0
RouterA(config-if)#encapsulation ppp
RouterA(config-if)#ppp authentication chap – Use CHAP to authenticate
Router B:
RouterB(config)#interface Serial0/0
RouterB(config-if)#encapsulation ppp
RouterB(config-if)#ppp authentication chap
- To configure a default route, there is one simple step (in configuration mode):
RouterA(config)#ip route 0.0.0.0 0.0.0.0 Serial0/0
Router B:
RouterB(config)#ip route 0.0.0.0 0.0.0.0 Serial0/0
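Optionally, you can verify that the static default route has been installed with the show ip route static command. Assuming the configuration above, you should see a candidate default entry (marked S*) for 0.0.0.0/0 pointing out Serial0/0:

RouterA#show ip route static – Expect an S* 0.0.0.0/0 entry via Serial0/0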
- To test the PPP connection, first check that the link is up using the show interface command:
RouterA#show interface Serial0/0
Serial0/0 is up, line protocol is up
Hardware is HD64570
Internet address is 192.168.1.1/30
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, Loopback not set
[output truncated]
Router B:
RouterB#show interface Serial0/0
Serial0/0 is up, line protocol is up
Hardware is HD64570
Internet address is 192.168.1.2/30
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, Loopback not set
LCP Open
Open: IPCP, CDPCP
Make sure that Serial 0/0 is up and the line protocol is up.
Next, ping the neighbor's Serial interface to confirm that traffic passes across the link:
RouterA#ping 192.168.1.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 28/31/32 ms
Router B:
RouterB#ping 192.168.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 28/31/32 ms
If everything is OK, you will receive five replies and a 100 percent success rate.
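To confirm that the default routes provide full IP connectivity, rather than just reachability across the directly connected /30, you can also ping one of the neighbor's loopback addresses, which are only reachable via the default route. For example, from Router A:

RouterA#ping 172.30.1.1 – RouterB's Loopback0, reached via the 0.0.0.0/0 route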
- To test the PPP negotiation, you can shut the Serial interface and then no shut it with the debug ppp authentication and debug ppp negotiation commands enabled (note that debug commands are entered in privileged EXEC mode, not configuration mode). You can see the CHAP challenge taking place and the line coming up as you read the debug output.
RouterB#debug ppp authentication
RouterB#debug ppp negotiation
RouterB#config t
RouterB(config)#interface s0/0
RouterB(config-if)#shut
RouterB(config-if)#
01:41:37: %LINK-5-CHANGED: Interface Serial0, changed state to administratively down
RouterB(config-if)#
01:41:37: Se0 IPCP: Remove link info for cef entry 192.168.1.1
01:41:37: Se0 IPCP: State is Closed
01:41:37: Se0 CDPCP: State is Closed
01:41:37: Se0 PPP: Phase is TERMINATING
01:41:37: Se0 LCP: State is Closed
01:41:37: Se0 PPP: Phase is DOWN
01:41:37: Se0 IPCP: Remove route to 192.168.1.1
RouterB(config-if)#
01:41:38: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to down
RouterB(config-if)#no shut
RouterB(config-if)#^Z
RouterB#
01:41:46: %SYS-5-CONFIG_I: Configured from console by console
01:41:46: %LINK-3-UPDOWN: Interface Serial0, changed state to up
01:41:46: Se0 PPP: Treating connection as a dedicated line
01:41:46: Se0 PPP: Phase is ESTABLISHING, Active Open
01:41:46: Se0 PPP: Authorization NOT required
01:41:46: Se0 LCP: O CONFREQ [Closed] id 184 len 15
01:41:46: Se0 LCP: AuthProto CHAP (0x0305C22305)
01:41:46: Se0 LCP: MagicNumber 0x093B9E12 (0x0506093B9E12)
01:41:48: Se0 LCP: State is Open
01:41:48: Se0 PPP: Phase is AUTHENTICATING, by both
01:41:48: Se0 CHAP: O CHALLENGE id 180 len 28 from RouterB
01:41:48: Se0 CHAP: I CHALLENGE id 180 len 28 from RouterA
01:41:48: Se0 PPP: Sent CHAP SENDAUTH Request to AAA
01:41:48: Se0 CHAP: I RESPONSE id 180 len 28 from RouterA
01:41:48: Se0 PPP: Phase is FORWARDING, Attempting Forward
01:41:48: Se0 PPP: Phase is AUTHENTICATING, Unauthenticated User
01:41:48: Se0 PPP: Sent CHAP LOGIN Request to AAA
01:41:48: Se0 PPP: Received SENDAUTH Response from AAA = PASS
01:41:48: Se0 CHAP: O RESPONSE id 180 len 28 from RouterB
01:41:48: Se0 PPP: Received LOGIN Response from AAA = PASS
01:41:48: Se0 PPP: Phase is FORWARDING, Attempting Forward
01:41:48: Se0 PPP: Phase is AUTHENTICATING, Authenticated User
01:41:48: Se0 CHAP: O SUCCESS id 180 len 4
01:41:48: Se0 CHAP: I SUCCESS id 180 len 4
01:41:48: Se0 PPP: Phase is UP
01:41:48: Se0 IPCP: O CONFREQ [Closed] id 2 len 10
01:41:48: Se0 IPCP: Address 192.168.1.2 (0x0306C0A80102)
01:41:48: Se0 IPCP: I CONFACK [ACKsent] id 2 len 10
01:41:48: Se0 IPCP: Address 192.168.1.2 (0x0306C0A80102)
01:41:48: Se0 IPCP: State is Open
01:41:48: Se0 IPCP: Install route to 192.168.1.1
01:41:48: Se0 IPCP: Add link info for cef entry 192.168.1.1
01:41:49: %LINEPROTO-5-UPDOWN: Line protocol on Interface Serial0, changed state to up
RouterB#un all
All possible debugging has been turned off
Show Runs
RouterA#show run
Building configuration…
Current configuration: 739 bytes
!
version 15.1
!
hostname RouterA
!
enable secret 5 $1$jjQo$YJXxLo.EZm9t6Sq4UYeCv0
!
username RouterB password cisco
!
ip subnet-zero
!
interface Loopback0
ip address 172.16.1.1 255.255.0.0
!
interface Loopback1
ip address 172.20.1.1 255.255.0.0
!
interface Serial0/0
ip address 192.168.1.1 255.255.255.252
encapsulation ppp
ppp authentication chap
clockrate 64000
!
ip classless
ip route 0.0.0.0 0.0.0.0 Serial0/0
!
end
RouterA#
—
RouterB#show run
Building configuration…
Current configuration: 721 bytes
!
version 15.1
!
hostname RouterB
!
enable secret 5 $1$HrXN$ThplDHEZdnCbbeA/Ie67E1
!
username RouterA password cisco
!
ip subnet-zero
!
interface Loopback0
ip address 172.30.1.1 255.255.0.0
!
interface Loopback1
ip address 172.31.1.1 255.255.0.0
!
interface Ethernet0
no ip address
shutdown
!
interface Serial0/0
ip address 192.168.1.2 255.255.255.252
encapsulation ppp
ppp authentication chap
!
ip classless
ip route 0.0.0.0 0.0.0.0 Serial0/0
no ip http server
!
end
RouterB#