Network automation has been given significant importance in the CCNA exam updates. This is no surprise: several networking industry trends are redefining the way network engineers build and manage modern infrastructures, and they demand a new set of skills for deploying and managing networks.
One such trend is the rise in network automation technologies. Big vendors such as Cisco, VMware, Juniper, and others have invested heavily in the development and maintenance of these technologies and the tools that are used to support them.
An important concept to understand is that network programmability seeks to significantly decrease human-to-machine interaction in order to fulfill the goals of automation. It speeds up service delivery by automating tasks that are typically performed manually, per device, via the Command Line Interface (CLI), and this is the biggest revolution happening in the networking industry at the moment. Using the DevOps and automation tools that are commonplace in software development, network engineers can now build far more efficient workflows. As network engineers, we will have to keep up to date with these trends and the associated skills if we are to remain relevant.
If you find this area of networking interesting, you may well want to consider the Cisco DevNet Associate certification.
What is Network Automation?
If you ask a typical network engineer what their day-to-day routine looks like, I can bet the answer would be mostly sitting in front of a computer, logging into devices, and running commands to perform particular tasks. Of course, I know this because it was my job for many years.
When the commands they use become repetitive, they type them in a text editor and copy-paste them to individual devices. Bear in mind that these engineers might manage hundreds or thousands of devices; every time they need to make a change to the whole network, they repeat these tasks for each device.
Network automation allows us to run these tasks in a more autonomous way by simplifying the process of network management. This prevents network engineers from having to write and manually execute the same commands multiple times. This also allows engineers to be more creative while providing customized solutions to organizations.
In the next couple of chapters, we will look at how and why these traditional networks are replaced by modern Controller Based Software-defined Networks (SDNs). We will also understand the technical details of how these SDNs work and how our routine applications will be affected by them. By the end, you will have a good grounding in Software-defined Networking.
Uses of Network Automation
Network automation is used for various common tasks, as follows:
- Network device provisioning – Provisioning is likely one of the first (and most mundane) tasks that comes to engineers’ minds when they consider a typical workday. Device provisioning means adding base configurations such as IP addresses, management passwords, logging servers, etc. Automating it means configuring network devices more efficiently, faster, and with fewer errors because human interaction with each device is decreased. Automated processes also streamline the replacement of faulty equipment.
- Device software management – Controlling the download and deployment of software updates is a relatively simple task, but it can be time-consuming and may be subject to human error. Many automation tools have been created to address this issue, but they can lag behind customer requirements. A simple network programmability solution for device software management is beneficial in many environments.
- Data collection and telemetry – A common aspect of effectively maintaining a network is collecting data from network devices and telemetry about network behavior. Even the way data is collected is changing because now many devices such as Cisco IOS-XE devices can push data (and stream) in real time in contrast to being polled every 5-15 minutes.
- Compliance checks – Network automation methods allow the unique ability to quickly audit large groups of network devices for configuration errors and automatically make appropriate corrections with built-in regression tests.
- Reporting – Automation decreases the manual effort required to extract information and coordinate data from disparate information sources for creating meaningful reports readable by humans.
- Troubleshooting – Network automation makes troubleshooting easier by making configuration analysis and real-time error checking very fast and simple, even with many network devices.
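To make the provisioning idea above concrete, here is a minimal Python sketch of template-driven configuration generation. The template lines, hostnames, and addresses are invented for illustration; a real workflow would pair this with a transport tool (such as an SSH library) to push the rendered configurations to the devices.

```python
# Sketch: generating base configurations for many devices from one template.
# All device names and IP addresses below are invented for illustration.

BASE_TEMPLATE = """hostname {hostname}
ip domain-name example.local
interface {mgmt_if}
 ip address {mgmt_ip} 255.255.255.0
 no shutdown
logging host {syslog}
"""

DEVICES = [
    {"hostname": "branch-sw-01", "mgmt_if": "Vlan99", "mgmt_ip": "10.99.0.11"},
    {"hostname": "branch-sw-02", "mgmt_if": "Vlan99", "mgmt_ip": "10.99.0.12"},
]

def render_configs(devices, syslog="10.99.0.250"):
    """Return a {hostname: config_text} mapping for every device."""
    return {d["hostname"]: BASE_TEMPLATE.format(syslog=syslog, **d)
            for d in devices}

configs = render_configs(DEVICES)
print(configs["branch-sw-01"])
```

One template rendered per device removes the copy-paste step entirely, and changing the template changes every generated configuration consistently.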
How Network Automation Impacts Network Management
Network Automation isn’t a new idea. Research has been taking place on various network automation strategies for quite some time.
There is much more to network automation than having programmatic interfaces on network devices (we’ll come to these in a bit). There are many technologies that are used when introducing network programmability and automation into a given environment.
- Linux – The foundation of automation starts with Linux. From version control to programming languages and configuration management, tools such as Ansible and Puppet almost always run on Linux OSs. This is (in part) why Linux has been introduced into the CCNA exam.
- Device and controller APIs – An API is the mechanism by which an end user makes a request to a network device, and the network device responds. This mechanism provides increased functionality and scalability over traditional network management methods, and this is how modern tools interact with network devices.
- Version control – Everything should be versioned using a platform such as Git. Using a platform (Git or other) makes it easier to share and collaborate on projects involving everything—from code to configuration files. You can use many different tools to accomplish automated testing in an environment where version control is used to manage configuration files.
- Software development – While not every network programmability engineer will be an expert programmer, understanding software development processes is critical to understanding how software development can be used to extend or customize open source tools.
- Automated testing – A key area of network programmability (and software development) is automated testing. Deploying proper testing, before or after a change on the network, in an automated fashion improves the predictability and determinism of network resources. Network administrators should use tests that run automatically under defined conditions or whenever a change is being proposed.
- Continuous integration – CI tools are used commonly by developers, and they can drastically improve the release cycle of not only the software but also of network configuration changes. Deploying CI tools and pipelines can help with the execution of your tests so that they run when changes are being proposed (using version control tools).
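As a small illustration of the automated-testing and CI ideas above, the following Python sketch checks a proposed configuration against a required-lines policy; a CI pipeline could run a check like this whenever a change is proposed. The policy lines and sample configuration are invented for illustration.

```python
# Sketch: a minimal automated compliance check of the kind a CI pipeline
# could run on every proposed configuration change. The required lines
# and the candidate config are invented examples.

REQUIRED_LINES = [
    "service password-encryption",
    "no ip http server",
]

def compliance_failures(config_text, required=REQUIRED_LINES):
    """Return the required lines that are missing from a configuration."""
    present = {line.strip() for line in config_text.splitlines()}
    return [line for line in required if line not in present]

candidate_config = """hostname core-rtr-01
service password-encryption
ip http server
"""

failures = compliance_failures(candidate_config)
print(failures)  # a non-empty list means the change should be rejected
```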
Comparison of Networks
Although SDN has been widely adopted, most of the existing networks in the world are still traditional, that is, non-SDN. They have been in use for a long time and still involve a lot of manual work. The reasons these networks were successful in the first place, and the problems that led to the development of controller-based networks, are discussed below.
Traditional Networks
In traditional networks, the data plane acts on the forwarding decisions, and the control (and management) plane learns or computes those decisions. Both the control plane and the data plane reside within the same physical device.
FIG 25.1 – Traditional network
The above figure shows a traditional network with which we are already familiar. Each device has an internal control and data plane. This means that all devices are equally intelligent and can make decisions independently because the control plane is local. Of course, as discussed earlier, the data plane is what is responsible for the actual packet forwarding.
In these scenarios, the networking devices are completely trusted to make decisions, handle logic, and forward data accordingly. Each networking device must therefore hold relevant information about the network topology in order to make decisions on the control plane.
Controller-Based Networks
Figure 25.2 below illustrates the separation of the control and data planes.
FIG 25.2 – Controller-based networks
Controller-based networks are considered an evolution of networking architecture rather than a new type of network. In controller-based networks, a controller (and manager) is the central point of decision making. Physical devices retain only data plane functions. The control plane has a complete picture of the network topology, the devices, and their states, which enables it to make logic-based decisions based on the network information and the packets transferred between devices. The networking devices do not make any forwarding decisions, thereby reducing device-level processing.
As shown in Figure 25.3 below, traditional networks are managed on a device-by-device basis.
FIG 25.3 – Overview of traditional networks
The devices are accessed via Console, Telnet, or SSH to carry out the configuration management. Figure 25.4 below illustrates management using SDN.
FIG 25.4 – Overview of controller-based networks
Software-defined Networking (SDN) Components
Software-defined Networks consist of certain features and components you need to be aware of. These include:
- Overlay and Underlay protocols
- How a control plane is separated from a data plane and its effect on the network
- Northbound and Southbound APIs
- Cisco SDN
Overlay and Underlay Protocols
Underlay Protocols
The network underlay comprises the switches, routers, and wired and wireless links, which essentially is your network infrastructure.
Underlay protocols are based on the actual physical infrastructure of the network. They are responsible for the delivery of packets from one device to another across the network and are capable of forming a path across the physical networking devices for packet transmission.
A protocol, such as OSPF, allows the devices to exchange updates of their states. These updates include information such as the status of links and information about any new configuration. Based on these updates, the devices build a topology and can make intelligent routing decisions. The decisions for routing a packet are thus made by considering the best physical routing path available for the packet destination.
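The path computation just described can be sketched in Python. This is not OSPF itself, only the shortest-path-first calculation (Dijkstra's algorithm, which OSPF uses) run over an invented topology with made-up link costs.

```python
# Sketch: how a control plane turns link-state information into best
# paths, in the spirit of OSPF's shortest-path-first computation.
# The topology and costs are invented for illustration.
import heapq

LINKS = {  # router: [(neighbour, link cost), ...]
    "R1": [("R2", 10), ("R3", 5)],
    "R2": [("R1", 10), ("R4", 1)],
    "R3": [("R1", 5), ("R4", 20)],
    "R4": [("R2", 1), ("R3", 20)],
}

def shortest_costs(source, links=LINKS):
    """Dijkstra: lowest total cost from source to every other router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbour, link_cost in links[node]:
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(heap, (new_cost, neighbour))
    return dist

print(shortest_costs("R1"))  # R4 is cheapest via R2 (10 + 1 = 11)
```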
Not only common routing protocols but also switching protocols are underlay. STP is a perfect example: it is built to prevent loops, and hence the endless flooding of data frames over physical links. When a link in an STP topology goes down, the network's spanning tree is recalculated through the exchange of BPDUs. These protocols rely on the physical state of the involved devices and their links.
Overlay Protocols
Underlay protocols depend on physical links for network knowledge. Overlay protocols rely on underlay protocols to obtain network information. The transfer of knowledge is shown below:
FIG 25.5 – Information transfer to overlay protocols
The links formed in overlay protocols are completely virtual, making them independent of a physical link. They create tunnels based on the physical connections. An overlay routing protocol cannot detect a physical link failure.
FIG 25.6 – Underlay vs overlay protocols
Consider the underlay protocol OSPF: it builds routing tables based on the data it has collected from the physical links in the network. Assume three of those routes lead to the same network and are carried by an overlay protocol, which we will call protocol X. Protocol X can hold all three routes and treat them as a single route to the end network, creating a tunnel (using protocol X) to the end network with three underlay routes inside it.
If the link on the best of the three routes goes down, OSPF will switch to the second-best route. Because all three routes sit inside the same tunnel, and it is the tunnel (not any specific physical path) that carries the data, the overlay protocol is unaffected.
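A toy sketch of this decoupling: the overlay route table points at a tunnel, not at a physical path, so an underlay reroute leaves the overlay untouched. All router names, prefixes, and tunnel names here are invented.

```python
# Sketch: the overlay only knows "reach 10.2.0.0/16 via Tunnel1"; which
# physical path carries Tunnel1 is the underlay's business alone.
# Names and prefixes are invented for illustration.

underlay_paths = {"Tunnel1": ["R1", "R2", "R4"]}   # current best OSPF path
overlay_routes = {"10.2.0.0/16": "Tunnel1"}        # overlay route table

def overlay_next_hop(prefix):
    return overlay_routes[prefix]

before = overlay_next_hop("10.2.0.0/16")

# Underlay link R1-R2 fails; OSPF converges onto the second-best path.
underlay_paths["Tunnel1"] = ["R1", "R3", "R4"]

after = overlay_next_hop("10.2.0.0/16")
print(before == after)  # True: the overlay route never changed
```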
OMP in the figure above refers to Overlay Management Protocol. OMP is an all-encompassing information management and distribution protocol that enables the overlay network by separating services from transport. It’s currently outside the syllabus, so we won’t discuss it here.
VXLANs
A Virtual Extensible LAN (VXLAN) is a perfect example of how overlay protocols work. It is a protocol that overcomes the shortcomings of VLANs.
Firstly, traditional networks do not support more than 4096 VLANs. High-density networks, such as data center networks, often require more than 5000. Secondly, VLANs and traditional switched networks use Spanning Tree Protocol (STP), which prevents loops (and the broadcast storms they cause) by blocking some ports. Because some links are blocked, complete utilization of links and network resources is impossible.
Another drawback of existing VLANs is the lack of load balancing. If a switch has more than one valid link toward the root bridge, traffic cannot be load-balanced across those links.
VXLANs offer a solution to these problems. The following is a list of a few advantages of VXLANs:
- Higher number of supported virtual networks – In traditional networks, the maximum number of supported VLANs was 4096. Using VXLANs, this number increases to roughly 16 million (2^24) virtual networks.
- No shutting down of links due to STP – VXLANs use Layer 3 routing protocols as the underlay transport. Unlike Spanning Tree Protocol, these routing protocols do not shut down links to avoid broadcast storms, thus enabling complete utilization of links within networks.
- Load balancing of traffic – Because VXLAN uses Layer 3 routing for underlay data transportation, many of these routing protocols support load balancing of their data if there are multiple links to the destination.
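The first advantage in the list above is simple arithmetic: the 802.1Q VLAN ID is a 12-bit field, while the VXLAN VNI is 24 bits wide.

```python
# The jump from ~4096 VLANs to ~16 million VXLAN segments comes straight
# from the ID field widths: 12 bits in the 802.1Q tag vs 24 bits in the
# VXLAN header.

vlan_ids = 2 ** 12   # 12-bit VLAN ID field
vni_ids = 2 ** 24    # 24-bit VNI field

print(vlan_ids)  # 4096
print(vni_ids)   # 16777216
```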
VXLAN Overview
To understand how VXLAN works, a short review of VLANs may be in order.
VLANs, or Virtual Local Area Networks, logically break down a single physical network into one or more networks. This allows better security within computer networks and segmentation based on functionality.
These virtual networks may increase the work needed during network setup and management, especially if activities such as Spanning Tree per VLAN are carried out. However, this complexity, if properly handled, comes with numerous benefits, especially in terms of network organization. For instance, in a corporate organization, each department can be assigned a separate VLAN as below:
FIG 25.7 – VLAN overview
The devices from different VLANs cannot communicate with each other by default. In routing domains, devices from different networks are unable to communicate unless there is a routing protocol or static route set to inform them about each other’s routes.
To enable communication between VLANs, routing needs to be configured; specifically for the scenario in Figure 25.7, inter-VLAN routing is required. This allows the specific network segments (department VLANs in this case) to route data to each other.
FIG 25.8 – Inter VLAN routing
The routing done in this network is based on sub-interfaces, which we discussed earlier in this book. In this network, the switches were used as the transport fabric between the VLANs, and the transport was done entirely on Layer 2 devices. This is where the main difference between VLANs and VXLANs lies.
In VLANs, the transport fabric is carried out by Layer-2 devices and protocols. This means that the underlay is Layer-2, which is performed by switches and switching protocols, such as STP.
In VXLANs, the transport fabric is carried out by Layer-3 devices and protocols. The underlay is, therefore, Layer-3, utilizing routers and routing protocols such as OSPF and BGP, while the overlay consists of the VXLAN tunnels, which are Layer-2.
FIG 25.9 – VXLAN underlay and overlay
As you can see from Figure 25.9 above, VXLAN creates a tunnel across the Layer-3 fabric. The Layer-3 protocols are used to transport the frame from one tunnel endpoint to another (the routers).
A Deeper Understanding of VXLAN
Before digging deeper into the workings of VXLAN, some definitions related to VXLAN are listed below:
- VNI (Virtual Network Instance) – A logical network instance which creates a broadcast domain of its own. A VNI is equivalent to a single VLAN.
- VNID (Virtual Network Identifier) – ID for a specific VNI. VLAN IDs previously were used to identify each specific VLAN. In VXLAN, VNIDs are used to identify VNI instances.
- VTEP (VXLAN Tunnel Endpoint) – Elements of the network that instantiate and maintain the VXLAN tunnels. In our case above, the two routers, acting as the gateways for the networks, were used to create the tunnel across the Layer-3 network. These routers are the VTEPs.
- Bridge Domain – Groups of ports sharing the same broadcast characteristics. These may be used for multicasting etc.
Note that the VXLAN Tunnel Endpoints (VTEPs) can be either physical or virtual devices. In the topology above, a physical router was used as the VTEP at both ends of the VXLAN tunnel. Virtual machines running on hypervisors such as VMware can also act as VTEPs.
One advantage of using virtual VTEPs is the flexibility gained by customizing the logic running within the VTEP devices, thus enhancing Software-defined Networking within our networks.
Another major advantage of using the physical VTEP devices, such as physical routers, is increased performance. The physical devices are much faster in terms of processing and creating tunnels when compared to the virtual VTEP devices.
FIG 25.10 – VXLAN packet transfer example
Figure 25.10 above shows two Virtual Network Instances, VNI 1 and VNI 2. Computer A and computer (host) C are in VNI 1, while computer B and computer D are in VNI 2.
Steps 1–4 in the figures below demonstrate what happens when Computer B sends a frame to Computer D. Because Computer B and Computer D are in the same virtual network but separated by a Layer-3 routing fabric, a Layer-3 header must encapsulate the frame sent from Computer B when it arrives at VTEP1. The Layer-3 header allows the frame to be transmitted across the Layer-3 network. The Layer-3 information is then de-encapsulated at VTEP2, leaving only the original frame to reach Computer D. This is enough to ensure that data communication happens between Computer B and Computer D. The following figures illustrate how the headers of the packet will look at each step of transmission.
FIG 25.11 – Packet 1 and packet 2 headers
FIG 25.12 – Packet 3 and packet 4 headers
As you can see, the packet is encapsulated in step 2 and de-encapsulated in step 3. This means that the frame in step 1 and step 4 carries the same header information, making communication between B and D behave as if they were on a single Ethernet network.
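A minimal sketch of the VTEP behaviour in steps 1–4, modelled with plain Python dictionaries rather than real packet buffers. The MAC addresses and VTEP IPs are invented; the UDP destination port 4789 is the IANA-assigned VXLAN port.

```python
# Sketch: VTEP encapsulation and de-encapsulation. The point is that the
# inner frame handed to Computer D is exactly the frame Computer B sent;
# all field values below are invented for illustration.

frame_from_b = {"src_mac": "aa:aa:aa:00:00:0b",
                "dst_mac": "aa:aa:aa:00:00:0d",
                "payload": "hello D"}

def vtep_encapsulate(frame, vni, src_vtep_ip, dst_vtep_ip):
    """VTEP1: wrap the original frame in outer IP/UDP/VXLAN headers."""
    return {"outer_src_ip": src_vtep_ip,
            "outer_dst_ip": dst_vtep_ip,
            "udp_dst_port": 4789,      # IANA-assigned VXLAN port
            "vni": vni,
            "inner_frame": frame}

def vtep_decapsulate(packet):
    """VTEP2: strip the outer headers, return the original frame."""
    return packet["inner_frame"]

packet = vtep_encapsulate(frame_from_b, vni=2,
                          src_vtep_ip="192.0.2.1",
                          dst_vtep_ip="192.0.2.2")
frame_to_d = vtep_decapsulate(packet)
print(frame_to_d == frame_from_b)  # True: steps 1 and 4 match
```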
Another important aspect of VXLAN is multicast groups. Each VNI is represented by a multicast group in the Layer-3 transport network. When a device inside a VTEP's local network belongs to a specific VNI, that VTEP joins the VNI's multicast group.
An advantage of a VTEP joining a multicast group is that when a broadcast message to a specific VNI is received in the Layer-3 transport network, all VTEPs registered with that multicast group will receive the broadcast. This is useful in instances such as initial address learning through Address Resolution Protocol (ARP).
Control Plane and Data Plane
Software-defined Networking is a concept based around the separation of the control plane and the data plane. In fact, the Open Networking Foundation describes Software-defined Networking as the physical separation of the network control plane from the forwarding plane (data plane), where a control plane controls several devices.
To understand the difference between a control plane and a data plane, an understanding of how network devices operate is required.
FIG 25.13 – Incoming packet to router
The diagram above shows a packet arriving at a router from a host. When the packet is received by the router, the first thing that happens is that the packet’s headers are examined. Figure 25.14 shows what constitutes a packet header.
FIG 25.14 – IP Header example (image retrieved from https://flylib.com/books/en/2.298.1.25/1/)
Packet headers contain information such as Source IP address, Destination IP address, Source MAC, Destination MAC, Flags, Protocols, etc. This information is used to make decisions in the networking devices, for example, routing and switching decisions to determine what will happen to a packet based on the access list, traffic management, and other important decisions.
Once the packet header is opened, the networking device decides what should be done with the packet. The decisions are based on protocols, algorithms, or network policies. The resulting action could be to multicast, unicast, or broadcast the packet, to modify it and then perform one of those actions, or to drop it altogether.
The act of opening the packet and analyzing it based on the headers and the protocols running on the device can be thought of as the network's brain at work. The actions performed on the packet, such as multicasting, can be thought of as the muscle of the networking device at work.
FIG 25.15 – How packets and frames are processed in networking devices
From the diagram above, it can be seen that the processing is done at the “brain” of the device, which is known as the control plane. The muscle of the device, which is involved in pushing the packets out, is known as the forwarding/data plane.
With Software-defined Networking, the control plane and the data plane are separated out as two entities. The aim is to make the brain and the muscle of the network operate in separate physical places, instead of on the same networking chassis.
You may have already guessed some of the issues that led to the need for separating the control and data planes. Firstly, the amount of data going through networking devices was increasing at an exponential rate. This meant more work for the data plane (the muscle), which is responsible for pushing the data, and the control plane (the brain) was also invoked more often as packets arrived more frequently. In a nutshell, this substantially increased the amount of processing in the networking devices.
Secondly, the complexity of processing algorithms has also increased in order to improve efficiency. Modern networking algorithms are capable of integrating some intelligence through machine learning. The amount of processing required for a single packet has thus increased exponentially. Separating the planes became a logical solution leaving the devices to concentrate on forwarding the traffic.
Keeping both planes on the same device is certainly not ideal for certain environments, such as data centers, which rely on fast switching of packets. It not only overloads the device buffers and CPU but also limits the ability of network engineers to fully customize network traffic flow.
With SDN, when a networking device receives a packet, instead of opening up the packet and analyzing the headers, it retains the packet and sends the headers to the controller, which does the analysis and decision making. The controller opens the packet headers and reviews the information. The information collected from the headers is then used to make logical decisions including routing, switching, and setting policies within the network.
After the controller has decided what should be done to a packet, this decision is sent back to the networking device. The networking device will then perform the action that has been specified by the controller.
The processing that was traditionally done by networking devices is eliminated. Any intelligence required when forwarding packets is provided by the controller, as is any communication of networking information with external applications. The networking devices are, therefore, reduced to hub-like devices: they focus on only one thing, forwarding packets, instead of processing the information carried within them.
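The division of labour described above can be sketched as two small Python classes: a controller that makes decisions from packet headers, and a data-plane device that only asks and obeys. The match rule and port names are invented; a production controller would typically install flow rules on devices rather than decide per packet.

```python
# Sketch: SDN separation of control plane and data plane.
# Match logic and port names below are invented for illustration.

class Controller:
    """Central control plane: inspects headers, returns an action."""
    def decide(self, headers):
        if headers.get("dst_ip", "").startswith("10.0.0."):
            return ("forward", "port2")
        return ("drop", None)

class SwitchDataPlane:
    """Device: no local intelligence, it just asks and obeys."""
    def __init__(self, controller):
        self.controller = controller

    def handle(self, packet):
        action, port = self.controller.decide(packet["headers"])
        return f"{action}:{port}" if port else action

switch = SwitchDataPlane(Controller())
print(switch.handle({"headers": {"dst_ip": "10.0.0.7"}}))    # forward:port2
print(switch.handle({"headers": {"dst_ip": "203.0.113.9"}})) # drop
```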
FIG 25.16 – Visualization of how SDN network architecture now operates
Below is a final flow diagram when a packet is received at a router:
FIG 25.17 – How packets are processed in SDN
Northbound and Southbound APIs
We’ve referred to APIs a few times so far but now is the perfect time to delve a little deeper. An application program interface (API) is the mechanism by which an end user makes a request to a network device, and the network device responds to the end user. This method provides increased functionality and scalability over traditional network management methods.
In order to transmit information, APIs require a transport mechanism such as SSH, HyperText Transfer Protocol (HTTP), or HyperText Transfer Protocol Secure (HTTPS); there are other possible transport mechanisms as well.
Traditionally, methods such as SNMP, Telnet, and SSH were among the only options to interact with a network device. However, over the last few years, networking vendors, including Cisco, have developed and made available APIs on their platforms in order for network operators to more easily manage network devices and gain flexibility in functionality.
The syllabus specifically mentions both Southbound and Northbound APIs.
Southbound APIs
Southbound APIs enable communication between the controller and the networking devices. APIs, in this context, can be thought of as channels of information between systems. In the case of Southbound APIs, the channel of information is between the networking device and the networking controller.
This communication mainly involves an exchange of packet header information as described previously and the state of the devices if required; for example, when an interface of the networking device goes down, the controller can be configured to obtain this information.
The communication between these devices is facilitated by certain protocols, the most popular being the OpenFlow protocol. Developed and maintained by the Open Networking Foundation (ONF), OpenFlow allows the controller to modify the forwarding-plane information on the networking devices.
Some other popular Southbound APIs include:
- OpenFlow
- NETCONF
- RESTCONF
- OpFlex
- SNMP
- REST
- gRPC (developed by Google)
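To give a flavour of a Southbound API, the sketch below builds (but does not send) a RESTCONF URI for interface data, following the RFC 8040 URI layout and the standard ietf-interfaces YANG module; the device address is invented.

```python
# Sketch: the shape of a RESTCONF (RFC 8040) request URI for interface
# data modelled by the standard ietf-interfaces YANG module. No request
# is actually sent, so no device or controller is required.

from urllib.parse import quote

def restconf_interface_uri(host, interface):
    # '/' inside an interface name must be percent-encoded in the URI
    path = quote(interface, safe="")
    return (f"https://{host}/restconf/data/"
            f"ietf-interfaces:interfaces/interface={path}")

uri = restconf_interface_uri("198.51.100.10", "GigabitEthernet1/0/1")
print(uri)
```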
When compared to Northbound APIs, Southbound APIs require greater proficiency with the underlying protocols, such as OpenFlow, in order to create custom solutions.
Northbound APIs
Northbound APIs, or northbound interfaces, are responsible for communication between the controller and the services that run over the network; they are the interfaces connecting to higher-level components within a network.
For example, a controller can make changes to the device forwarding planes based on certain configurations. If a user needs this information, the controller must send it to an external application. The protocols used to connect to such external applications are known as Northbound APIs. Currently, the REST API is predominantly used as a single northbound interface for communication between the controller and all applications.
Examples of Northbound APIs include:
- Representational State Transfer (REST)
  - Data transfer from the controller to the applications is done via HTTP/HTTPS
  - During the data transfer, the most used ‘content-type’ is ‘application/json’
- Simple Object Access Protocol (SOAP)
  - Typically sent via HTTP (just like REST)
  - During data transfer, the ‘content-type’ is ‘text/xml’
  - Defined under IETF RFC 4227 (https://tools.ietf.org/html/rfc4227)
The Northbound APIs allow communication of the network activity with the end user applications and business logic. These APIs give an understanding of what is happening in the Southbound APIs (e.g. OpenFlow).
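As a sketch of what a northbound REST call might carry, the snippet below assembles the method, URL, JSON body, and ‘application/json’ content type without sending anything; the controller hostname, API path, and payload fields are all invented.

```python
# Sketch: building (not sending) a northbound REST request from an
# application to a controller. Hostname, path, and payload are invented.

import json

def build_request(host, path, body):
    return {
        "method": "POST",
        "url": f"https://{host}{path}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

req = build_request("controller.example.com",
                    "/api/v1/policies",
                    {"name": "guest-wifi", "bandwidth_mbps": 20})
print(req["headers"]["Content-Type"])  # application/json
```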
There are some cases where the same protocol can be used as both a Northbound and a Southbound API. For instance, Cisco’s implementation of NETCONF (IETF RFC 6241) uses both APIs to communicate between the networking device and the controller, and also from the controller to the higher-level applications.
NETCONF data is modeled by a language known as YANG (Yet Another Next Generation). YANG specifies the structure and format of the data that is retrieved from, or sent to, a device.
Figure 25.18 below further illustrates the differences between Northbound and Southbound APIs.
FIG 25.18 – Northbound vs southbound APIs (detailed)
Cisco SDN Solutions
The concept of SDN is revolutionary. Not only does it allow complete control of the network, but it also frees up time to focus on business policies and on improving the network to suit business needs and goals, rather than on constantly working out how network protocols should be configured.
Cisco has presented the idea of SDN in the context of an Intent Based Network (IBN). IBN is a concept that aims to put business policies at the forefront of networking decisions. It enables network managers to focus on the intent of the business, for example, improving network performance for mobile devices, and aids in seamlessly aligning the network with business needs. Cisco aims to achieve an Intent Based Network through its SDN solutions.
SDN addresses the need for:
- Centralized configuration, management, control, and monitoring of network devices (physical or virtual)
- The ability to override traditional forwarding algorithms to suit unique business or technical needs
- Allowing external applications or systems to influence network provisioning and operation
- Rapid and scalable deployment of network services with life-cycle management
SDN allows network engineers to provision, manage, and program networks more rapidly, as it greatly simplifies automation tasks by providing a single point of administration for infrastructure programming.
The nature of controller-based networking makes a centralized policy easy to achieve. A network-wide policy can be easily defined and distributed consistently to the devices connected to the controller. For example, instead of attempting to manage ACLs across many individual devices, a flow rule can be defined on the central controller and pushed down to all the forwarding devices as part of the normal operations.
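The centralized-policy idea can be sketched in a few lines of Python: define one flow rule on the controller and push it to every device's rule table, instead of editing ACLs box by box. The rule contents and device names are invented.

```python
# Sketch: one flow rule defined centrally and distributed to every
# forwarding device. Rule contents and device names are invented.

flow_rule = {"match": {"dst_port": 23}, "action": "drop"}  # block Telnet

devices = {"sw1": [], "sw2": [], "sw3": []}  # per-device rule tables

def push_policy(rule, device_tables):
    """Distribute one centrally defined rule to all devices."""
    for table in device_tables.values():
        table.append(rule)

push_policy(flow_rule, devices)
print(all(flow_rule in rules for rules in devices.values()))  # True
```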
Some organizations go even further and develop their own SDN controllers. Cisco, an industry leader in innovation and networking solutions, has developed various controllers, discussed below.
Unlike most vendors providing a single solution for the whole network, Cisco has developed custom SDN solutions for each section of the network:
| Network section | Cisco solution |
|---|---|
| Wide Area Network | SD-WAN |
| Branch Networks | SD-Branch |
| Data Center | Application Centric Infrastructure (ACI) |
| Access Networks (to end users) | SD-Access |
| Overall network management | DNA Center |
FIG 25.19 – Cisco SDN solutions for each part of the network
Cisco SD-WAN
The traditional WAN function was to connect users at a branch or campus to the applications hosted on servers in a data center. Typically, dedicated MPLS circuits were used to ensure security and reliable connectivity. This no longer works in a cloud-centric world because WAN networks designed for a different era are not ready for the unprecedented explosion of WAN traffic that cloud adoption brings. The rapid increase in WAN traffic causes management complexity, application performance unpredictability, and data vulnerability.
Cisco SD-WAN is a software-defined approach to managing WANs. Cisco SD-WAN simplifies the management and operation of a WAN by separating the networking hardware from its control mechanism. This solution virtualizes much of the routing that used to require dedicated hardware.
SD-WAN represents an evolution of networking from an older, hardware-based model to a secure, software-based virtual IP fabric. The overlay network forms a software overlay that runs over standard network transport services, including the public internet, MPLS, and broadband. The overlay network also supports next-generation software services, thereby accelerating the shift to cloud networking.
- The Cisco SD-WAN solution consists of separate orchestration, management, control, and data planes.
- The orchestration plane assists in the automatic onboarding of the SD-WAN routers into the SD-WAN overlay.
- The management plane is responsible for centralized configuration and monitoring.
- The control plane builds and maintains the network topology and makes decisions on where traffic flows.
- The data plane is responsible for forwarding packets based on decisions from the control plane.
Wide Area Networks are a crucial feature of modern networks, especially when communication over a very large geographical region is required. Most businesses are transitioning from local data handling to storing their data and applications in the cloud, so the 80/20 rule (80% local to 20% non-local network traffic) has flipped to 20/80. The WAN is the transport fabric that carries this data to and from the cloud.
This major shift forces the WAN to handle more traffic than before because it is now the bridge between the Local Area Networks and the cloud. SD-WAN aims to solve this issue. The key components of Cisco SD-WAN are listed below:
- Orchestration Plane (vBond Orchestrator) – The vBond orchestrator performs the initial authentication of the vEdge devices and is their first point of contact when joining the overlay.
- Control Plane (vSmart Controller) – The vSmart controller is the centralized control plane of the SD-WAN network architecture. It contains the business logic and makes routing decisions based on that logic.
- Management Plane (vManage) – vManage is a centralized network management system that provides a GUI to monitor, configure, and maintain all Cisco SD-WAN devices and links in the underlay and overlay networks.
vManage is the Management plane of the SD-WAN and provides a single point of control for Day 0, Day 1, and Day 2 operations.
vBond, vSmart, and vManage can all be virtual devices in a data center. A vBond can also be a physical router in the network.
SD-WAN runs an overlay routing protocol known as the Overlay Management Protocol (OMP). OMP advertises routes, next hops, and policies across the overlay, while traditional protocols such as OSPF and BGP can still run in the underlay and have their routes redistributed into OMP.
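In practice, vManage's role as the single management plane is exposed through a REST API: a client authenticates once and then queries or configures the whole overlay. The sketch below only constructs the request URLs and login form (the hostname and credentials are placeholders); an actual session would send them with an HTTP client such as `requests`:

```python
from urllib.parse import urljoin

# Placeholder vManage host; replace with a real controller address.
VMANAGE = "https://vmanage.example.com"

def auth_request():
    """Return the URL and form body vManage typically uses for login."""
    return urljoin(VMANAGE, "/j_security_check"), {
        "j_username": "admin",
        "j_password": "password",  # placeholder credential
    }

def device_list_url():
    """Monitoring endpoint that lists the devices in the overlay."""
    return urljoin(VMANAGE, "/dataservice/device")

url, body = auth_request()
print(url)                # https://vmanage.example.com/j_security_check
print(device_list_url())  # https://vmanage.example.com/dataservice/device
```

This request-once, manage-everything pattern is what distinguishes the management plane from per-device CLI work.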
FIG 25.20 – SD-WAN Overview
Cisco ACI
ACI is the abbreviation for Application Centric Infrastructure. It is Cisco’s SDN solution for the data center. This solution is designed for core network systems that carry large amounts of traffic and must tolerate significant faults.
Cisco ACI uses APIC as its SDN controller. APIC stands for Application Policy Infrastructure Controller and is specially designed for data center networks. To understand how ACI works, the ACI object model and tenants need to be discussed first. The object model is built on a group of one or more tenants.
A tenant hosts groups of devices with specific characteristics. There are tenant policy and tenant networking, each of which uses a different method to group devices:
- Tenant networking uses Layer-2 and Layer-3 characteristics to group devices.
- Tenant policy uses policy characteristics to group devices. For example, if all application servers within the data center are required to operate on the same policies, then they can be placed under the same group in the tenant policy. This can be done even if they are in different networks.
The devices used within the ACI model are: Cisco Nexus 9500 switches for the spine of the network, Cisco Nexus 9300 switches as the leaf switches of the data center, and the APIC controller for creating and applying the policies for these networking devices.
The use of Nexus switches gives us the option of switching to ACI mode or remaining in NX-OS mode. Both modes allow Software-Defined Networking and automation, but note that ACI mode is made specifically for the ACI solution, while NX-OS mode is made for general networking and can be managed using Cisco DNA Center. Following is a simple overview of how the ACI architecture looks:
FIG 25.21 – Overview of ACI architecture (© Cisco DevNet)
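Because the ACI object model is expressed as structured objects, creating a tenant is just a matter of sending the right JSON to APIC's REST API. The sketch below only builds the payload (the tenant name is a placeholder, and an actual request would require authenticating to a real APIC first):

```python
import json

def tenant_payload(name):
    """Build the JSON body a client would POST to APIC (at /api/mo/uni.json)
    to create a tenant object in the ACI object model."""
    return {"fvTenant": {"attributes": {"name": name}}}

# Placeholder tenant name for illustration only.
payload = tenant_payload("Example_Tenant")
print(json.dumps(payload))
```

Everything under a tenant (networks, policies, endpoint groups) is created the same way, as child objects of this one managed object.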
Cisco SD-Access
Cisco's Software-Defined Access (SD-Access) solution is a programmable network architecture that provides software-based policy and segmentation from the edge of the network to the applications. SD-Access is implemented via Cisco DNA Center which provides design settings, policy definition, and automated provisioning of the network elements, as well as assurance analytics for an intelligent wired and wireless network.
In enterprise architecture, the network may span multiple domains, locations or sites such as main campuses and remote branches, each with multiple devices, services, and policies. The Cisco SD-Access solution offers an end-to-end architecture that ensures consistency in terms of connectivity, segmentation, and policy across different locations (sites).
SD-Access is aimed at the end user’s access to the network: mobile phones, personal computers, IP phones, and everything else at the network edge. These are some of the problems SD-Access, together with SDN, aims to solve:
- The access layer of computer networks is expanding rapidly. Originally designed to cater to computers, servers, and printers, it now has to support mobile phones, IoT devices, laptops, and many other device types. This means an increase in the number of users and more diversity in the types of data traversing the network, as well as growth in the number and complexity of security threats.
- It is harder to separate the application traffic, user data, and networking device data being sent across the network. This makes QoS configuration much more complex at a time when classifying traffic correctly is very important.
- With the rise of wireless communication, more than half the data in the network now arrives over wireless, yet most traditional networks are designed and configured for wired access. The network configuration must accommodate wireless devices as well.
SD-Access separates user, device, and application traffic. This makes it simpler to set policies for the specific data without the need to redesign the network.
SD-Access also allows us to treat both wireless and wired data within the same network fabric. Configuring the whole fabric becomes a lot easier because the network controller handles both types of traffic; wired and wireless data are not treated separately.
The main components of the SD-Access are:
Cisco DNA Center
Cisco DNA Center enables device management and operates at the management plane, which is responsible for distributing configuration and policy. It is the overall management platform for Cisco’s SDN solutions and allows easier management of devices at the access layer via a programmable API and a GUI.
Identity Service Engine (ISE)
Cisco ISE provides host onboarding and policy enforcement capabilities. With SD-Access, it allows mobility within the network and supports Bring Your Own Device (BYOD) initiatives. It simplifies access control for mobile and even IoT devices and ensures that the correct security policies are applied for a specific task.
Network platform
Cisco DNA Center is a crucial part not only in SD-Access but also in Cisco’s SDN. Cisco DNA Center is a tool that can be used in any part of the network where Cisco’s SDN solutions are implemented. It is used to manage device configurations through APIs allowing automation of devices.
Identity Service Engine
ISE is a tool that is crucial for end user security and end user policy management within networks. ISE enables engineers to devise policies based on who is accessing the network, what they are accessing, and when they are accessing it. It also enables engineers to distinguish devices using wireless and wired networks, thereby making the network context-aware when building the access layer.
Cisco DNA Center
Cisco defines DNA Center as the network management and command center for Cisco's Digital Network Architecture (DNA). Cisco DNA comprises all of the SDN solutions discussed so far, and it can be managed through Cisco DNA Center.
FIG 25-22: Cisco DNA Center (Copyright Cisco Systems)
Cisco DNA Center can fetch networking device details, general network details, and any other management information about a network. The solutions mentioned previously help control the network, from protocols to policies; Cisco DNA Center focuses on device management, for example, device interfaces and other general configuration such as IP addresses. Cisco DNA Center can also detect events; an interface changing state (going up/down) would qualify as an event.
Cisco DNA Center resides on a Cisco DNA Center appliance; you can visit the Cisco website for more information on models and purchase options. The Cisco DNA Center dashboard provides an overview of network health and helps in identifying and remediating issues. Automation and orchestration capabilities provide zero-touch provisioning based on profiles, facilitating network deployment in remote branches.
Advanced assurance and analytics capabilities use deep insights from devices, streaming telemetry, and rich context to deliver an optimized user experience while proactively monitoring, troubleshooting, and optimizing your wired and wireless network.
FIG 25-23 – Cisco DNA Center dashboard
The following are the Cisco DNA Center tools:
- Discovery – scans the network for new devices. Cisco DNA Center uses the Cisco Discovery Protocol (CDP) to discover network devices and the interconnections between them.
- Inventory – provides an inventory of the discovered devices.
- Topology – helps you to discover and map network devices to a physical topology with detailed device-level data.
- Image Repository – helps you to download and manage physical and virtual software images automatically.
- Command Runner – allows you to run diagnostic CLI commands against one or more devices.
- License Manager – visualizes and manages license usage.
- Template Editor – is an interactive editor to author CLI templates.
- Network Plug and Play – provides a simple and secure approach to provision networks with a near zero touch experience.
- Telemetry – provides telemetry design and provision.
- Data and Reports – provides access to data sets and schedules data extracts for download in multiple formats like Portable Document Format (PDF) reports, comma-separated values (CSV), Tableau, and so on.
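The tools above are also reachable programmatically through DNA Center's Intent API, which follows a two-step pattern: obtain a token, then call the intent endpoints with that token. The sketch below only constructs the URLs and headers (the hostname is a placeholder); an actual call would use an HTTP client with basic authentication against the token endpoint:

```python
# Placeholder DNA Center host; replace with a real appliance address.
DNAC = "https://dnac.example.com"

# Step 1: POST here with basic auth to receive a token.
TOKEN_ENDPOINT = DNAC + "/dna/system/api/v1/auth/token"
# Step 2: GET here (with the token) to list the device inventory.
DEVICES_ENDPOINT = DNAC + "/dna/intent/api/v1/network-device"

def auth_headers(token):
    """Every Intent API call carries the token in the X-Auth-Token header."""
    return {"X-Auth-Token": token, "Content-Type": "application/json"}

print(TOKEN_ENDPOINT)
print(DEVICES_ENDPOINT)
```

The same pattern covers the other tools: Command Runner, Template Editor, and Reports all sit behind intent endpoints under `/dna/intent/api/v1/`.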