Fundamentally, a software defined network is a type of network architecture that separates the network data plane from the control plane.

  • Network data plane (programmable switches): the network devices that forward traffic.
  • Network control plane (controllers, apps): the software logic that ultimately controls how traffic is forwarded through the network.

The separation of the network's data plane and control plane allows a network operator to control network behavior from a single high-level control program. Deployments of software-defined networking are often used to solve a variety of network management problems in real networks.



In the conventional network architecture, network devices (particularly routers) are bundled with a specialized control plane and various features. This vertical integration of hardware and software essentially binds you to whatever software and features ship with that particular device, and this bundling effectively slows innovation.

Software-defined networking effectively breaks these pieces apart:

Apps ⟺ Control plane ⟺ Interface (OpenFlow) ⟺ Data plane (switches)

SDN was motivated by the observation that distributed network configuration can be very buggy and unpredictable, because all of the network devices are configured independently in a low-level, device-specific manner. In the past, those low-level configurations were collected and analyzed in an attempt to infer the behavior of the network. What we ultimately discovered was that such inference is difficult, and that it would be easier to let a single centralized control point dictate the forwarding behavior of the network. SDNs are much easier to coordinate: a network operator can write a program that coordinates the behavior of the different network devices.

History and Evolution

Indeed the term Software Defined Networking was coined in 2009, but many of the ideas have roots in earlier technologies, dating back as far as the phone network.

We can think about the intellectual history of software defined networking as proceeding in three stages:

  • Active networking: introduced the notion of programmable networks.
  • Control and data plane separation: offered open interfaces between the control and data planes.
  • OpenFlow API and network operating systems: the first instance of widespread adoption of an open interface between the control plane and the data plane.

Evolution of Supporting Technologies

In the early days, data and control were sent over the same channel, a technique called in-band signaling. It offered some advantages in terms of simplicity, but the resulting network turned out to be fairly brittle and insecure.

In the early 1980s, AT&T took a turn towards separating the data and control planes with something called the Network Control Point (NCP), developed for the telephone network. The idea was that all signaling would go to the NCP, which could also talk to a back-end database holding additional auxiliary information about customers. This enabled a number of new services, and eliminating in-band signaling also reduced expenditures.

Active Networks

Programmability in networks has its roots in the active networks projects of the 1990s. Simply put, active networks are networks where the switches perform custom computations on packets as the packets travel through those switches. The motivation for active networks was to accelerate innovation (SDN actually has the same motivation).

Active nodes in the network were routers that could download new services into the infrastructure, allowing for user-driven innovation. The main idea is that messages / packets would carry both:

  1. data
  2. procedures that might operate on that data

These active nodes (routers) that perform custom operations might coexist with legacy routers that do nothing more than forward traffic. Each programmable node might then perform additional processing on top of forwarding the traffic.



There are two different approaches to active networks:

  • Capsules (the integrated approach):
    • Every message is a program.
    • Active nodes evaluate the content carried in packets.
    • Code is dispatched to an execution environment.
  • Programmable switches (the discrete approach):
    • Custom processing functions run on the routers.
    • Packets are routed through programmable nodes.
    • The program that runs depends on the packet header.
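
To illustrate the capsule approach above, here is a minimal conceptual sketch in Python (not any real active-network system; the Capsule and ActiveNode names are invented for illustration): a message carries both data and a small procedure, and each active node evaluates that procedure as the packet passes through.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Capsule:
    """A conceptual 'capsule': the message carries data plus a procedure."""
    data: bytes
    program: Callable[["ActiveNode", bytes], bytes]   # runs at each active node

class ActiveNode:
    """A conceptual active node that evaluates the code carried in packets."""
    def __init__(self, name: str):
        self.name = name

    def process(self, capsule: Capsule) -> Capsule:
        # Evaluate the capsule's program on its own data before forwarding it.
        return Capsule(capsule.program(self, capsule.data), capsule.program)

# Example: a capsule whose program records each active node it traverses.
pkt = Capsule(b"payload", lambda node, d: d + b"|" + node.name.encode())
for hop in [ActiveNode("r1"), ActiveNode("r2")]:
    pkt = hop.process(pkt)
print(pkt.data)   # b'payload|r1|r2'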

However, the timing of active networks was unfortunately off. There was no clear application for active networks at the time; there were no data centers or clouds yet. Hardware support was also expensive: back then everyone was using ASICs, whereas now we have many more options for supporting programmable data planes, such as TCAMs, FPGAs, and NPUs.

Active networks also focused too much on particular mechanisms (security, special languages for safe code, carrying code in packets) rather than on the general question of how to provide programmability in the network.

In contrast, OpenFlow did a very good job of remaining backward compatible with existing switch hardware: in many cases a simple firmware upgrade was enough to add OpenFlow support to existing switches.

Network Virtualization

The history of network virtualization can be traced back to the 1990s. Network virtualization is the representation of one or more logical network topologies on top of the same underlying physical infrastructure. It offers the benefits of sharing and the prospect of customizability.

We might have multiple parties who want to use a fixed physical infrastructure. Each of these parties might want access to different underlying physical network resources and the ability to create its own logical topology sitting on top of that physical infrastructure. These logical networks would be able to run independently without interfering with one another.

In 1998, Switchlets (the Tempest architecture) separated the control framework from the switches and also virtualized the switch itself. The idea behind switchlets was to allow multiple control architectures to operate over a single ATM network.

In 2006, VINI (a virtual network infrastructure) virtualized the network infrastructure itself, bridging the gap between 'lab experiments' and live experiments at scale. VINI also used a separation of the data and control planes to achieve some of its network virtualization goals.

In 2007, Cabo separated infrastructure providers from service providers. This separation of service providers from infrastructure providers is something we see a lot in commercial software-defined networks today.



History of SDN

Remember that separation of control from data is a good idea. It enables more rapid innovation. It enables the controller to potentially see a network-wide view, thereby making it easier to infer and reason about network behavior. Finally, having a separate control channel makes it possible to have a separate software controller, which facilitates the introduction of new services to the network much more easily.

Three different ways were developed to control packet-switched networks:

IETF ForCES (2003)

The standard essentially defined protocols that would allow multiple control elements (CEs) to control, via the ForCES interface, forwarding elements (FEs), which would be responsible for forwarding packets, metering, shaping, performing traffic classification, and so forth.

ForCES looks a lot like OpenFlow, but because it required new standardization, adoption, and changes to the hardware, deployment ultimately proved very difficult.

Routing Control Platform (2004)

The RCP essentially used existing protocols as a control channel to send control messages to the forwarding elements. In particular, the RCP used BGP as the control channel, so that the forwarding elements thought they were talking to just another router when in fact all of the smarts of the network were centralized at a single point. The problem with this approach is that the control one has over the network is constrained by what existing protocols like BGP can support.

Ethane Project (2007)

Customizing the hardware in the data plane potentially makes it easier to support a much wider range of applications in the control plane. The problem with this approach, of course, is that it requires custom switches that support the Ethane protocol.

So what we are looking for is something that could operate with existing protocols, yet wouldn't require customizing the hardware. The answer is OpenFlow: a standard control protocol that could control the behavior of existing hardware.

OpenFlow (2008)

In OpenFlow, a separate controller communicates with the switch's flow table to install forwarding table entries that control the forwarding behavior of the network.

OpenFlow Controller ⟺ OpenFlow Protocol ⟺ Flow table (Layer-2 switch)

Because most switches already implemented flow tables, the only thing that was necessary to make OpenFlow a reality was to convince the switch vendors to open the interface to those flow tables, so that a separate software controller could dictate what would be populated in those flow tables.
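
As a rough illustration of what a flow table does (a conceptual sketch only, not the actual OpenFlow wire protocol or any controller's API), the controller installs match-action entries and the switch forwards each packet according to the highest-priority matching entry, punting table misses back to the controller:

from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match: dict            # e.g. {"dst_mac": "00:00:00:00:00:02"}
    action: str            # e.g. "output:2" or "drop"
    priority: int = 0

@dataclass
class Switch:
    flow_table: list = field(default_factory=list)

    def install(self, entry: FlowEntry):
        # Called by the controller over the control channel (OpenFlow).
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: -e.priority)

    def forward(self, packet: dict) -> str:
        # Data plane: apply the first (highest-priority) matching entry.
        for entry in self.flow_table:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.action
        return "send_to_controller"   # table miss

s1 = Switch()
s1.install(FlowEntry({"dst_mac": "00:00:00:00:00:02"}, "output:2", priority=10))
print(s1.forward({"dst_mac": "00:00:00:00:00:02"}))   # output:2
print(s1.forward({"dst_mac": "00:00:00:00:00:99"}))   # send_to_controller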

Mininet Topologies and Python API

You can create various network topologies using the Mininet command line. For instance:

  • Minimal network with two hosts:
    sudo mn --topo minimal
  • A network with 4 hosts and 4 switches. Each host is connected to a dedicated switch, all of the switches are connected linearly:
    sudo mn --topo linear,4
  • A network with 3 hosts all connected to 1 switch:
    sudo mn --topo single,3
  • A tree topology with defined depth and fan-out:
    sudo mn --topo tree,depth=2,fanout=2


You can also use Python to write your own Mininet topologies:

from mininet.topo import Topo   # only needed for class-based custom topologies (see below)
from mininet.net import Mininet

net = Mininet()

# create nodes
c0 = net.addController()
h0 = net.addHost('h0')
s0 = net.addSwitch('s0')
h1 = net.addHost('h1')

# create links between nodes (2-ways)
net.addLink(h0, s0)
net.addLink(h1, s0)

# Configure IP addresses in interfaces
h0.setIP('192.168.1.1', 24)
h1.setIP('192.168.1.2', 24)

net.start()
net.pingAll()
net.stop()
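
For reusable topologies, the usual pattern is to subclass Topo and load it from the mn command line. Below is a minimal sketch; the file name mytopo.py, the class name, and the topology key are just illustrative choices, not something defined by the course:

from mininet.topo import Topo

class TwoSwitchTopo(Topo):
    """Two linearly connected switches, each with one attached host."""
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        s2 = self.addSwitch('s2')
        self.addLink(h1, s1)
        self.addLink(h2, s2)
        self.addLink(s1, s2)

# Exposes the topology to the mn command line:
#   sudo mn --custom mytopo.py --topo mytopo
topos = {'mytopo': (lambda: TwoSwitchTopo())}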

The file system that each of the hosts sees is shared. That means that if you invoke a file operation on one of the hosts, such as writing to a file, then that file would be seen on all of the other hosts in your topology.

Control and Data Separation

The control plane is the logic that controls the forwarding behavior in the network. Examples of the control plane are:

  • routing protocols
  • network box configuration
    • firewall configuration
    • load balancer configuration

The data plane, on the other hand, forwards traffic according to the control plane logic. Examples of the data plane are:

  • IP forwarding
  • layer 2 switching

The data plane is sometimes implemented in hardware, but it is also increasingly implemented in software routers.

By separating the data plane and the control plane, each can evolve, and be developed independently. In particular, the network control software can evolve independently of the hardware.

Furthermore, the separation allows the network to be controlled from a single high-level software program. Doing so not only makes it easier to reason about the behavior of the network, but also makes it easier to debug and check that behavior.

Opportunities

Where does the separation help? Largely, it helps in data centers and in routing. It makes certain applications in enterprise networks easier to manage. It also helps in research networks, by allowing research networks to coexist with production networks on the same physical infrastructure.

Data Centers

In data centers it is relatively commonplace to move a virtual machine from one physical location to another as traffic demands shift. As virtual machines migrate, the central controller knows about the migration and can update the switch state accordingly, so that network paths change in accordance with the virtual machine migration.

How to address hosts in a data center?

  • Layer 2: a flat topology with less configuration and administration. However, tens of thousands of servers clearly results in poor scaling, because layer 2 networks rely heavily on broadcast.
  • Layer 3: uses existing routing protocols, and the scaling properties are much better, but the administration overhead is a lot higher because we have to configure those routing protocols.

To get the best of both worlds (layer 2 and layer 3), one idea is basically to:

  1. construct a large Layer 2 network using Layer 2 addressing, and
  2. make those addresses topology-dependent rather than topology-independent.

We can keep using MAC addresses, but renumber / readdress / reassign the hosts so that their MAC addresses depend on where they sit in the topology, i.e. topology-dependent MAC addresses.



Hosts can still send traffic to other hosts' IP addresses in this data center topology, but there is a problem: since we've reassigned MAC addresses to be topology-dependent, the hosts don't actually know that their MAC addresses have been reassigned. They still think they have their old, flat MAC addresses.

As we know, when a particular host wants to send traffic to another host's IP address, it will use the Address Resolution Protocol (ARP) to send out a broadcast query that asks “who has this particular IP address?” In other words, “what is the MAC address for this particular IP address that I would like to send to?”

The trick here is that we don't want the destination host to respond, because it still thinks it has its old flat MAC address. Instead, we want a separate controller to intercept all of these ARP queries. When a switch receives a query, it can redirect that query to a central controller (also called the fabric manager), which can then reply with the topology-dependent pseudo MAC address.

And then all of the traffic can be rewritten with the appropriate source and destination topology-dependent MAC addresses.
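
Here is a conceptual sketch of that scheme, loosely modeled on PortLand-style pseudo MACs (the bit layout, class names, and example values are illustrative assumptions, not the actual protocol): edge switches register each host's location with the fabric manager, which assigns a topology-dependent pseudo MAC and answers intercepted ARP queries with it.

from typing import Optional

def make_pmac(pod: int, position: int, port: int, vmid: int) -> str:
    """Encode pod(16 bits).position(8).port(8).vmid(16) as a 48-bit pseudo MAC."""
    value = (pod << 32) | (position << 24) | (port << 16) | vmid
    return ":".join(f"{(value >> s) & 0xFF:02x}" for s in range(40, -1, -8))

class FabricManager:
    """Central controller mapping host IPs to topology-dependent pseudo MACs."""
    def __init__(self):
        self.ip_to_pmac = {}

    def register_host(self, ip: str, pod: int, position: int, port: int, vmid: int):
        # An edge switch reports where the host is attached; the manager
        # assigns a pseudo MAC that encodes that location.
        self.ip_to_pmac[ip] = make_pmac(pod, position, port, vmid)

    def handle_arp(self, target_ip: str) -> Optional[str]:
        # A switch redirects the ARP query here instead of broadcasting it;
        # the reply carries the pseudo MAC, not the host's real flat MAC.
        return self.ip_to_pmac.get(target_ip)

fm = FabricManager()
fm.register_host("10.0.1.2", pod=1, position=0, port=2, vmid=1)
print(fm.handle_arp("10.0.1.2"))   # 00:01:00:02:00:01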

Interdomain Routing

Nowadays the policies in the interdomain routing protocol BGP (Border Gateway Protocol) are very constrained. BGP artificially constrains the routes that any particular router in the network can select, because route selection is based on a fixed set of steps. There are only a limited number of knobs to control inbound and outbound traffic, and it is also very difficult to incorporate other information into the decision.

The separation of the control and data planes makes it a lot easier to select routes based on a much richer set of policies, because the route controller can directly update the state in the data plane, independently of whatever software or other technology may be running on the routers and switches themselves.

This is useful in quite a few scenarios:

  1. Planned maintenance on an edge router. Suppose a network operator wants to do planned maintenance on the router egress1. They could use something like the RCP (Routing Control Platform) to directly tell the other routers to stop sending their traffic to egress1 and to send it to egress2 instead (see the sketch after this list).
  2. Letting customers control route selection. Suppose a particular customer wanted to use one data center to reach its services. The network could use the RCP to send traffic for one customer to one data center, and traffic for another customer to a different data center. This would be very difficult in today's networks, because BGP routes traffic based only on the destination prefix.
  3. Better security for interdomain routing. If a particular autonomous system learned two routes to a destination, and one of the routes looked suspicious while the other did not, the control plane (an RCP, for example) could tell the other routers in that autonomous system to use the preferred route.
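
To make scenario 1 concrete, here is a purely conceptual sketch (the router names egress1 and egress2 come from the example above; the function and preference values are invented for illustration): the route controller ranks candidate egress routers centrally, excludes any router drained for maintenance, and pushes the resulting choice to every ingress router.

def select_egress(candidates, drained, preference):
    """Pick the most preferred egress router that is not drained for maintenance."""
    usable = [r for r in candidates if r not in drained]
    return min(usable, key=lambda r: preference[r]) if usable else None

egress_routers = ["egress1", "egress2"]
preference = {"egress1": 0, "egress2": 1}   # lower value = more preferred

# Normal operation: every ingress router is told to use egress1.
print(select_egress(egress_routers, drained=set(), preference=preference))        # egress1

# The operator drains egress1; the controller recomputes and tells the
# ingress routers to send traffic to egress2 instead.
print(select_egress(egress_routers, drained={"egress1"}, preference=preference))  # egress2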

More Opportunities

Other opportunities include:

  • dynamic access control
  • seamless mobility migration
  • server load balancing
  • network virtualization
  • using multiple wireless access points
  • energy-efficient networking
  • adaptive traffic monitoring
  • denial-of-service attack detection

In the case of denial-of-service attacks, it is usually difficult to squelch the offending traffic near its source. The separation of data and control can be used to build measurement systems that detect attacks: the high-level control software can identify the entry points of the offending traffic and the victims under attack, and then, by modifying the routers, the offending traffic can be dropped at those entry points.



Challenges

However, there are several continual challenges in the separation of the data and control planes:

  1. Scalability: control elements are usually responsible for many (often thousands of) forwarding elements.
  2. Reliability: what happens when a controller fails or is compromised?
  3. Consistency: ensuring consistency across multiple control replicas.

RCP’s Approaches

Scalability

RCP must store routes and compute routing decisions for every router across the autonomous system. A single autonomous system may have hundreds to thousands of routers.

  • The RCP first eliminates redundancy: it stores a single copy of each route and avoids redundant computation.
  • Secondly, the RCP accelerates lookups by maintaining indexes that identify the affected routers. When a particular event happens, the RCP only needs to compute new routing information for the routers affected by that change, rather than recomputing the state for the entire network.
  • Finally, the RCP focuses on performing inter-domain (BGP) routing alone.

Reliability

The RCP design advocates having a “hot spare”, whereby multiple identical RCP servers essentially run in parallel. And the backup or standby RCP could take over in the event that the primary fails.

Each replica has its own feed of the routes from all the routers in the autonomous system. Each replica receives the same inputs and runs the same routing algorithm. There is no need for a consistency protocol if both replicas always see the same information.

Consistency

However, there are potential consistency problems, if different replicas see different information. What we want is for route assignments to be consistent even in the presence of failures and partitions.

We previously said that if every RCP replica receives the same input and runs the same algorithm, then the outputs should be consistent, and we want some way to guarantee that. Fortunately, a flooding-based Interior Gateway Protocol (IGP), such as OSPF or IS-IS, means that each replica already knows which partitions it is connected to. That information is enough to ensure that a replica only computes routing table information for the routers in the partitions it is connected to, and that alone is enough to guarantee correctness.

The solution to the problem is:

  1. A replica only uses state from the routers in a partition when assigning routes for that partition.
  2. Multiple RCPs receive the same state from each partition they can reach:
    • The IGP provides complete visibility and connectivity.
    • An RCP only acts on a partition if it has complete state for it.

No consistency protocol is needed to guarantee consistency in the steady state.
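
A minimal conceptual sketch of that rule (illustrative only; the function name and data shapes are assumptions): each replica learns from the flooding IGP which routers it can currently reach, and it computes routes only for the partitions where it sees every router.

def routers_to_serve(partitions, reachable):
    """Routers this replica should compute routes for.

    partitions: IGP-connected components, each a set of router names.
    reachable:  the routers this replica currently hears from via the IGP.
    """
    served = set()
    for partition in partitions:
        # Act on a partition only if this replica has complete state for it;
        # a replica inside the other partition will handle the rest.
        if partition <= reachable:
            served |= partition
    return served

partitions = [{"r1", "r2", "r3"}, {"r4", "r5"}]
print(routers_to_serve(partitions, {"r1", "r2", "r3", "r4"}))   # {'r1', 'r2', 'r3'}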



ONIX’s Approaches

Scalability and Consistency

For scalability, each ONIX instance keeps only a subset of the overall Network Information Base (NIB) in memory, an approach called partitioning. Various consistency protocols are then applied to maintain consistency across the different partitions. The ONIX design also describes the notion of a hierarchical set of controllers that aggregate statistics and topology information.

Reliability

ONIX discusses different types of failures that may occur in the network. First, ONIX simply assumes that it is the application's responsibility to detect and recover from network failures.

If the network failure affects reachability to ONIX, the design suggests that the use of a reliable protocol or multi-path routing could help ensure that the ONIX controller remains reachable, even in the case of a network failure.

If ONIX itself fails, the solution is to apply replication and then use a distributed coordination protocol amongst those replicas. Because ONIX has been designed for a far more general set of applications than the routing control platform, a more complicated distributed coordination protocol is necessary.

Routing Control Platform (RCP)

The Routing Control Platform is an early example of control and data plane separation. It uses the Border Gateway Protocol (BGP) as a control channel to control the forwarding decisions of routers inside an autonomous system.

Solve the Problems with BGP

There are many problems with BGP: it converges slowly and sometimes not at all. It causes routing loops. It’s misconfigured frequently, and certain network management tasks like traffic engineering are very difficult. And fixing BGP is pretty hard.

The fundamental problem with BGP is that the autonomous system (AS) is the logical entity for inter-domain routing. But the BGP state and logic are decomposed across routers, so no router has complete BGP state. And each router makes routing decisions based on a partial and incomplete view of the state across the entire autonomous system (AS). So representing a single coherent network-wide policy is quite difficult.

BGP also interacts in odd ways with other protocols, most notably with the IGP (Interior Gateway Protocol) that runs inside an autonomous system.

By contrast, the Routing Control Platform represents an AS as a single logical entity that operates on a complete view of an autonomous system’s routes, and computes those routes for all routers inside the autonomous system. The routers themselves no longer have to compute routes.

In a complete deployment of the Routing Control Platform, each autonomous system's RCP communicates routes with the RCPs in other autonomous systems. But there are incremental deployment phases that allow a single autonomous system to deploy an RCP and still gain some benefits.

Deploy RCPs

In the first phase, only a single autonomous system has to deploy an RCP.

  • Before phase one, we just start with conventional iBGP.
  • Afterwards, a single RCP learns the best iBGP routes and IGP topology inside a single autonomous system.

In this situation, only a single AS deploys RCP, but that single AS still gets the benefits of that deployment. An example application of this limited deployment is controlling path changes.



In the second phase, a single RCP can not only control routes inside its AS, but can implement AS-wide policy based on the eBGP routes it learns from its neighbors.

  • Before phase 2, RCP gets ‘best’ iBGP routes and IGP topology.
  • Afterwards, RCP gets all eBGP routes from neighbors.

At this stage, one particular example is efficient aggregation of IP prefixes.

In the third phase, all ASes have RCPs deployed, and the RCPs communicate with one another via some inter-AS protocol (which might not even be BGP).

  • Before this phase, the RCP gets all eBGP routes from its neighbors.
  • Afterwards, ASes exchange routes via their RCPs.

An application of this third phase of deployment is more flexible routing through better network management and various protocol improvements. Once RCPs are talking to one another, it’s possible to replace BGP entirely. So there is a very broad range of applications that could be deployed in this third phase.

The 4D Network Architecture

The 4D architecture has three goals:

  1. Achieve network-level objectives, rather than router-level objectives. Network operators should configure the entire network to achieve a goal, rather than configuring individual routers. Examples of such objectives are minimizing the maximum link utilization across the network and ensuring connectivity under all layer-2 failures.
  2. Achieve network-wide views. Complete visibility of what's going on in the network allows for more coherent decision making. These views might include the network-wide traffic matrix, the topology, and the status of equipment across the network.
  3. Direct control. The software subsystem that controls forwarding should have direct, sole control over data plane operations such as packet forwarding, filtering, marking, and buffering.

The 4D planes are:

  • Decision plane: all management and control.
  • Dissemination plane: communication from / to the routers.
  • Discovery plane: topology and traffic monitoring.
  • Data plane: traffic handling.

The 4D paper talks about eliminating the control plane. SDN still has a control plane, but it is not really what the 4D paper was calling a control plane.

“Control plane” in 4D means the distributed routing protocols implemented across the routers. What we call the control plane today corresponds to the “decision plane” in 4D.

The “dissemination plane” lives on, but today we call it a control channel:

  • In the RCP, the dissemination plane is BGP.
  • In OpenFlow, it is the secure control channel (secchan) between the switch and the controller.

The dissemination plane is nothing more than the control protocol that the control plane (decision plane in 4D) uses to talk to the data plane.

The 4D architecture followed on from the RCP work as a generalization, and in some sense inspired the entire SDN movement.



For more on Introduction to Software Defined Networking, please refer to the wonderful course here https://www.coursera.org/learn/sdn




