Bruce Davie

For almost as long as there have been packet-switched networks, there have been ideas about how to virtualize them. For example, there were early debates in the networking community about the merits of "virtual circuits" versus connectionless networks. But the concept of network virtualization has become more widespread in recent years, helped along by the rise of SDN as an enabling technology.
Virtualization has a robust history in computer science, but there remains some confusion about precisely what the term means. Arguably this is due in part to the colloquial use of "virtual" as a synonym for "almost", among its many other meanings. Virtual memory provides an easy example to help understand what virtualization means in computing. Virtual memory creates the abstraction of a large and private pool of memory resources, even though the underlying physical memory may be shared by many applications and users and may be considerably smaller than the apparent pool of virtual memory. This abstraction lets programmers operate under the illusion that there is plenty of memory and that no one else is using it, while under the covers the memory management system takes care of things like mapping the virtual memory to physical resources and avoiding conflicts between users.

Similarly, server virtualization presents the abstraction of a virtual machine (VM), which has all the features of a physical machine. Again, there may be many VMs supported on a single physical server, and the operating system and users on the virtual machine are happily unaware that the VM is being mapped onto physical resources. A key point here is that virtualization of computing resources preserves the abstractions that existed before they were virtualized. This is important because it means that users of those abstractions don't need to change: they see a faithful reproduction of the thing being virtualized.

So what happens when we try to virtualize networks? We are able to present familiar abstractions to users of the virtual network, while mapping those abstractions onto the physical network in a way that insulates the user from the complexity of the mapping. An early success for virtual networking came with the introduction of virtual private networks (VPNs), which allowed carriers to present corporate customers with the illusion that they had their own private network, even though in reality they were sharing underlying resources with many other users. One instance of this was the flavor of VPN known as MPLS VPNs, which gave each customer their own private address space and routing tables, along with control over the topology of their network, all implemented on top of a single IP network.

VPNs, however, only virtualize a few resources, notably addressing and routing tables. Network virtualization as commonly understood today goes further, virtualizing every aspect of networking. That means that a virtual network today supports all the basic abstractions of a physical network (switching, routing, firewalling, load balancing), virtualizing the entire network stack from layers two through seven. In this sense, virtual networks are analogous to virtual machines, with their support for all the abstractions of a server: CPU, storage, I/O, and so on.

Like virtual machines, virtual networks also enable a whole set of operational advances. They can be created rapidly under programmatic control (as the sketch below illustrates); snapshots can be taken; networks can be cloned and migrated to entirely new locations, e.g., for disaster recovery.

There's still lots of room for growth in the virtual networking space. Modern cloud operators increasingly depend on virtual networks to automate their provisioning of services, and operators of emerging 5G networks are looking at options for virtualizing their networks.
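To give a flavor of what "created under programmatic control" can look like, here is a minimal sketch in Python. It is purely hypothetical: the VirtualNetwork class and its methods are invented for illustration and do not correspond to any real product's API; they simply bundle the layer-two-through-seven abstractions described above into an object that can be created, snapshotted, and cloned in software.

```python
# Hypothetical sketch only: the VirtualNetwork API below is invented for
# illustration and does not correspond to any real product or library.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VirtualNetwork:
    """A virtual network bundles the familiar abstractions of a physical
    network (segments, routing, firewalling, load balancing) as data that
    can be created, snapshotted, and cloned programmatically."""
    name: str
    segments: List[str] = field(default_factory=list)        # L2 switching
    routes: Dict[str, str] = field(default_factory=dict)     # L3 routing
    firewall_rules: List[str] = field(default_factory=list)  # L4+ policy
    load_balancers: List[str] = field(default_factory=list)  # L4-L7 services

    def clone(self, new_name: str) -> "VirtualNetwork":
        """Cloning copies only the description; the platform re-maps it onto
        whatever physical resources are available (e.g., at a recovery site)."""
        return VirtualNetwork(new_name, list(self.segments), dict(self.routes),
                              list(self.firewall_rules), list(self.load_balancers))


# Create a three-tier application network entirely in software.
vn = VirtualNetwork(name="app-prod")
vn.segments += ["web", "app", "db"]
vn.routes["0.0.0.0/0"] = "virtual-router-1"
vn.firewall_rules.append("allow web -> app tcp/8080")
vn.load_balancers.append("vip 10.0.0.10 -> segment web")

# Clone it for disaster recovery at another site.
dr_copy = vn.clone("app-dr")
print(dr_copy.name, dr_copy.segments)
```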
For a more in-depth discussion of this topic, we refer you to this blog post, co-authored with Martin Casado, one of the pioneers of both SDN and network virtualization.
Larry Peterson

Earlier posts talked about the softwarization of the network in fairly general terms, but the idea got rolling ten years ago with the introduction of Software Defined Networks (SDN). The fundamental idea of SDN is to decouple the network control plane (i.e., where routing algorithms like RIP, OSPF, and BGP run) from the network data plane (i.e., where packet forwarding decisions get made), with the former moved into software running on commodity servers and the latter implemented by white-box switches like the ones described in Section 3.4 of the book.

The original enabling idea of SDN was to define a standard interface between the control plane and the data plane, so that any implementation of the control plane could talk to any implementation of the data plane; this breaks the dependency on any one vendor's bundled solution. The original interface is called OpenFlow, and this idea of decoupling the control and data planes came to be known as disaggregation.

OpenFlow was a great first step, but a decade of experience has revealed that it is not sufficient as the interface for controlling the data plane. This is for the same reason any API layered on top of hardware falls short: it does not expose the full range of features that switch vendors put into their hardware. To address this shortcoming, the SDN community is now working on a language-based approach to specifying how the control and data planes interact. The language is called P4, and it provides a richer model of the switch's packet forwarding pipeline.

Another important aspect of disaggregation is that a logically centralized control plane can be used to control a distributed network data plane. We say logically centralized because, while the state collected by the control plane is maintained in a global data structure (e.g., a Network Map), the implementation of this data structure could still be distributed over multiple servers (i.e., it could run in a cloud). This is important for both scalability and availability, with the two planes configured and scaled independently of each other. The idea took off quickly in the cloud, with today's cloud providers running SDN-based solutions both within their datacenters and across the backbone networks that interconnect their datacenters.

A consequence of this design that isn't immediately obvious is that a logically centralized control plane doesn't just manage a network of physical (hardware) switches that interconnects physical servers; it also manages a network of virtual (software) switches that interconnect virtual servers (e.g., Virtual Machines and containers). If you're counting "switch ports" (a good measure of all the devices connected to your network), the number of virtual ports in the Internet shot past the number of physical ports in 2012.

One of the other key enablers of SDN's success, as depicted in the Figure, is the Network Operating System (NOS). Like a server operating system (e.g., Linux, iOS, Android, Windows) that provides a set of high-level abstractions to make it easier to implement applications (e.g., you can read and write files instead of directly accessing disk drives), a NOS makes it easier to implement network control functionality, otherwise known as Control Apps. A good NOS abstracts the details of the network switches and presents a "network map" abstraction to the application developer.
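To make the "network map" abstraction a little more concrete, here is a small sketch in Python. It does not use any real NOS API; the data structure is invented for illustration, representing the network as switches, ports, and links whose state the NOS keeps up to date, plus a helper that collapses it into the cost graph a control app would actually traverse.

```python
# Hypothetical sketch of the kind of network map a NOS might expose.
# None of these names come from a real NOS; they are for illustration only.

network_map = {
    "switches": {
        "s1": {"ports": {1: "up", 2: "up"}},
        "s2": {"ports": {1: "up", 2: "up", 3: "down"}},
        "s3": {"ports": {1: "up", 2: "up"}},
    },
    # Links are (switch, port) pairs with a cost; the NOS updates this
    # structure as it detects ports and links going up and down.
    "links": [
        {"ends": [("s1", 1), ("s2", 1)], "cost": 1, "state": "up"},
        {"ends": [("s2", 2), ("s3", 1)], "cost": 1, "state": "up"},
        {"ends": [("s1", 2), ("s3", 2)], "cost": 5, "state": "up"},
    ],
}

def adjacency(net):
    """Collapse the map into a simple cost graph a control app can use."""
    graph = {sw: {} for sw in net["switches"]}
    for link in net["links"]:
        if link["state"] != "up":
            continue
        (a, _), (b, _) = link["ends"]
        graph[a][b] = link["cost"]
        graph[b][a] = link["cost"]
    return graph

print(adjacency(network_map))
```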
The NOS detects changes in the underlying network (e.g., switches, ports, and links going up and down), and the control application simply implements the behavior it wants on this abstract graph. What that means is that the NOS takes on the burden of collecting network state (the hard part of distributed algorithms like Link-State and Distance-Vector), leaving the control app free to simply compute shortest paths over the graph and load the resulting forwarding rules into the underlying switches (see the sketch at the end of this post). By centralizing this logic, SDN is able to produce a globally optimized solution, and the published evidence confirms this advantage (e.g., Google's private wide-area network, B4).

As much of an advantage as the cloud providers have been able to get out of SDN, its adoption in enterprises and Telcos has been much slower. This is partly a matter of how much engineering capability different markets can bring to bear on managing their networks: the Googles, Microsofts, and Amazons of the world have the engineers and DevOps skills needed to take advantage of this technology, whereas others still prefer pre-packaged and integrated solutions that support the management and command-line interfaces they are familiar with. As is often the case, business culture changes more slowly than technology.
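To close with a concrete example of the control-app logic described above (again hypothetical, not taken from any real NOS): the sketch below runs a shortest-path computation over a toy cost graph, of the same shape as the one produced by the earlier adjacency() helper, and derives the per-switch forwarding rules the NOS would push down to the data plane.

```python
# Hypothetical control-app logic, not a real NOS API: given the cost graph
# derived from the network map, compute each switch's next hop toward a
# destination, then turn that into forwarding rules for the NOS to install.

import heapq

def next_hops_toward(graph, dst):
    """Dijkstra rooted at dst: next_hop[v] is the neighbor v should forward
    to in order to reach dst along a shortest path."""
    dist = {dst: 0}
    next_hop = {}
    pq = [(0, dst)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                next_hop[v] = u
                heapq.heappush(pq, (nd, v))
    return next_hop

# Toy cost graph: s1--s2 and s2--s3 are cheap, the direct s1--s3 link is not.
graph = {
    "s1": {"s2": 1, "s3": 5},
    "s2": {"s1": 1, "s3": 1},
    "s3": {"s1": 5, "s2": 1},
}

# "Forwarding rules" the control app would hand to the NOS to push down.
for switch, nh in next_hops_toward(graph, "s3").items():
    print(f"install on {switch}: traffic for s3 -> forward toward {nh}")
```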