We are officially shutting down PlanetLab at the end of May, with our last major user community (MeasurementLab) having now migrated to new infrastructure. It was 18 years ago this month (March 2002) that 30 systems researchers got together at the Intel Lab in Berkeley to talk about how we could cooperate to build out a distributed testbed to support our research. There were no funding agencies in the room, no study group, and no platinum sponsors. Just a group of systems people who wanted to get their research done. We left the meeting with an offer from David Tennenhouse, then Director of Research at Intel, to buy 100 servers to bootstrap the effort. The first machines came online at Princeton and Berkeley in July. In August, the day before SIGCOMM, a second underground meeting happened in Pittsburgh, this time drawing 80 people. By October, we had all 100 seed machines up and running at 42 sites. The rest, as they say, is history.
In retrospect, it was a unique moment in time. The distributed systems community, having spent the previous 15 years focused on the LAN, was moving on to wide-area networking challenges. The networking community, having architected the Internet, was ruminating about how it had become ossified. Both lacked a realistic platform to work on. My own epiphany came during an Internet End-to-End Research Group meeting in 2001, when I found myself in a room full of the Internet’s best-and-brightest, trying to figure out how we could possibly convince Cisco to interpret one bit in the IP header differently. I realized we needed to try a different approach.
PlanetLab enabled a lot of good research, much of which has been documented in the website's bibliography. Those research results are certainly important, but from my point of view, PlanetLab has had impact in other, more lasting ways. One was to provide a model for how computer scientists can share research infrastructure. Many of the early difficulties we faced deploying PlanetLab had to do with convincing University CIOs that hosting PlanetLab servers had an acceptable risk/reward tradeoff. A happy mistake we made early on was asking the VP for Research (not the University CIO) for permission to install servers on their campus. By the time the security-minded folks figured out what was going on, it was too late. They had no choice but to invent Network DMZs as a workaround.
A second was to expose computer scientists to the real-world operational issues that are inevitable when you're running Internet services. Researchers who had been safely working in their labs were suddenly exposed to all sorts of unexpected user behavior, both benign and malicious, not to mention the challenges of keeping a service running under varied network conditions. There were a lot of lessons learned under fire, with unexpected traffic bursts (immediately followed by email from upset University system admins) being a common rite of passage for both grad students and their advisors. I'm not surprised when I visit Google and catch up with former faculty colleagues to hear that they now spend all their time worrying about operational challenges. Suddenly, network management is cool.
Then there were the non-technical, policy-related issues, forcing us to deal with everything from DMCA take-down notices to FBI subpoenas to irate web-surfers threatening to call the local Sheriff on us. These and similar episodes were among the most eye-opening aspects of the entire experience. They were certainly the best source of war stories, and an opportunity to get to know Princeton’s General Counsel quite well. Setting policy and making judgements about content is really hard… who knew.
Last, but certainly not least, are the people. In addition to the fantastic and dedicated group of people that helped build and operate PlanetLab, the most gratifying thing that happens to me (even still today) is running into people, usually working for an Internet company of one sort or another, who tell me that PlanetLab was an important part of their graduate student experience. If you are one of those people and I haven't run into you recently (or even if I have), please leave a comment and let me know what you're up to. It will be good to hear from you.
Two industry trends with significant momentum are on a collision course. One is the cloud, which in pursuit of low-latency/high-bandwidth applications is moving out of the datacenter and towards the edge. The promise and potential of applications ranging from Internet-of-Things (IoT) to Immersive UIs, Public Safety, Autonomous Vehicles, and Automated Factories, has triggered a gold rush to build edge platforms and services. The other is the access network that connects homes, businesses, and mobile devices to the Internet. Network operators (Telcos and CableCos) are transitioning from a reliance on closed and proprietary hardware to open architectures leveraging disaggregated and virtualized software running on white-box servers, switches, and access devices.
The confluence of cloud and access technologies raises the possibility of convergence. For the cloud, access networks provide low-latency connectivity to end users and their devices, with 5G in particular providing native support for the mobility of those devices. For the access network, cloud technology enables network operators to enjoy the CAPEX & OPEX savings that come from replacing purpose-built appliances with commodity hardware, as well as to accelerate the pace of innovation through the softwarization of the access network.
It is clear that the confluence of cloud and access technologies at the access-edge is rich with opportunities to innovate, and this is what motivates the CORD-related platforms we are building at ONF. But there are widely differing perspectives on whether the edge is on-premise, on-vehicle, in the cell tower, in the Central Office, distributed across a metro area, or all of the above. With multiple incumbent players—e.g., network operators, cloud providers, cell tower providers—and countless startups jockeying for position, it's impossible to predict how the dust will settle.
On the one hand, cloud providers believe that by saturating metro areas with edge clusters and abstracting away the access network, they can build an edge presence with low enough latency and high enough bandwidth to serve the next generation of edge applications. In this scenario, the access network remains a dumb bit-pipe, allowing cloud providers to excel at what they do best: run scalable cloud services on commodity hardware. On the other hand, network operators believe that by building the next generation access network using cloud technology, they will be able to co-locate edge applications in the access network. This scenario comes with built-in advantages: an existing and widely distributed physical footprint, existing operational support, and native support for both mobility and guaranteed service.
While acknowledging both of these possibilities, there is a third outcome that not only merits consideration, but is also worth actively working towards: the democratization of the network edge. The idea is to make the access-edge accessible to anyone, and not strictly the domain of incumbent cloud providers or network operators. There are three reasons to be optimistic about this possibility:
Having not cracked open Computer Networks: A Systems Approach for several years, the thing that most struck me as I started to update the material is how much of the Internet has its origins in the research community. Everyone knows that the ARPANET and later TCP/IP came out of DARPA-funded university research, but even as the Web burst onto the scene in the 1990s, it was still the research community that led the way in the Internet's coming-of-age. There's a direct line connecting papers published on congestion control, quality-of-service, multicast, real-time multimedia, security protocols, overlay networks, content distribution, and network telemetry to today's practice. And in many cases, the technology has become so routine (think Skype, Netflix, Spotify) that it's easy to forget the history of how we got to where we are today. This makes updating the textbook feel strangely like writing a historical record.
From the perspective of writing a relevant textbook (or just making sense of the Internet), certainly it's important to understand the historical context. It is even more important to appreciate the thought process of designing systems and solving problems, for which the Internet is clearly the best use case to study. But there are some interesting challenges in providing perspective on the Internet to a generation that has never known a world without the Internet.
One is how to factor commercial reality into the discussion. Take video conferencing as an example. Once there was a single experimental prototype (vic/vat) used to gain experience and drive progress. Today there is Skype, GoToMeeting, WebEx, Google Hangouts, Zoom, UberConference, and many other commercial services. It's important to connect the dots between these familiar services and the underlying network capabilities and design principles. For example, while today's video conferencing services leverage the foundational work on both multicast and real-time protocols, they are closed-source systems implemented on top of the network, at the application level. They are able to do this by taking advantage of widely distributed points-of-presence made possible by the cloud. Teasing apart the roles of cloud providers, cloud services, and network operators is key to understanding how and where innovation happens today.
A second is to identify open platforms and specifications that serve as good exemplars for the core ideas. Open source has become an important part of today's Internet ecosystem, surpassing the role of the IETF and other standards bodies. In the video conferencing realm, for example, projects like Jitsi, WebRTC, and Opus are important examples of the state-of-the-art. But one look at the projects list on the Apache Foundation or Linux Foundation web sites makes it clear that separating the signal from the noise is no trivial matter. Knowing how to navigate this unbelievably rich ecosystem is the new challenge.
A third is to anticipate what cutting-edge activity happening today is going to be routine tomorrow. On this point, the answer seems obvious. It will be how network providers improve feature velocity through the softwarization and virtualization of the network. By another name, this is Software Defined Networking (SDN), but more broadly, this represents a shift from building the network using closed/proprietary appliances to using open software platforms running on commodity hardware. This shift is both pervasive and transformative. It impacts everything from high-performance switch design, to architecting access networks (5G, Fiber-to-the-Home), to how network operators deal with lifecycle management, to the blurring of the line between the Internet and the Cloud. Recognizing that this transformation is underway is essential to understanding where the Internet is headed next.