Larry Peterson

Having not cracked open Computer Networks: A Systems Approach for several years, the thing that most struck me as I started to update the material is how much of the Internet has its origins in the research community. Everyone knows that the ARPANET and later TCP/IP came out of DARPA-funded university research, but even as the Web burst onto the scene in the 1990s, it was still the research community that led the way in the Internet's coming-of-age. There's a direct line connecting papers published on congestion control, quality-of-service, multicast, real-time multimedia, security protocols, overlay networks, content distribution, and network telemetry to today's practice. And in many cases the technology has become so routine (think Skype, Netflix, Spotify) that it's easy to forget the history of how we got to where we are today. This makes updating the textbook feel strangely like writing an historical record.
From the perspective of writing a relevant textbook (or just making sense of the Internet), it's certainly important to understand the historical context. It is even more important to appreciate the thought process behind designing systems and solving problems, for which the Internet is clearly the best use case to study. But there are some interesting challenges in providing perspective on the Internet to a generation that has never known a world without it.

One is how to factor commercial reality into the discussion. Take video conferencing as an example. Once there was a single experimental prototype (vic/vat) used to gain experience and drive progress. Today there are Skype, GoToMeeting, WebEx, Google Hangouts, Zoom, UberConference, and many other commercial services. It's important to connect the dots between these familiar services and the underlying network capabilities and design principles. For example, while today's video conferencing services leverage the foundational work on both multicast and real-time protocols, they are closed-source systems implemented on top of the network, at the application level (the sketch at the end of this post shows what that looks like in code). They are able to do this by taking advantage of widely distributed points-of-presence made possible by the cloud. Teasing apart the roles of cloud providers, cloud services, and network operators is key to understanding how and where innovation happens today.

A second challenge is to identify open platforms and specifications that serve as good exemplars of the core ideas. Open source has become an important part of today's Internet ecosystem, surpassing the role of the IETF and other standards bodies. In the video conferencing realm, for example, projects like Jitsi, WebRTC, and Opus are important examples of the state of the art. But one look at the project lists on the Apache Foundation or Linux Foundation websites makes it clear that separating the signal from the noise is no trivial matter. Knowing how to navigate this unbelievably rich ecosystem is the new challenge.

A third challenge is to anticipate which cutting-edge activity happening today is going to be routine tomorrow. On this point, the answer seems obvious: it will be how network providers improve feature velocity through the softwarization and virtualization of the network. By another name, this is Software-Defined Networking (SDN), but more broadly, it represents a shift from building the network out of closed, proprietary appliances to building it on open software platforms running on commodity hardware. This shift is both pervasive and transformative. It impacts everything from high-performance switch design, to the architecture of access networks (5G, Fiber-to-the-Home), to how network operators deal with lifecycle management, to the blurring of the line between the Internet and the Cloud. Recognizing that this transformation is underway is essential to understanding where the Internet is headed next.
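To make the "implemented at the application level" point concrete, here is a minimal sketch of how a conferencing application sets up a real-time media session using the browser's standard WebRTC API. It is not how any particular commercial service works; the signaling helpers (signalingSend, onRemoteAnswer) and the STUN server address are hypothetical placeholders, since WebRTC deliberately leaves signaling to the application.

```typescript
// Minimal WebRTC session setup, sketched in TypeScript against the standard
// browser API. Signaling (signalingSend / onRemoteAnswer) is hypothetical:
// every service implements its own channel for exchanging SDP descriptions.
async function startCall(
  signalingSend: (offerSdp: string) => void,
  onRemoteAnswer: (register: (answerSdp: string) => void) => void
): Promise<RTCPeerConnection> {
  // STUN lets peers behind NATs discover their public addresses; commercial
  // services also deploy TURN relays at their cloud points-of-presence.
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.example.org:3478" }], // placeholder server
  });

  // Capture local audio/video and attach each track to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true, // audio is typically encoded with a codec like Opus
    video: true,
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Create an SDP offer and ship it to the peer over the signaling channel.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signalingSend(offer.sdp ?? "");

  // When the peer's answer arrives, complete the handshake; media then flows
  // over RTP/SRTP, peer-to-peer or via a relay, with no changes to the network.
  onRemoteAnswer(async (answerSdp) => {
    await pc.setRemoteDescription({ type: "answer", sdp: answerSdp });
  });

  return pc;
}
```

Note how everything here runs above the network: the application chooses its own signaling, rendezvous, and relay infrastructure, which is exactly where the cloud's distributed points-of-presence come in.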