Larry Peterson

It is important to recognize the various perspectives on computer networks (e.g., those of network architects, application developers, end users, and network operators) to understand the technical requirements that shape how networks are designed and built. But this presumes all design decisions are purely technical, which is certainly not the case. Many other factors, from economic forces to government policy to societal influences to ethical considerations, also influence how networks are designed and built.
Of these, the marketplace is often the most influential. It corresponds to the interplay between network operators that sell access and connectivity (e.g., AT&T, Comcast, Verizon, DT, NTT, China Mobile), network equipment vendors that sell hardware to network operators (e.g., Cisco, Juniper, Ericsson, Nokia, Huawei, NEC), cloud providers that host content and scalable applications in their datacenters (e.g., Google, Amazon, Microsoft), service providers that deliver content and cloud apps to end users (e.g., Facebook, Apple, Netflix, Spotify), and of course, the subscribers and customers that download content and run cloud applications (i.e., individuals, but also enterprises and businesses).

Not surprisingly, the lines between all these players are not crisp, with many companies playing multiple roles. For example, service providers like Facebook run their own clouds, and network operators like Comcast and AT&T own their own content. The most notable example of this crossover is the large cloud providers, who (a) build their own networking equipment, (b) deploy and operate their own networks, and (c) provide end-user services and applications on top of their networks. This is notable because it challenges the implicit assumptions of the simple "textbook" version of the technical design process.

One such assumption is that designing a network is a one-time activity: build it once and use it forever (modulo hardware upgrades so users can enjoy the benefits of the latest performance improvements). A second is that the job of designing and implementing the network is completely divorced from the job of operating the network. Neither of these assumptions is quite right.

On the first point, the network's design is clearly evolving; the only question is how fast. Historically, the feature upgrade cycle involved an interaction between network operators and their vendor partners (often collaborating through the standardization process), with timelines measured in years. But anyone who has downloaded and used the latest cloud app knows how glacially slow anything measured in years is by today's standards.

On the second point, the companies that build networks are almost always the same ones that operate them; the only question is whether they develop their own features or outsource that process to their vendors. If we once again look to the cloud for inspiration, we see that develop-and-operate isn't just true at the corporate level, but is also how the fastest-moving cloud companies organize their engineering teams: around the DevOps model. (If you are unfamiliar with DevOps, we recommend you read "Site Reliability Engineering: How Google Runs Production Systems" to see how Google practices it.)

What this all means is that computer networks are now in the midst of a major transformation, due largely to the market pressure being applied by agile cloud providers. Network operators are trying to simultaneously accelerate the pace of innovation (sometimes known as feature velocity) and yet continue to offer a reliable service (preserve stability). They are increasingly doing this by adopting the best practices of cloud providers, which can be summarized in two major themes: (1) take advantage of commodity hardware and move all intelligence into software, and (2) adopt agile engineering processes that break down the barriers between development and operations.
This transformation is sometimes called the "cloudification" or "softwarization" of the network, but by another name it's known as Software Defined Networking (SDN). Whatever you call it, this new perspective will (eventually) be a game changer, not so much in terms of how we address the fundamental technical challenges of framing, routing, fragmentation/reassembly, packet scheduling, congestion control, security, and so on, but in terms of how rapidly the network evolves to support new features and to accommodate the latest advances in technology. This general theme is important, and we plan to return to it in future posts. Understanding networks is partly about understanding the technical underpinnings, but also partly about how market forces (and other factors) drive change. Being able to make an informed decision between technical approach A and technical approach B is a necessary first step, but being able to deploy that solution and bring it to market more rapidly and at lower cost than the competition is just as important, if not more so.
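To make the "move all intelligence into software" theme a bit more concrete, here is a minimal sketch of the match-action abstraction that SDN builds on. It is purely illustrative (all class and field names are hypothetical, not the API of any real controller or switch): a controller program written in ordinary software decides forwarding policy and installs it as rules in a flow table, while the commodity switch simply matches packets against that table and applies the corresponding actions.

```python
# A minimal sketch of the SDN match-action idea (illustrative only; all names
# here are hypothetical, not the API of any real controller or switch).
# The "intelligence" (which traffic goes where) lives in ordinary software
# that installs rules; the switch just matches packets against those rules.

from dataclasses import dataclass
from typing import List, Optional


@dataclass(frozen=True)
class Match:
    """Header fields a rule matches on; None acts as a wildcard."""
    dst_ip: Optional[str] = None
    dst_port: Optional[int] = None


@dataclass
class Rule:
    match: Match
    action: str          # e.g., "forward:2" (send out port 2) or "drop"
    priority: int = 0    # higher priority wins when multiple rules match


class FlowTable:
    """A toy flow table: the highest-priority matching rule decides the action."""

    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def install(self, rule: Rule) -> None:
        # Controller-side software pushes policy into the table.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def lookup(self, dst_ip: str, dst_port: int) -> str:
        # Switch-side behavior: apply the first (highest-priority) match.
        for r in self.rules:
            if (r.match.dst_ip in (None, dst_ip)
                    and r.match.dst_port in (None, dst_port)):
                return r.action
        return "drop"  # default action when nothing matches


# Example policy: forward web traffic out port 2, drop everything else.
table = FlowTable()
table.install(Rule(Match(dst_port=80), action="forward:2", priority=10))
print(table.lookup("10.0.0.5", 80))  # -> forward:2
print(table.lookup("10.0.0.5", 22))  # -> drop
```

The details of the data structures don't matter; the point is that forwarding behavior is determined by a program that can be updated as quickly as any other piece of software, rather than being baked into a closed appliance.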
Larry Peterson

Having not cracked open Computer Networks: A Systems Approach for several years, the thing that most struck me as I started to update the material is how much of the Internet has its origins in the research community. Everyone knows that the ARPANET and later TCP/IP came out of DARPA-funded university research, but even as the Web burst onto the scene in the 1990s, it was still the research community that led the way in the Internet's coming-of-age. There's a direct line connecting papers published on congestion control, quality-of-service, multicast, real-time multimedia, security protocols, overlay networks, content distribution, and network telemetry to today's practice. And in many cases, the technology has become so routine (think Skype, Netflix, Spotify) that it's easy to forget the history of how we got to where we are today. This makes updating the textbook feel strangely like writing an historical record.
From the perspective of writing a relevant textbook (or just making sense of the Internet), it's certainly important to understand the historical context. It is even more important to appreciate the thought process of designing systems and solving problems, for which the Internet is clearly the best use case to study. But there are some interesting challenges in providing perspective on the Internet to a generation that has never known a world without the Internet.

One is how to factor commercial reality into the discussion. Take video conferencing as an example. Once there was a single experimental prototype (vic/vat) used to gain experience and drive progress. Today there is Skype, GoToMeeting, WebEx, Google Hangouts, Zoom, UberConference, and many other commercial services. It's important to connect the dots between these familiar services and the underlying network capabilities and design principles. For example, while today's video conferencing services leverage the foundational work on both multicast and real-time protocols, they are closed-source systems implemented on top of the network, at the application level. They are able to do this by taking advantage of widely distributed points-of-presence made possible by the cloud. Teasing apart the roles of cloud providers, cloud services, and network operators is key to understanding how and where innovation happens today.

A second is to identify open platforms and specifications that serve as good exemplars for the core ideas. Open source has become an important part of today's Internet ecosystem, surpassing the role of the IETF and other standards bodies. In the video conferencing realm, for example, projects like Jitsi, WebRTC, and Opus are important examples of the state of the art. But one look at the projects list on the Apache Foundation or Linux Foundation web sites makes it clear that separating the signal from the noise is no trivial matter. Knowing how to navigate this unbelievably rich ecosystem is the new challenge.

A third is to anticipate what cutting-edge activity happening today is going to be routine tomorrow. On this point, the answer seems obvious: it will be how network providers improve feature velocity through the softwarization and virtualization of the network. By another name, this is Software Defined Networking (SDN), but more broadly, it represents a shift from building the network out of closed, proprietary appliances to using open software platforms running on commodity hardware. This shift is both pervasive and transformative. It impacts everything from high-performance switch design, to architecting access networks (5G, Fiber-to-the-Home), to how network operators deal with lifecycle management, to the blurring of the line between the Internet and the Cloud. Recognizing that this transformation is underway is essential to understanding where the Internet is headed next.