Can Edge Computing Exist Without the Edge? Part 1: The Edge
If the title sounds like a trick question, it really depends on who you ask. Semantically, it seems clear that if you take the "edge" and combine it with "computing" you get edge computing. But if you have been reading headlines, you would be justified in having doubts that the answer is that simple. You may also still be asking the fundamental question, "what is the edge, anyway?"
For all of the excitement around these ideas, it is fascinating to see just how vaguely defined, and therefore overused, the terms are. To be fair, as the technology evolves, businesses are continuously discovering new use cases for how to leverage edge computing. This leads opportunistic marketers to co-opt the terms edge and edge computing, which then causes analysts and the media to repeatedly question whether the existing definitions still hold.
As a basic framing, the edge is a place, distributed away from the core of the data center. The purpose of the edge is to shift data and decisions closer to users and devices to deliver better user experiences. As Gartner VP analyst Bob Gill said in The Edge Manifesto, the edge is designed "for the placement of content, compute and data center resources on the edge of the network, closer to concentrations of users. This augmentation of the traditional centralized data center model ensures a better user experience demanded by digital business."
Let's explore two of the prevailing platforms that claim to be at the edge: Amazon Web Services (AWS) and Akamai. The most logical definitions of the edge today speak about it as a place that's closest to users, devices, and data creation. As Akamai's CEO and co-founder Tom Leighton puts it: "The edge is where all the users are. It's where all the devices are. And it's where all the bandwidth is -- that's why more and more functionality is moving to the edge." As of this writing, AWS lists 217 points of presence while Akamai maintains over 4,100.
This highlights the difference between the architecture of the cloud versus that of the edge. The cloud is architected to create large, centralized installations that provide access to tools that help businesses build, deploy, and operate apps on that cloud platform. Cloud data centers are deployed in a relatively small number of geographic locations with large concentrations of businesses and users, with a focus on enabling availability and uptime for the applications developers build.
The edge is architected to create nimble, massively distributed installations that provide access to services that help businesses minimize latency, maximize scale, and provide a consistent security posture for apps deployed on any platform. Akamai's edge nodes are deployed in colocation facilities and embedded deep into carrier networks to maximize control over how content is delivered from the application to the user.
This is why Akamai maintains that the edge complements the cloud -- it surrounds and extends cloud infrastructure and applications, and creates a defensive shield around them, to improve the experience of users accessing the applications and content that businesses publish. Our goal has been to ensure that over 90% of the content and data that our customers' customers request is within one network hop of Akamai's edge. If we eliminate as many variables for delivering content to consumers (people or, increasingly, connected "things") as possible, we can ensure the best, most consistent experience and provide protection from a growing number of security threats.
Cloud providers cannot truly provide edge capabilities unless they deploy their services to the edge. These providers have created incredibly sophisticated capabilities, but when they are centrally deployed, they are not truly edge services. It's why AWS is investing in what it calls Local Zones -- to provide true edge capabilities in local markets. A Local Zone is basically an edge data center. It resides outside the regions where AWS currently has availability zones, and is designed to bring localized infrastructure closer to end users that require very low levels of latency (in the single-digit milliseconds). As of this writing, AWS has two Local Zones in Los Angeles, one deployed in December 2019 and the other in August 2020.
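To put that "single-digit milliseconds" figure in perspective, here is a minimal back-of-the-envelope sketch in Python. It models only propagation delay over fiber (roughly two-thirds the speed of light, about 200,000 km/s) and ignores routing, queuing, and processing time, so real-world latencies are higher. The specific distances are illustrative assumptions, not figures from this article.

```python
# Best-case round-trip propagation delay over fiber, illustrating why
# single-digit-millisecond latency requires physical proximity.
# Assumption: signal speed in fiber is ~2/3 the speed of light in a vacuum.
FIBER_SPEED_KM_PER_S = 200_000

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time, in ms, for a given one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

# Hypothetical distances, chosen for illustration only:
for label, km in [("nearby edge location", 50),
                  ("regional cloud data center", 1_000),
                  ("cross-country cloud region", 4_000)]:
    print(f"{label} ({km} km away): at least {min_round_trip_ms(km):.1f} ms")
```

Even in this idealized model, a request to a cloud region thousands of kilometers away cannot come back in single-digit milliseconds, while a server within tens of kilometers easily can -- which is exactly the gap that edge locations and Local Zones are meant to close.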
Cloud providers will likely never expand to include a truly robust set of edge locations. The platforms were architected to provide access to serverless compute capabilities via a centralized architecture, not a distributed one. Their business models are designed and optimized with availability and scalability in mind, versus low latency. And it takes a concerted effort over many years to develop the necessary relationships with thousands of local internet service providers (ISPs) and mobile network operators (MNOs) to create an edge network. Cloud providers may deploy some edge locations to satisfy specific use cases, but given the existence of an edge platform like Akamai that can already deliver low-latency computing and security solutions globally, the economics point to partnering, not building out competing systems.
So now that we have established the difference between the cloud and the edge, in upcoming posts we will dive into these concepts:
What edge computing is, and how it differs from cloud computing
Why economics dictate which data is best managed at the core, in the cloud, and on the edge
How the edge complements the cloud for latency, scalability, and security use cases