Why Take It to the Edge
Edge computing is the next natural paradigm shift in IT, bringing a new wave of decentralization. Over the past decade, IT has embraced two seemingly opposing trends: the consolidation of infrastructure and data in private, public, or hybrid clouds, and the growing distribution and diversity of the devices that access them. How can these trends continue to coexist? The answer is at the edge.
The distances between a small number of large, centralized data centers and billions of mobile devices dispersed around the globe are too great for adequate performance -- whether to support latency-critical Internet of Things (IoT) connections or adaptive, responsive applications. While the demand for personalized experiences has increased, so has concern about holding and processing all user data, including personal information, in one place -- complete delegation of control to the cloud often conflicts with data protection regulations and Zero Trust principles. Finally, a key driver for cloud adoption was cost reduction, but growing interaction between end users and the cloud means more network round trips and higher traffic, storage, and compute costs.
Edge computing allows you to offload work from cloud resources. By pushing applications, data, and services away from centralized nodes to the network periphery, the edge brings functionality, insights, and decision-making closer to the users and things that act upon them. Shifting control and trust to the edge enables novel, user-centric applications and experiences while minimizing the transfer of personal data. With less distance for data to travel, unnecessary demands on the cloud are minimized, and so are the associated expenses.
Edge computing is serverless computing
What makes edge computing even more attractive and easier to adopt is that it is also serverless -- at least when you employ a solution like EdgeWorkers, which allows developers to quickly create functions and deploy them across Akamai’s globally distributed platform. No hardware is needed, and there is no runtime environment or OS to maintain. You don't need to worry about scalability, availability, or performance concerns such as cold start times or the distribution of code across the edge network -- and most importantly, you don't need to own or maintain an edge network.
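To make the model concrete, here is a minimal sketch of such a function. The handler shape (an `onClientRequest` function calling `request.respondWith`) follows the EdgeWorkers event model; the mock request object is our own scaffolding, added so the sketch can run outside the platform.

```javascript
// Minimal sketch of a serverless edge function. onClientRequest and
// respondWith follow the EdgeWorkers event model; the mock request
// below is assumed scaffolding so the sketch runs anywhere.

function onClientRequest(request) {
  // Answer the request directly from the edge node: no origin server
  // is contacted, and no infrastructure is provisioned or maintained.
  request.respondWith(
    200,
    { 'Content-Type': ['text/plain'] },
    'Hello from the edge'
  );
}

// Stand-in for the platform-provided request object, for local testing.
function makeMockRequest() {
  const sent = {};
  return {
    sent,
    respondWith(status, headers, body) {
      sent.status = status;
      sent.headers = headers;
      sent.body = body;
    },
  };
}
```

In a deployed EdgeWorker the handler would be exported from the bundle's main module; here the mock lets you exercise the same logic with plain Node.js.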
Building and operating a large, distributed network of edge nodes is an undertaking that doesn't make business sense even for companies large enough to run their own private cloud. The Akamai Intelligent Edge Platform is the world’s largest and most sophisticated edge network, with over 4,000 locations across nearly 140 countries, capable of scaling to more than 300 Tbps. By opening this platform up for edge computing, we handle all the major aspects of running your code on our network, so your developers can focus on what they do best: creating and deploying applications that provide value to your end users and differentiation for your business.
The new paradigm: put your code where it runs best
Edge computing is decisively complementary to cloud computing, not competitive with it. Private, public, and hybrid clouds will remain a crucial strategy for infrastructure and data, but edge computing will help you use them more efficiently by allowing you to put your code where it runs best.
The paradigm shift is not about moving everything to the edge, but about distributing workloads to more strategically located places. Do you need to keep distances, traffic, and latency between data and consumers low? Is your application required to minimize the distribution and centralization of sensitive data, such as personally identifiable information (PII)? Do you plan to use insights based on user context and location to make real-time personalization decisions? If you have any of these requirements, then the best place for that code is almost always the edge.
Unlike the challenges of cloud adoption, edge computing does not require intrusive changes or a radical deviation from existing practices. Instead, it is an additional tool that extends and enhances existing systems, applications, and concepts. After all, most organizations already use some form of edge technology for caching, monitoring, or protection. And if that can be extended to execute custom code, use cases for edge computing can be identified and implemented as needed, and without the same migration pains as the move to the cloud.
By using EdgeWorkers, you can reduce wait times for end users by cutting the latency of geolocation-based personalization from seconds to single-digit milliseconds, or by performing regulation-mandated consent evaluation locally at the edge rather than calling back to the cloud. You can also perform URL and routing transformations directly at the edge to optimize caching and minimize round-tripping, resulting in shorter load times and decreased network traffic and compute cycles at the origin servers.
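As an illustration of the first use case, here is a hedged sketch of geolocation-based personalization. In a real EdgeWorker this logic would sit inside the onClientRequest event handler and read the location the platform attaches to each request; the request shape and field name used below (`userLocation.country`) are assumptions for the sketch, which is written as a pure function so it runs anywhere.

```javascript
// Sketch of geolocation-based personalization at the edge. The decision
// is a pure function of the visitor's location, so no round trip to an
// origin or cloud geolocation service is needed. The request shape
// (userLocation.country) mirrors what an edge platform would populate.

const GREETINGS = {
  DE: 'Willkommen',
  FR: 'Bienvenue',
  JP: 'ようこそ',
};

function personalizedGreeting(countryCode) {
  // Fall back to a default when the visitor's country is unknown.
  return GREETINGS[countryCode] || 'Welcome';
}

function handleRequest(request) {
  const country = request.userLocation && request.userLocation.country;
  return {
    status: 200,
    headers: { 'Content-Type': ['text/html; charset=utf-8'] },
    body: '<h1>' + personalizedGreeting(country) + '</h1>',
  };
}
```

Because the visitor's country is already known at the edge node handling the request, the personalization decision adds no network hops; the same pattern extends to consent evaluation, where the applicable regional rule set is selected locally instead of in the cloud.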
The possibilities for custom code at the edge are almost infinite. The Akamai Intelligent Edge Platform is open for developers to use -- serverless, and with zero effort. Just bring your own code -- we’ll take care of the rest.
To get started on serverless computing at the edge, you can sign up for EdgeWorkers.