Google Finally Gets The Edge Computing Strategy Right With Distributed Cloud Edge – Forbes

Posted: April 9, 2022 at 4:05 am

Announced at the Google Cloud Next '21 conference, Google Distributed Cloud (GDC) plays a critical role in the success of Anthos by making it relevant to telecom operators and enterprise customers. Google Distributed Cloud Edge, a part of GDC, aims to make Anthos the foundation for running 5G infrastructure and modern workloads such as AI and analytics.

Recently, Google announced the general availability of GDC Edge, sharing details of the hardware configurations and requirements.


In its initial form, GDC Edge comes in two form factors: a rack-based configuration and the GDC Edge Appliance. Let's take a closer look at each of these choices.

The rack-based configuration targets telecom operators and communications service providers (CSPs) running 5G core and radio access networks (RAN). CSPs can expose the same infrastructure to their end customers for workloads, such as AI inference, that need ultra-low latency.

The location where the rack-based hardware runs is designated as a Distributed Cloud Edge Zone. Each zone runs on dedicated hardware that Google provides, deploys, operates, and maintains. The hardware consists of six servers and two top-of-rack (ToR) switches connecting the servers to the local network. In terms of storage, each physical server comes with 4 TiB of disk. The gross weight of a typical rack is 900 lbs (408 kg). The Distributed Cloud Edge rack arrives pre-configured with the hardware, network, and Google Cloud settings specified when it was ordered.

Once a Distributed Cloud Edge (DCE) zone is fully configured, customers can group one or more servers from the rack to create a NodePool. Each node in the NodePool acts as a Kubernetes worker node connected to a Kubernetes control plane running in the nearest Google Cloud region.

This distributed topology gives Google the flexibility to upgrade, patch, and manage the Kubernetes infrastructure with minimal disruption to customer workloads. It allows DCE to benefit from a secure and highly available control plane without taking up the processing capacity on the nodes.

Google took a unique approach to edge computing by moving the worker nodes to the edge while keeping the control plane in the cloud. This is very similar to how Google manages GKE, except that the worker nodes belong to NodePools deployed at the edge rather than in a Google Cloud region.
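To make that topology concrete, here is a minimal sketch that uses the Kubernetes Python client to confirm the edge machines appear as ordinary worker nodes served by the cloud-hosted control plane. The kubeconfig context name and the node-pool label key are hypothetical, not official GDC Edge identifiers.

```python
# Minimal sketch: inspect a GDC Edge cluster from a workstation.
# Assumes the Kubernetes Python client is installed (pip install kubernetes)
# and a kubeconfig context named "gdc-edge-cluster" exists (hypothetical name).
from kubernetes import client, config

config.load_kube_config(context="gdc-edge-cluster")  # control plane endpoint lives in Google Cloud
v1 = client.CoreV1Api()

# Each entry returned here is a physical server from the rack acting as a worker node.
for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    # The "nodepool" label key below is illustrative, not an official GDC Edge label.
    print(node.metadata.name, labels.get("nodepool", "n/a"), "Ready:", ready)
```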

The clusters running on DCE may be connected to the Anthos management plane to gain better control over deployments and configuration.

A secure VPN tunnel connects the local Distributed Cloud Edge infrastructure to a virtual private cloud (VPC) configured within Google Cloud. Workloads running at the edge can access Google Compute Engine resources deployed in the same VPC.
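As an illustration of that connectivity, the sketch below assumes a Compute Engine VM exposing an internal service on a private address in the shared VPC; a workload running on the edge nodes could reach it directly over the VPN tunnel. The IP address and port are placeholder values, not part of any Google-provided configuration.

```python
# Minimal sketch: an edge workload calling a service hosted on a Compute Engine VM,
# reachable over the VPN tunnel via its internal VPC address (hypothetical values).
import socket

GCE_INTERNAL_IP = "10.128.0.5"   # private address of the VM in the shared VPC (example)
SERVICE_PORT = 8080              # port of the service running on the VM (example)

with socket.create_connection((GCE_INTERNAL_IP, SERVICE_PORT), timeout=5) as conn:
    conn.sendall(b"GET /healthz HTTP/1.1\r\nHost: backend\r\nConnection: close\r\n\r\n")
    print(conn.recv(4096).decode(errors="replace"))
```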

The rack-based configuration demands connectivity to Google Cloud at all times. Since it runs in a controlled environment within a CSP facility, meeting this requirement is not a challenge.

Once the clusters are provisioned on the DCE infrastructure, they can be treated like any other Kubernetes clusters. It is also possible to provision and run virtual machines based on KubeVirt within the same environment.
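Since KubeVirt-based virtual machines are mentioned, here is a hedged sketch of what provisioning one on such a cluster could look like, submitting a KubeVirt VirtualMachine custom resource through the Kubernetes Python client. It assumes the KubeVirt CRDs are already installed; the VM name, image, and sizing are illustrative.

```python
# Minimal sketch: create a KubeVirt VirtualMachine on a DCE-hosted cluster.
# Assumes the KubeVirt CRDs are installed; names, image, and sizing are illustrative.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointing at the DCE cluster

vm_manifest = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "default"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi", "cpu": "1"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
                    }
                ],
            }
        },
    },
}

# Submit the VirtualMachine as a custom resource; KubeVirt's controllers do the rest.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",
    plural="virtualmachines",
    body=vm_manifest,
)
```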

CSPs in the United States, Canada, France, Germany, Italy, the Netherlands, Spain, Finland, and the United Kingdom can order the rack-based infrastructure from Google.

The GDC Edge Appliance is a Google Cloud-managed, secure, high-performance appliance for edge locations. It provides local storage, ML inference, data transformation, and export functionality.

According to Google, GDC Edge Appliances are ideal for use cases where bandwidth and latency limitations prevent organizations from processing the data from devices like cameras and sensors back in cloud data centers. These appliances simplify data collection, analytics, and processing at remote locations where copious amounts of data coming from these devices need to be processed quickly and stored securely.

The Edge Appliance targets enterprises in the manufacturing, supply chain, healthcare, and automotive verticals with low-latency and high-throughput requirements.

GDC Edge Appliance

Each appliance comes with a 16-core CPU, 64 GB of RAM, an NVIDIA T4 GPU, and 3.6 TB of usable storage. It has a pair of 10-Gigabit and 1-Gigabit Ethernet ports. With its 1U rack-mount form factor, it supports both horizontal and vertical orientation.
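With the T4 GPU on board, a typical workload is local inference on camera or sensor data. The hedged sketch below scores frames with an ONNX model on the appliance and keeps only compact results for upload, instead of shipping raw video to the cloud; the model path, input shape, and the noise-filled placeholder frame are assumptions, not part of Google's offering.

```python
# Minimal sketch: score frames locally on the appliance's GPU and keep only
# compact summaries. Model path and input shape are assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "/models/detector.onnx",  # hypothetical model file staged on the appliance
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],  # prefer the T4 GPU
)
input_name = session.get_inputs()[0].name

def score_frame(frame: np.ndarray) -> np.ndarray:
    """Run one frame through the model and return its raw scores."""
    batch = frame.astype(np.float32)[np.newaxis, ...]  # add a batch dimension
    return session.run(None, {input_name: batch})[0]

# Placeholder for a real camera feed: a single 224x224 RGB frame of noise.
frame = np.random.rand(3, 224, 224)
scores = score_frame(frame)
print("top class:", int(scores.argmax()), "confidence:", float(scores.max()))
```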

The Edge Appliance is essentially a storage transfer device that can also run a Kubernetes cluster and AI inference workloads. With ample storage capacity, customers can use it as a cloud storage gateway.
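As an illustration of that gateway pattern (not the appliance's built-in transfer service), a workload could buffer data locally and push it to Cloud Storage with the standard google-cloud-storage client whenever connectivity is available. The bucket name and local paths below are hypothetical.

```python
# Minimal sketch of the cloud storage gateway pattern: push locally buffered files
# to a Cloud Storage bucket. Bucket and paths are hypothetical; this is not the
# appliance's built-in transfer mechanism.
from pathlib import Path
from google.cloud import storage  # pip install google-cloud-storage

BUCKET_NAME = "example-edge-buffer"      # hypothetical destination bucket
LOCAL_BUFFER = Path("/var/edge/outbox")  # hypothetical local staging directory

client = storage.Client()                # uses the ambient service-account credentials
bucket = client.bucket(BUCKET_NAME)

for path in LOCAL_BUFFER.glob("*.parquet"):
    blob = bucket.blob(f"ingest/{path.name}")
    blob.upload_from_filename(str(path))  # upload, then free local capacity
    path.unlink()
```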

For all practical purposes, the Edge Appliance is a managed device running Anthos clusters on bare metal. Customers follow the same workflow used to install and configure Anthos in bare metal environments.

Unlike the rack-based configuration, the clusters run both the control plane and the worker nodes locally on the appliance, but they are registered with the Anthos management plane running in the nearest Google Cloud region. This configuration makes it possible to run the Edge Appliance in offline or air-gapped environments with only intermittent connectivity to the cloud.

Analysis and Takeaways

With Anthos and GDC, Google defined a comprehensive multicloud, hybrid, and edge computing strategy. GDC Edge targets CSPs and enterprises through purpose-built hardware offerings.

Telecom operators need a reliable and modern platform to run 5G infrastructure. Google is positioning Anthos as the cloud native, reliable platform for running the containerized network functions (CNFs) required for 5G core and radio access networks (RAN). By delivering a combination of managed hardware (the rack-based GDC Edge) and a managed software stack (Anthos), Google wants to enable CSPs to offer 5G multi-access edge computing (MEC) to enterprises. It has partnered with AT&T, Reliance Jio, TELUS, Indosat Ooredoo, and more recently with Bell Canada and Verizon to run 5G infrastructure.

Google's approach to delivering 5G MEC is different from Amazon's and Microsoft's. Both AWS and Azure have 5G-based zones that act as extensions to their data center footprint. AWS Wavelength and Azure Private MEC enable customers to run workloads in the nearest edge location, managed by a CSP. Both Amazon and Microsoft are partnering with telecom providers such as AT&T, Verizon, and Vodafone to offer hyperlocal edge zones.

Google is betting big on Anthos as the fabric to run 5G MEC. It's partnering with leading telcos worldwide to help them build 5G infrastructure on its proven, Anthos-based cloud native stack. Though Google may launch a competing offering to AWS Wavelength and Azure Private MEC in the future, its current strategy is to push GDC Edge as the preferred 5G MEC platform. This approach puts the CSP front and center in its edge computing strategy.

Google has finally responded to Azure Stack HCI and AWS Outposts with the GDC Edge Appliance. It's targeting enterprises that need a modern, cloud native platform to run data-driven, compute-intensive workloads at the edge. Unlike the rack-based configuration, the Edge Appliance may be deployed in remote locations with intermittent connectivity.

With Anthos as the cornerstone, Google's Distributed Cloud strategy looks promising. The company is aiming to win the enterprise edge as well as the telco edge with purpose-built hardware offerings. Google finally has a viable answer to AWS Wavelength, AWS Outposts, Azure Edge Zones, and Azure Stack.
