

May 23, 2025
GigaOm Radar for Full-Stack Edge Deployments v2
Andrew Green
1. Executive Summary
Full-stack edge deployment solutions are cloud-managed and cloud-connected hyperconverged tools that provide all the capabilities necessary to run applications at customers' preferred locations for local data collection and processing.
These solutions bring a cloud-like experience to edge locations, which are otherwise difficult to manage at scale with traditional IT practices, especially in a DevOps-oriented and agile organization. The solutions are labeled as full-stack because they offer customers all the required technologies at all levels to manage their edge deployments. For example, customers do not need to provide or manage operating systems (OSs).
This means the solutions can run on bare metal hardware as a type 1 hypervisor, or provide an OS with a type 2 hypervisor or a containerization engine. The solutions can then deploy and run applications, often via self-service mechanisms consumed from a service catalog. Advanced solutions enable customers to build their applications natively on the platform, taking advantage of edge-native runtimes. Further, administrators can interact programmatically with the solution, which also provides a suite of visibility and troubleshooting tools.
These solutions are also labeled as “edge deployments” to differentiate them from as-a-service edge solutions, such as those provided by content delivery networks (CDNs) or edge development platforms. In other words, full-stack edge deployments are “near edge” solutions.
Particularly important use cases are those for which applications need:
To run in air-gapped environments with limited or unstable network connectivity.
To support latency-sensitive use cases for near-real-time processing.
To enable data to remain local due to legislative or regulatory requirements.
To act as an alternative where data transfers from the edge are prohibitive from either a performance or cost point of view.
To simplify cloud architectures, especially in geographically distributed scenarios.
To remove dependencies on cloud platforms because customers have no control over the underlying infrastructure and virtualization technologies.
To optimize costs, such as through storage cost savings using storage tiering and minimizing data transfer costs.
The solutions that qualify for this report all deliver full-stack edge deployments but come from different backgrounds and can tackle specific use cases. We capture the differences in their ability to scale, which must be delineated in terms of support for scaling up, scaling out, and scaling down.
Those that scale up can support large workloads and massive amounts of data to be stored and processed on their nodes. Those that scale out can manage thousands of geographically distributed nodes. Those that scale down offer lightweight virtualization and runtimes that consume very small amounts of compute and memory resources, suitable for internet of things (IoT) deployments and small form-factor devices. It’s important to note that a vendor can deliver on more than one of these scalability types.
This is our second year evaluating the full-stack edge deployments space in the context of our Key Criteria and Radar reports. This report builds on our previous analysis and considers how the market has evolved over the last year.
This GigaOm Radar report examines 16 of the top full-stack edge deployment solutions and compares offerings against the capabilities (table stakes, key features, and emerging features) and nonfunctional requirements (business criteria) outlined in the companion Key Criteria report. Together, these reports provide an overview of the market, identify leading full-stack edge deployment offerings, and help decision-makers evaluate these solutions so they can make a more informed investment decision.
GIGAOM KEY CRITERIA AND RADAR REPORTS
The GigaOm Key Criteria report provides a detailed decision framework for IT and executive leadership assessing enterprise technologies. Each report defines relevant functional and nonfunctional aspects of solutions in a sector. The Key Criteria report informs the GigaOm Radar report, which provides a forward-looking assessment of vendor solutions in the sector.
2. Market Categories and Deployment Types
To help prospective customers find the best fit for their use case and business requirements, we assess how well full-stack edge deployment solutions are designed for specific deployment models (Table 1).
For this report, we recognize the following deployment models:
Type 1 hypervisor: This type of hypervisor runs directly on bare metal to provision compute instances such as virtual machines (VMs).
Type 2 hypervisor: This type of hypervisor runs on top of a host OS to provision VM compute instances.
Host OS: Vendors that do not offer a type 1 hypervisor (or another way of running applications on bare metal) can provide a proprietary or open source host OS to run type 2 hypervisors or container runtimes.
Container runtime: This runs on top of a host OS to provision container compute instances.
Integrated hardware and software: These are prepackaged hardware and software solutions with ready-made images.
Table 1. Vendor Positioning: Deployment Model
Table 1 components are evaluated in a binary yes/no manner and do not factor into a vendor’s designation as a Leader, Challenger, or Entrant on the Radar chart (Figure 1).
3. Decision Criteria Comparison
All solutions included in this Radar report meet the following table stakes—capabilities widely adopted and well implemented in the sector:
Converged infrastructure compatibility
Centralized management
Bare metal virtualization and containerization
Software-defined functions
Logging and telemetry
Tables 2, 3, and 4 summarize how each vendor included in this research performs in the areas we consider differentiating and critical in this sector. The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the relevant market space, and gauge the potential impact on the business.
Key features differentiate solutions, highlighting the primary criteria to be considered when evaluating a full-stack edge deployment solution.
Emerging features show how well each vendor is implementing capabilities that are not yet mainstream but are expected to become more widespread and compelling within the next 12 to 18 months.
Business criteria provide insight into the nonfunctional requirements that factor into a purchase decision and determine a solution’s impact on an organization.
These decision criteria are summarized below. More detailed descriptions can be found in the corresponding report, “GigaOm Key Criteria for Evaluating Full-Stack Edge Deployment Solutions.”
Key Features
Plug-and-play provisioning: To support a seamless deployment experience, solutions strive to be as close as possible to plug-and-play provisioning. This means that customers, upon receiving their solution hardware, can plug the device in and provide network connectivity. The solution will be automatically provisioned and connected to the management platform.
Cloud-like management: This metric evaluates a solution’s ability to provide an administrator and developer experience similar to that available in the cloud. Using a web-based interface, organizations can provision compute resources and services, define access controls, and run applications.
Cloud integrations: While full-stack edge deployments can operate completely independently of cloud environments, we expect that organizations will integrate these solutions with their existing cloud infrastructure to support use cases such as backup and disaster recovery, batch and large data processing, cloud bursting, and supporting distributed applications across the cloud and edge deployments.
DevOps suitability: To support DevOps use cases, full-stack edge deployment solutions must provide capabilities to enable developers and infrastructure teams to interact programmatically with the solution. These teams require the tools, flexibility, and automation necessary to build, deploy, and manage applications across diverse environments, ensuring optimal resource use, operational efficiency, and accelerated delivery.
Marketplace and services catalog: Solutions provide a marketplace of applications and services from which customers can self-serve the procurement and deployment of various applications. This helps organizations leverage a curated set of validated and prepackaged applications.
Edge security: This criterion refers to the protection of devices deployed at the edge and the applications running on them. It should encompass safeguards at the hardware and network levels, support for third-party tool integrations, data protection, access controls, and secure software practices.
Visibility and monitoring: Full-stack edge deployments must provide health monitoring, observability, and troubleshooting capabilities for deployed devices, services, and applications. This enables organizations to proactively identify and address issues, optimize resource use, and maintain the security and stability of their distributed infrastructure.
Cluster management: This feature is evaluated based on the solution’s ability to define and manage clusters, which are collections of multiple deployments or nodes. This capability enables the consistent application of configurations, application deployments, and updates across all members of a cluster.
Table 2. Key Features Comparison
Emerging Features
Development environments and software development kits (SDKs): Beyond enabling DevOps teams to interact programmatically with the solution, this metric assesses how well the tools offered by the solution enable developers to natively build and run edge applications.
Non-x86 compute: By supporting non-x86 compute architectures, a full-stack edge deployment solution can cater to a wider variety of edge computing use cases, allowing customers to choose the most suitable hardware for their specific application requirements, whether they're power efficiency, performance, or specialized acceleration needs.
Edge AI: This refers to the solution’s ability to support inference use cases at the edge. Inference is the post-training phase of an AI or machine learning (ML) product, when it processes a novel input for generation, analysis, or categorization. These models should be optimized to work on the edge devices, where resources are limited compared to those in data centers.
Edge-native runtime: VMs can support large applications and are generally persistent or durable. Containers are more flexible and can also be ephemeral. However, both are fairly resource-intensive and subject to cold-start delays. Edge-native runtimes are specialized environments designed to run applications and services at the edge, optimizing performance and resource use.
Table 3. Emerging Features Comparison
Business Criteria
Scale-up support: The solution can support large-scale use cases by enabling high-performance, resource-intensive deployments at a single location. This is achieved through the solution's ability to handle large hardware configurations, such as multinode clusters and full racks of compute, storage, and networking resources.
Scale-out support: The solution can manage a large number of geographically distributed edge nodes. To do this, the full-stack edge deployment solution provides clear interfaces and centralized orchestration capabilities. This allows administrators to deploy, configure, and manage the virtualized components, services, and applications across multiple edge sites from a single pane of glass.
Scale-down support: To accommodate resource-constrained IoT devices and other small-form-factor edge hardware, the full-stack edge deployment solution must be able to scale down its virtualization technologies and software components. This involves optimizing the solution's runtime, compilers, and system services to require minimal memory and CPU usage.
Partner ecosystem: A full-stack edge deployment vendor should have a broad ecosystem of third-party hardware providers, software providers, channel partners, and distributors.
Resiliency: The solution should provide robust high-availability and disaster recovery capabilities to ensure business continuity. For example, the solution should support data replication across multiple nodes to eliminate single points of failure.
Support services: The vendor should make support services consisting of professional services, managed services, and technical support available to customers.
Table 4. Business Criteria Comparison
4. GigaOm Radar
The GigaOm Radar plots vendor solutions across a series of concentric rings with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation and Feature Play versus Platform Play—while providing an arrowhead that projects each solution’s evolution over the coming 12 to 18 months.
Figure 1. GigaOm Radar for Full-Stack Edge Deployments
As you can see in Figure 1, most vendors are positioned in the Maturity/Platform Play quadrant. While this is largely consistent with the previous version of this report, some vendors have moved into the quadrant since then, particularly software-only solutions that moved up from the Innovation/Platform Play quadrant. This is because we now distribute vendors across the quadrants based on their progress on the report's emerging features, and we no longer consider whether a solution is an integrated hardware-software offering or a software-only one.
Vendors in the Platform Play hemisphere offer comprehensive solutions for edge deployments, which are often vertical agnostic. On the other hand, vendors in the Feature Play half cater to specific verticals, namely IoT and industrial use cases, where demand for edge solutions is high.
Vendors in the Innovation hemisphere are delivering a good set of capabilities across emerging features, namely around edge AI, edge runtimes, non-x86 compute, and integrated development environments (IDEs).
Overall, the pace of innovation across vendors is generally consistent, with only two Outperformers and one Forward Mover in this year’s report, the rest being Fast Movers.
In reviewing solutions, it’s important to keep in mind that there are no universal “best” or “worst” offerings; every solution has aspects that might make it a better or worse fit for specific customer requirements. Prospective customers should consider their current and future needs when comparing solutions and vendor roadmaps.
INSIDE THE GIGAOM RADAR
To create the GigaOm Radar graphic, key features, emerging features, and business criteria are scored and weighted. Key features and business criteria receive the highest weighting and have the most impact on vendor positioning on the Radar graphic. Emerging features receive a lower weighting and have a lower impact on vendor positioning on the Radar graphic. The resulting chart is a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and roadmaps.
Note that the Radar is technology-focused, and business considerations such as vendor market share, customer share, spend, recency or longevity in the market, and so on are not considered in our evaluations. As such, these factors do not impact scoring and positioning on the Radar graphic.
For more information, please visit our Methodology.
5. Solution Insights
Acumera: Acumera Reliant Platform
Solution Overview
The Acumera Reliant Platform is an edge computing platform used by multisite operators to deploy and manage applications at their on-premises locations. These applications include both VMs and container-based applications deployed using Docker. The platform offers application delivery, configuration management, orchestration, monitoring, and data collection.
The Acumera solution is a single product composed of multiple components: the physical edge deployment, a management and control plane, and a cloud-based platform manager. The solution is both cloud- and hardware-agnostic and is differentiated by offering two concurrent hypervisors.
Acumera systems run Debian Linux and support two concurrent hypervisors: KVM and QEMU for OS virtualization. The solution provides native kernel-level Docker container support and is managed by the cloud-based orchestrator via GUI or API. Images for VMs are distributed from cloud to edge according to customers’ configuration and requirements via their VPN. Containers are distributed via standard container registries from all major cloud providers, and there’s support for standard private container repositories.
Acumera is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Acumera scored well on a number of decision criteria, including:
Plug-and-play provisioning: The platform is designed to support zero-touch provisioning (ZTP) and provides a solution for imaging bare metal to fully build edge systems that can include VM or container images. Alternatively, VMs and containers can be deployed from the cloud upon system initialization.
DevOps suitability: The Acumera Reliant Platform has an extensive API that allows developers and infrastructure teams to interact with the system using the API-driven and infrastructure-as-code (IaC) approaches they already use with cloud technologies (a minimal illustrative sketch follows this list). Acumera’s Reliant Virtual Controller feature set, available locally at the edge node via CLI or API calls, allows control and interaction with the edge at scale. Terraform scripts are available to deploy the customer-specific management plane to AWS, Google Cloud Platform (GCP), and Microsoft Azure. Acumera’s API is compatible with all common scripting languages, including PowerShell, JavaScript, and Python.
Cloud-like management: Acumera can be managed via a cloud-based web application for global configuration management and orchestration or a local web application for basic functional management and status. All edge systems also have a local GUI for local command, control, and status. The tool exposes functions via a comprehensive API for configuration management and orchestration that is designed to connect to any pipeline or workflow tools that support RESTful APIs.
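To make the API-driven pattern above concrete, the following Python sketch shows how a team might script a deployment against a RESTful management plane. The base URL, endpoint path, payload fields, and authentication scheme are hypothetical placeholders for illustration only; they are not Acumera’s documented API.

```python
# Illustrative only: the endpoint, payload fields, and auth header are hypothetical
# placeholders for an API-driven workflow against a RESTful management plane.
import os
import requests

BASE_URL = "https://orchestrator.example.com/api/v1"  # hypothetical management-plane URL
TOKEN = os.environ["EDGE_API_TOKEN"]                   # credential supplied out of band

def deploy_container(site_id: str, image: str, tag: str) -> dict:
    """Request deployment of a container image to a single edge site."""
    resp = requests.post(
        f"{BASE_URL}/sites/{site_id}/deployments",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"image": image, "tag": tag, "restart_policy": "always"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = deploy_container("store-0042", "registry.example.com/pos-app", "1.4.2")
    print(result)
```

The same call could equally be wrapped in a Terraform provider or a CI/CD job, in line with the IaC workflows described above.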
Opportunities
Acumera has room for improvement in a few decision criteria, including:
Marketplace and services catalog: Acumera does not currently offer a marketplace of applications and services from which customers can self-serve the procurement and deployment of various applications.
Cluster management: The solution does not provide some of the more advanced cluster management features, such as supporting cross-cluster application execution and allowing secure data exchange and interaction between isolated instances.
Edge security: While the solution includes an integrated, full-feature, stateful, logging firewall and supports container and VM isolation and low-level access control lists (ACLs), it could further implement capabilities such as hardware attestation or built-in endpoint security.
Purchase Considerations
Organizations interested in purchasing Acumera’s solution will engage directly with the vendor rather than going through channel partners. Pricing is based on the number of edge locations a customer deploys and is charged monthly under a SaaS subscription model.
Use Cases
Acumera caters to a variety of use cases, and it has demonstrated expertise in specific verticals such as retail, hospitality, and healthcare. Examples of applications run on the Acumera platform include business-facing applications such as point-of-sale, payment processing, digital signage, content management, AI/ML-based solutions for analytics, and those supporting IoT-based systems.
AWS: AWS Outposts
Solution Overview
AWS Outposts is a family of fully managed solutions that deliver AWS infrastructure and services to on-premises or edge locations for a consistent hybrid experience. Outposts solutions extend and run native AWS services on-premises and are available as Outposts servers and Outposts racks.
The AWS Outposts rack is an industry-standard 42U form factor. It provides the same AWS infrastructure, services, APIs, and tools to data center or co-location spaces. Outposts racks provide AWS compute, storage, database, and other services locally while still allowing access to the full range of AWS services available in the region for a truly consistent hybrid experience.
The AWS Outposts servers come in a 1U or 2U form factor. They provide the same AWS infrastructure, services, APIs, and tools to on-premises and edge locations that have limited space or smaller capacity requirements, such as retail stores, branch offices, healthcare provider locations, or factory floors. Outposts servers provide local compute and networking services.
Outposts is a fully managed service, which means that AWS specialists will install and manage the appliances, including troubleshooting and carrying out updates and patching, backup, provisioning, incident management, business continuity, and cost optimization services.
Once an AWS Outposts solution is activated in the Amazon Managed Services (AMS) Multi-Account Landing Zone or Single-Account Landing Zone account, organizations need to follow the existing AMS change management processes to provision and manage AWS resources. AMS-hosted infrastructure can be managed by specifying an AWS Outposts-specific subnet. AWS Outposts lifecycles can be managed directly in the AWS Outposts console using the AWS Outposts self-provision services role.
AWS is positioned as an Entrant and Forward Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
AWS scored well on a number of decision criteria, including:
Cloud-like management: Outposts’ most notable strength is the shared AWS cloud management mechanism, which creates a single-vendor cloud-edge deployment with the same management interface and the same services running both in the cloud and locally on edge deployments.
Visibility and monitoring: Outposts benefits from Amazon CloudWatch capacity availability metrics and alarms to monitor the health of applications. Users can create CloudWatch actions to configure automatic recovery options and monitor the capacity utilization of their Outposts over time. Metrics include statistics about data points for Outposts, as well as logs that capture detailed information about the calls made to AWS APIs. These calls can be stored as log files in Amazon S3. The solution also supports Traffic Mirroring for deeper network visibility (a brief boto3-based monitoring sketch follows this list).
Edge security: AWS Outposts provides extensive security, including at-rest and in-transit encryption via TLS and request signing using an access key ID and a secret access key associated with an AWS Identity and Access Management (IAM) principal.
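As a rough illustration of the CloudWatch-based monitoring noted above, the boto3 sketch below reads an Outposts capacity metric and sets an alarm on it. The metric and dimension names are assumptions drawn from the AWS/Outposts CloudWatch namespace and should be verified against current AWS documentation; the Outpost ID and instance type are placeholders.

```python
# Sketch of monitoring Outposts capacity with CloudWatch via boto3.
# Metric and dimension names ("InstanceTypeCapacityUtilization", "OutpostId",
# "InstanceType") are assumptions; confirm them against AWS documentation.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
dimensions = [
    {"Name": "OutpostId", "Value": "op-0123456789abcdef0"},  # placeholder Outpost ID
    {"Name": "InstanceType", "Value": "c5.2xlarge"},
]

# Pull the last 24 hours of capacity utilization at 1-hour resolution.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Outposts",
    MetricName="InstanceTypeCapacityUtilization",
    Dimensions=dimensions,
    StartTime=datetime.now(timezone.utc) - timedelta(hours=24),
    EndTime=datetime.now(timezone.utc),
    Period=3600,
    Statistics=["Average"],
)
print(stats["Datapoints"])

# Alarm when average utilization stays above 80% for three consecutive hours.
cloudwatch.put_metric_alarm(
    AlarmName="outpost-capacity-high",
    Namespace="AWS/Outposts",
    MetricName="InstanceTypeCapacityUtilization",
    Dimensions=dimensions,
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```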
Opportunities
AWS has room for improvement in a few decision criteria, including:
Plug-and-play provisioning: Since AWS Outposts is a managed service, the customer does not handle the physical deployment and booting of the hardware but must work with AWS to arrange the deployment. Once deployed, customers still need to set up an AWS VPC, subnet, and custom route table, and configure local gateway connectivity and the on-premises network.
Cloud integrations: While Outposts is deeply integrated with the AWS portfolio and ecosystem, there is no out-of-the-box way to integrate with non-AWS environments.
Cluster management: The solution does not provide some of the more advanced cluster management features, such as using the management platform as the primary interface for managing multiple deployments, or supporting cross-cluster application execution, allowing secure data exchange and interaction between isolated instances.
AWS was classified as a Forward Mover given its slow release cadence and few year-on-year developments.
Purchase Considerations
AWS Outposts is a managed service, which means AWS assumes the responsibility for procuring, delivering, installing, and operating the solution’s hardware. The shared responsibility model that AWS adheres to in its public cloud solution also extends to the AWS Outposts deployment. Third-party auditors regularly test and verify the effectiveness of AWS’s security. The most notable challenge AWS Outposts faces is that it offers a limited set of AWS services. While the 42U rack deployment can offer most services that organizations with an AWS footprint rely on, such as EC2, S3, EKS, RDS, and VMware Cloud, the 2U server deployment can support only Amazon EC2, Amazon ECS, AWS IoT Greengrass, and Amazon SageMaker Edge Manager.
The solution also has other technical service limitations: air-gapped operations functionality is limited, the RDS service lacks metrics and logs, Linux workload ingest works only if the pre-workload-ingest EC2 instance is on a non-Outposts subnet, and Elastic Block Store volume creation on AWS Outposts activated in non-AWS Managed Services accounts can’t be transitioned into AWS Managed Services.
Use Cases
Outposts supports workloads and devices requiring low-latency access to on-premises systems, local data processing, data residency, and application migration with local system interdependencies.
The AWS solution is suitable for delivering high-quality experiences for interactive applications like real-time multiplayer games, running manufacturing execution systems (MES), high-frequency trading, and medical diagnostics that require low network latency and large amounts of compute power at the edge.
Azion: Edge Computing Platform
Solution Overview
Azion’s full-stack edge compute solutions run on Azion's network edge as well as customer edge locations and multicloud environments. Azion is the only vendor in this report that offers its solution both as software-only, to be deployed at customers’ preferred locations, and via an as-a-service delivery model, with Azion operating a global network of geographically distributed PoPs.
The Azion Marketplace is a curated digital catalog that offers ready-to-use edge-running software. Customers can easily purchase and deploy solutions from Azion or independent software vendors (ISVs). Customers can also become ISVs by launching and distributing their own software to a vast audience through the Azion Marketplace, which offers solutions ranging from security and performance to databases, from vendors such as Radware, Fauna, Upstash, and hCaptcha, among others. The offerings encompass edge-native functions, cloud-native network functions (CNFs), third-party web application firewalls (WAFs), bot managers, and databases, as well as professional services.
Azion Cells employs a layered isolation strategy. The hypervisor implements proprietary kernel-level isolation to partition processes and resources. The runtime system is built on V8, a high-performance open source JavaScript and WebAssembly engine. Using V8 isolates, each edge function runs in its own execution context, ensuring data and process integrity. Azion Cells creates a secure sandbox for code execution to ensure each function operates within its defined boundaries, safeguarding against potential vulnerabilities.
Azion is positioned as a Leader and Outperformer in the Innovation/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Azion scored well on a number of decision criteria, including:
Development environments and SDKs: Azion’s solution focuses on the developer experience: developers can choose from a curated set of templates, import existing applications from GitHub, and interact with the solution via a CLI, UI, APIs, SDKs, or natural language via Azion's ChatGPT plug-in. The Azion Console is available as open source, allowing extensive customization, including white-labeling. Developer-oriented features include a built-in IDE based on VSCode, complete with live preview, ChatGPT integration for creating or understanding code, and version control integrations with tools such as GitHub, with straightforward automation for testing and deployment.
Edge-native runtime: The Azion platform operates on Azion's proprietary hypervisor and edge runtime environment, Azion Cells, which is purpose-built for edge computing. The solution supports software-defined networking (SDN) and edge-native functions analogous to virtual network functions (VNFs) and cloud-native network functions (CNFs). The Azion Edge Orchestrator enables real-time management and control of edge resources, including load balancers, firewalls, and other services, while the Edge Traffic Router provides SDN capabilities, which dynamically route packets across the network for optimal performance based on real-time analytics.
Edge security: Comprehensive security capabilities for both runtime and endpoint can be put in place by implementing proprietary kernel-level isolation to partition processes and resources. Application and network security, access control, and identity management features are also strengths.
Azion was classified as an Outperformer given its comprehensive year-on-year developments and extensive development pipeline.
Opportunities
Azion has room for improvement in a few decision criteria, including:
Plug-and-play provisioning: Though not inherently a challenge, Azion Cells requires customers to manage the underlying hardware and operating systems, while other solutions in this report offer a fully packaged hardware-software solution, a type 1 hypervisor, or an operating system.
Marketplace and services catalog: While Azion has a good marketplace and service catalog populated with proprietary and third-party apps, the solution has room to further expand its partner ecosystem.
Edge security: As a software-only solution, Azion does not offer hardware security features such as attestation or supply chain security on hardware operated by the customer. The Azion-operated hardware implements secure boot, firmware integrity verification, and tamper-resistant deployments, and leverages hardware security modules (HSMs).
Purchase Considerations
Customers can leverage both as-a-service delivery on Azion’s globally distributed network and their own edge deployments. This is particularly useful in situations where applications are sensitive to network performance when distributed across geographical areas. Customers who wish to purchase an integrated hardware-software solution from a single vendor will not find Azion’s solution suitable. While not inherently a challenge, customers must evaluate Azion’s distinctive architecture to determine whether it fits their requirements. Moreover, other products featured in this report use virtualization technologies that developers and administrators are familiar with, while Azion’s edge-native architecture can entail disruption and refactoring of services or processes.
Use Cases
Azion’s full-stack edge deployment solution can be used to build and run serverless applications on the Azion-operated network edge, as well as on remote devices, on-premises facilities, and multicloud environments. The solution can be used to enforce zero trust security policies, create latency-sensitive and real-time data analysis applications, build IoT-oriented applications, and deliver content across geographies.
Broadcom: VMware Cloud Foundation Edge (VCF Edge)
Solution Overview
VMware Cloud Foundation Edge (VCF Edge) by Broadcom is an edge computing product portfolio that enables organizations to build, run, manage, connect, and protect edge-native applications at both near- and far-edge locations. VCF Edge is cloud agnostic and supports multicloud environments, running VMs, containers, and Kubernetes workloads, including real-time workloads, on a unified stack.
The VCF Edge runtime system is made up of vSphere ESXi (hypervisor), VMware vSAN (HCI storage) with a shared vSAN witness, an edge-optimized vSphere Kubernetes Service (VKS) container runtime, VCF Operations (formerly Aria Operations) management with entitlements to the full VCF stack, including VCF Automation (formerly Aria Automation), NSX networking, and HCX for data migration.
VCF Edge includes the VCF ecosystem of tools, such as VKS Cluster Management, and services to operationalize the Kubernetes runtime through VCF Operations. VCF Edge supports GPU workloads, which are used for computer vision and ML at the edge and can be deployed using PCI passthrough or GPU sharing through the ESXi hypervisor. Customers can leverage GPUs for both VM and container workloads.
Broadcom is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Broadcom scored well on a number of decision criteria, including:
Visibility and monitoring: VCF Edge telemetry provides insight into communication endpoints and traffic patterns related to applications deployed at the edge. VCF Operations, an integrated component of VCF Edge, provides end-to-end network monitoring, visibility into network traffic and user behavior to help detect anomalous activity and threats leveraging ML models, and recommendations to resolve operational issues.
Cloud-like management: VCF Edge provides a unified operations layer to streamline operations across all edge sites to monitor, troubleshoot, and run diagnostics for edge sites from the central console without the need to connect to individual edge sites separately through a CLI. It provides an option to use either VMware vCenter or VCF Operations to manage workload domains or clusters across different edge sites. The solution offers a GUI to provision compute instances based on VMs or containers, which can then be managed through the centralized management platform. It can also provision and manage storage services such as object, block, or file storage to ingest and store data and services such as DNS, load balancers, firewalls, and network address translation (NAT), among others.
Cloud integrations: VCF Edge allows customers to extend their modern data center or cloud infrastructure to edge sites so that applications can use the data locally for quicker processing. VCF Edge is cloud agnostic and supports multicloud environments, running VMs, containers, and Kubernetes workloads, including real-time workloads, on a unified stack. Consistent infrastructure across central data centers and edge locations minimizes risks and complexities by integrating edge sites with central data center VCF instances. Consistent operations across edge, data center, and cloud minimize the learning curve for staff and allow them to use the same tools, skill sets, and processes across the entire IT infrastructure landscape.
Opportunities
Broadcom has room for improvement in a few decision criteria, including:
Plug-and-play provisioning: The solution does not offer a staging and kitting service for hardware partners, end customers, or managed service providers, in which applications are preinstalled and configured before the devices are shipped to their end locations, including for offline and air-gapped scenarios.
Marketplace and services catalog: The vendor does not currently offer a marketplace of applications and services from which customers can self-serve the procurement and deployment of various applications.
Non-x86 compute: While the solution supports processor architectures such as GPUs, it doesn’t currently support other architectures such as ARM, DPUs, FPGAs, or ASICs.
Purchase Considerations
Organizations interested in VCF Edge are buying into a wider portfolio of virtualization, orchestration, and networking services. This can be both a benefit and a drawback: administrators can leverage existing VMware deployments and use the products for a variety of use cases, but they must also handle multiple solutions to meet all requirements for running workloads at the edge.
To achieve all the functionalities described, customers must deploy a wide range of VMware products and modules, which requires deep knowledge of the products or support via professional or managed services. In addition, the solution is not designed for scaling-down use cases that would, for example, run the edge compute stack on lightweight IoT devices or gateways.
Use Cases
The solution can support various use cases, including running real-time and non-real-time workloads, supporting GPU workloads at the edge, industrial uses like building a digital manufacturing platform for IT and OT teams, and virtual radio access networks (vRANs) for telecommunications.
Cisco-Nutanix: Cisco Compute Hyperconverged with Nutanix
Solution Overview
Cisco Compute Hyperconverged with Nutanix is a co-engineered solution that combines Cisco's hardware expertise and SaaS-based infrastructure management with Nutanix's hyperconverged software platform.
This combination provides a unified hardware solution and full-stack virtualization built using Cisco Unified Computing System (UCS), Nutanix Cloud Platform software, and Cisco Intersight cloud operations software.
This integrated solution addresses the specific challenges of edge deployments by offering preconfigured and prevalidated HCI nodes based on Cisco UCS servers and the Nutanix software platform for easier deployment and configuration; on-demand scaling of compute and storage resources to meet dynamic edge workloads; built-in security features from Cisco and Nutanix that ensure data protection at the edge; centralized management; and support for containerized applications and VMs.
Nutanix Cloud Management (NCM) Self-Service automation can run applications on multiple hypervisors and clouds without platform lock-in and adjust workloads according to business priorities. Self-Service also provides policy-based governance, making it easier to optimize VM use and sizing, which can lead to significant savings in OpEx and CapEx as well as shorter time to value.
Cisco-Nutanix is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Cisco-Nutanix scored well on a number of decision criteria, including:
Cloud integrations: Nutanix full-stack appliances can connect and integrate with AWS and Microsoft Azure using Nutanix Cloud Clusters (NC2). NC2 can be deployed on public cloud infrastructure, which can interoperate with on-premises Nutanix clusters on UCS, delivering a hybrid multicloud solution with the flexibility, simplicity, and cost efficiency needed to run applications in private or public clouds. NC2 runs the core Nutanix HCI stack.
Nutanix products and services run on bare metal instances in public clouds, allowing customers to easily migrate or extend applications from on-premises to public cloud providers. Nutanix services can integrate or manage public cloud resources, such as configuring a cloud-tiering policy, through which aged objects can be removed from local on-premises cluster storage and stored in the public cloud using AWS S3 or Azure Blob Storage. They can monitor spending in the public cloud with dashboards for AWS, Azure, and GCP.
Cluster management: The underlying Nutanix Cloud Infrastructure software combines the cluster’s storage devices into a single distributed, multitier, object-based data store. Its self-healing architecture replicates data for high availability, remediates hardware failures, and alerts IT administrators so that problems can be resolved quickly and the business can operate normally.
DevOps suitability: With Self-Service and the Self-Service plug-in for Jenkins, administrators can create a fully automated CI/CD pipeline, resulting in faster application delivery and a more satisfied customer. In addition, the Self-Service domain-specific language used in NCM Self-Service is a specialized open source, Python-based programming language that allows developers to define and automate tasks and application workflows within their IaC environment. Certified solutions, such as Red Hat OpenShift (which includes CI/CD integrations, tools, and utilities) and Google Anthos, running on the Nutanix Cloud Platform, provide development environments and workspaces.
Opportunities
Cisco-Nutanix has room for improvement in a few decision criteria, including:
Development environments and SDKs: The solution does not provide tools—such as IDEs, SDKs, application templates and blueprints, or native version control—to enable developers to build and run edge applications natively.
Marketplace and services catalog: The solution can further improve on this metric by providing an easy mechanism for third parties to develop and publish their applications to the solution’s catalog.
Visibility and monitoring: While the solution offers good visibility and monitoring capabilities, a complex co-engineered solution can make the product difficult to monitor across the hardware, firmware, and software layers, especially when considering integrations and cross-deployments with cloud environments.
Purchase Considerations
The joint Cisco-Nutanix solution entails customers buying into a technology stack developed by two vendors, which can potentially complicate the solution's management. From a technical point of view, this is not considerably different from a vendor offering a wider solution following an acquisition. From a delivery and support point of view, the engagement is typically carried out through channel partners, which should alleviate most challenges with a new deployment.
While the solution has high scores across most key features and business criteria, it is not intended to scale down for lightweight devices, such as running on IoT devices, gateways, or other small form-factor appliances. Neither the software nor the hardware component is designed to target these use cases.
Use Cases
The Cisco-Nutanix solution targets use cases for deployments in regional or remote data centers and remote office or other edge locations that require real-time data processing compute at the edge. Cisco caters to all use cases with the same hardware platform set, allowing customers to size their environments based on the application’s requirements.
ClearBlade
Solution Overview
ClearBlade provides a full-stack software solution for edge processing, which includes all necessary functions for protocol integration, logic processing, data persistence, optimized backhaul, and cloud management. ClearBlade Edge runs on bare metal across multiple CPU types, including x86, ARM, MIPSLE, and Power, and also supports containerized functions. The Cloud Edge Management platform allows users to dynamically synchronize logic, data structures, integrations, sidecar polyglot processes, and files in real time over the customer-selected backhaul.
ClearBlade provides an edge software offering bundled with cloud edge orchestration. It is a suite of integrated products that includes the following components:
IoT Core is an entry-level IoT service that can connect to any device.
IoT Enterprise is designed to scale out to millions of devices efficiently.
Edge provides advanced IoT services and AI model execution at the edge.
Intelligent Assets is a full digital twin application that is connected to devices. It provides data visualization and remote command and control capabilities, including integration to service platforms to trigger repair, maintenance, and operations actions when needed.
ClearBlade recently released the GenAI Assistant, which allows users to create asset types, assets, and event types using natural language commands. For example, an operator might say “I want to monitor my fleet of trucks,” and the chatbot will guide them through connecting their assets. This gives non-IT professionals who know their assets an easier way to interact with the solution in IoT-focused organizations.
ClearBlade is the only vendor that offers capabilities across all the emerging technologies described in this report, including non-x86 compute, development environments, edge AI, and edge-native runtime environments. For development environments, ClearBlade gives developers a VSCode environment. The Intelligent Assets store provides customers with both hardware and software add-ins for AI models and integrations. ClearBlade also supports the ONNX AI runtime.
ClearBlade is positioned as a Leader and Outperformer in the Innovation/Feature Play quadrant of the full-stack edge deployments Radar chart.
Strengths
ClearBlade scored well on a number of decision criteria, including:
Cloud integrations: ClearBlade provides cloud integrations via queuing and publishing using services like Kafka, SQS, and PubSub. ClearBlade can architect data flows to get machine and device data streaming directly into business applications and data lake strategies.
Edge-native runtime: The complete ClearBlade stack is a proprietary runtime system developed to run on lightweight edge devices—the compiled runtime has a 35 MB footprint—and provides a full message broker, code execution engine, UI, authentication, and synchronization services. ClearBlade statically compiles its edge binary to be indifferent to virtualization technologies. ClearBlade can replace container and virtualization scenarios by running WebAssembly as a local service under a secure permissions model.
DevOps suitability: ClearBlade offers integrations with CI/CD tooling such as Jenkins, CircleCI, and GitLab, and with IaC tools such as Terraform and Ansible. It also exposes features via APIs and supports declarative configuration via languages such as YAML, scripting via languages such as Python and JavaScript, CLI tools for managing edge clusters, and configuration management programs such as PowerShell (a minimal declarative-plus-API sketch follows this list).
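The sketch below illustrates the declarative-plus-API workflow described in the DevOps suitability entry: a YAML desired-state document is parsed and pushed to a management endpoint. The configuration keys, endpoint path, and authentication scheme are hypothetical placeholders, not ClearBlade’s actual schema.

```python
# Illustrative only: the YAML keys, endpoint path, and headers below are
# hypothetical placeholders for a declarative, API-driven edge rollout.
import os
import requests
import yaml  # PyYAML

DESIRED_STATE = """
edge_cluster: plant-07
services:
  - name: vibration-analyzer
    version: "2.1.0"
    replicas: 3
"""

def apply_config(platform_url: str, config: dict) -> None:
    """Push a desired-state document to the management platform."""
    resp = requests.put(
        f"{platform_url}/api/edge-clusters/{config['edge_cluster']}/state",
        headers={"Authorization": f"Bearer {os.environ['PLATFORM_TOKEN']}"},
        json=config,
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    apply_config("https://platform.example.com", yaml.safe_load(DESIRED_STATE))
```

In practice, the same desired-state document would typically live in version control and be applied by a CI/CD pipeline or a Terraform/Ansible run, matching the tooling listed above.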
ClearBlade was classified as an Outperformer given its extensive year-on-year feature releases and partnerships, such as the recently announced partnership with Google Distributed Cloud Edge, a solution that is also featured in this report.
Opportunities
ClearBlade has room for improvement in a few decision criteria, including:
Cloud-like management: With a focus on IoT devices, the solution does not support provisioning entities such as containers or networking appliances.
Cluster management: The solution does not provide some of the more advanced cluster management features, such as using the management platform as the primary interface for managing multiple deployments, or supporting cross-cluster application execution, allowing secure data exchange and interaction between isolated instances.
Purchase Considerations
ClearBlade’s solutions are sold via the Google Marketplace and priced monthly by data volume, vCPU, or edge gateway; charges are aggregated monthly and billed through Google. Solutions are also available directly from ClearBlade, enabling customers to integrate with their preferred cloud provider.
While ClearBlade has excellent scale-out and scale-down capabilities, the solution has limited scale-up capabilities, meaning it is not intended to host resource-intensive applications or process large volumes of data at the edge. If customers require an integrated hardware-software solution provided by a single vendor, ClearBlade’s software solution is not suitable.
Use Cases
The solution suits heavy industries, utilities, and industrial IoT use cases. Some of the verticals supported by ClearBlade include rail transport, water, oil and gas, mining, smart cities, logistics, healthcare, and energy.
Dell Technologies: Dell NativeEdge
Solution Overview
The Dell Technologies full-stack edge deployment is an integrated hardware-software solution that includes Dell NativeEdge orchestration software. Dell NativeEdge is an edge operations software platform that helps businesses securely scale their edge platform and orchestrate applications across distributed locations. It streamlines edge operations at scale through centralized management, secure device onboarding, zero-touch deployment, and automated management of infrastructure and applications. The Dell NativeEdge platform includes a centralized orchestrator and compute hardware enabled by the NativeEdge OS, all of which reside on the customer network for both communication and application delivery.
Dell Technologies has built a robust partner ecosystem across its entire portfolio and through its OEM business. The partner ecosystem is geared toward ISVs and system integrators (SIs) that serve multiple vertical industries. Partners can test their applications and solutions at scale and on the latest edge infrastructure. Certified partners have access to Dell Technologies’ certification labs and can access deeper integrations with marketing, engineering, CTOs, and sales teams for development and go-to-market efforts. Self-certification programs are available.
Some blueprint plug-ins serve use cases such as connecting to, discovering, and orchestrating workloads into other environments, such as existing vSphere clusters, Kubernetes clusters, and cloud environments like Azure.
Dell Technologies is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Dell Technologies scored well on a number of decision criteria, including:
Plug-and-play provisioning: Dell NativeEdge offers both secure device onboarding and ZTP. This simply requires plugging in Ethernet and power; the endpoint device then connects to the NativeEdge Orchestrator automatically to onboard itself and provision the Dell NativeEdge OS. Dell NativeEdge allows customers to upload their own application images into the Orchestrator, giving them the ability to deploy those applications to multiple endpoints. Endpoint devices will continue to run the application even when disconnected from the Orchestrator.
Cloud integrations: Dell NativeEdge has integrations with multiple cloud vendors. The NativeEdge Orchestrator can be deployed in a Kubernetes cluster; for example, it could run on Amazon EKS or on an Ubuntu EC2 instance with a Kubernetes cluster running inside it (see the pre-check sketch after this list).
Cloud-like management: The Dell NativeEdge solution has a web-based interface through which administrators can deploy VMs via a VM image or a solution blueprint. In addition, a solution blueprint can provide deeper integrations to deploy applications and container-based applications inside that VM. Dell NativeEdge offers blueprints preloaded to deploy ISV-based solutions. The solution can securely onboard NativeEdge endpoints and deploy applications to those devices. Its open architecture allows it to connect to multicloud environments and deploy applications to those environments. Dell NativeEdge requires customers to provide a Kubernetes environment in which to install the management software.
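Because the NativeEdge Orchestrator can run in a customer-provided Kubernetes cluster such as Amazon EKS, a team would typically verify the target cluster before installation. The boto3 sketch below performs only that pre-check; the cluster name and region are placeholders, and the Dell-specific installation steps are not shown.

```python
# Pre-check sketch: confirm a customer-provided EKS cluster is ready to host
# management software. Cluster name and region are placeholders; the actual
# NativeEdge Orchestrator installation procedure is Dell-specific and not shown.
import boto3

CLUSTER_NAME = "edge-mgmt"  # placeholder EKS cluster name
eks = boto3.client("eks", region_name="us-west-2")

cluster = eks.describe_cluster(name=CLUSTER_NAME)["cluster"]
print(f"Cluster:  {cluster['name']}")
print(f"Status:   {cluster['status']}")    # expect "ACTIVE"
print(f"Version:  {cluster['version']}")   # Kubernetes version
print(f"Endpoint: {cluster['endpoint']}")  # API server URL

if cluster["status"] != "ACTIVE":
    raise SystemExit("EKS cluster is not ACTIVE; resolve this before installing management software.")
```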
Opportunities
Dell Technologies has room for improvement in a few decision criteria, including:
Cloud integrations: The vendor can further improve on this metric by developing features such as defining failover and failback scenarios to cloud services in case of failures or configuration issues with the edge deployment, or leveraging cloud services such as analytics, observability, security, and identity management.
Development environments and SDKs: While Dell NativeEdge provides a solution blueprint written in YAML based on TOSCA standards, it could further improve this by offering built-in IDEs and SDKs.
Purchase Considerations
The solution is undergoing substantial development. Dell’s Edge Design Program is an exclusive feedback program for Dell NativeEdge that gives customers and partners early access to software and the ability to engage with the product managers who defined Dell NativeEdge. Currently, Dell is working with its ISV partners to develop blueprints for Dell Validated Designs. These will be preinstalled in the NativeEdge catalog, where Dell will maintain blueprints from both Dell and its partners, and customers will be able to add those blueprints to their catalog.
Use Cases
The Dell NativeEdge solution can support use cases such as running in air-gapped environments with limited or unstable network connectivity, latency-sensitive operations needing near-real-time processing, and those that require data to remain local to comply with legislative or regulatory requirements.
Google Cloud: Distributed Cloud Edge*
Solution Overview
Google Distributed Cloud Edge is a fully managed, integrated hardware-software solution that delivers applications equipped with AI, security, and open source at the edge. Google Distributed Cloud Edge uses a cloud-backed control plane that provides a consistent management experience for edge devices. Administrators can use the same tools, policies, and processes they use in GCP for mission-critical use cases running on the edge.
Distributed Cloud Edge is available in two form factors. Distributed Cloud Edge Rack is a rack of six Distributed Cloud Edge Servers and two top-of-rack (ToR) switches. This configuration supports both local control plane and cloud control plane clusters. Distributed Cloud Edge Server is a standalone device that connects directly to the local network through the existing network hardware. This form factor supports only local control plane clusters.
Google remotely manages the physical machines and ToR switches that constitute the Distributed Cloud Edge installation. This includes installing software updates and security patches and resolving configuration issues. Network administrators can also monitor the health and performance of Distributed Cloud Edge clusters and nodes and work with Google to resolve any issues.
Distributed Cloud Edge can run Google Kubernetes Engine (GKE) clusters on dedicated hardware provided and maintained by Google that is separate from the traditional GCP data center.
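Because Distributed Cloud Edge exposes GKE clusters, standard Kubernetes tooling works against them once credentials are present in the local kubeconfig. The sketch below uses the official Python Kubernetes client to list edge nodes and their readiness; the kubeconfig context name is a placeholder, and obtaining credentials (for example, via gcloud) is assumed to have been done already.

```python
# Sketch: once a Distributed Cloud Edge GKE cluster is in the local kubeconfig,
# the standard Kubernetes client works unchanged. The context name is a placeholder.
from kubernetes import client, config

# Load credentials for the edge cluster from ~/.kube/config (assumed already populated).
config.load_kube_config(context="edge-cluster-context")

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```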
Google is positioned as an Entrant and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Google scored well on a number of decision criteria, including:
Plug-and-play provisioning: When the Google Distributed Cloud Edge hardware arrives at the designated location, it is preconfigured with hardware components, GCP services, and network settings specified during the ordering process. Google installers complete the physical installation, and a system administrator connects Distributed Cloud Edge to the local network. Once the hardware is connected, it communicates with GCP to download software updates and establish a connection with the associated Google Cloud project. At that point, node pools can be provisioned, and workloads deployed on Distributed Cloud Edge.
Cluster management: The solution supports configuring permissions, logging, and provisioning workloads for each cluster. The cluster administrator assigns nodes to node pools and node pools to Distributed Cloud Edge clusters. This means the solution allows users to define clusters that span multiple nodes and can be deployed across geographies.
Edge security: Google’s Distributed Cloud Edge supports hardware security features such as a Trusted Platform Module (TPM), a platform certificate, and port lockdown. Distributed Cloud Edge uses Linux Unified Key Setup (LUKS) to encrypt the logical volumes on each Distributed Cloud Edge-connected node. Network traffic between Distributed Cloud Edge-connected hardware and GCP is encrypted using MASQUE tunnels or TLS using per-machine certificates. Distributed Cloud Edge automatically rotates these certificates on a regular schedule.
Opportunities
Google has room for improvement in a few decision criteria, including:
Cloud integrations: Although Google Distributed Cloud Edge is deeply integrated with the GCP portfolio and ecosystem, there are no out-of-the-box ways of integrating with non-GCP environments.
Marketplace and services catalog: While Google Cloud offers a marketplace of proprietary and third-party services, only a few of these are suitable for the Distributed Cloud Edge product portfolio, meaning the range of third-party services available for this product is limited.
Non-x86 compute: While the solution supports processor architectures such as GPUs, it doesn’t currently support other architectures such as ARM, DPUs, FPGAs, or ASICs.
Purchase Considerations
Google’s full-stack edge deployment solution is particularly suitable for customers who already have a GCP footprint. The service is designed to extend Google Cloud’s capabilities to remote locations or on-premises data centers. Organizations that already have the skills and knowledge to manage GCP environments can realize a shorter time to value by deploying this solution.
Distributed Cloud Edge nodes are not standalone resources and must remain connected to GCP for control plane management and monitoring purposes. The Distributed Cloud Edge hardware and workloads can continue to run for up to seven days if Distributed Cloud Edge is disconnected from GCP.
Distributed Cloud Edge places several restrictions on workloads and features. For example, GKE Enterprise features such as Anthos Service Mesh are not supported, with the exception of the Config Sync feature of Config Management.
Use Cases
Distributed Cloud Edge is suitable for applications that require a stable network connection and can’t tolerate the traffic disruptions that commonly occur when transferring data over the internet. It can also be used for latency-sensitive use cases and for applications that generate amounts of data that would be performance- or cost-prohibitive to transfer to and from GCP. Another use case is compliance with local laws or regulations dictating that data must remain on-premises and must not be stored outside the business or outside a specific geographic jurisdiction.
Litmus: Industrial DataOps Suite
Solution Overview
Litmus’s Industrial DataOps Suite is designed to address industrial edge compute use cases and is made up of three parts. Litmus Edge is the backbone, collecting data from heterogeneous industrial environments, building data pipelines enriched with context, analyzing data at the edge, and enabling enterprise-scale data initiatives. Litmus UNS is a collaboration product for standardization and governance among IT, data, and operational teams. Lastly, Litmus Edge Manager works as the command center, simplifying remote management and enabling large-scale rollout of the Industrial DataOps Suite. It is a complementary offering tightly coupled to Litmus Edge today and will also be used to manage Litmus UNS.
Litmus Edge can run as an OS directly on bare metal hardware (such as an Intel or Arm gateway), as a VM, or as a containerized deployment. This can create an air gap between OT assets and IT systems: Litmus Edge can securely connect to plant floors while allowing IT teams to manage the deployment. In a hybrid model, local OT teams manage the edge data use cases while a central enterprise IT team manages Litmus Edge data pipelines, devices, security, and infrastructure from the cloud.
Litmus Edge offers multiple features to ensure high availability, disaster recovery, business continuity, and data protection at the edge. Litmus Edge can be deployed on redundant hardware configurations, including mirrored storage and failover capabilities, to minimize downtime due to hardware failures. It can configure automatic failover between edge nodes and edge-to-cloud failover configurations. Data can be backed up in the cloud for off-site protection.
The solution supports horizontal and vertical scalability. For horizontal scaling, or scaling out, customers can add additional Litmus Edge instances to distribute workloads and meet data processing demands efficiently. This allows for scaling compute, storage, and networking resources as needed. For vertical scaling, customers can upgrade individual instances with more powerful hardware configurations, such as more CPUs, RAM, and storage, to handle increased data volumes within a single node.
Litmus is positioned as a Leader and Fast Mover in the Maturity/Feature Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Litmus scored well on a number of decision criteria, including:
Cluster management: Administrators can define and manage Litmus Edge clusters in Litmus Edge Manager, grouping related deployments based on location, function, or other criteria. Cluster-level management enables applying configurations, deploying applications, and performing updates consistently across all members of the cluster. Data exchanges between isolated instances allow applications to access data or interact with components in other clusters.
Cloud-like management: Litmus Edge Manager serves as the primary platform for managing multiple deployments. It provides a unified interface to view and monitor all Litmus Edge instances across diverse locations, deploy and update applications remotely across different instances, manage configurations and access control for each instance, and gather data and aggregate insights from all deployments.
Marketplace and services catalog: Litmus offers a prebuilt marketplace catalog with popular applications preloaded, including databases, AI/ML modeling, Power BI and other data analytics apps, vertical-specific and generic applications, and industry-specific uses such as energy monitoring. Customers can also bring their own applications or third-party apps.
Opportunities
Litmus has room for improvement in a few decision criteria, including:
Cloud-like management: While Litmus Edge runs in a web browser and can be managed through a simple web-based interface, it could further improve by providing native implementations of identity management and authorization services, as well as labeling and tagging methods that enable administrators to work with identities rather than IP addresses.
Edge-native runtime: The solution does not currently provide an edge-native runtime, which is a specialized runtime designed to run applications on resource-constrained devices to optimize performance and resource usage.
Purchase Considerations
Litmus offers three tiered packages: Foundation, for customers looking solely for a common data layer at the edge to collect, process, store, forward, and integrate data; Growth, for customers that want to run applications and analytics at the edge and use Litmus’s management platform; and Scale, for customers that want to orchestrate applications and do ML, digital twins, vision processing, and more advanced functions. The tiers allow customers to select the best package based on their stage of development and grow into newer features and functionality as they advance in their journey.
As a software-only solution, Litmus does not provide its own integrated hardware-software appliances. This means customers need to procure and manage their own hardware deployments, working with third-party suppliers or leveraging their existing infrastructure.
Use Cases
Mainly targeting industrial, IoT, and OT use cases, Litmus’s solution can run as an OS on bare metal, as a VM, or as a containerized application on any gateway or local server. It can connect to any system or machine directly or via the local network. The vendor has demonstrated deployments in verticals such as aerospace, automotive, food and beverage, precision manufacturing, mining, oil and gas, electronics manufacturing, and agriculture.
Microsoft: Azure Stack Edge and Azure Local
Solution Overview
In late 2024, Microsoft updated its HCI product line, launching Azure Local. It replaces all preexisting edge infrastructure products and is part of Microsoft’s adaptive cloud approach. Azure Local is cloud-connected infrastructure that can be deployed at physical locations under the customer’s operational control. Customers can operate and scale distributed infrastructure using the Azure portal and APIs. This includes foundational Azure compute, networking, storage, and application services.
Azure Stack Edge is a purpose-built, integrated hardware-software solution available as several purpose-built devices: Pro R, a ruggedized, data center-grade appliance with a built-in NVIDIA T4 GPU; Pro, a 1U rack-mountable appliance optimized for conditions in a data center or branch location; Pro 2, a compact form factor with flexible mounting options optimized for edge and branch locations; and Mini R, a ruggedized, battery-operated small device designed for harsh environments and disconnected scenarios.
Azure Stack Edge can run containerized applications and VMs at the location where data is created and collected. It can analyze, transform, and filter data at the edge, sending only the data needed to the cloud for further processing or storage. Azure Stack Edge acts as a cloud storage gateway and enables hands-off data transfers to Azure while retaining local access to files. With local caching and bandwidth throttling to limit usage during peak business hours, Azure Stack Edge can optimize data transfers to and from Azure.
Azure Local hosts Windows and Linux VMs or containerized workloads and their storage. It's a hybrid product that connects the on-premises system to Azure for cloud-based services, monitoring, and management. An Azure Local system consists of a server or a cluster of servers running the Azure Local OS, connected to Azure. The Azure portal can monitor and manage individual Azure Local systems as well as view all Azure Local deployments.
The solution supports both containerized and non-containerized workloads: containerized applications run via IoT Edge or Kubernetes, while non-containerized applications run in Windows and Linux VMs deployed on the devices.
Microsoft is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Microsoft scored well on a number of decision criteria, including:
Cloud integrations: Azure Arc is a bridge that extends the Azure platform, helping organizations build applications and services that can run across data centers, at the edge, and in multicloud environments. It supports developing cloud-native applications with a consistent development, operations, and security model. Azure Arc runs on new and existing hardware, virtualization and Kubernetes platforms, IoT devices, and integrated systems, allowing organizations to leverage existing investments to modernize with cloud-native solutions.
Cloud-like management: The solution allows customers to create generalized or specialized VM images, prepared from a generalized Windows image (from a VHD or ISO) or from custom VM images based on an Azure VM. Compute resources can be provisioned via the Azure portal using templates, Azure PowerShell cmdlets, Azure PowerShell or Python scripts, or the Azure CLI (see the sketch after this list). Device VMs can then be managed through the Azure portal, via the device’s PowerShell interface, or directly through the APIs.
DevOps suitability: Using Azure Stack and Arc, the solution enables organizations to develop edge-native applications; integrate Azure monitoring, security, and compliance into DevOps toolkits; create policy-driven application deployments; and propagate configuration across environments. The solution can integrate with tools such as GitHub, Terraform, and Visual Studio. Organizations can build end-to-end applications spanning local data collection, storage, and real-time analysis. Azure Arc-enabled SQL Managed Instance or PostgreSQL can be deployed on any Kubernetes distribution and on any cloud.
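To make the scripted provisioning path concrete, here is a minimal Python sketch assuming the azure-mgmt-compute SDK pointed at a device’s local Azure Resource Manager endpoint. The endpoint URL, resource IDs, and credential values are placeholders, and the "dbelocal" location string and payload shape should be verified against Microsoft’s Azure Stack Edge documentation.

```python
from azure.identity import ClientSecretCredential
from azure.mgmt.compute import ComputeManagementClient

# Hypothetical tenant, app registration, and device-local ARM endpoint.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>", client_id="<app-id>", client_secret="<secret>"
)
compute = ComputeManagementClient(
    credential,
    subscription_id="<subscription-id>",
    base_url="https://management.myasedevice.example.com",  # device-local endpoint
)

# Create a VM from a previously uploaded custom image; "dbelocal" is the
# location identifier Azure Stack Edge uses for device-local resources.
poller = compute.virtual_machines.begin_create_or_update(
    "edge-rg", "edge-vm-01",
    {
        "location": "dbelocal",
        "hardware_profile": {"vm_size": "Standard_D2_v2"},
        "storage_profile": {"image_reference": {"id": "<custom-image-resource-id>"}},
        "os_profile": {
            "computer_name": "edge-vm-01",
            "admin_username": "azureuser",
            "admin_password": "<password>",
        },
        "network_profile": {"network_interfaces": [{"id": "<nic-resource-id>"}]},
    },
)
vm = poller.result()  # blocks until provisioning completes
print(vm.provisioning_state)
```

The same request body can be expressed as an ARM template or PowerShell cmdlet parameters, which is what makes the provisioning surface consistent across the portal, scripts, and CI/CD pipelines.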
Opportunities
Microsoft has room for improvement in a few decision criteria, including:
Marketplace and services catalog: Customers can use an Azure Marketplace image to create a VM image for Azure Stack Edge deployments, but this process is more difficult compared to other Azure and non-Azure cloud procurement processes.
Cluster management: The solution does not provide some of the more advanced cluster management features, such as using the management platform as the primary interface for managing multiple deployments, or supporting cross-cluster application execution, allowing secure data exchange and interaction between isolated instances.
Non-x86 compute: While the solution supports accelerators such as GPUs, it doesn’t currently support other processor architectures such as Arm or specialized silicon such as DPUs, FPGAs, or ASICs.
Purchase Considerations
The Azure Stack product family is mainly designed for high-performance use cases, which means the solution’s scale-out capabilities—for handling a very large number of edge deployments—are limited, as the products are mainly suitable for scale-up. Similarly, the scale-down features only go as far as the Mini R, without further flexibility for lightweight deployments. Compared to other solutions featured here that offer a turnkey experience, the Azure Stack suite of products requires configuration and development efforts, which results in a longer time to value.
Use Cases
When deployed in edge locations, Azure Local supports latency-sensitive use cases for near-real-time processing. Microsoft targets these solutions to verticals such as financial services, public sector, manufacturing, retail, and healthcare.
Scale Computing: SC//HyperCore
Solution Overview
Scale Computing’s full-stack edge deployment solution offers virtualization, servers, storage and backup, disaster recovery, and fleet management features to deliver a single manageable solution at scale in the data center, in the branch office, and for distributed edge locations. Scale Computing Platform is the overarching solution that includes Scale Computing Fleet Manager and Scale Computing HyperCore.
SC//HyperCore is an OS that includes a fully integrated KVM-based hypervisor with a patented block access, direct-attached storage system that provides full fault tolerance and automated tiering across hybrid flash storage architectures. The SC//HyperCore software layer is a lightweight type 1 hypervisor that integrates directly into the OS kernel and leverages the virtualization offload capabilities of modern CPU architectures.
For resilience, the architecture is built with layers of redundancy, such as support for dual active/passive network ports, redundant power supplies, and redundant block storage with data striped across all cluster nodes. Intelligent automation handles drive failures and node failures, redistributing data across remaining drives and VMs across remaining nodes and automatically absorbing replacement drives and replacement nodes into the resource pools.
Scale Computing’s patented software-defined storage combines all storage devices in the cluster into a single storage pool that is tiered between flash NVMe/SSD and spinning HDD storage where both exist. As data is written, multiple copies of data blocks are striped across all nodes in a cluster to protect against individual drive and node failures. This wide striping gives a performance advantage to every VM on the cluster.
Live VM migration lets VMs move nondisruptively between nodes with no downtime. This allows resource allocation to be rebalanced across the cluster and is also used during the rolling update process for the OS and firmware. Thin VM cloning allows cloned VMs to share the same data blocks as their parent VM for storage optimization, but with no dependencies: if the parent is deleted, the clone is unaffected and continues operating without disruption.
Scale Computing is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Scale Computing scored well on a number of decision criteria, including:
Cloud-like management: Autonomous Infrastructure Management Engine (AIME) is the orchestration and management engine that powers SC//HyperCore. It handles day-to-day administrative and maintenance tasks automatically, monitors the system for security, hardware, and software errors, and remediates those errors where possible. It identifies the root cause and minimizes the impact of those issues when it can’t repair them automatically, notifying users with a specific problem determination and actions (including actions to secure the environment) rather than just sending a stream of data that must be interpreted.
Plug-and-play provisioning: SC//HyperCore enables seamless programmatic deployment of containers. To run containers on SC//HyperCore, users simply deploy a container-optimized OS with the container runtime of their choice. SC//Platform’s Red Hat-certified Ansible collection enables automated container deployment and management, and Ansible’s idempotent nature is well suited to deploying containerized applications at scale (see the sketch after this list). Ansible is key to allowing IT and application teams to manage their edge infrastructure with IaC workflows.
Visibility and monitoring: SC//Fleet Manager is a cloud-hosted monitoring and management tool built for hyperconverged edge computing infrastructure at scale. It can monitor fleets of one to 50,000 SC//HyperCore-based clusters, including centralized upgrade management and orchestration. SC//Fleet Manager can centrally stage clusters to avoid having to send technical resources on-premises for installation, and it can securely access the SC//HyperCore UI for any cluster in the fleet without the need for complex remote access solutions.
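A minimal sketch of what that IaC workflow can look like follows: Python writes an Ansible play to disk and runs it. The collection name reflects Scale Computing’s certified scale_computing.hypercore collection, but the module name and parameters shown are illustrative and must be checked against the collection’s documentation.

```python
import subprocess
import tempfile
import textwrap

# Hypothetical play: the module and its parameters follow the pattern of the
# scale_computing.hypercore collection but should be verified before use.
PLAYBOOK = textwrap.dedent("""\
    - hosts: localhost
      connection: local
      gather_facts: false
      tasks:
        - name: Ensure a container-host VM exists on the edge cluster
          scale_computing.hypercore.vm:
            vm_name: edge-container-host   # hypothetical parameter values
            state: present
            vcpu: 2
            memory: 4294967296             # 4 GiB, in bytes
""")

with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as f:
    f.write(PLAYBOOK)
    playbook_path = f.name

# Idempotent by design: re-running converges the cluster to the same state.
subprocess.run(["ansible-playbook", playbook_path], check=True)
```

Because the play is declarative, the same file can be applied across hundreds of edge clusters from a CI pipeline without special-casing clusters that are already in the desired state.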
Opportunities
Scale Computing has room for improvement in a few decision criteria, including:
Marketplace and services catalog: One of Scale Computing’s current challenges is the lack of an integrated marketplace and services catalog that allows the purchasing and provisioning of third-party services.
DevOps suitability: The solution could further develop its DevOps support by adding declarative configuration via languages such as YAML, scripting via languages such as Python and JavaScript, CLI tools for managing edge clusters, or configuration management programs such as PowerShell.
Non-x86 compute: While the solution supports accelerators such as GPUs, it doesn’t currently support other processor architectures such as Arm or specialized silicon such as DPUs, FPGAs, or ASICs.
Purchase Considerations
SC//HyperCore is sold either as a license based on the number of compute cores or as a per-cluster site license for edge deployments. SC//Fleet Manager is sold as a license based on the number of clusters under management. Customers can purchase a full appliance up front or lease the hardware through a Scale Computing reseller partner. In the near future, customers will be able to purchase hardware directly from a certified Scale Computing imaging partner to improve configuration flexibility.
Use Cases
Use cases include IT operations simplification, mission-critical business app infrastructure, infrastructure automation, cost management and resource optimization, remote office/branch office (ROBO), and edge deployments.
Sidero Labs: Omni
Solution Overview
Sidero Labs’ Omni provides a SaaS or self-hosted solution for the simple and secure deployment, operation, and management of edge devices and Kubernetes clusters. It runs Talos Linux, an open source, hardened Linux OS purpose-built for Kubernetes. Edge devices running Talos Linux automatically join a secure, WireGuard-encrypted management network, through which Omni automates and orchestrates the deployment and management of Kubernetes and Kubernetes applications, with integrated, secure enterprise authentication.
Omni is a platform for delivering hardened, robust Kubernetes on edge (or other) devices without requiring local IT skills, enabling remote management, including deployment of applications and services, via secure WireGuard tunnels across the public internet. Omni and Talos Linux provide full API client libraries and command-line tools to automate authentication and administration tasks.
Cloud instances are fully labeled with cloud provider, machine type, availability zone, and region, enabling intelligent targeting of workloads. Multiple appliance deployments can be managed using a labeling system or device-specific attributes to allocate nodes to classes of machines or clusters. Configuration of machines and clusters can be defined globally, per cluster, per class of machine, per role (such as control plane or worker), or with other labels.
Nodes can operate in offline mode. If the entire cluster is local to a site that is offline, there will be no impact from being disconnected. Upon reconnection, Omni will reconcile the disconnected cluster to ensure it conforms to the desired state. If the disconnected site is a member of a distributed cluster and the node can no longer reach the control plane nodes, the worker will continue its operations in offline mode.
Sidero Labs is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Sidero Labs scored well on a number of decision criteria, including:
Plug-and-play provisioning: Nodes are provisioned automatically once they boot off an Omni image, and with no further configuration required, they will attempt to securely join the WireGuard network whose information is embedded in that image. Thus, as soon as they are powered on or joined to the network, they will register.
Edge security: Omni-managed Talos Linux clusters offer strong security features, including host- and CNI-level firewalls, WireGuard encryption within the cluster and with the Omni platform, a Kernel Self-Protection Project (KSPP)-hardened OS, SecureBoot, TPM support, and certificate-based authentication.
Cluster management: Omni allows the management of Kubernetes clusters running in cloud environments as well as edge environments and enables clusters to span environments, with control planes in the cloud and workers on the edge (see the sketch after this list). It also allows an edge cluster to burst into the cloud for additional capacity when required. Omni automates cluster backup to cloud storage, which can be used to provide disaster recovery automation to the cloud.
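As a sketch of how such a split cluster can be declared, the snippet below syncs a hypothetical Omni cluster template in which the control plane runs on cloud machines and the workers on edge devices. The template fields and the omnictl subcommand follow Omni’s documented template workflow but should be verified against current Sidero Labs documentation; all names and versions are illustrative.

```python
import subprocess
import tempfile
import textwrap

# Hypothetical Omni cluster template: control plane in the cloud, workers at
# the edge, both allocated via machine classes (labels on registered machines).
TEMPLATE = textwrap.dedent("""\
    kind: Cluster
    name: retail-site-042
    kubernetes:
      version: v1.29.0
    talos:
      version: v1.7.0
    ---
    kind: ControlPlane
    machineClass:
      name: cloud-small        # machines running in a cloud region
      size: 3
    ---
    kind: Workers
    machineClass:
      name: edge-site-042      # edge devices registered from the site
      size: unlimited
""")

with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write(TEMPLATE)
    path = f.name

# omnictl reconciles the running cluster toward the template's desired state.
subprocess.run(["omnictl", "cluster", "template", "sync", "--file", path], check=True)
```

Re-running the sync is idempotent, so the template file can live in version control and be applied from CI whenever the desired state changes.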
Opportunities
Sidero Labs has room for improvement in a few decision criteria, including:
Marketplace and services catalog: Sidero does not currently provide customers with self-service mechanisms to purchase and deploy first- and third-party services.
Visibility and monitoring: The solution could improve this feature by offering monitoring of firmware and driver versions to ensure the infrastructure is kept up to date, geographical displays of deployment locations on global maps, topological views of edge deployments, and visibility into cloud and on-premises environments.
Cloud integrations: Sidero could develop features such as defining failover and failback scenarios to cloud services in the event of failures or configuration issues with the edge deployment, integrating with public cloud services—such as AWS S3 and Azure Blob Storage—for storage tiering and moving stale data to low-cost cloud environments, and leveraging cloud-based services such as analytics, observability, security, or identity management.
Purchase Considerations
Sidero Labs offers professional services to support the design of Omni, Talos, and Kubernetes architectures. Technical support follows a three-tier model and is offered either during US East Coast business hours or as enterprise 24/7 support backed by service-level agreements (SLAs).
Use Cases
Sidero Labs’ solution can serve multiple use cases, as it can scale to hundreds of single and multinode clusters. The solution can support edge use cases at remote industrial sites, on factory floors for automation, in laboratory edge devices serving the healthcare and pharmaceutical verticals, and in retail locations. It can also support low-latency compute, local data processing, and data residency compliance.
Siemens: Industrial Edge*
Solution Overview
Siemens Industrial Edge is an open, ready-to-use edge computing platform consisting of applications, OT and IT connectivity, devices, and a central management system for each. It offers integrated hardware-software solutions that simplify the collection, processing, and analysis of data from industrial assets, enabling fast and reliable software rollout on the shop floor and insightful decision-making.
Siemens’s solution places edge devices close to production or automation lines and runs the Industrial Edge Apps, which are either Siemens-developed or third-party applications for data analysis and other use cases.
The hardware and software components are centrally managed using Industrial Edge Hub, which offers a global app repository and software license monitoring, and Industrial Edge Management, which handles lifecycle management of app software, hardware firmware, and all related configurations. The latter also offers mass rollout capabilities and roles and rights management.
The underlying platform components of Industrial Edge Management and Industrial Edge Hub are developed by Siemens, while the runtime is available on Siemens or third-party devices. Industrial Edge Virtual Device (IEVD) offers the Industrial Edge device functionality without the need for physical hardware, running on ESXi.
The solution can send data to the cloud via MQTT, with destinations including Insights Hub, AWS, Azure, Alibaba Cloud, and any MQTT endpoint. A bidirectional asset model sync with Insights Hub is soon to be released. Industrial Edge applications are deployed and run on Industrial Edge devices, which can be physical or virtual.
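To illustrate the field-to-cloud path, here is a minimal sketch of publishing telemetry over MQTT using the common paho-mqtt Python client. The broker address, topic naming, and payload structure are hypothetical; Insights Hub and each cloud connector define their own endpoints, authentication, and topic conventions.

```python
import json
import time

import paho.mqtt.client as mqtt  # generic MQTT client; paho-mqtt 1.x constructor below

BROKER, PORT = "broker.example.com", 8883      # hypothetical TLS endpoint
TOPIC = "factory/line1/telemetry"              # hypothetical topic convention

client = mqtt.Client(client_id="edge-device-01")
client.tls_set()                               # MQTT over TLS, typical for cloud endpoints
client.connect(BROKER, PORT)
client.loop_start()                            # background network loop

payload = json.dumps({"ts": time.time(), "temperature_c": 72.4, "rpm": 1480})
client.publish(TOPIC, payload, qos=1)          # QoS 1: at-least-once delivery

client.loop_stop()
client.disconnect()
```

In practice, the edge application filters and contextualizes readings locally and publishes only the condensed results, which is what keeps bandwidth usage and cloud ingestion costs down.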
Mendix is a low-code IDE that can be used, via a plug-in, to develop Industrial Edge applications seamlessly. All components offer several APIs that are publicly available and collected in libraries. CI/CD pipelines can be set up for the deployment, management, and setup of applications and infrastructure. Siemens’s solution supports customer-built applications via the app publisher, a tool that migrates any Docker image to an Industrial Edge application via UI or CLI.
Siemens is positioned as a Challenger and Fast Mover in the Maturity/Feature Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Siemens scored well on a number of decision criteria, including:
Marketplace and services catalog: One of Siemens’s strong points is the extensive catalog of applications and services available for Industrial Edge. The solution provides a centralized list of available services that users can provision in a self-service manner. The marketplace and services catalog includes identity management, resource monitoring and management, a databus framework, cloud connectors, digital twins, and AI-based applications for use cases such as anomaly detection. The marketplace is based on an open ecosystem that includes third-party app providers, device builders, solution partners, and system integrators (SIs).
Visibility and monitoring: Industrial Edge Hub can monitor instances, network traffic, hardware memory, CPU, and storage, as well as application status.
Plug-and-play provisioning: Industrial Edge Management provides several provisioning mechanisms, such as a CLI and Helm charts. Industrial Edge Management-Virtual is set up via a configuration template for the virtual machine. Both are then connected to the user’s Industrial Edge Hub tenant via an onboarding file.
Opportunities
Siemens has room for improvement in a few decision criteria, including:
Cloud integrations: The solution can set up bidirectional communication with cloud environments via MQTT, but it does not support out-of-the-box features for cloud failover, bursting, backup and disaster recovery, or running applications across the edge deployment and cloud.
DevOps suitability: The solution could further develop its DevOps support by adding declarative configuration via languages such as YAML, scripting via languages such as Python and JavaScript, CLI tools for managing edge clusters, or configuration management programs such as PowerShell.
Edge-native runtime: While the IE Device Runtime is the basis for the Industrial Edge OS on devices, the solution could improve by reducing the runtime’s CPU and memory footprint to make it suitable for deployment on resource-constrained or lightweight devices.
Purchase Considerations
Industrial Edge applications are priced per instance on a yearly subscription model, with self-developed applications running at no additional cost. Essential applications for managing the solution, such as local data lakes or cloud connectors, are included for free.
Use Cases
Industrial Edge offers a set of tools and extensions to create microservice-based industrial applications for use cases such as performance analytics, energy monitoring and management, field-to-cloud connectivity, data consolidation, data model creation and semantics, anomaly detection of production components, AI/ML applications, virtualized controllers and sensors, and secure remote access.
SoftIron: HyperCloud and VM Squared
Solution Overview
SoftIron offers two full-stack edge products, HyperCloud and VM Squared. HyperCloud is SoftIron's private cloud solution designed to provide a true cloud experience for on-premises and sensitive workloads. HyperCloud is an integrated hardware-software solution built as a single, elastic, and resilient cloud architecture that can consolidate a customer's entire data center infrastructure.
VM Squared is a separate solution that provides a software-only virtualization technology for running workloads on commodity hardware without a hardware refresh. VM Squared uses an underlying software stack similar to HyperCloud’s. With these two products, organizations can deploy a cloud-like experience at their preferred locations without buying into a wider public cloud ecosystem.
The solution’s multitenancy capabilities are notable and particularly important in large-scale deployments that must support multiple business units. Multitenancy is supported via ACLs that offer granular policies, such as granting specific users or groups access to certain hosts, networks, and storage, or limiting users or groups to specific VM operations (for example, allowing reboot but not undeploy). A virtual data center combines a group with a set of resources: users can run instances on the hardware in attached resource providers but have no visibility into the hardware itself.
SoftIron is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
SoftIron scored well on a number of decision criteria, including:
Plug-and-play provisioning: As an integrated hardware-software solution, HyperCloud has good plug-and-play capabilities, enabling devices to integrate into the HyperCloud fleet on power-up. Once devices discover their HyperCloud, they download the latest software, are immediately added to the compute pool, and contribute to the private cloud’s distributed fleet intelligence. Certificates, DNS, and authentication can be managed using simple commands.
Cloud-like management: HyperCloud provides a public cloud-like experience with APIs and tools for self-service and cost-effective resource provisioning and use. The solution is managed as a single, self-maintaining cloud, making it easy to upgrade and maintain the entire system even when fully disconnected from the public cloud.
Cluster management: The solution is modular and flexible, meaning the centralized management system has awareness of the hardware deployments to continuously and dynamically manage the deployed fleet. The solution provides automated self-managing clusters with autodetection of new nodes, such as when a compute node is removed and swapped out. HyperCloud can also automatically redistribute workloads.
Opportunities
SoftIron has room for improvement in a few decision criteria, including:
Visibility and monitoring: The solution could improve this feature by offering geographical displays of the deployment locations on global maps; topological views of edge deployments, cloud, and on-premises environments; and interactive dashboards with the ability to drill down into specific clusters and access detailed metrics and diagnostics from a unified interface.
Edge AI: While the solution supports AI inference at the edge and offers native implementations of AI runtimes such as ONNX, it could further improve by offering additional tools and workflows (such as PyTorch, TensorFlow, Keras, TFLite, or scikit-learn) to optimize trained AI/ML models and reduce model size and memory footprint.
Cluster management: Even though SoftIron has very good capabilities for managing its deployments in a cluster, these mainly target single-location deployments rather than multi-location deployments.
Purchase Considerations
Organizations considering SoftIron’s HyperCloud need to think about the space, power, and cooling requirements associated with the deployment. Deploying these devices in colocation environments can help with scaling, as new space and power requests can be met by large colocation providers. Deploying in on-premises data centers will depend on the types of facilities available and will likely require displacing current data center hardware.
Use Cases
The HyperCloud solution is well suited to organizations that need a private cloud for on-premises or sensitive workloads but want to avoid the management overhead and silos associated with deploying multiple public cloud instances. Examples include enterprises in regulated industries, government agencies, and organizations with large technical workloads that require low latency.
Synadia: NATS.io and Synadia Platform
Solution Overview
Synadia invented the open source technology NATS, which allows applications to communicate securely across any combination of cloud vendors, on-premises, edge, web, mobile, or IoT environments. Synadia offers an enterprise-grade distribution of NATS, Synadia Platform, as well as Synadia Cloud; both provide the operational tools and support for customers to manage their NATS deployments.
The solution is a set of software components that can be deployed on any supported hardware, operating system, and architecture. The foundational components are listed below (a minimal client sketch follows the list):
NATS: providing connectivity and messaging.
JetStream (embedded in NATS): providing persistence, including real-time streams, key-value buckets, and object stores.
Nex: a NATS-native execution engine for hosting and managing workloads.
Control Plane: providing a centralized way to manage and observe a NATS system, including the application tier, through a user interface and API.
Private Link: a sidecar process for NATS that establishes an outbound connection to Control Plane (only required if Control Plane can’t directly connect to the NATS servers).
HTTP Gateway: enabling HTTP clients to interact with NATS and JetStream.
Connectors: providing a suite of NATS-based integrations to remote services and resources such as cloud services, databases, and other protocols.
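As a rough illustration of how these components fit together from an application’s point of view, the sketch below uses the nats-py Python client to persist and consume edge telemetry with JetStream. The server URL, stream name, and subject hierarchy are hypothetical.

```python
import asyncio

import nats  # nats-py client


async def main():
    # Hypothetical server address; leaf nodes and superclusters are addressed
    # the same way from the client's point of view.
    nc = await nats.connect("nats://edge-gateway.example.com:4222")
    js = nc.jetstream()

    # Persist sensor readings in a JetStream stream at the edge.
    await js.add_stream(name="SENSORS", subjects=["sensors.>"])
    await js.publish("sensors.site42.temp", b"21.7")

    # Durable consumer: processing position survives restarts and offline periods.
    sub = await js.subscribe("sensors.>", durable="analytics")
    msg = await sub.next_msg(timeout=5)
    print(msg.subject, msg.data)
    await msg.ack()

    await nc.close()


asyncio.run(main())
```

Because publishers and subscribers address subjects rather than hosts, the same code runs unchanged whether the stream lives on a local leaf node or is mirrored to the cloud.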
Synadia is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
Synadia scored well on a number of decision criteria, including:
Plug-and-play provisioning: An operator can deploy Control Plane centrally and then deploy NATS or Nex on the hosts or devices where connectivity, data, and workloads need to be available. If Control Plane and NATS are on the same network, Control Plane can connect directly to the NATS servers, and they will be visible in the interface. If Control Plane can’t reach the NATS servers directly, the Private Link component can run locally next to the NATS server, establish an outbound connection to Control Plane, and self-register.
Cloud-like management: Nex provides a uniform interface for running NATS-powered workloads (including containers, event-driven functions, and jobs) on various cloud and edge-native runtimes. NATS eliminates the need to work with IP addresses or FQDNs; networking functions are implemented by NATS through pub/sub, streaming, key-value, and its other capabilities.
Cluster management: Synadia Cloud is a global supercluster spanning the world and multiple cloud providers. A vehicle manufacturer uses NATS for intra-vehicle communications and for communications to and from the cloud. A network equipment manufacturer uses NATS to ship logs from all of the devices at its customers’ locations to its cloud back end. Retailers are deploying NATS to thousands of locations, and game companies monitor and control large numbers of game servers all over the world. Beyond superclustering, leaf nodes allow near-infinite scalability, as they can even be daisy-chained.
Opportunities
Synadia has room for improvement in a few decision criteria, including:
Cloud integrations: Synadia could develop features such as defining failover and failback scenarios to cloud services in the event of failures or configuration issues with the edge deployment; integrating with public cloud services—such as AWS S3 and Azure Blob Storage—for storage tiering and moving stale data to low-cost cloud environments; and leveraging cloud-based services such as analytics, observability, security, or identity management.
Marketplace and services catalog: The vendor does not currently offer a marketplace of applications and services from which customers can self-serve the procurement and deployment of various applications.
Visibility and monitoring: While the Synadia Control Plane provides purpose-built monitoring, it could offer features such as monitoring of firmware, driver, and OS versions to ensure the infrastructure is kept up to date; geographical displays of the deployment locations on global maps; and topological views of edge deployments as well as cloud and on-premises environments.
Purchase Considerations
Synadia’s enterprise-grade distribution of NATS can be delivered as cloud-based SaaS, self-hosted, or a managed platform. Cloud SaaS pricing is based on a tiered consumption model, while self-hosted and managed platform pricing is based on the scale of the deployment.
Use Cases
Synadia’s NATS can be used for connecting distributed applications across multiple geographies, clouds, or out to the edge without requiring external software dependencies to run a production system. It can handle edge deployments by syncing data between the cloud and edge, including data mirrors and sourcing.
ZEDEDA
Solution Overview
ZEDEDA delivers an open, distributed, cloud-native edge management and orchestration solution, simplifying the security and remote management of edge infrastructure and applications. ZEDEDA’s solution is composed of the commercial SaaS cloud controller and EVE-OS.
EVE-OS is a lightweight, open source, Linux-based edge OS with open orchestration APIs deployed on bare metal edge hardware. The ZEDEDA controller leverages the open APIs embedded within EVE-OS to orchestrate both the hardware below and applications above.
ZEDEDA extends the cloud experience to the edge by providing fleet-level management, network visibility, auditing, remote orchestration and management, zero-touch provisioning (ZTP), zero trust security, role-based access control (RBAC), and the ability to deploy and manage nodes at scale from a central location.
ZEDEDA Edge Access is a remote access solution built into the ZEDEDA offering that enables IT administrators and platform operations teams to instantly access any remote device from any location at any time. It is a simple solution that provides secure access, control, and audit tracing for edge deployments.
ZEDEDA’s Edge Kubernetes Service is a secure, managed Kubernetes service for moving Kubernetes from the data center to the edge, enabling organizations to deploy, manage, and modernize their edge deployments. ZEDEDA-managed containers enable customers to run container workloads natively on EVE-OS for situations where the infrastructure for running Docker, K3s, or K0s is too heavyweight for the edge.
ZEDEDA is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the full-stack edge deployments Radar chart.
Strengths
ZEDEDA scored well on a number of decision criteria, including:
Cloud-like management: ZEDEDA’s cloud controller provides orchestration and lifecycle management of both applications and hardware deployed at distributed edge locations (see the sketch after this list). It includes ZEDEDA Edge Application Services, which are distributed, cloud-native, edge-first services that simplify the security and remote management of edge infrastructure and applications at scale.
Edge security: The solution’s security features include ZTP with a hardware-backed zero trust workflow, remote attestation, third-party security appliances, wireline encryption, RBAC, IDP integrations, user access controls, password control for users, and full policy-based key exchange (non-password) access for hosts.
Marketplace and services catalog: ZEDEDA includes an embedded marketplace with an extensive collection of applications and solutions, including OSs, container runtimes, network security, network switching and routing, SD-WAN, SASE, data connectivity, data transformation, data visualization, data preparation and tagging, MLOps, AI runtimes, observability, eventing and actioning, and more. ZEDEDA offers model hubs in its marketplace, partnering with a number of large AI/ML providers, and has introduced AI/MLOps into its pipeline to allow model instances to run at the edge with the ease of running a VM.
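To give a feel for programmatic control of such a cloud controller, below is a minimal REST sketch in Python. Every endpoint path, field name, and the controller URL here is hypothetical; ZEDEDA publishes its own API, and the real resource model should be taken from its documentation.

```python
import requests

# All paths and payload fields below are hypothetical, sketching the general
# shape of onboarding a device and deploying an app via a controller API.
BASE = "https://controller.example.com/api/v1"   # hypothetical controller URL
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

# Register an edge node for zero-touch onboarding.
node = requests.post(f"{BASE}/devices", headers=HEADERS, json={
    "name": "store-042-gw",
    "serial": "ABC123",        # hardware-backed identity claimed at first boot
}, timeout=30)
node.raise_for_status()

# Deploy a marketplace application instance onto the registered node.
app = requests.post(f"{BASE}/apps/instances", headers=HEADERS, json={
    "name": "pos-analytics",
    "deviceId": node.json()["id"],
    "image": "marketplace/pos-analytics:1.4",
}, timeout=30)
app.raise_for_status()
print("deployed:", app.json())
```

The point of the sketch is the workflow, not the endpoints: because the device's identity is anchored in hardware, the node can be drop-shipped to a site and claimed entirely through API calls like these.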
Opportunities
ZEDEDA has room for improvement in a few decision criteria, including:
DevOps suitability: ZEDEDA could further develop its support of DevOps teams to programmatically interact with the solution. While it offers both a CLI and an API, it does not currently offer native integrations with CI/CD tooling or IaC tools, such as SDKs or IDEs.
Plug-and-play provisioning: While ZEDEDA has a very good provisioning mechanism for its instances, it does not include hardware in its offering, which means customers need to procure and manage the hardware component from a third party.
Cloud integrations: Although the solution can integrate and has awareness of public cloud environments, it could improve by allowing customers to define applications that span ZEDEDA and cloud workloads.
Purchase Considerations
ZEDEDA is delivered as a service with a subscription-based enterprise license and a pay-as-you-grow model. Subscriptions are based on edge compute capabilities, with additional options available. The EVE-OS component is governed by the Linux Foundation and licensed under Apache 2.0. ZEDEDA also includes 24/7 support for EVE-OS.
Use Cases
ZEDEDA’s solution is suitable for managing globally distributed edge deployments, which are common in industries such as oil and gas, heavy industry, and retail. The solution can be used in both IoT and OT scenarios, including in air-gapped environments. It offers scale-down and scale-out capabilities, making it suitable for distributed compute use cases on resource-constrained devices.
6. Analyst’s Outlook
One of the most interesting observations about the vendors featured in this report is the range of use cases and the differences among them. While all vendors offer a full-stack solution for running workloads at the edge of the network, there is huge variety in the types of workloads they can run. For example, organizations can deploy a full data center’s worth of appliances using solutions from vendors such as AWS, Microsoft, or SoftIron, or run very small workloads on lightweight devices with solutions such as ClearBlade’s.
Organizations can purchase integrated hardware-software solutions from Siemens, Scale Computing, and Dell, or software-only solutions from ZEDEDA, Broadcom, Synadia, or Acumera. Those that distribute content globally can even tap into a global network provided by Azion that runs dozens of PoPs.
This comes as no surprise considering the underlying technology required (the report’s table stakes) includes low-level components such as hypervisors and OSs that can provide a platform for building any type of service. However, when we compare the solutions in this report with adjacent categories such as HCI, full-stack edge deployment solutions offer most (if not all) of the tools required to develop, deploy, and run applications, not just a platform to host them.
Full-stack edge deployments bring the lessons from a decade of public cloud experimentation to network edge locations, where centralized management and remote orchestration once seemed impossible using traditional infrastructure management practices. However, just as the cloud has not been the agile and cost-efficient dream we expected in the mid-2010s, the edge might face similar maturity challenges as markets adopt these solutions more and run them against new edge cases.
To learn about related topics in this space, check out the following GigaOm Radar reports:
7. Methodology
*Vendors marked with an asterisk did not participate in our research process for the Radar report, and their capsules and scoring were compiled via desk research.
For more information about our research process for Radar reports, please visit our Methodology.
8. About Andrew Green
Andrew Green is an enterprise IT writer and practitioner with an engineering and product management background at a tier 1 telco. He is the co-founder of Precism.co, where he produces technical content for enterprise IT and has worked with numerous reputable brands in the technology space. Andrew enjoys analyzing and synthesizing information to make sense of today's technology landscape, and his research covers networking and security.
9. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.
10. Copyright
© Knowingly, Inc. 2025 "GigaOm Radar for Full-Stack Edge Deployments" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.