This GigaOm Research Reprint Expires December 11, 2026
[Cover slide: the GigaOm Radar logo, a radar chart, the title "PRIMARY STORAGE", and a headshot of analyst Whit Walters.]
December 12, 2025

GigaOm Radar for Primary Storage v6

Whit Walters

1. Executive Summary

Primary storage serves as the high-performance engine for an organization's most critical applications and data. Its importance has been magnified because enterprises now depend on it to drive strategic initiatives, from accelerating generative AI adoption to delivering real-time customer experiences. For IT leaders, application owners, and infrastructure managers, the choice of a primary storage platform is a foundational decision that directly impacts operational resilience, business agility, and the ability to extract value from data.

From a C-suite perspective, modernizing primary storage is no longer just an infrastructure refresh; it is a core business decision. The right platform mitigates significant risk by providing the last line of defense against cyberattacks with features like immutable snapshots and rapid recovery. It also acts as a catalyst for innovation, providing the scalable, low-latency performance needed for data-intensive analytics and AI model training. Ultimately, investing in a flexible and efficient storage architecture helps control costs by consolidating workloads and reducing management overhead.

This latest edition of the GigaOm Radar for Primary Storage marks a significant evolution in our analysis. We have consolidated our previous reports for large enterprises and midsize businesses into a single, comprehensive evaluation. This change reflects the reality of a market in which solutions must scale to meet a wide spectrum of enterprise needs without creating artificial product tiers. The scope of this report focuses on enterprise-grade platforms, encompassing both traditional arrays and software-defined storage (SDS) solutions.

This is our sixth year evaluating the primary storage space in the context of our Key Criteria and Radar reports. This report builds on our previous analysis and considers how the market has evolved over the last year. 

This GigaOm Radar report examines 19 of the top primary storage solutions and compares offerings against the capabilities (table stakes, key features, and emerging features) and nonfunctional requirements (business criteria) outlined in the companion Key Criteria report. Together, these reports provide an overview of the market, identify leading primary storage offerings, and help decision-makers evaluate these solutions so they can make a more informed investment decision.

GIGAOM KEY CRITERIA AND RADAR REPORTS

The GigaOm Key Criteria report provides a detailed decision framework for IT and executive leadership assessing enterprise technologies. Each report defines relevant functional and nonfunctional aspects of solutions in a sector. The Key Criteria report informs the GigaOm Radar report, which provides a forward-looking assessment of vendor solutions in the sector.

2. Market Categories and Deployment Types

To help prospective customers find the best fit for their use case and business requirements, we assess how well primary storage solutions are designed to serve specific target markets and deployment models (Table 1).

For this report, we recognize the following market segments:

  • Small and medium business (SMB): Organizations requiring cost-effective, easy-to-manage primary storage solutions with straightforward deployment and maintenance. These buyers prioritize simplicity, integrated data protection, and predictable costs, often preferring solutions that don't require specialized storage expertise.

  • Large enterprise: Organizations needing highly scalable, feature-rich storage platforms that can support diverse workloads and complex environments. These buyers focus on performance, reliability, advanced data services, and deep integration capabilities, with sophisticated requirements for data protection and compliance.

  • Specialized: Organizations with unique requirements, such as high-performance computing, AI/ML workloads, or industry-specific needs. These buyers prioritize specific capabilities like ultra-low latency, massive parallelism, or specialized compliance features over general-purpose functionality.

In addition, we recognize the following deployment models:

  • Hardware appliance: Preintegrated systems combining storage software and hardware in a unified package. This model offers simplicity and predictable performance but may limit hardware flexibility. It’s ideal for organizations seeking turnkey solutions with single-vendor support and minimal integration complexity.

  • Software-defined storage (SDS): Software solutions deployable on commodity hardware or in cloud environments. This model provides greater flexibility in hardware choice and deployment options, enabling custom configurations and potentially lower costs, though requiring more expertise to implement and maintain.

  • StaaS (storage as a service): A consumption-based model delivering storage as a fully managed service, shifting procurement from a capital expenditure (CapEx) to an operational expenditure (OpEx). This model provides a cloud-like experience for on-premises infrastructure, simplifying scaling and management, as the vendor typically owns and operates the underlying hardware.
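To make the CapEx-versus-OpEx trade-off concrete, the toy calculation below compares a one-time appliance purchase against a consumption-based StaaS subscription over a planning horizon. Every price, growth rate, and term is an illustrative assumption, not a vendor figure.

```python
# Toy comparison of CapEx (appliance purchase) vs. OpEx (StaaS subscription).
# All numbers are illustrative assumptions, not vendor quotes.

def capex_total(purchase_price: float, annual_support_rate: float, years: int) -> float:
    """Up-front purchase plus yearly support/maintenance fees."""
    return purchase_price + purchase_price * annual_support_rate * years

def staas_total(price_per_tb_month: float, start_tb: float,
                annual_growth: float, years: int) -> float:
    """Pay-as-you-go: the monthly fee tracks capacity, which grows each year."""
    total = 0.0
    tb = start_tb
    for _ in range(years):
        total += price_per_tb_month * tb * 12   # twelve monthly bills
        tb *= 1 + annual_growth                 # capacity grows year over year
    return total

if __name__ == "__main__":
    capex = capex_total(purchase_price=500_000, annual_support_rate=0.15, years=5)
    opex = staas_total(price_per_tb_month=25, start_tb=300, annual_growth=0.20, years=5)
    print(f"5-year CapEx: ${capex:,.0f}")
    print(f"5-year StaaS: ${opex:,.0f}")
```

Which model wins depends entirely on the inputs; rapid capacity growth or high support rates can tip the comparison either way, which is why the report treats deployment model as a fit question rather than a ranking.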

Table 1. Vendor Positioning: Target Market and Deployment Model

Target Market: SMB | Large Enterprise | Specialized
Deployment Model: Hardware Appliance | Software-Defined Storage | StaaS
DataCore Software
DDN
Dell Technologies
Fujitsu
Hitachi Vantara
HPE
IBM
Infinidat
Lightbits Labs
NetApp
Nutanix
Pure Storage
StorONE
StorPool Storage
Synology
TrueNAS
VAST Data
WEKA
Zadara
Source: GigaOm 2026

Table 1 components are evaluated in a binary yes/no manner and do not factor into a vendor’s designation as a Leader, Challenger, or Entrant on the Radar chart (Figure 1). 

“Target market” reflects which use cases each solution is recommended for, not simply whether that group can use it. For example, if an SMB could use a solution but doing so would be cost-prohibitive, that solution would be rated “no” for SMBs.

3. Decision Criteria Comparison

All solutions included in this Radar report meet the following table stakes—capabilities widely adopted and well implemented in the sector:

  • Scale-up or scale-out

  • Traditional vs software-defined storage

  • Integration with upper layers

  • Data protection

  • Basic data services

  • System analytics

  • NVMe

  • Cloud integration

Tables 2, 3, and 4 summarize how each vendor in this research performs in the areas we consider differentiating and critical in this sector. The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the relevant market space, and gauge the potential impact on the business.

  • Key features differentiate solutions, highlighting the primary criteria to be considered when evaluating a primary storage solution

  • Emerging features show how well each vendor implements capabilities that are not yet mainstream but are expected to become more widespread and compelling within the next 12 to 18 months 

  • Business criteria provide insight into the nonfunctional requirements that factor into a purchase decision and determine a solution’s impact on an organization

These decision criteria are summarized below. More detailed descriptions can be found in the corresponding report, “GigaOm Key Criteria for Evaluating Primary Storage Solutions.”

Key Features

  • AIOps for storage: AIOps for storage applies machine learning and automation to enable autonomous storage operations with minimal human intervention. This technology moves beyond passive monitoring to proactively resolve issues, optimize performance, and improve reliability, significantly reducing administrative overhead.

  • Ransomware protection: Built-in ransomware protection provides a critical last line of defense by using storage-level features to detect threats, protect data copies, and accelerate recovery. These capabilities are essential for ensuring business continuity and minimizing the impact of increasingly prevalent and sophisticated cyberattacks.

  • New media types: Modern storage solutions support diverse media types, including QLC flash, storage-class memory, and specialized AI-optimized storage. These technologies enable tiered performance and capacity options while optimizing cost efficiency and workload-specific requirements.

  • NVMe-oF: NVMe-oF (over fabrics) extends the benefits of NVMe across the network, enabling near-local storage performance for distributed systems. The protocol supports multiple transport options, including remote direct memory access (RDMA) and TCP, offering flexibility in network architecture.

  • NVMe/TCP: NVMe/TCP enables NVMe protocols over standard TCP/IP networks, providing a cost-effective and flexible deployment option. This approach simplifies network requirements while maintaining most NVMe performance benefits.

  • API and automation tools: Modern storage platforms provide comprehensive APIs and automation tools for infrastructure-as-code (IaC) and DevOps integration. These capabilities enable automated provisioning, configuration management, and operational workflows.

  • Kubernetes integration: Native Kubernetes integration enables containerized applications to leverage enterprise storage capabilities through CSI. This integration supports stateful applications with enterprise data services and automated lifecycle management.

Table 2. Key Features Comparison

Rating key: ★★★★★ Exceptional | ★★★★ Superior | ★★★ Capable | ★★ Limited | ★ Poor | — Not Applicable
Key features scored: AIOps for Storage, Ransomware Protection, New Media Types, NVMe-oF, NVMe/TCP, API and Automation Tools, Kubernetes Integration

DataCore Software (avg 2.4): ★★★★ | ★★★ | ★★★ | ★★★ | ★★★★
DDN (avg 2.9): ★★★ | ★★★ | ★★★★ | ★★★ | ★★★ | ★★★★
Dell Technologies (avg 4.4): ★★★★ | ★★★★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★★
Fujitsu (avg 2.1): ★★★ | ★★ | ★★★ | ★★★ | ★★★
Hitachi Vantara (avg 4.4): ★★★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★★ | ★★★★★
HPE (avg 4.3): ★★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★★
IBM (avg 4.0): ★★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★
Infinidat (avg 4.0): ★★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★
Lightbits Labs (avg 3.7): ★★ | ★★★ | ★★★★ | ★★★★ | ★★★★★ | ★★★★ | ★★★★
NetApp (avg 4.3): ★★★★ | ★★★★★ | ★★★ | ★★★★★ | ★★★★★ | ★★★★ | ★★★★
Nutanix (avg 3.9): ★★★ | ★★★★ | ★★★★ | ★★★ | ★★★★ | ★★★★★ | ★★★★
Pure Storage (avg 4.7): ★★★★★ | ★★★★ | ★★★★ | ★★★★★ | ★★★★★ | ★★★★★ | ★★★★★
StorONE (avg 3.7): ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★
StorPool Storage (avg 2.9): ★★★ | ★★★ | ★★ | ★★★★ | ★★★★ | ★★★★
Synology (avg 2.4): ★★★★ | ★★★★ | ★★ | ★★★ | ★★★★
TrueNAS (avg 3.1): ★★ | ★★★★ | ★★★ | ★★★ | ★★ | ★★★★ | ★★★★
VAST Data (avg 4.1): ★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★★ | ★★★★★
WEKA (avg 3.4): ★★★ | ★★★ | ★★★★ | ★★★ | ★★★★★ | ★★★★★
Zadara (avg 2.1): ★★★ | ★★★ | ★★★ | ★★★★
Source: GigaOm 2026

Emerging Features

  • Edge solutions: Edge-optimized storage solutions provide enterprise-class capabilities in compact, remotely manageable deployments. These solutions support the growing need for local data processing while maintaining centralized control and security.

  • Sustainability metrics: Sustainability metrics provide visibility into a storage system's power consumption and environmental impact. This enables organizations to align infrastructure decisions with corporate environmental, social, and governance (ESG) goals while reducing operational costs through improved energy efficiency.

  • AI/ML workload optimization: AI/ML workload optimization adapts a storage system’s architecture to service the unique I/O patterns of artificial intelligence applications. These features are critical for accelerating data pipelines and ensuring efficient performance for both high-throughput training and low-latency inference workloads.

Table 3. Emerging Features Comparison 

Rating key: ★★★★★ Exceptional | ★★★★ Superior | ★★★ Capable | ★★ Limited | ★ Poor | — Not Applicable
Emerging features scored: Edge Solutions, Sustainability Metrics, AI/ML Workload Optimization

DataCore Software (avg 2.0): ★★★ | ★★
DDN (avg 3.3): ★★ | ★★★ | ★★★★★
Dell Technologies (avg 4.0): ★★★★ | ★★★★ | ★★★★
Fujitsu (avg 2.3): ★★ | ★★★ | ★★
Hitachi Vantara (avg 3.7): ★★★ | ★★★★ | ★★★★
HPE (avg 3.7): ★★★ | ★★★★ | ★★★★
IBM (avg 3.0): ★★★ | ★★★ | ★★★
Infinidat (avg 3.0): ★★★ | ★★★ | ★★★
Lightbits Labs (avg 2.0): ★★ | ★★★
NetApp (avg 3.7): ★★★ | ★★★★ | ★★★★
Nutanix (avg 3.7): ★★★★ | ★★★ | ★★★★
Pure Storage (avg 3.7): ★★★ | ★★★★ | ★★★★
StorONE (avg 3.3): ★★★ | ★★ | ★★★★★
StorPool Storage (avg 0.0): —
Synology (avg 2.0): ★★★★ | ★★
TrueNAS (avg 2.3): ★★★ | ★★★
VAST Data (avg 3.7): ★★★ | ★★★ | ★★★★★
WEKA (avg 3.7): ★★★★ | ★★ | ★★★★★
Zadara (avg 2.3): ★★★★ | ★★
Source: GigaOm 2026

Business Criteria

  • Upgradeability: Upgradeability determines how effectively a storage system can evolve with business needs through software updates, hardware refreshes, and capacity expansions. Modern solutions must support nondisruptive upgrades while maintaining performance and reliability.

  • Efficiency: Storage efficiency encompasses data reduction, power consumption, and resource utilization across the solution stack. Modern platforms must balance performance optimization with environmental sustainability and operational costs.

  • Flexibility: Flexibility describes a storage system's ability to support diverse workloads and deployment scenarios while maintaining consistent management. This capability enables organizations to consolidate storage infrastructure while meeting varied application requirements.

  • Ease of use: Ease of use reflects the operational efficiency of day-to-day storage management and problem resolution. Modern solutions must support both traditional admin workflows and DevOps automation.

  • Cost per transaction ($/IOPS): $/IOPS measures the cost efficiency of storage performance, which is particularly important for high-performance applications and databases. This metric helps organizations optimize infrastructure investments based on workload requirements.

  • Cost of storage ($/GB): $/GB evaluates the total cost of usable capacity, including data reduction, protection overhead, and management expenses. This metric helps organizations compare different storage solutions and consumption models.
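The two cost criteria above reduce to simple arithmetic on a quote. The sketch below shows the calculation; the input figures are hypothetical, chosen only to illustrate the math, and the effective-capacity formula credits data reduction against usable capacity.

```python
# Hypothetical cost-efficiency metrics for comparing storage quotes.
# Input figures are invented for illustration only.

def cost_per_iops(total_cost: float, sustained_iops: float) -> float:
    """$/IOPS: total solution cost divided by sustained IOPS."""
    return total_cost / sustained_iops

def cost_per_gb(total_cost: float, raw_usable_gb: float,
                data_reduction_ratio: float) -> float:
    """$/GB of *effective* capacity: data reduction (dedupe/compression)
    multiplies the usable capacity the customer actually gets."""
    return total_cost / (raw_usable_gb * data_reduction_ratio)

if __name__ == "__main__":
    quote = {"cost": 400_000, "iops": 500_000, "usable_gb": 200_000, "drr": 3.0}
    print(f"$/IOPS: {cost_per_iops(quote['cost'], quote['iops']):.2f}")
    print(f"$/GB:   {cost_per_gb(quote['cost'], quote['usable_gb'], quote['drr']):.3f}")
```

Note that a vendor's quoted data reduction ratio is workload-dependent, so the same array can land at very different effective $/GB figures for different buyers.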

Table 4. Business Criteria Comparison

Rating key: ★★★★★ Exceptional | ★★★★ Superior | ★★★ Capable | ★★ Limited | ★ Poor | — Not Applicable
Business criteria, left to right: Upgradeability, Efficiency, Flexibility, Ease of Use, Cost per Transaction ($/IOPS), Cost of Storage ($/GB)

DataCore Software (avg 4.2): ★★★★ | ★★★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★
DDN (avg 3.8): ★★★ | ★★★★ | ★★★★ | ★★★ | ★★★★★ | ★★★★
Dell Technologies (avg 4.5): ★★★★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★★
Fujitsu (avg 3.5): ★★★ | ★★★★ | ★★★ | ★★★ | ★★★★ | ★★★★
Hitachi Vantara (avg 4.7): ★★★★ | ★★★★★ | ★★★★ | ★★★★★ | ★★★★★ | ★★★★★
HPE (avg 4.7): ★★★★★ | ★★★★★ | ★★★★ | ★★★★★ | ★★★★ | ★★★★★
IBM (avg 4.3): ★★★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★★
Infinidat (avg 4.2): ★★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★
Lightbits Labs (avg 3.8): ★★★ | ★★★★ | ★★★ | ★★★★ | ★★★★★ | ★★★★
NetApp (avg 4.7): ★★★★★ | ★★★★★ | ★★★★★ | ★★★★★ | ★★★★ | ★★★★
Nutanix (avg 4.5): ★★★★★ | ★★★★ | ★★★★★ | ★★★★★ | ★★★★ | ★★★★
Pure Storage (avg 4.7): ★★★★ | ★★★★★ | ★★★★ | ★★★★★ | ★★★★★ | ★★★★★
StorONE (avg 4.5): ★★★★ | ★★★★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★★
StorPool Storage (avg 4.0): ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★ | ★★★★
Synology (avg 3.8): ★★★ | ★★★ | ★★★★ | ★★★★★ | ★★★★ | ★★★★
TrueNAS (avg 4.2): ★★★★ | ★★★★ | ★★★★ | ★★★ | ★★★★★ | ★★★★★
VAST Data (avg 4.7): ★★★★★ | ★★★★★ | ★★★★★ | ★★★★ | ★★★★ | ★★★★★
WEKA (avg 4.5): ★★★★ | ★★★★★ | ★★★★★ | ★★★★ | ★★★★★ | ★★★★
Zadara (avg 3.8): ★★★★★ | ★★★ | ★★★★ | ★★★★ | ★★★ | ★★★★
Source: GigaOm 2026

4. GigaOm Radar

The GigaOm Radar plots vendor solutions across a series of concentric rings with those positioned closer to the center being judged as having the most complete solutions. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation and Feature Play versus Platform Play—while providing an arrowhead that projects each solution’s expected evolution over the coming 12 to 18 months.

[Figure 1 shows the Radar chart. Synology, Fujitsu, StorONE, IBM, and DDN are positioned on the Maturity side of the chart, while VAST Data, Hitachi Vantara, Lightbits Labs, and TrueNAS sit on the Innovation side at varying levels of maturity. A legend distinguishes Outperformers, Fast Movers, and Forward Movers.]

Figure 1. GigaOm Radar for Primary Storage 

The primary storage market is undergoing a significant transformation, driven by the strategic imperative to support AI-driven applications and defend against sophisticated cyberthreats. This year’s Radar, which evaluates 19 vendors, reflects a market in which the lines between traditional maturity and cutting-edge innovation are blurring. Mature, established vendors are aggressively integrating AI-powered analytics and automation into their core platforms to deliver more proactive and self-optimizing operations. Simultaneously, younger, innovation-focused vendors are rapidly building out the enterprise-grade data services and support structures required for mission-critical deployments.

As you can see in Figure 1, a defining characteristic of this year's Radar is the market's overwhelming orientation toward the Platform Play side. By its nature, primary storage is a foundational platform upon which an organization runs its most critical operations. As such, vendors are expected to offer a comprehensive and integrated suite of data services, including data protection, disaster recovery, security, and analytics. Even vendors positioned closer to the Feature Play side of the axis deliver broad capabilities; their placement often reflects a go-to-market strategy focused on excelling in specific, high-performance use cases like AI/ML pipelines or real-time analytics rather than a lack of a comprehensive set of features.

Three distinct clusters of vendors are visible on the Radar, illustrating the different strategies in the market.

  • The first is a dense cluster of Leaders positioned near the center, straddling both the Maturity and Innovation hemispheres. This concentration indicates a highly competitive top end of the market, where leadership requires excellence across a wide range of now-standard capabilities, from AIOps to robust ransomware protection. These vendors have successfully integrated what were once emerging features into their core platforms.

  • A second group resides deeper in the Innovation/Platform Play quadrant. These vendors are aggressively driving architectural change, often with a software-defined or cloud-native focus, setting the pace for future market capabilities.

  • The third cluster, located in the Maturity/Platform Play quadrant, consists of vendors that prioritize stability and predictable evolution, delivering methodical improvements to their proven platforms. They ensure operational consistency for their customer base.

The evolution from last year’s report is notable. The promotion of AIOps and ransomware protection to key decision criteria has raised the bar for all vendors, contributing to some of the shifts in positioning on the chart. We also see market consolidation and changing focus, with some vendors from the 2024 report, such as Seagate, no longer included in this year's analysis. The three Outperformers on this year's Radar are all firmly rooted in the Innovation half, signifying that the fastest pace of development is currently centered on delivering next-generation architectures for AI and hybrid cloud workloads.

In reviewing solutions, it’s important to keep in mind that there are no universal “best” or “worst” offerings; every solution has aspects that might make it a better or worse fit for specific customer requirements. Prospective customers should consider their current and future needs when comparing solutions and vendor roadmaps.

INSIDE THE GIGAOM RADAR

To create the GigaOm Radar graphic, key features, emerging features, and business criteria are scored and weighted. Key features and business criteria receive the highest weighting and have the most impact on vendor positioning on the Radar graphic. Emerging features receive a lower weighting and have a lower impact on vendor positioning on the Radar graphic. The resulting chart is a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and roadmaps.
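As a sketch of how such weighting could work, a vendor's composite score might be a weighted mean of its three category averages. The weights below are assumptions chosen purely for illustration; GigaOm does not publish its exact values in this report.

```python
# Illustrative weighted scoring: key features and business criteria carry
# more weight than emerging features. Weights are assumed, not GigaOm's.

WEIGHTS = {"key_features": 0.4, "business_criteria": 0.4, "emerging_features": 0.2}

def composite_score(category_avgs: dict[str, float]) -> float:
    """Weighted mean of the per-category average scores."""
    return sum(WEIGHTS[c] * category_avgs[c] for c in WEIGHTS)

if __name__ == "__main__":
    # Category averages for Dell Technologies from Tables 2, 3, and 4.
    dell = {"key_features": 4.4, "emerging_features": 4.0, "business_criteria": 4.5}
    print(round(composite_score(dell), 2))
```

Under any such scheme, a strong emerging-features score cannot compensate for weak key features or business criteria, which is consistent with the lower weighting described above.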

Note that the Radar is technology-focused, and business considerations such as vendor market share, customer share, spend, recency or longevity in the market, and so on are not considered in our evaluations. As such, these factors do not impact scoring and positioning on the Radar graphic.

For more information, please visit our Methodology.

5. Solution Insights

DataCore Software: SANsymphony

Solution Overview
DataCore Software is a pioneer in the SDS market, with a primary focus on providing hardware-agnostic storage virtualization. Its flagship product, SANsymphony, is a comprehensive block storage solution designed to pool, manage, and deliver a consistent set of advanced data services across heterogeneous storage hardware from virtually any vendor. The platform can be deployed in various models, including storage virtualization, converged SAN, and hyperconverged infrastructure (HCI). DataCore Software’s strategy is to provide maximum flexibility and investment protection by decoupling the storage software from the underlying hardware lifecycle.

The solution prioritizes stability and continuity, reflecting an approach that focuses on incrementally improving its proven architecture. DataCore Software concentrates on enhancing interoperability, availability, and the performance of its core Parallel I/O engine rather than pursuing disruptive, high-risk innovation. This ensures a consistent user experience and assured compatibility for enterprise environments.

DataCore Software is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the primary storage Radar chart.

Strengths
DataCore Software scored well on a number of decision criteria, including:

  • AIOps for storage: The company’s AIOps platform, DataCore Insight Services (DIS), delivers a superior experience by providing strong predictive analytics for capacity and performance. It moves beyond simple monitoring by offering guided remediation, in which the system detects potential issues and generates prioritized, actionable insights with prescriptive steps for resolution.

  • Ransomware protection: The platform provides a capable and particularly strong recovery mechanism against ransomware attacks. Its continuous data protection (CDP) feature continuously journals every write I/O, allowing an administrator to roll back an affected volume to a specific point in time just moments before an attack, minimizing data loss to near zero.

  • Kubernetes integration: DataCore Software offers superior integration for containerized workloads through a mature container storage interface (CSI) driver. This driver extends significantly beyond basic volume creation, exposing advanced enterprise data services like snapshots, volume cloning, and CDP directly to stateful applications running on Kubernetes.
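The CDP rollback mechanism described above can be illustrated with a toy write journal. This is purely schematic, not DataCore's implementation: every write is recorded with a timestamp, so the volume can be reconstructed as of any instant just before an attack.

```python
# Toy continuous-data-protection (CDP) journal. Schematic only; it shows
# the rollback idea, not DataCore's actual engine.

class CDPVolume:
    def __init__(self):
        self.journal = []  # append-only list of (timestamp, block_id, data)

    def write(self, ts: float, block: int, data: bytes) -> None:
        """Journal every write I/O instead of overwriting in place."""
        self.journal.append((ts, block, data))

    def state_at(self, ts: float) -> dict:
        """Rebuild block contents as of time `ts` by replaying the journal."""
        state = {}
        for t, block, data in self.journal:
            if t <= ts:
                state[block] = data
        return state

if __name__ == "__main__":
    vol = CDPVolume()
    vol.write(100.0, 0, b"payroll-v1")
    vol.write(200.0, 0, b"payroll-v2")
    vol.write(300.0, 0, b"ENCRYPTED!")   # simulated ransomware write
    clean = vol.state_at(250.0)          # roll back to just before the attack
    print(clean[0])
```

Because every write is journaled, the recovery point can be chosen with fine granularity, which is what allows data loss to approach zero after an attack.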

Opportunities
DataCore Software has room for improvement in a few decision criteria, including:

  • New media types: The platform could improve its support for new media, as there is no publicly available documentation confirming optimization for technologies like QLC flash or storage-class memory (SCM). The absence of official qualification or performance tuning guidance for these modern media types indicates an implementation that has not kept pace with industry hardware advancements.

  • NVMe-oF: Support for NVMe-oF as a front-end host protocol is not available in the SANsymphony platform. This capability is offered only in a separate, specialized product named Puls8, which is designed for cloud-native storage in Kubernetes environments.

  • NVMe/TCP: Similarly, the SANsymphony platform does not support the NVMe/TCP protocol. Support for this modern, high-performance protocol is confined to the company's separate Puls8 product for containerized environments.

Purchase Considerations
DataCore SANsymphony is licensed as a complete platform, designed to act as a unified storage control plane for an entire on-premises environment. Its flexible, capacity-based licensing model allows costs to scale with usage, avoiding large upfront investments. Because the solution is hardware-agnostic, it can be deployed on new commodity hardware or used to modernize and extend the life of existing storage arrays, offering significant total cost of ownership (TCO) benefits and preventing vendor lock-in. However, this software-defined nature means that the initial deployment and hardware qualification can be more complex and labor-intensive than deploying a preconfigured hardware appliance. The platform's ability to virtualize existing third-party storage simplifies migration projects, as data can be transparently moved to the DataCore pool in the background without application downtime.

Use Cases
As a Platform Play, DataCore SANsymphony is designed to support a broad range of enterprise workloads and industry verticals without specialization. Its architecture is well suited for organizations seeking to consolidate diverse applications—such as virtual servers, databases, and virtual desktop infrastructure (VDI)—onto a single, centrally managed storage infrastructure. It has documented success in demanding sectors, including financial services, healthcare, and manufacturing/retail. The solution is ideal for use cases involving infrastructure modernization, hybrid cloud enablement, and business continuity, particularly in heterogeneous IT environments.

DDN: IntelliFlash, EXAScaler, and Infinia

Solution Overview
DDN specializes in high-performance storage solutions engineered primarily for artificial intelligence (AI), high-performance computing (HPC), and other data-intensive workloads. The company offers a portfolio of distinct product lines rather than a single solution, allowing it to provide specialized platforms for different use cases. The portfolio includes EXAScaler, a scale-out parallel file system for extreme-performance AI and analytics; Infinia, a software-defined object storage platform for AI data lakes; and IntelliFlash, a unified storage platform for general enterprise and midrange AI workloads. DDN’s strategy is to deliver optimized performance for specific workload tiers through this segmented portfolio. As a mature vendor, DDN prioritizes stability and continuity, focusing on incremental improvements that enhance its proven architecture for performance, compatibility, and reliability.

DDN is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the primary storage Radar chart.

Strengths
DDN scored well on a number of decision criteria, including:

  • New media types: DDN demonstrates superior support for new media, offering systems with TLC flash, high-capacity QLC flash, and traditional HDDs. Its key advantage is the "Hot Pools" feature, an automated, workload-aware data placement engine that intelligently tiers data between media types to optimize for both performance and cost, elevating it beyond basic media support.

  • Kubernetes integration: The platform delivers a superior integration with Kubernetes via a feature-rich CSI driver. This driver goes beyond basic provisioning to support advanced data services like snapshots and volume expansion while also providing unique performance optimizations such as topology-aware provisioning and "Hot Nodes" caching on worker nodes, making it highly suitable for demanding, stateful AI workloads.

  • AIOps for Storage: DDN provides capable AIOps functionality through its DDN Insight platform, which offers predictive analytics for performance and capacity management. The software provides actionable insights and guided remediation to help expert administrators optimize workloads, meeting the criteria for a capable AIOps solution even though it stops short of full operational autonomy.

Opportunities
DDN has room for improvement in a few decision criteria, including:

  • Ransomware protection: DDN's current offering provides foundational ransomware protection, including efficient snapshots, immutable storage, and system-level anomaly detection based on telemetry and access patterns. The platform could be improved by incorporating more advanced, multilayered defenses, such as native, data-aware threat detection (like real-time entropy analysis), rather than primarily relying on external SIEM or SOC tools for this capability. Additionally, the solution lacks the integrated, orchestrated recovery workflows found in competing solutions to simplify and accelerate restoration after an attack.

  • NVMe-oF: While DDN's architecture is aligned with the goal of NVMe-oF, its implementation varies by platform. The Infinia platform currently offers standard NVMe-oF services integrated via CSI for Kubernetes. However, broader support for standards-compliant NVMe-oF across the entire portfolio and for general, non-containerized host connectivity remains a critical step toward fully meeting market expectations for open storage fabrics.

  • API and automation tools: DDN's support for automation varies across its portfolio. The IntelliFlash platform provides a capable REST API that integrates with tools like Ansible and ServiceNow for CI/CD workflows. However, this implementation could be improved by delivering a more comprehensive, portfolio-wide declarative API and robust SDKs that span all product lines (including EXAScaler and Infinia) to create a truly holistic IaC management experience.
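Entropy analysis, mentioned above as a data-aware detection technique, works because encrypted or ransomware-scrambled blocks look statistically random while typical business data does not. A minimal sketch of the idea (illustrative only; not DDN's or any vendor's implementation):

```python
# Minimal Shannon-entropy check for ransomware-style writes. Encrypted or
# well-compressed data approaches 8 bits/byte; typical plaintext is far lower.
# Illustrative sketch only, not a production detector.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(block: bytes, threshold: float = 7.5) -> bool:
    """Flag a block whose byte distribution is close to uniform."""
    return shannon_entropy(block) > threshold

if __name__ == "__main__":
    import os
    text_block = b"quarterly report: revenue up, costs flat. " * 100
    random_block = os.urandom(4096)  # stand-in for an encrypted block
    print(looks_encrypted(text_block))    # plaintext: low entropy
    print(looks_encrypted(random_block))  # random bytes: high entropy
```

Real detectors combine signals like this with access-pattern telemetry, since legitimate compressed or encrypted workloads would otherwise trigger false positives.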

Purchase Considerations
DDN’s solutions are primarily sold via a traditional CapEx model centered on hardware appliance purchases; the company does not offer an on-prem, consumption-based STaaS model. As a Platform Play, DDN’s comprehensive portfolio is designed to address a wide spectrum of customer needs, from extreme-performance AI to general-purpose enterprise workloads, positioning it as a potential single vendor for an organization's diverse storage requirements. However, the power and specialization of the underlying technology can introduce complexity. Independent user feedback notes the initial setup can be challenging, suggesting that customers should consider factoring in professional services to ensure a smooth deployment and optimal configuration.

Use Cases
DDN’s portfolio supports a broad range of industry verticals and use cases. Its solutions are particularly well suited for organizations in financial services, healthcare, and manufacturing. The EXAScaler platform is purpose-built for the most demanding use cases, including large-scale AI/HPC and high-throughput analytics. The Infinia platform targets emerging AI data lakes and cloud-native workloads, while the IntelliFlash platform addresses traditional enterprise use cases such as VDI and databases, providing a tiered portfolio that can be matched to specific workload requirements.

Dell Technologies: PowerStore, PowerMax, PowerFlex

Solution Overview
Dell Technologies is a foundational vendor in the enterprise IT market, offering a comprehensive primary storage portfolio that addresses a wide range of customer needs. The portfolio includes the PowerMax series for mission-critical, high-performance block storage; the PowerStore platform for unified block, file, and vVol workloads in the midrange; and PowerFlex, a software-defined infrastructure solution for building flexible, scalable storage and compute environments. Dell Technologies’ strategy is to provide a broad set of solutions that cater to nearly every enterprise use case, from traditional applications to modern, cloud-native workloads.

Dell Technologies’ solutions are well established and prioritize stability and continuity, reflecting its position in the Maturity half of the Radar. The vendor's approach values incremental improvement to core capabilities like performance, reliability, and interoperability over disruptive architectural shifts. As such, the solutions will look and feel largely the same over the contract lifecycle.

Dell Technologies is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the primary storage Radar chart.

Strengths
Dell Technologies scored well on a number of decision criteria, including:

  • New media types: Dell Technologies’ portfolio effectively leverages diverse media, including storage class memory (SCM) and QLC flash, to create intelligent performance and capacity tiers. This allows organizations to align storage costs with specific workload requirements, optimizing both performance and economics.

  • Ransomware protection: The vendor provides a multilayered defense against cyberattacks through features like immutable snapshots on its primary arrays and secure, air-gapped copies in its Cyber Vault solutions. This robust approach is critical for ensuring business continuity and enabling rapid, predictable data recovery following an attack.

  • Kubernetes integration: Dell Technologies offers comprehensive Kubernetes integration through CSI drivers for its entire primary storage portfolio. This provides persistent storage and enterprise-grade data services for stateful, containerized applications, enabling organizations to confidently run modern workloads on their enterprise storage infrastructure.
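As a concrete illustration of the CSI workflow described above, a persistent volume claim simply names a StorageClass published by the vendor's driver. The class name below is a hypothetical placeholder, not an actual Dell-published name; the manifest is built programmatically and serialized to JSON, which kubectl accepts alongside YAML:

```python
import json

# Illustrative only: "powerstore-ext4" is a hypothetical StorageClass name;
# the real name is defined when the vendor's CSI driver is installed.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "powerstore-ext4",
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

# kubectl accepts JSON manifests: kubectl apply -f pvc.json
print(json.dumps(pvc, indent=2))
```

The CSI driver watches for claims referencing its StorageClass and provisions the backing volume on the array automatically.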

Opportunities
Dell Technologies has room for improvement in a few decision criteria, including:

  • AIOps for storage: Dell Technologies could improve its support for this criterion by evolving its Dell AIOps platform from a strong predictive analytics and health monitoring tool toward fully autonomous operations. While core platforms like PowerStore and PowerMax incorporate valuable on-system autonomous functions for self-healing and workload rebalancing, the opportunity is to elevate these capabilities from system-level reactions to fleet-wide, preventative operations managed within CloudIQ. This would reduce administrative overhead and fully align with the market's shift toward true AIOps.

  • NVMe-oF: While Dell Technologies supports NVMe-oF, it could enhance its offering by broadening transport options and simplifying fabric management consistently across its portfolio. A more unified approach would help customers more easily deploy end-to-end, low-latency performance for demanding scale-out architectures.

  • API and automation tools: The vendor could improve in this area by further unifying its REST APIs and automation toolsets across the PowerMax, PowerStore, and PowerFlex product lines. A more consistent and declarative API experience would simplify IaC adoption and streamline workflows for DevOps teams managing heterogeneous Dell environments.

Purchase Considerations
Dell Technologies is simplifying its licensing, with modern platforms like PowerStore and PowerMax now bundling most software features into the system purchase. However, the overall procurement experience can still be complex, with a wide range of legacy software suites and SKUs across its broad portfolio. Furthermore, the company's flexible APEX program, while offering a cloud-like pay-per-use experience, introduces consumption-based models that require careful navigation.

As a complete solution portfolio, Dell Technologies is designed to serve as a primary storage standard for an organization. Its extensive global professional services organization provides robust support for deployment, planning, and migration. The vendor also offers a suite of well-established tools to facilitate data migration from both legacy Dell systems and competing platforms, though deployment complexity can vary from straightforward midrange installations to more involved enterprise and software-defined projects.

Use Cases
Dell Technologies’ portfolio supports most industry verticals and use cases. The company has a strong presence in demanding sectors like financial services, healthcare, and manufacturing. Its purpose-built platforms are designed to address specific workloads effectively: PowerMax excels at mission-critical databases and transaction processing; PowerStore is ideal for general-purpose virtualized environments and departmental workloads; PowerScale is optimized for large-scale unstructured data, analytics, and AI/ML data pipelines; and PowerFlex provides a high-performance, scalable foundation for private clouds and cloud-native applications.

Fujitsu: ETERNUS AF, ETERNUS DX*

Solution Overview
Fujitsu delivers primary storage through its comprehensive ETERNUS portfolio, encompassing all-flash (AF), hybrid (DX), and specialized high-performance (HB) arrays designed for enterprise workloads. The platform combines mature hardware engineering with sophisticated data services, including Automated Storage Tiering and Quality of Service controls that intelligently optimize data placement and performance without manual intervention. Fujitsu's approach emphasizes operational stability and proven reliability, making it well suited for traditional enterprise environments requiring predictable performance.

The solution will look and feel largely the same over the contract lifecycle. Fujitsu prioritizes stability and continuity, focusing on incremental improvements to proven architectures rather than disruptive changes that might impact established workflows. This conservative approach ensures consistent operations for mission-critical applications.

Fujitsu is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the primary storage Radar chart.

Strengths
Fujitsu scored well on a number of decision criteria, including:

  • New media types: The ETERNUS DX hybrid arrays excel at leveraging traditional media through sophisticated Automated Storage Tiering that dynamically places data across SSD, SAS, and Nearline SAS drives based on access patterns. This workload-aware approach optimizes cost-performance balance, though the platform focuses on established media types rather than newer technologies like SCM or QLC flash.

  • AIOps for storage: ETERNUS SF management software provides strong policy-based automation, particularly through automated quality of service features that monitor application response times and autonomously trigger data movement to maintain defined service levels. However, the platform relies more on reactive automation than predictive AI-driven capabilities.

  • API and automation tools: While the ETERNUS HB series includes RESTful API support, the overall portfolio emphasizes GUI-based management through intuitive wizards and centralized control interfaces. This approach serves traditional IT operations well but limits integration with modern IaC workflows.

Opportunities
Fujitsu has room for improvement in a few decision criteria, including:

  • Ransomware protection: Fujitsu could improve its integrated defense capabilities by incorporating immutable snapshots directly into the ETERNUS AF and DX arrays rather than requiring separate ETERNUS CS data protection appliances for this critical functionality. The current architecture treats ransomware defense as a secondary backup function rather than an integrated primary storage capability.

  • Kubernetes integration: The platform significantly lags in container orchestrator support, lacking an official vendor-maintained CSI driver and relying instead on third-party solutions like Ember-CSI for basic connectivity. This gap limits the platform's viability for production-grade stateful Kubernetes deployments.

  • NVMe/TCP: Fujitsu does not currently support the NVMe/TCP protocol across its portfolio, despite offering NVMe-oF over other transports in the specialized HB series. Adding NVMe/TCP support would democratize high-performance fabric connectivity by enabling standard Ethernet infrastructure usage.

Purchase Considerations
Fujitsu operates as both a storage vendor and a systems integrator, offering transparent licensing through its uSCALE consumption model that eliminates upfront capital expenses. Fujitsu targets specific use cases where its platform’s strengths align well with customer requirements, particularly in environments that value operational stability over cutting-edge capabilities.

Professional services requirements are moderate, with Fujitsu providing comprehensive deployment support and the ETERNUS SF management platform designed for ease of use through wizard-driven configuration. However, organizations pursuing DevOps methodologies may require additional integration work due to limited API and automation capabilities. Migration considerations favor environments with existing traditional storage infrastructures, as the platform's strengths complement conventional IT operational models.

Use Cases
Fujitsu excels in traditional enterprise environments requiring proven reliability and predictable performance. The platform is particularly well suited for financial services organizations needing automated performance optimization, manufacturing companies leveraging tiered storage for varied workloads, and healthcare providers requiring stable, compliant storage solutions. The uSCALE consumption model makes it attractive for organizations seeking to modernize procurement approaches while maintaining operational familiarity.

Hitachi Vantara: Virtual Storage Platform One (VSP One), 5000 Series, E-Series

Solution Overview
Hitachi Vantara's Virtual Storage Platform One (VSP One) is a unified data infrastructure platform designed to consolidate block, file, and object storage for enterprise environments. The solution's core architecture is built on the common Hitachi Storage Virtualization Operating System (SVOS), which provides shared data services and a unified management experience via the VSP 360 control plane across the entire portfolio. The company's strategy focuses on delivering a single, resilient platform that can span edge, core, and cloud deployments to support workloads from traditional mainframe and business applications to modern use cases like AI and containerized applications. Hitachi Vantara delivers an aggressive roadmap focused on expanding data services, enhancing hybrid cloud integration, and launching next-generation hardware to meet the demands of AI-driven workloads. The solution will therefore look and feel different over the contract lifecycle. 

Hitachi Vantara is positioned as a Leader and Outperformer in the Innovation/Platform Play quadrant of the primary storage Radar chart.

Strengths
Hitachi Vantara scored well on a number of decision criteria, including:

  • Kubernetes integration: Hitachi Vantara delivers deep integration with Kubernetes, using a suite of tools that includes its Storage Plug-in for Containers and Replication Plug-in. This capability allows DevOps teams to define and control storage as a native part of their application deployment process. It enables the full automation of persistent volume provisioning and data management tasks, like replication, directly within CI/CD pipelines, significantly simplifying and accelerating the deployment of cloud-native applications.

  • Ransomware protection: The platform offers strong cyber-resiliency features, highlighted by a guarantee that ensures clean data recovery and reduces reinfection risk post attack. This provides customers with robust protection and faster, more reliable recovery capabilities compared to traditional backup methodologies.

  • API and automation tools: VSP One is built with a comprehensive API-first approach, enabling full automation of management functions through the VSP 360 platform. This supports integration with key IaC frameworks like Ansible and Terraform, allowing IT organizations to streamline operations, reduce manual errors, and accelerate service delivery.
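The API-first pattern described above reduces to driving provisioning through authenticated REST calls, which IaC tools like Ansible and Terraform then wrap idempotently. The endpoint path and payload schema below are hypothetical placeholders for illustration only; the actual VSP 360 routes and schemas are defined in vendor documentation:

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- for illustration only.
API = "https://vsp360.example.com/api/v1/volumes"
payload = {"name": "oracle-data-01", "capacityGiB": 512, "pool": "gold"}

req = urllib.request.Request(
    API,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},
    method="POST",
)
# urllib.request.urlopen(req) would submit the provisioning request.
print(req.get_method(), req.full_url)
```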

Hitachi Vantara is classified as an Outperformer given its fast rate of development and high release cadence over the last 12 months. During this period, the company launched new VSP One platforms for block, file, object, and SDS; introduced a unified management platform in VSP 360; and expanded its SDS offering to the Google Cloud and Microsoft Azure marketplaces. The company maintains a quarterly release cadence for its core platforms and a continuous monthly or bimonthly update cycle for its SaaS-based observability tools. Furthermore, Hitachi Vantara has a strong roadmap for the coming year that includes expanding VSP 360 with new data services for AI and digital transformation, and it plans to unveil a next-generation high-end storage platform engineered for extreme performance.

Opportunities
Hitachi Vantara has room for improvement in a few decision criteria, including:

  • New media types: While Hitachi Vantara supports modern media such as QLC flash, it could improve by accelerating the adoption of new media technologies across its entire portfolio. Broadening its range of media-specific optimizations would enhance its competitive positioning against vendors with a more extensive history in this area.

  • NVMe-oF: The solution provides strong support for key NVMe-oF protocols, including both NVMe/TCP and NVMe over Fibre Channel. To further enhance its offering, expanding this support to include RDMA-based transports (like RoCE v2), which the vendor confirms is a roadmap item, would provide customers with greater architectural flexibility for ultra-low-latency SAN modernization projects.

  • NVMe/TCP: Hitachi Vantara could improve its NVMe/TCP offering by enhancing performance tuning capabilities and further simplifying deployment at scale. This would lower the barrier to entry for customers looking to adopt this efficient, Ethernet-based storage fabric for demanding enterprise workloads.

Purchase Considerations
Hitachi Vantara offers a flexible licensing model that includes tiered services, optional add-ons, and as-a-service consumption via its EverFlex program. While this provides comprehensive options, the variety of SKUs may require careful evaluation to align with specific needs. VSP One is designed to consolidate a wide range of workloads, which simplifies vendor management but requires thoughtful initial planning to optimize deployment across diverse applications. The platform's "Modern Storage Assurance" program allows for data-in-place controller upgrades, significantly simplifying technology refreshes and migration activities. While VSP 360 simplifies day-to-day administration, unlocking the platform's full capabilities in complex environments may benefit from professional services.

Use Cases
VSP One is engineered for broad applicability across large enterprise environments. It is ideal for organizations seeking to consolidate diverse and mission-critical workloads onto a single, highly available storage platform. Key use cases include high-performance online transaction processing (OLTP), large-scale server and desktop virtualization, mainframe storage, and demanding modern workloads such as containerized applications and AI data pipelines.

HPE: Alletra Storage MP

Solution Overview
Hewlett Packard Enterprise (HPE) addresses the primary storage market through its HPE Alletra Storage MP platform, managed via the HPE GreenLake cloud. The solution is built on a disaggregated, shared-everything architecture that allows for independent scaling of performance and capacity for block, file, and object workloads. This design decouples controller nodes from storage enclosures, connected by a high-speed NVMe fabric, providing granular resource management. HPE's strategy centers on delivering a unified, AI-driven, and cloud-native operational experience across the entire data infrastructure lifecycle. The solution may look and feel different over the contract lifecycle due to rapid innovation. HPE delivers an aggressive roadmap, consistently introducing new features and architectural enhancements that customers should anticipate and plan for.

HPE is positioned as a Leader and Outperformer in the Innovation/Platform Play quadrant of the primary storage Radar chart.

Strengths
HPE scored well on a number of decision criteria, including:

  • Kubernetes integration: The solution provides a robust and mature CSI driver that simplifies persistent storage management for containerized applications, supporting full automation for provisioning, snapshots, and cloning in environments like Red Hat OpenShift and Rancher.

  • AIOps for storage: HPE's AIOps technology delivers powerful predictive analytics and cross-stack intelligence, identifying and resolving potential issues from the application layer down to the storage infrastructure before they can cause disruptions.

  • Ransomware protection: The platform offers a multilayered defense with immutable snapshots, native ransomware detection, hardware-validated boot processes, and integration with Zerto for continuous data protection, enabling rapid and reliable recovery from cyberattacks.

HPE is classified as an Outperformer given its rapid rate of development over the last 12 months, evidenced by a high-frequency release cadence and a strong roadmap that includes significant enhancements to file services, security, and AI-driven operations.

Opportunities
HPE has room for improvement in a few decision criteria, including:

  • New media types: HPE could improve its support for this criterion by accelerating the qualification and integration of next-generation storage media, such as SCM, to further optimize performance tiers for the most demanding workloads.

  • Edge solutions: While HPE has a clear strategy for edge-to-cloud data movement, it could enhance its portfolio by developing more purpose-built, smaller form factor hardware options specifically for remote or rugged environments.

  • API and automation tools: The existing GreenLake REST API provides a solid foundation, but enhancing its depth with more granular data plane controls and expanding the library of prebuilt integrations for tools like Ansible would increase its value for advanced automation.

Purchase Considerations
HPE's solutions are primarily delivered through the GreenLake platform, which shifts procurement from a traditional CapEx model to a more flexible, consumption-based STaaS model. This reduces SKU complexity and improves licensing transparency. Alletra Storage MP is designed to be the foundational storage layer for a wide range of enterprise applications, consolidating diverse workloads under a single management framework. While the cloud-based management simplifies day-to-day operations, initial hardware deployment still requires physical setup. HPE provides well-defined tools and professional services for migrating from legacy HPE storage systems, but migrations from competitor platforms require more extensive planning.

Use Cases
HPE Alletra Storage MP is suitable for a broad spectrum of enterprise use cases and verticals. It is ideal for organizations seeking to consolidate mixed workloads, including mission-critical databases, large-scale virtualization farms, and modern container-based applications. Its robust feature set and scalable architecture make it a strong fit for industries (such as financial services, healthcare, and manufacturing) that require high performance, availability, and data protection for their core business operations.

IBM: FlashSystem*

Solution Overview
IBM's primary storage portfolio is centered on its FlashSystem family of all-flash and hybrid arrays, which are powered by the consistent IBM Spectrum Virtualize software-defined storage platform. This architecture provides a unified set of data services and management capabilities across the entire product line, from entry-level to high-end systems, and extends into the public cloud. IBM's strategy is to deliver a robust, feature-rich platform that addresses a wide range of enterprise workloads with deep integration into modern IT ecosystems. The solution will look and feel largely the same over the contract lifecycle. IBM prioritizes stability and continuity, ensuring that updates and hardware refreshes are nondisruptive and preserve the core operational experience for customers.

IBM is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the primary storage Radar chart.

Strengths
IBM scored well on a number of decision criteria, including:

  • Ransomware protection: The platform delivers an exceptional multilayered defense through immutable Safeguarded Copy (SC) snapshots, AI-driven threat detection, and the automated IBM Cyber Vault framework, which enables customers to validate recovery points and restore operations with confidence.

  • New media types: IBM provides superior support for new media, intelligently integrating storage-class memory (SCM) for extreme performance and QLC flash for cost-effective capacity, with automated data placement managed by its AI-driven Easy Tier feature to optimize workload performance and cost.

  • Kubernetes integration: The solution offers exceptional integration via a mature CSI driver that provides not only basic provisioning but also full support for advanced Kubernetes-native data services, including volume snapshots and cloning, enabling enterprise-grade data management for stateful applications. 

Opportunities
IBM has room for improvement in a few decision criteria, including:

  • AIOps for storage: IBM could improve its support for this criterion by evolving its powerful AIOps engine from providing guided remediation to enabling true, closed-loop autonomous operations. Currently, the platform relies on external tools to execute its intelligent recommendations rather than performing self-optimizing actions natively.

  • API and automation tools: While the platform's Ansible collection is exceptionally comprehensive for IaC management, IBM could enhance its automation capabilities by building more of this intelligence directly into the system to create a more self-driving operational experience.

  • Edge solutions: IBM's edge offering is capable, but it could be enhanced with purpose-built features. Developing capabilities like zero-touch deployment and greater autonomous functionality for disconnected operations would create a more robust and targeted solution for edge use cases.

Purchase Considerations
IBM offers its solutions through traditional CapEx purchases as well as flexible consumption models, including a mature STaaS offering that provides a cloud-like, on-premises experience with transparent, tiered pricing and hardware refreshes included. IBM's value is in providing a consistent storage architecture that can manage diverse workloads and even virtualize over 500 third-party arrays, simplifying infrastructure consolidation. A key differentiator is the nondisruptive Storage Partition Migration feature, which allows customers to migrate entire workloads between hardware generations without downtime, fundamentally de-risking the technology refresh cycle and eliminating complex migration projects. 

Use Cases
The IBM FlashSystem portfolio is designed to support a broad set of use cases across multiple industries, including financial services, healthcare, and manufacturing. It is ideally suited for large enterprises seeking a single, scalable storage architecture that provides a consistent set of data services and management tools for workloads spanning from the edge to the core data center and into the public cloud.

Infinidat: InfiniBox, InfiniBox SSA

Solution Overview
Infinidat focuses on the high-end enterprise primary storage market with its InfiniBox G4 platform, encompassing both hybrid (InfiniBox) and all-flash (InfiniBox SSA) models. The architecture is built on a common software foundation that uses a machine learning algorithm to optimize performance by autonomically tiering data across DRAM, SSDs, and hard disk drives in its hybrid systems. This approach allows the platform to deliver high performance while maintaining competitive economics at petabyte scale. Infinidat's strategy centers on workload consolidation, guaranteed performance, and robust cyber resilience for mission-critical applications. The solution will look and feel different over the contract lifecycle. Infinidat delivers an aggressive roadmap, and customers should expect to evaluate and integrate new capabilities on a regular basis.

Infinidat is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the primary storage Radar chart.

Strengths
Infinidat scored well on a number of decision criteria, including:

  • Ransomware protection: The InfiniSafe feature set provides a strong cyber-resilience posture. It creates immutable snapshots that are logically air-gapped and inaccessible from host networks, and it provides a fenced forensic environment for validating snapshots and performing rapid, predictable recovery at scale.

  • New media types: The platform’s media-agnostic, software-defined architecture enables it to adopt new, cost-effective media like QLC flash. This is demonstrated by the InfiniBox SSA F24NQ, a dedicated QLC-based appliance. This capability, combined with the AI-driven Neural Cache for automated, workload-aware data placement, allows the platform to intelligently utilize next-generation media to optimize for both performance and cost.

  • API and automation tools: A comprehensive REST API and CLI (InfiniShell) simplify integration with existing automation frameworks. The platform offers broad support for tools like Ansible and Terraform, as well as integrations with VMware, all of which are augmented by a public GitHub library of modules and tools. This provides a strong foundation for reducing operational overhead in large, complex environments.

Opportunities
Infinidat has room for improvement in a few decision criteria, including:

  • AIOps for storage: The existing InfiniMetrics tool provides solid monitoring and analytics, but the platform could improve its AIOps capabilities. Enhancements should focus on adding more predictive analytics for capacity planning and performance forecasting, as well as providing automated remediation recommendations to further simplify operations.

  • Kubernetes integration: While the solution offers a capable CSI driver for persistent storage, its integration could be deeper. Infinidat could enhance support by delivering more granular, application-aware data services through the Kubernetes control plane, such as application-consistent snapshotting or advanced QoS controls for specific persistent volumes.

  • Edge solutions: The InfiniBox platform is designed for large-scale data center deployments and currently lacks a dedicated, cost-effective offering for edge or remote office locations. Developing a smaller hardware appliance or a software-defined storage variant would open up new markets and use cases.
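The predictive capacity forecasting called for above can be approximated, at its simplest, by fitting a linear trend to daily usage samples and projecting forward. This is an illustrative sketch of the technique, not Infinidat's algorithm:

```python
def days_until_full(samples, capacity_tib):
    """Least-squares linear trend over (day, used_TiB) samples; returns the
    projected day usage reaches capacity, or None if usage is not growing."""
    n = len(samples)
    xs = [s[0] for s in samples]
    ys = [s[1] for s in samples]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    if slope <= 0:
        return None
    intercept = my - slope * mx
    return (capacity_tib - intercept) / slope

# 1 TiB/day growth from 100 TiB used, against a 200 TiB system:
usage = [(d, 100 + d) for d in range(30)]
print(round(days_until_full(usage, 200)))  # projects full at day 100
```

Real AIOps engines layer seasonality handling and confidence intervals on top of this kind of trend fit.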

Purchase Considerations
Infinidat employs an all-inclusive software licensing model, which simplifies procurement and eliminates hidden costs for data services and features. The InfiniBox G4 platform is designed to consolidate multiple workload types—including block, file, and object—onto a single architecture, which can improve total cost of ownership. Deployment involves a physical appliance, which is straightforward but requires data center space and integration by professional services. Organizations considering Infinidat should plan for data migration efforts, although the vendor provides tools and services to assist with the process. The platform's scale-up design is well suited for predictable capacity growth within a single system.

Use Cases
Infinidat is ideal for large enterprises and service providers seeking to consolidate diverse, mission-critical workloads. The platform is well suited for environments with demanding performance and high availability requirements, such as large-scale database deployments (Oracle, SQL Server), extensive server virtualization farms (VMware), and other business-critical applications that require a mix of performance, scale, and robust data protection services on a unified storage system.

Lightbits Labs: Lightbits Software-Defined Storage

Solution Overview
Lightbits Labs provides a software-defined, disaggregated block storage platform built for high-performance workloads. Its core architecture is natively designed around NVMe over TCP, allowing it to deliver performance comparable to local flash using standard Ethernet networks and commodity server hardware. The solution's disaggregated model allows compute and storage resources to be scaled independently, providing significant flexibility and resource utilization efficiency. The solution will look and feel different over the contract lifecycle. Lightbits Labs delivers an aggressive roadmap, with planned innovations in areas like multicluster federation, asynchronous replication, and AIOps. This forward-looking strategy, combined with its foundational contributions to the NVMe/TCP standard, justifies its position in the Innovation half of the Radar.

Lightbits Labs is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the primary storage Radar chart.

Strengths
Lightbits Labs scored well on a number of decision criteria, including:

  • NVMe-oF: The solution delivers an exceptional NVMe-oF implementation by focusing exclusively on the NVMe/TCP transport, which democratizes high-performance storage by leveraging standard Ethernet networks and avoiding the cost and complexity of RDMA-based fabrics.

  • NVMe/TCP: As the inventor of the NVMe/TCP protocol and a primary contributor to its inclusion in the Linux kernel, Lightbits Labs' implementation serves as the definitive industry standard, providing performance that is virtually indistinguishable from local NVMe flash across ubiquitous TCP/IP networks.

  • New media types: The platform provides superior support for cost-effective QLC flash through its Intelligent Flash Management technology, which extends the endurance and optimizes the performance of this media, making it viable for demanding primary workloads and moderating the total cost of ownership.

Opportunities
Lightbits Labs has room for improvement in a few decision criteria, including:

  • AIOps for storage: Lightbits Labs could improve its support for this criterion by moving beyond its current reactive capabilities (such as automated failure handling) to include proactive, machine learning-driven analytics for performance forecasting, capacity planning, and guided remediation.

  • Ransomware protection: The solution's reliance on immutable snapshots and role-based access control provides a capable recovery mechanism, but it could be enhanced by incorporating proactive threat detection, such as anomaly and entropy analysis on primary data, and by offering orchestrated clean-room recovery workflows.

  • Edge solutions: While the software-defined nature of the platform allows it to be deployed in edge locations, it currently lacks the purpose-built, centralized fleet management tools required for large-scale, distributed edge environments, such as zero-touch provisioning and sophisticated policy management.
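The entropy analysis mentioned under ransomware protection is a common building block for proactive threat detection: well-encrypted data is statistically close to random bytes, so a sharp rise in the entropy of newly written blocks can flag possible encryption in progress. The sketch below is a generic illustration of that idea, not Lightbits Labs' implementation (which is not publicly documented); the 7.5 bits-per-byte threshold is an assumption for the example.

```python
from collections import Counter
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: 0.0 for constant data, approaching 8.0 for random
    (or encrypted) data."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(block: bytes, threshold: float = 7.5) -> bool:
    # Compressed files also score high, so real detectors combine entropy
    # with rate-of-change, extension, and access-pattern heuristics.
    return shannon_entropy(block) > threshold

text = b"the quick brown fox jumps over the lazy dog " * 100
random_block = os.urandom(4096)

print(looks_encrypted(text))          # False: plain text has low entropy
print(looks_encrypted(random_block))  # True: random bytes mimic ciphertext
```

In practice, a storage array would run this kind of check inline on sampled write payloads and raise an alert, or trigger a protective snapshot, when the high-entropy write rate for a volume deviates from its baseline.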

Purchase Considerations
Lightbits Labs employs a transparent and predictable licensing model based on raw deployed capacity, with no additional charges for performance or IOPS, so costs stay predictable even as transaction volumes grow. As a software-only solution, it is deployed on customer-procured commodity hardware, guided by vendor-provided reference architectures, which prevents hardware vendor lock-in but places the burden of hardware lifecycle management on the customer. Its designation as a Platform Play is justified by its flexibility to run across on-premises, public cloud, and edge environments with a consistent API and management toolset. The platform includes a data mobility services feature to facilitate the transfer of volumes and snapshots between clusters during migrations or hardware refreshes.

Use Cases
Lightbits Labs is well suited for a broad set of high-performance use cases across multiple verticals, including financial services and large-scale retail. The solution is ideal for organizations building private clouds and for cloud service providers seeking to deliver high-performance block storage with favorable economics. Key workloads include large-scale Kubernetes and OpenShift environments, transactional databases, real-time analytics, and data pipelines for AI/ML training and inference.

NetApp: AFF A-Series, ASA Series, ONTAP Software

Solution Overview
NetApp offers a unified approach to primary storage centered on its ONTAP operating system, which serves its All-Flash FAS (AFF) unified file and block arrays and its All-SAN Array (ASA) block-only systems. The portfolio is designed to provide a consistent data management experience across on-premises and cloud environments, managed through the NetApp Console (formerly BlueXP) hybrid multicloud control plane. NetApp's strategy is heavily focused on delivering a rich set of integrated data services, including robust data protection, security, and efficiency features. Its position in the Innovation half of the Radar is justified by its continuous development in areas like cyber-resilience, AIOps, and advanced protocol support. The solution will look and feel different over the contract lifecycle. NetApp delivers an aggressive roadmap, and its pace of innovation will likely introduce significant changes, requiring customers to adapt. During the review period of this Radar, the company introduced NetApp AFX disaggregated storage purpose-built for enterprise AI workloads.

NetApp is positioned as a Leader and Fast Mover in the Innovation/Platform Play quadrant of the primary storage Radar report.

Strengths
NetApp scored well on a number of decision criteria, including:

  • Ransomware protection: NetApp delivers an exceptional multilayered defense against ransomware. Its capabilities extend beyond simple snapshots to include real-time threat detection for file and block workloads, autonomous response mechanisms, and validated, one-click recovery from a secure vault, providing a highly resilient and automated solution to cyberthreats.

  • NVMe-oF: The company provides superior support for NVMe-oF across its portfolio. By offering broad protocol support, including Fibre Channel (FC), RoCE, and TCP, NetApp enables organizations to modernize their SAN infrastructure and achieve significant performance improvements for latency-sensitive workloads.

  • NVMe/TCP: NetApp has demonstrated a strong commitment to NVMe/TCP, making it a viable and cost-effective option for customers looking to leverage existing Ethernet infrastructure for high-performance block storage. This simplifies deployment and reduces the total cost of ownership without compromising on performance.

Opportunities
NetApp has room for improvement in a few decision criteria, including:

  • New media types: While NetApp supports current-generation flash media, it could accelerate the integration and qualification of next-generation media, such as SCM or PLC NAND. Proactively embracing these technologies would solidify its leadership in performance and density for future workload demands.

  • AIOps for storage: NetApp's Active IQ provides solid monitoring and analytics, but the platform could be enhanced with more advanced predictive and prescriptive capabilities. Improving the AIOps engine to enable more fully autonomous operations, from workload placement to issue remediation, would increase operational efficiency.

  • Edge solutions: The company has an opportunity to develop more purpose-built solutions for edge computing environments. Expanding its portfolio to include more compact, ruggedized, and easily deployable systems would better address the unique power, space, and management constraints of edge use cases.
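The predictive analytics gap noted above for AIOps typically starts with capacity trending: fit a growth line to recent usage samples and project when the system will run out of space. The sketch below is an illustrative least-squares forecaster, not a description of Active IQ; the sample data and function name are assumptions for the example.

```python
def days_until_full(history, capacity_tb):
    """Least-squares linear trend over (day, used_tb) samples; returns the
    projected number of days until usage reaches capacity_tb, or None if
    usage is flat or shrinking."""
    n = len(history)
    sx = sum(d for d, _ in history)
    sy = sum(u for _, u in history)
    sxx = sum(d * d for d, _ in history)
    sxy = sum(d * u for d, u in history)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # TB per day
    intercept = (sy - slope * sx) / n
    if slope <= 0:
        return None  # no growth trend, nothing to forecast
    full_day = (capacity_tb - intercept) / slope
    return max(0.0, full_day - history[-1][0])

# 1 TB/day growth starting at 50 TB used, on a 100 TB system, day 0-29:
samples = [(d, 50 + 1.0 * d) for d in range(30)]
print(round(days_until_full(samples, 100)))  # 21 days of headroom left
```

Production AIOps engines layer seasonality handling, confidence intervals, and per-workload models on top of this basic trend fit, which is what separates "guided insights" from simple monitoring.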

Purchase Considerations
NetApp has simplified its licensing for modern on-premises hardware (AFF, ASA, FAS) with its ONTAP One all-inclusive software bundle. However, the overall licensing and product portfolio, which includes its Keystone STaaS pay-as-you-go offering and numerous distinct cloud services, can still be complex to navigate and often requires engagement with channel partners or NetApp professional services for optimal configuration. As a mature Platform Play, its solutions are deeply integrated, offering significant benefits for customers committed to the ecosystem but also creating the potential for vendor lock-in. Migrating data into the NetApp environment is streamlined through its toolset, but moving off the platform can require significant planning and effort due to its proprietary data management features and formats.

Use Cases
NetApp's solutions are well suited for a wide array of enterprise use cases. Its portfolio excels in large-scale, mixed-workload environments requiring robust and unified data services. Ideal applications include supporting demanding databases, high-performance computing (HPC), virtualization infrastructure, and AI/ML pipelines across industries such as financial services, healthcare, and public sector.

Nutanix: Nutanix Unified Storage (NUS)

Solution Overview
Nutanix is a software-defined infrastructure company that provides a unified platform for hybrid multicloud environments. Its primary storage offering, Nutanix Unified Storage (NUS), is a core component of the Nutanix Cloud Platform, delivering file, block, and object storage services from a single, software-defined architecture. The strategy is to eliminate traditional storage silos by consolidating diverse data types and workloads onto a scalable, easy-to-manage infrastructure stack. This approach simplifies operations and provides a consistent data services layer from the core data center to the edge and public cloud. The solution will look and feel different over the contract lifecycle. Nutanix delivers an aggressive roadmap, and its software-defined nature allows for rapid integration of new capabilities and feature enhancements through software updates, providing customers with continuous innovation.

Nutanix is positioned as a Leader and Fast Mover in the Innovation/Platform Play quadrant of the primary storage Radar chart.

Strengths
Nutanix scored well on a number of decision criteria, including:

  • API and automation tools: The solution provides a comprehensive set of REST APIs that allow for deep integration and orchestration. This enables customers to automate storage provisioning and management tasks using tools like Ansible, Terraform, and ServiceNow, which is critical for supporting IaC and modern IT operating models.

  • Kubernetes integration: Nutanix offers robust integrations that extend well beyond basic provisioning. In addition to its mature CSI driver for block and file volumes, the platform supports a COSI driver for object storage. Furthermore, its Nutanix Data Services for Kubernetes (NDK) allows stateful applications to leverage advanced, enterprise-grade capabilities such as snapshotting and disaster recovery, enabling DevOps teams to incorporate robust data management seamlessly into their CI/CD pipelines.

  • New media types: The platform supports a wide variety of storage media, including all-flash NVMe configurations for high-performance workloads. This flexibility allows customers to create different performance tiers on a single cluster, matching application performance requirements to the underlying hardware cost-effectively.
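The IaC workflow described in the API and automation bullet boils down to declarative reconciliation: a tool such as Terraform compares the desired state in code against the actual state reported by the storage REST API and computes the create, resize, and delete actions needed to converge them. The sketch below illustrates only that diffing step in plain Python; the volume names are hypothetical, and a real integration would issue the resulting actions through the vendor's API.

```python
def plan(desired: dict, actual: dict):
    """Compute an IaC-style plan from two name -> size-in-GiB maps:
    volumes to create, volumes to resize, and volumes to delete."""
    create = {name: size for name, size in desired.items()
              if name not in actual}
    resize = {name: size for name, size in desired.items()
              if name in actual and actual[name] != size}
    delete = [name for name in actual if name not in desired]
    return create, resize, delete

desired = {"pg-data": 500, "logs": 100}   # what the code declares
actual = {"pg-data": 250, "scratch": 50}  # what the API reports

print(plan(desired, actual))
# ({'logs': 100}, {'pg-data': 500}, ['scratch'])
```

Because the plan is computed before anything is changed, operators can review it in a CI/CD pipeline, which is the property that makes declarative APIs "critical for supporting IaC" in practice.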

Opportunities
Nutanix has room for improvement in a few decision criteria, including:

  • AIOps for storage: Nutanix could improve its support for this criterion by evolving its "Intelligent Ops" features from a strong analytics and modeling engine into a more autonomous operations platform. While the solution provides capable performance anomaly detection and predictive capacity trending, it could be enhanced by adding automated, storage-specific remediation and self-optimizing performance capabilities to move beyond guided insights toward true, closed-loop AIOps.

  • NVMe-oF: The solution's support for NVMe-oF is still maturing. While Nutanix now provides capable support for NVMe/TCP with Nutanix Volumes, its offering could be enhanced by broadening transport options to include NVMe/FC and RDMA-based protocols. This would provide ultra-low-latency performance for a wider range of mission-critical applications and allow Nutanix to compete more directly with specialized high-performance storage arrays.

  • Sustainability metrics: Nutanix could improve its support for this criterion by more deeply integrating its sustainability data. While the platform offers good foundational features, including live power monitoring within Prism and a separate web-based "power and carbon estimator" tool for planning, the opportunity is to unify these functions. This would allow customers to move from pre-sales estimations to real-time, historical carbon footprint reporting for their live, deployed infrastructure directly within the Prism management platform.

Purchase Considerations
Nutanix follows a straightforward, capacity-based software subscription licensing model with two primary tiers: Starter and Pro. The Pro tier is required for certain advanced features and all-NVMe deployments. This approach is transparent, though customers must carefully select the tier that aligns with their feature and performance needs. Nutanix is a platform designed to host a wide variety of workloads, making it a strategic purchase for organizations aiming to consolidate infrastructure and simplify data center operations. Deployment is flexible, offered as software on certified hardware or as a turnkey appliance. While Prism Central simplifies management, initial migration from legacy three-tier architectures requires careful planning and may benefit from professional services.

Use Cases
Nutanix is well suited for enterprises seeking to modernize their data centers by consolidating multiple, disparate workloads onto a single infrastructure platform. It is a strong fit for a broad range of use cases, including virtualized enterprise applications, virtual desktop infrastructure (VDI), database workloads, and supporting DevOps environments with persistent storage for Kubernetes. Its unified nature also makes it ideal for consolidating unstructured file and object data alongside traditional block-based application storage.

Pure Storage: FlashArray Family (//C, //E, //X, //XL, //ST)

Solution Overview
Pure Storage provides a portfolio of all-flash data storage products and services. The company's core primary storage offerings are the FlashArray and FlashBlade product lines, both running the Purity operating environment. These systems are built using proprietary DirectFlash Modules (DFMs) instead of commodity SSDs, an architecture designed to optimize flash management, performance, and efficiency. FlashArray is a scale-up, block- and file-focused platform designed for structured data workloads like databases and virtual machines. FlashBlade is a scale-out, unified file and object platform for unstructured data. The portfolio is managed through the Pure1 cloud-based, AI-driven management and monitoring platform and orchestrated via the Pure Fusion control plane, which federates arrays into a unified storage cloud. The solution will look and feel largely the same over the contract lifecycle. Pure Storage prioritizes stability and continuity, with nondisruptive upgrades delivered through its Evergreen subscription model.

Pure Storage is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the primary storage Radar report.

Strengths
Pure Storage scored well on a number of decision criteria, including:

  • NVMe-oF: The company delivers exceptional performance through its comprehensive support for NVMe-oF, including NVMe/FC, NVMe/RoCE, and NVMe/TCP, providing customers with flexible, high-throughput, and low-latency connectivity for modern applications.

  • NVMe/TCP: Pure Storage's support for NVMe/TCP lowers the barrier to entry for high-performance storage networking by allowing organizations to leverage standard Ethernet infrastructure, avoiding the cost and complexity of specialized network fabrics.

  • Kubernetes integration: Through its Portworx portfolio, Pure Storage offers a market-leading data services platform for Kubernetes, providing persistent storage, data protection, disaster recovery, and application mobility for containerized workloads.

Opportunities
Pure Storage has room for improvement in a few decision criteria, including:

  • Ransomware protection: While its SafeMode snapshots provide a strong immutable foundation and the Pure1 platform offers anomaly detection to identify suspicious activity, Pure Storage could improve its capabilities by integrating automated, orchestrated recovery workflows. This would allow customers to move beyond detection to streamline and accelerate the restoration process in a secure, fenced, forensic environment.

  • New media types: The company demonstrates superior support for current-generation flash media but could more clearly articulate its strategy for integrating emerging, higher-performance media like SCM into its architecture to address latency-sensitive workloads.

  • Edge solutions: Pure Storage could enhance its edge computing strategy by expanding beyond its recently announced FlashArray//RC20. While the RC20 addresses the edge market with a lower-capacity, cost-effective appliance, the opportunity remains to develop a more purpose-built portfolio of true smaller-footprint or ruggedized systems specifically designed for the physical and environmental constraints of remote and distributed environments.

Purchase Considerations
Pure Storage's business model is a key differentiator, centered on its Evergreen subscription program. This simplifies procurement and ownership by offering all-inclusive software licensing and nondisruptive hardware and software upgrades, eliminating forklift renewals. Its portfolio is designed to be a foundational storage layer for a wide range of enterprise applications. The inherent simplicity of the Purity OS reduces deployment complexity, and while professional services are available, they are often not required for standard implementations. Migration tools are mature, and the unified management plane simplifies fleet operations.

Use Cases
The Pure Storage portfolio is well suited to a broad range of enterprise workloads. Its FlashArray products excel in supporting mission-critical structured data applications, including OLTP databases, server virtualization, and VDI. The FlashBlade platform effectively serves unstructured data use cases such as analytics, AI/ML pipelines, and rapid restore. Through Portworx, Pure Storage is also a primary choice for organizations deploying and managing stateful applications on Kubernetes at scale.

StorONE: ONE Storage Platform

Solution Overview
StorONE offers a unified SDS platform engineered for efficiency and workload consolidation. The solution's core architecture rewrites the I/O stack to eliminate traditional caching dependencies, aiming to extract maximum performance and utilization from underlying hardware. It can be deployed as a turnkey appliance, as software on commodity x86 servers, or in the public cloud. StorONE's strategy centers on creating a durable software layer that decouples storage services from the hardware lifecycle, reducing TCO and simplifying operations. The solution will look and feel largely the same over the contract lifecycle. StorONE prioritizes stability and continuity, ensuring that its core engine provides a consistent and reliable foundation for customers' long-term data management needs.

StorONE is positioned as a Leader and Fast Mover in the Maturity/Platform Play quadrant of the primary storage Radar chart.

Strengths
StorONE scored well on a number of decision criteria, including:

  • AIOps for storage: The platform delivers a pragmatic AIOps capability through its TierONE auto-tiering engine, which uses AI to optimize data placement across different media types for performance and cost. It offers both a "recommendation mode" for guided administration and a fully automatic mode for self-optimizing performance, providing tangible value without unnecessary complexity.

  • Ransomware protection: StorONE provides an exceptional multilayered defense against ransomware, combining high-frequency, immutable-by-default snapshots with AI-based anomaly detection. This integrated approach enables extremely granular recovery points and the ability to rapidly identify and respond to threats, representing a best-in-class implementation. 

  • New media types: The platform's hardware-agnostic architecture provides superior support for emerging media like QLC flash. Its intelligent auto-tiering engine, combined with rapid vRAID rebuild technology, makes it practical and safe to deploy dense, cost-effective media in enterprise environments, directly improving cost per GB.
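Auto-tiering engines like the TierONE capability described above ultimately rank data by access heat and fill the fast tier first, demoting everything else to dense, cost-effective media such as QLC. The toy sketch below shows only that ranking step; the extent names, counters, and two-tier model are illustrative assumptions, not StorONE's actual placement algorithm.

```python
def place_extents(access_counts: dict, fast_tier_slots: int):
    """Toy auto-tiering: the hottest extents (by access count) go to the
    fast tier until it is full; the rest land on the capacity (e.g., QLC)
    tier."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    fast = set(ranked[:fast_tier_slots])
    capacity = set(ranked[fast_tier_slots:])
    return fast, capacity

heat = {"e1": 900, "e2": 15, "e3": 400, "e4": 2}  # accesses per extent
fast, cap = place_extents(heat, 2)
print(sorted(fast))  # ['e1', 'e3'] -> fast media
print(sorted(cap))   # ['e2', 'e4'] -> dense QLC tier
```

A production engine re-evaluates heat continuously with decayed counters and moves extents in the background, and a "recommendation mode" like StorONE's would surface the computed moves to the administrator instead of applying them automatically.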

Opportunities
StorONE has room for improvement in a few decision criteria, including:

  • Kubernetes integrations: StorONE could improve its support for this criterion by developing and publicizing a full-featured Container Storage Interface (CSI) driver. While the vendor claims support, the lack of a verifiable, modern CSI driver creates a significant gap for organizations managing stateful applications in containerized environments.

  • API and automation tools: Although the platform has a superior automation framework, including an innovative sandbox feature, it could be enhanced by providing a publicly accessible developer portal and formal SDKs. This would lower the barrier to entry for DevOps teams and encourage deeper integration into CI/CD pipelines.

  • Edge solutions: The solution is capable for edge deployments but could be improved by adding large-scale fleet management capabilities. Enhancements like zero-touch deployment and sophisticated data orchestration workflows would better address the needs of organizations managing hundreds or thousands of distributed sites.

Purchase Considerations
StorONE offers a transparent and compelling licensing model based on the number of drives under management rather than capacity, which encourages customers to adopt the densest media without financial penalty. The solution serves as a foundational data services layer for the enterprise, designed to eliminate storage silos by consolidating diverse workloads (from primary and backup to archive and AI) onto its single, unified engine. Deployment is straightforward, with options ranging from software-only to preconfigured appliances. The platform's core design eliminates the need for forklift upgrades because the software lifecycle is completely decoupled from the hardware, simplifying long-term ownership and migration planning.
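The incentive created by per-drive licensing is easy to see with a little arithmetic: the license fee stays flat as drives get denser, whereas a capacity-based fee grows with every terabyte. All figures below (fees, slot count, drive sizes) are made-up assumptions purely to illustrate the comparison, not StorONE pricing.

```python
def annual_license_cost(drives, tb_per_drive, per_drive_fee, per_tb_fee):
    """Compare a per-drive license with a hypothetical per-TB license for
    the same raw capacity. All prices here are illustrative assumptions."""
    capacity_tb = drives * tb_per_drive
    return drives * per_drive_fee, capacity_tb * per_tb_fee

# Same 24-slot shelf, dense 61.44 TB QLC drives, made-up fees:
per_drive, per_tb = annual_license_cost(24, 61.44, 500, 40)
print(per_drive)      # 12000: per-drive cost is unchanged as drives get denser
print(round(per_tb))  # ~58982: a per-TB model scales with raw capacity instead
```

Under the per-drive model, quadrupling drive density quadruples usable capacity at the same license cost, which is exactly the "densest media without financial penalty" effect described above.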

Use Cases
StorONE is designed for organizations seeking to consolidate diverse workloads (including databases, virtualization, and unstructured data) onto a single, highly efficient platform. It is particularly well suited for use cases for which maximizing hardware investment, lowering cost-per-GB, and ensuring robust ransomware protection are primary drivers. Its ability to intelligently tier data across mixed media makes it a strong choice for environments that require both high performance for active data and cost-effective capacity for less active data.

StorPool Storage

Solution Overview
StorPool Storage offers an SDS solution that converts commodity x86 servers into a high-performance, scale-out block storage platform. The architecture is designed for demanding primary workloads (such as transactional databases, virtual desktops, and mission-critical applications) for which low latency and high IOPS are critical. A key component of the offering is its bundled fully managed service, by which StorPool Storage's experts handle the entire lifecycle of the storage system, from design and deployment to ongoing monitoring and maintenance, delivering a hands-off operational experience. 

The solution will look and feel different over the contract lifecycle. StorPool Storage delivers an aggressive roadmap, using a CI/CD model to deliver nondisruptive rolling updates approximately every two weeks, ensuring all customers benefit from the latest features and platform enhancements.

StorPool Storage is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the primary storage Radar report.

Strengths
StorPool Storage scored well on a number of decision criteria, including:

  • NVMe/TCP: StorPool Storage provides a robust software-only NVMe/TCP implementation that includes automated failover for high availability. This approach delivers modern low-latency storage connectivity over standard Ethernet networks without requiring specialized or expensive hardware, simplifying network design for customers. 

  • API and automation tools: The platform features an API-first design, exposing all system functionality through a comprehensive RESTful API that maintains full parity with its CLI. This enables deep integration and allows organizations to manage their storage infrastructure declaratively, which is ideal for automation and IaC workflows.

  • Kubernetes integration: The solution offers a mature CSI driver that supports not only dynamic provisioning but also advanced data services like volume expansion and cloning. This provides robust support for managing the lifecycle of stateful applications in modern, container-based environments.

Opportunities
StorPool Storage has room for improvement in a few decision criteria, including:

  • New media types: StorPool Storage could improve its support for this criterion by embracing capacity-optimized media like QLC flash, which it currently does not support due to a focus on performance and endurance. Adding support for QLC and developing an automated, workload-aware data placement engine would enhance its versatility for a wider range of workloads.

  • NVMe-oF: While its NVMe/TCP implementation is strong, StorPool Storage does not support RDMA-based NVMe-oF transports like RoCE or iWARP and has no plans to do so. Adding support for these protocols would make the solution more competitive for niche, ultra-low-latency applications that rely on RDMA fabrics for absolute performance. 

  • Ransomware protection: The solution's immutable snapshots provide a solid, reactive recovery mechanism. However, StorPool Storage could enhance its capabilities by incorporating proactive features, such as built-in anomaly detection, entropy checking, and orchestrated recovery workflows, to provide a more comprehensive, multilayered defense against threats.

Purchase Considerations
StorPool Storage's licensing is primarily a flexible, consumption-based subscription billed on the actual terabytes stored, which includes the company's comprehensive managed service. This model simplifies procurement and aligns costs with usage. The platform's ability to support multiple protocols, diverse workloads, and various deployment models, including hyperconverged and disaggregated, from a single system solidifies its position as a Platform Play. The mandatory fully managed service is a critical consideration, as it effectively outsources all storage management complexity to the vendor. This greatly reduces the operational burden and need for specialized in-house storage expertise, simplifying deployment and ongoing maintenance for customers.

Use Cases
StorPool Storage is well suited for organizations looking to consolidate diverse, performance-sensitive workloads onto a single storage system. It is ideal for cloud service providers, MSPs, and enterprises running KVM-based clouds (like OpenStack or Proxmox), VMware, and Kubernetes environments. Its ability to deliver consistently low latency makes it a strong choice for transactional databases, VDI, and other mission-critical applications for which application responsiveness is paramount.

Synology: FlashStation Series

Solution Overview
Synology offers a portfolio of unified primary storage solutions built on its DiskStation Manager (DSM) operating system, segmented into two primary product families. The DiskStation (DS) series, its versatile, general-purpose line, provides appliances ideal for small-to-midsize businesses and edge deployments, excelling at file sharing, virtualization, and consolidating various IT services. The FlashStation (FS) series is the company's high-performance all-flash line, engineered for I/O-intensive, latency-sensitive workloads such as databases and virtual machine hosting.

Despite these different performance targets, both product lines are managed through the same intuitive DSM interface and share a rich ecosystem of first-party applications for data protection and productivity. This focus on a unified, easy-to-use platform provides significant operational efficiency. The solution will look and feel largely the same over the contract lifecycle. Synology prioritizes stability and continuity, making it a predictable and reliable platform for its target markets.

Synology is positioned as a Challenger and Fast Mover in the Maturity/Platform Play quadrant of the primary storage Radar chart.

Strengths
Synology scored well on a number of decision criteria, including:

  • AIOps for storage: The cloud-based Active Insight service provides strong AIOps capabilities, moving beyond simple monitoring to deliver predictive analytics for capacity forecasting, anomaly detection, and guided troubleshooting recommendations. This allows administrators to proactively address potential issues before they impact operations.

  • Ransomware protection: The platform delivers a superior multilayered defense against ransomware by combining immutable snapshots and WORM folders with proactive, AI-based threat detection. Active Insight can identify ransomware-like file activity and automatically trigger a snapshot to protect data, providing a critical last line of defense.

  • Kubernetes integration: Synology provides a mature and feature-rich CSI driver that enables advanced data services for stateful workloads on Kubernetes. The driver's support for dynamic provisioning, volume snapshots, and cloning allows developers to manage persistent storage using familiar, cloud-native tools.

Opportunities
Synology has room for improvement in a few decision criteria, including:

  • API and automation tools: Synology could improve its support for automation by expanding its REST API to cover all system functions and achieve feature parity with the GUI. While its current APIs are useful, a more comprehensive and declarative API would better support IaC and large-scale automated deployments.

  • New media types: The platform currently lacks specific optimizations for newer, cost-effective media like QLC flash, which limits its ability to maximize storage density and cost-efficiency. Developing workload-aware data placement algorithms would allow customers to safely leverage QLC for appropriate workloads, lowering the total cost of ownership. 

  • NVMe-oF: The absence of NVMe-oF support in the current product lineup is a significant gap for performance-sensitive enterprise workloads. Adding this capability would allow Synology to address use cases requiring low-latency, fabric-attached block storage, expanding its addressable market into more demanding environments.

Purchase Considerations
Synology's primary commercial advantage is its all-inclusive software licensing model, which bundles a comprehensive suite of applications and data services into the hardware purchase price without complex feature tiers. This provides transparent and predictable costs. Its strength lies in consolidating multiple IT functions onto a single appliance, reducing infrastructure sprawl. Deployment is exceptionally simple thanks to the intuitive DSM interface, requiring minimal specialized expertise. However, hardware upgrades are typically disruptive, requiring data migration. A key consideration for higher-end models is Synology's increasing requirement for its own branded drives, which can increase the long-term cost of expansion compared to using third-party media.

Use Cases
Synology is ideal for small-to-midsize organizations and enterprise edge locations seeking to consolidate infrastructure. Its ability to act simultaneously as a file server, iSCSI SAN, backup target, NVR for video surveillance, and office productivity server makes it a highly versatile and cost-effective solution for businesses without large, specialized IT teams.

TrueNAS: TrueNAS Enterprise (M-Series, R-Series, F-Series)

Solution Overview
TrueNAS is a unified primary storage platform built on the OpenZFS file system, delivering file, block, and object services through purpose-built appliances or software-defined deployments. The core offering spans from compact edge systems to multi-petabyte configurations, supporting all-HDD, all-NVMe, or hybrid flash/HDD architectures. These are typically delivered in high-availability dual-controller designs, though single-controller options are also available for specific use cases. The platform's architecture combines scale-up designs for file and block workloads with scale-out capabilities for object storage, enabling flexible deployment across virtualization, backup, AI/ML, and technical computing use cases. TrueCommand fleet management provides centralized visibility and control across distributed environments.

The solution will look and feel different over the contract lifecycle. TrueNAS delivers an aggressive roadmap focused on multisite disaster recovery, web-based secure sharing, and enhanced automated tiering. The vendor's open source foundation and API-first design philosophy enable rapid feature development while maintaining enterprise-grade stability and support.

TrueNAS is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the primary storage Radar chart.

Strengths
TrueNAS scored well on a number of decision criteria, including:

  • Ransomware protection: The platform provides robust defense through OpenZFS's inherently immutable snapshots combined with optional immutable locks that prevent deletion. Efficient replication to air-gapped systems or cloud storage secures data off-site, while entropy-based monitoring detects suspicious file changes in real time. This multilayered approach offers exceptional recovery capabilities, though forensic analysis and orchestrated clean room workflows remain manual administrative tasks rather than automated processes.

  • Kubernetes integration: The solution excels due to a mature, community-developed CSI driver that leverages the platform's comprehensive API to deliver dynamic provisioning, volume expansion, snapshots, and clones across NFS, iSCSI, and SMB protocols. This integration exposes rich data services directly to containerized applications, going well beyond basic storage provisioning to enable sophisticated data management within Kubernetes environments.

  • API and automation tools: Built on an API-first principle, TrueNAS ensures complete feature parity among GUI, CLI, and programmatic interfaces. The versioned WebSocket API provides comprehensive system control, enabling full declarative management and CI/CD integration. This foundational design has spawned a robust ecosystem of third-party integrations and community-developed tools that facilitate deep IaC implementation.
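The entropy-based monitoring mentioned above is a general technique worth seeing concretely: freshly encrypted files have near-maximal byte entropy, while ordinary documents score far lower. The Python sketch below is a minimal illustration of the idea only; the function names and the 7.5 bits-per-byte threshold are illustrative assumptions, not TrueNAS code.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Encrypted or compressed payloads approach 8 bits per byte,
    while typical documents and media headers score far lower."""
    return shannon_entropy(data) >= threshold

document = b"quarterly revenue figures, repeated prose and numbers... " * 200
ciphertext = os.urandom(8192)  # stands in for a ransomware-encrypted file

print(looks_encrypted(document))    # False
print(looks_encrypted(ciphertext))  # True
```

A production monitor would track entropy deltas on changed blocks over time and alert on sudden jumps across many files, rather than classifying single buffers in isolation.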

Opportunities
TrueNAS has room for improvement in a few decision criteria, including:

  • NVMe/TCP: While the platform provides functional NVMe/TCP support for Linux and Windows hosts, its enterprise applicability is severely limited by a critical, documented incompatibility with VMware ESXi. This limitation is inherited from the upstream Linux kernel target driver, which lacks support for the “fused commands” that ESXi requires to initialize storage paths. Until this kernel-level issue is resolved (roadmapped for Q1 2026), the feature is unusable in the market's dominant hypervisor environment, a significant gap for a large portion of enterprise customers.

  • AIOps for storage: TrueNAS deliberately forgoes a native AI/ML engine, instead focusing on providing comprehensive telemetry and APIs for consumption by external AIOps platforms like Splunk. While this open approach suits organizations with existing AIOps investments, the platform lacks the proactive, autonomous optimization capabilities expected of modern intelligent storage systems. The absence of built-in predictive analytics, guided remediation, or self-optimizing capabilities places greater operational burden on administrators.

  • NVMe-oF: TrueNAS provides broad support for modern NVMe-oF, including both high-speed RDMA and standard TCP network transports. However, a known incompatibility with VMware ESXi currently prevents its use in those specific environments, limiting its adoption for enterprise virtualization. The implementation is standards-compliant and functions well in other operating contexts, such as with Linux or Windows hosts.

Purchase Considerations
TrueNAS offers transparent, straightforward pricing with systems sold based on capacity, performance, and reliability requirements. Single CapEx purchases include 1- to 6-year support contracts with flexible SLA options, while the TrueFlex program provides OpEx consumption for large-scale deployments over 5 to 10 years. The vendor's unique open source foundation creates an unusual adoption path where organizations can evaluate the freely downloadable Community Edition on commodity hardware before committing to enterprise appliances with comprehensive support. This try-before-you-buy model reduces procurement risk but requires customers to validate their own hardware choices for software-defined deployments.

TrueNAS excels in scenarios requiring flexible protocol support, exceptional data protection, and strong automation capabilities. While the turnkey appliance model simplifies initial deployment, organizations using software-defined models or seeking advanced performance optimization should still plan for professional services or internal expertise. Advanced configurations require deep technical knowledge of ZFS (a file system with volume management capabilities) and the underlying operating systems.

Use Cases
TrueNAS targets organizations requiring versatile storage across diverse workloads. Primary use cases include virtualization platforms (VMware, Proxmox, Kubernetes), backup repositories (Veeam, HYCU, Asigra), media and entertainment workflows, and technical computing environments leveraging NVMe-oF and RDMA. The platform serves financial services, healthcare, and manufacturing verticals seeking cost-effective unified storage with strong data protection. Edge deployments benefit from compact form factors and centralized management, while AI/ML environments utilize the platform for data preparation, inference workloads, and archive storage rather than hyperscale training.

VAST Data: VAST Data Platform

Solution Overview
VAST Data is a software-defined storage company that provides the VAST Data Platform, an enterprise-class primary storage solution built on its proprietary Disaggregated and Shared Everything (DASE) architecture. The platform separates stateless compute nodes from stateful storage enclosures, enabling independent scaling of performance and capacity resources. This approach allows organizations to scale to exabyte levels while maintaining consistent low latency and high throughput. At its core, the platform leverages QLC NVMe flash for capacity and SCM for metadata and write buffering, connected via an internal NVMe-oF fabric.

The solution will look and feel different over the contract lifecycle. VAST Data delivers an aggressive roadmap focused on expanding data services, deepening cloud integration, and enhancing automation capabilities. The company's Gemini consumption model separates software licensing from hardware procurement, allowing customers to purchase certified commodity infrastructure directly from partners like Dell, HPE, and Cisco while licensing the VAST software independently.

VAST Data is positioned as a Leader and Fast Mover in the Innovation/Platform Play quadrant of the primary storage Radar chart.

Strengths
VAST Data scored well on a number of decision criteria, including:

  • API and automation tools: The platform provides exceptional automation capabilities through a comprehensive REST API and an official Python SDK (vastpy) that offers both programmatic control and command-line interface access. This API-first design enables full declarative management and deep integration into CI/CD pipelines and IaC workflows. The SDK is intentionally schemaless to ensure forward and backward compatibility across all VAST OS versions, demonstrating a mature approach to automation that significantly exceeds basic REST API offerings.

  • Kubernetes integration: VAST Data delivers a feature-rich CSI driver that extends well beyond basic provisioning to provide enterprise data services directly to containerized applications. The driver supports dynamic provisioning of both NFS and block volumes, snapshots, clones, and multiple storage classes that map to distinct QoS and protection policies. For OpenShift environments, a dedicated CSI Operator automates deployment and lifecycle management. The solution supports multicluster and multitenant configurations, making it ideal for large-scale production workloads.

  • New media types: The architecture was purpose-built to uniquely leverage QLC flash and SCM in symbiotic fashion. The global flash translation layer intelligently absorbs writes into high-endurance SCM before writing to QLC in large, full stripes. This workload-aware data placement minimizes write amplification and enables VAST Data to offer a 10-year endurance warranty on cost-effective QLC drives. The result is a single storage pool that achieves both high performance and archive-level economics without traditional tiering complexity.
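The "intentionally schemaless" SDK design noted above is worth unpacking: instead of shipping typed bindings for every endpoint, the client maps attribute access onto URL paths at runtime, so resources added in a newer VAST OS release work without a new SDK version. The toy Python sketch below illustrates the pattern only; the class name and endpoints are hypothetical and do not reproduce vastpy's actual interface.

```python
class SchemalessClient:
    """Minimal sketch of a schemaless REST client: attribute access
    builds the endpoint path dynamically instead of relying on
    per-version generated bindings."""

    def __init__(self, base_url: str, path: str = ""):
        self._base_url = base_url
        self._path = path

    def __getattr__(self, name: str) -> "SchemalessClient":
        # Unknown attributes become path segments: client.views -> /views
        return SchemalessClient(self._base_url, f"{self._path}/{name}")

    def url(self) -> str:
        # A real client would issue GET/POST requests against this URL.
        return f"{self._base_url}/api{self._path}/"

client = SchemalessClient("https://vms.example.com")
print(client.views.url())   # .../api/views/
print(client.quotas.url())  # .../api/quotas/ -- no client change required
```

Because nothing in the client enumerates valid endpoints, the same binary remains forward and backward compatible: the server, not the SDK, is the source of truth for the schema.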

Opportunities
VAST Data has room for improvement in a few decision criteria, including:

  • AIOps for storage: While the platform provides powerful analytics through its VMS and Prometheus integration, it currently lacks native machine learning algorithms for predictive analytics, guided remediation recommendations, or autonomous self-healing operations. The available capabilities align with advanced monitoring rather than true proactive intelligence. VAST Data could improve in this area by embedding predictive models that anticipate issues before they occur and provide automated remediation paths, moving from rich data export to native operational intelligence.

  • NVMe/TCP: Although VAST provides robust NVMe/TCP protocol support for its block storage services, the solution could be improved by maturing its user-facing fabric management tools. Enhancing the surrounding automation and multipath optimization framework for diverse external client environments would simplify enterprise adoption at scale and strengthen the overall offering.

  • Edge solutions: While VAST Data supports edge deployments through its DataSpace global namespace, dedicated compact form factors and zero-touch deployment capabilities are still emerging through partnerships. The core software provides capable remote management and data orchestration, but purpose-built compact appliances and autonomous edge operations features remain at an early stage and rely on the partner ecosystem. VAST Data could improve by developing native edge-optimized hardware references and enhancing automated edge-to-core data lifecycle management.

Purchase Considerations
VAST Data's Gemini consumption model provides exceptional licensing transparency by disaggregating software from hardware. Customers license the VAST software on a subscription basis while purchasing certified commodity hardware directly from distributors at cost, eliminating vendor markup. This approach offers both the flexibility of software-defined storage and the simplicity of validated configurations.

VAST Data targets organizations with requirements around AI/ML workloads, high-performance computing, and data-intensive applications that benefit from the platform's unique architecture. The solution is particularly well suited for environments requiring massive scalability, multiprotocol consolidation, and hybrid cloud data mobility.

Professional services requirements are moderate. The platform's architectural simplicity reduces ongoing operational complexity, though initial deployment planning should account for network infrastructure assessment and workload characterization. Organizations migrating from traditional arrays will find VAST Data's global namespace and multiprotocol support facilitate gradual transitions. The Infinite Storage Lifecycle model allows nondisruptive hardware refreshes, eliminating traditional forklift upgrade cycles.

Use Cases
VAST Data excels in use cases that align with its architectural strengths. Financial services organizations leverage the platform for high-frequency trading infrastructure and risk analytics. Healthcare and life sciences customers use it for genomics research and medical imaging workflows. The solution is particularly well suited for AI/ML training and inference pipelines, which require both high-throughput data access and GPU integration. Media and entertainment companies benefit from the unified namespace for post-production workflows. Organizations pursuing hybrid cloud strategies and requiring consistent data services across edge, core, and cloud deployments find a strong fit with the DataSpace architecture.

WEKA: NeuralMesh

Solution Overview
WEKA NeuralMesh is a software-defined primary storage platform designed for mission-critical, next-generation workloads across hybrid and multicloud environments. The solution combines a containerized microservices architecture with flash-native parallel file system technology, delivering high-performance storage that can run on commodity servers or as preconfigured appliances. NeuralMesh provides comprehensive data services, including snapshots, encryption, automated tiering, and multiprotocol access (POSIX, NFS, SMB, S3), within a single namespace.

The solution will look and feel different over the contract lifecycle. WEKA delivers an aggressive roadmap focused on AI/ML optimization, cloud-native operations, and emerging storage technologies, positioning itself as an innovation leader that continuously evolves its platform capabilities.

WEKA is positioned as a Challenger and Outperformer in the Innovation/Platform Play quadrant of the primary storage Radar chart.

Strengths
WEKA scored well on a number of decision criteria, including:

  • API and automation tools: WEKA provides an exceptionally comprehensive automation framework featuring full-parity REST APIs, CLI interfaces, and official Terraform providers for all major clouds. The deep integration with Kubernetes via dedicated operators enables sophisticated declarative management and IaC practices, representing a best-in-class implementation for modern DevOps workflows.

  • Kubernetes integration: The platform offers exceptional Kubernetes integration that extends far beyond basic CSI support. WEKA's CSI plugin supports advanced data services, including dynamic PVC management, volume expansion, snapshots, and cloning. The dedicated WEKA Operator automates complex lifecycle management tasks and enables multicluster deployments, demonstrating deep native integration essential for large-scale containerized workloads.

  • New media types: WEKA demonstrates forward-looking support for emerging storage media through its intelligent hybrid TLC+QLC flash architecture. The solution employs workload-aware data placement strategies, directing large sequential writes to cost-effective QLC while utilizing TLC as a high-performance buffer for smaller, random operations, optimizing both cost and performance characteristics.
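The TLC-as-buffer, QLC-as-capacity split described above ultimately comes down to a placement decision per write. The sketch below is a conceptual illustration of such a policy, not WEKA's actual internals; the 1 MiB cutoff and tier names are assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class WriteOp:
    size_bytes: int
    sequential: bool

# Illustrative threshold: large sequential stripes go straight to QLC;
# everything else lands in the TLC buffer first.
LARGE_WRITE = 1 << 20  # 1 MiB, an assumed cutoff

def place(op: WriteOp) -> str:
    """Workload-aware placement: route writes to the tier that
    minimizes QLC write amplification."""
    if op.sequential and op.size_bytes >= LARGE_WRITE:
        return "QLC"  # cost-optimized capacity tier, written in full stripes
    return "TLC"      # high-endurance buffer; destaged to QLC in bulk later

print(place(WriteOp(4 << 20, sequential=True)))   # QLC
print(place(WriteOp(8192, sequential=False)))     # TLC
```

The key effect is that QLC only ever sees large, stripe-aligned writes, which is what makes long endurance warranties on low-cost media feasible.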

WEKA is classified as an Outperformer given its exceptionally fast rate of development over the last 6 to 12 months, including the transformative launch of the containerized NeuralMesh architecture, the introduction of NeuralMesh Axon for exascale AI deployments, and the debut of the WARRP AI RAG-Ready Platform. The company maintains monthly feature releases and has demonstrated significant innovation velocity through strategic partnerships with NVIDIA and Supermicro for AI-optimized infrastructure, native ARM CPU support, and expanded multicloud capabilities. WEKA's aggressive roadmap for the coming year centers on breakthrough capabilities like Augmented Memory Grid for AI inference acceleration, hybrid TLC+QLC media support, and enhanced multitenancy features. These investments position the company to leap forward in the rapidly expanding AI/ML storage market, where its cloud-native, containerized approach and deep GPU ecosystem integrations provide substantial competitive advantages over traditional storage vendors.

Opportunities
WEKA has room for improvement in a few decision criteria, including:

  • NVMe/TCP: WEKA does not support NVMe/TCP, a block storage protocol; the platform is architecturally focused exclusively on file and object services. This represents a significant gap for organizations seeking to adopt NVMe/TCP or consolidate block workloads on the platform.

  • AIOps for storage: While WEKA incorporates strong self-healing and policy-based automation capabilities, its proactive analytics and remediation primarily operate through the vendor's cloud-based WEKA Home platform rather than providing integrated, user-facing predictive engines. Enhanced on-system autonomous operations and direct administrative guidance would strengthen this capability.

  • NVMe-oF: Although WEKA delivers NVMe-oF-like performance through its custom protocol implementation, it does not provide standards-compliant NVMe-oF connectivity. The proprietary approach, while performant, lacks the comprehensive fabric management and multipath optimization features typical of full NVMe-oF implementations.

Purchase Considerations
WEKA's licensing model centers on a per-usable-terabyte software subscription that includes service and support, with differentiated pricing between performance (NVMe) and capacity (object) tiers. WEKA is engineered to serve a broad range of enterprise applications, with a key differentiation in supporting high-performance use cases like AI/ML, HPC, and analytics, for which its ultra-low latency and high throughput capabilities provide clear value. The solution requires moderate professional services for initial deployment and configuration, though its containerized architecture simplifies ongoing management. Organizations should consider WEKA's proprietary networking approach when evaluating migration from standards-based storage environments, as this may require client-side changes for optimal performance.

Use Cases
WEKA is suitable for a broad spectrum of enterprise workloads, though it is particularly architected for organizations prioritizing performance-intensive and AI-driven applications. It excels in scenarios including AI/ML model training and inference, high-frequency financial trading, electronic design automation, genomics research, and media rendering workflows. The platform is particularly well suited for organizations in financial services, life sciences, manufacturing, and media and entertainment verticals that require microsecond latencies and massive parallel I/O capabilities as a foundational component of their business strategy.

Zadara: zStorage (VPSA)

Solution Overview
Zadara is a STaaS provider that delivers enterprise-class primary storage through its Virtual Private Storage Array (VPSA) architecture. The platform runs as software-defined storage on commodity x86 servers, creating an abstraction layer that provides block, file, and object storage from a unified infrastructure. Each tenant receives a dedicated VPSA with isolated CPU, memory, and disk resources, ensuring performance predictability through architectural-level quality of service controls.

The solution will look and feel different over the contract lifecycle. Zadara delivers an ambitious roadmap centered on cloud-native integration and consumption flexibility, with continuous feature expansion across its managed service platform. The company's core business model is 100% OpEx-based STaaS, in which Zadara owns and operates the infrastructure while customers consume capacity on-demand. This approach extends consistently across on-premises data centers, colocation facilities, edge locations, and public cloud environments, all managed through a single control plane.

Zadara is positioned as a Challenger and Fast Mover in the Innovation/Platform Play quadrant of the primary storage Radar chart.

Strengths
Zadara scored well on a number of decision criteria, including:

  • Kubernetes integration: The platform provides a sophisticated dual-driver approach that combines compatibility with the standard AWS EBS CSI driver alongside a proprietary Zadara CSI driver. This unique implementation enables advanced capabilities like shared persistent storage across multiple Kubernetes clusters and cloud environments, a feature that exceeds standard CSI functionality and supports true hybrid container deployments.

  • New media types: Zadara demonstrates forward-thinking hardware support by incorporating SCM through Intel Optane NVMe drives, enabling an ultra-low-latency tier for demanding workloads. The platform's managed service model allows it to adopt new media types strategically, providing customers with modern performance characteristics without the procurement complexity of evaluating emerging storage technologies.

  • API and automation tools: The solution delivers a comprehensive REST API with detailed public documentation, complemented by a CLI for scripting workflows. Strategic AWS API compatibility enables seamless integration with popular IaC tools like Terraform using the standard AWS provider, accelerating adoption for organizations with existing cloud automation investments.
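In Kubernetes, the dual-driver approach described above surfaces to workloads as a choice of StorageClass: each class names a provisioner, and a PersistentVolumeClaim simply references a class by name. The Python sketch below builds a standard PVC manifest; the class name `zadara-nas-shared` is hypothetical, used only to illustrate how a claim selects one driver over the other.

```python
def pvc_manifest(name: str, size_gi: int, storage_class: str) -> dict:
    """Build a PersistentVolumeClaim that binds to a named StorageClass;
    the class's provisioner field decides which CSI driver serves it."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            # ReadWriteMany suits NAS-backed volumes shared across pods.
            "accessModes": ["ReadWriteMany"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# Hypothetical class backed by the vendor CSI driver that permits
# cross-cluster sharing; an EBS-compatible class would look identical
# apart from the name and access mode.
shared = pvc_manifest("shared-data", 100, "zadara-nas-shared")
```

With one StorageClass installed per driver, workloads opt into EBS-compatible block volumes or cross-cluster shared storage purely by class name, with no changes to application manifests beyond that field.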

Opportunities
Zadara has room for improvement in a few decision criteria, including:

  • NVMe/TCP: The platform currently lacks documented support for the NVMe/TCP protocol for client connectivity. While Zadara's internal fabric leverages high-speed networking with iSER (RDMA), this performance advantage is not extended to clients via the increasingly important NVMe/TCP standard. Adding this capability would enable the platform to deliver end-to-end NVMe performance over standard IP networks, a critical requirement for organizations seeking to maximize flash storage investments without specialized network infrastructure.

  • AIOps for storage: The solution's managed service model relies on human-led operations rather than customer-facing AIOps software. While Zadara's 24/7 operations team provides proactive monitoring and capacity planning, the platform lacks the predictive analytics engine, autonomous optimization, and self-service insights that characterize modern AIOps implementations. Providing customers with direct access to AI-driven performance analysis and capacity trending tools would enhance operational visibility while complementing the managed service offering.

  • NVMe-oF: Despite using NVMe media internally and RDMA for inter-node communication, Zadara does not currently expose NVMe-oF protocols (NVMe/RoCE, NVMe/FC, or NVMe/TCP) for host connectivity. Supporting these standards would enable customers to realize the full latency and throughput benefits of NVMe across the entire data path, particularly for performance-sensitive workloads like high-frequency trading or real-time analytics that demand consistent microsecond-level response times.

Purchase Considerations
Zadara's business model is fundamentally consumption-based, with no option for traditional CapEx hardware purchases. The company exclusively offers STaaS through flexible leasing and fully managed service contracts. This OpEx-only model eliminates upfront capital expenditures but requires organizations to commit to ongoing operational expenses, which may not align with procurement preferences in certain regulated industries or government entities with CapEx-focused budgeting cycles.

Migration to Zadara is simplified by standard protocol support (iSCSI, Fibre Channel, NFS, SMB), but migration away requires careful planning since the customer does not own the underlying hardware. Organizations should evaluate data portability strategies and ensure that contract terms around data retrieval and transition assistance are clear.

Use Cases
Zadara provides a unified edge cloud platform encompassing compute, networking, and a full suite of enterprise storage services (block, file, and object). This integrated, consumption-based model targets use cases for which a consistent operational experience across different environments is paramount. It is particularly well suited for managed service providers (MSPs) building out their own offerings, enterprises managing complex edge deployments, and organizations seeking to eliminate storage infrastructure management across their hybrid cloud. The fully managed nature of the platform is central to its value proposition, minimizing professional services requirements as Zadara handles all deployment, configuration, and ongoing operational tasks. The trade-off for this simplified operational model is that customers have less direct control over low-level infrastructure decisions like specific hardware selection or firmware versions, compared to building and managing traditional infrastructure stacks.

6. Analyst’s Outlook

The primary storage market is changing in ways that affect procurement strategy. Storage decisions now require attention from senior leadership because these platforms directly impact an organization's ability to deploy AI applications and recover from cyberattacks. Understanding this shift matters more than understanding individual product specifications.

Two trends dominate current vendor development priorities. Enterprise adoption of retrieval-augmented generation (RAG) architectures has introduced specific performance requirements: storage systems must handle high-throughput sequential reads for large language models (LLMs) while simultaneously servicing low-latency random reads for vector database queries. Applications need this cycle completed in milliseconds, which explains the emergence of what vendors call "Tier-0" storage requirements. Separately, ransomware threats have made recovery speed a design priority. When attacks can encrypt production environments in minutes, recovery objectives measured in hours become inadequate. Features like immutable snapshots and automated recovery workflows have shifted from differentiation to baseline expectations.

These factors explain the vendor clustering visible in this year's Radar chart. The Platform Play half dominates to the exclusion of any Feature Plays because buyers now expect comprehensive, integrated data services rather than point solutions. The historical split between midrange and enterprise product lines is fading as software-defined architectures distribute advanced capabilities more broadly. This consolidation simplifies some aspects of vendor evaluation while making others more complex.

Organizations evaluating storage platforms should move beyond feature checklists toward outcome-based assessments. For AI workloads, ask vendors to demonstrate actual performance with RAG application I/O patterns and document what happens under sustained load. For data protection, request documented recovery time objectives and observe actual recovery procedures to isolated environments (not just presentations about capability). When vendors discuss AIOps, distinguish between monitoring platforms that generate alerts and systems that automatically resolve specific issues. For consumption-based offerings, request multiyear cost projections that include renewal pricing structures and data egress fees. Sustainability claims should come with independently verified power consumption measurements.

The market continues moving toward software-defined architectures with greater automation and consumption-based pricing. Organizations preparing for this transition should assess whether their data center networks support modern protocols like NVMe/TCP. Internal capabilities in financial operations and automation will become more important as hybrid environments combine multiple consumption models. Storage decisions increasingly require coordination among infrastructure teams, security organizations, and application owners rather than remaining purely operational choices.

Evaluation criteria should reflect actual operational requirements. Sustained 99th percentile latency under mixed workloads provides more useful information than theoretical peak IOPS. Demonstrated recovery time to a verified clean environment matters more than snapshot feature availability. The vendors positioned as Leaders in this Radar excel at delivering measurable outcomes in these areas. Organizations should use this report's Key Criteria framework to structure vendor discussions around business requirements rather than technical specifications.
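The gap between tail latency and averages is easy to demonstrate. The Python sketch below computes a 99th percentile using the nearest-rank method; in a real evaluation the samples would come from a load generator such as fio under a sustained mixed workload, rather than a hard-coded list.

```python
import math

def p99(samples_ms: list) -> float:
    """99th percentile via the nearest-rank method: the smallest sample
    with at least 99% of observations at or below it."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.99 * len(ordered)) - 1
    return ordered[rank]

# Mixed workload: 98% fast cached reads, 2% slow outliers.
samples = [0.5] * 980 + [20.0] * 20

print(sum(samples) / len(samples))  # mean of 0.89 ms looks healthy
print(p99(samples))                 # p99 of 20.0 ms exposes the tail
```

A vendor quoting only the mean (or peak IOPS) would report this system as sub-millisecond, while one request in fifty actually takes 40 times longer, which is exactly the behavior that disrupts latency-sensitive applications.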

7. Methodology

*Vendors marked with an asterisk did not participate in our research process for the Radar report, and their capsules and scoring were compiled via desk research.

For more information about our research process for Radar reports, please visit our Methodology.

8. About Whit Walters

My mission is to deliver innovative and scalable solutions that enable data-driven decision making and business transformation. I have extensive knowledge and skills in big data, data warehousing, Apache Airflow, and Google Cloud Platform, where I hold three professional certifications. I enjoy collaborating with clients and partners, sharing best practices, and mentoring the next generation of data and cloud professionals.

9. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.